While it is surprising how effectively we can generate music using machine learning, it might be more fun if we had some control over the results. As a simple starting point, rather than allowing the algorithm to choose all the notes, we might wish to partially specify a piece of music. This page gives a taste of what can be done in this space.

Audio Examples

We started with the file mozart.qrtets.k160_midi1_01.mid.beamsearch from the MuseData corpus. With this track and our previously trained model as our starting point, we:

  • Chose a single voice (or midi track) to use as our guideline. You can hear this voice in isolation in the first track of the playlist below.
  • Fixed the rhythmic (or timing) information of the remaining three tracks to be the same as that of the original midi file.
  • Sampled the pitches of the remaining tracks conditional on the above information. Two such samples are included below. In these samples, the original Mozart line has the timbre of a grand piano, while the remaining (computer-generated) lines have the timbre of a synthesised guitar. One sample is generated using conditional sampling, the other using conditional probability maximisation, leading to two results of quite different character.
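The contrast between the two generation strategies in the last step can be sketched in code. The toy model below is purely illustrative (its pitch alphabet, weighting scheme, and function names are our own inventions, not the trained model described above), but it shows the mechanical difference: conditional sampling draws each pitch from the model's distribution, while conditional probability maximisation greedily takes the most probable pitch at every step.

```python
import random

def toy_pitch_distribution(context):
    """Stand-in for a trained model: returns P(pitch | context) over a
    small pitch alphabet. The real model would condition on the fixed
    Mozart line and the rhythms of the remaining voices."""
    pitches = [60, 62, 64, 65, 67]  # MIDI note numbers (toy alphabet)
    last = context[-1] if context else 64
    # Illustrative heuristic: favour pitches close to the previous note.
    weights = [1.0 / (1 + abs(p - last)) for p in pitches]
    total = sum(weights)
    return pitches, [w / total for w in weights]

def generate_voice(n_steps, mode, seed=0):
    """Fill in pitches for one accompanying voice.

    mode='sample' -> conditional sampling (draw from the distribution)
    mode='argmax' -> conditional probability maximisation (greedy)
    """
    rng = random.Random(seed)
    voice = []
    for _ in range(n_steps):
        pitches, probs = toy_pitch_distribution(voice)
        if mode == "sample":
            voice.append(rng.choices(pitches, weights=probs)[0])
        else:
            best = max(range(len(pitches)), key=lambda i: probs[i])
            voice.append(pitches[best])
    return voice
```

With a peaked distribution like this, the greedy `argmax` mode tends to repeat the same safe pitch, while `sample` mode wanders; this is one way the two playlist tracks can end up with different characters.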

The accompaniments clearly respond to the fixed line from Mozart. It is sobering, however, to compare the results with the sparkling playfulness of the original composition. The final track in the playlist demonstrates this: it applies the same instrument patches (timbres) as before to the original notes of the input midi file.


For more details, please refer to the following document: