Music Generation
Background and summary: Compared to text, music is a domain with a far larger combinatorial space. For instance, a piano has 88 keys, so there are up to 2^88 possible combinations of keys that can be pressed at once. Moreover, music has interesting structural properties, such as melody and harmony, rests, and chords, which makes composing music an interesting yet challenging task. Music generation requires generative modeling. Prior methodologies include Restricted Boltzmann Machines (RBMs) and Recurrent Neural Networks (RNNs), but you are welcome to introduce and explore other approaches (e.g. GANs, VAEs, etc.)! Moreover, besides using existing datasets (warning: some of which may be too small for generation tasks), you can also prepare and process your own music dataset from MIDI or MP3 files and train a model on a specific music style (e.g. jazz, blues,
classical, etc.).

Goal: Analyse current music and then generate similar music using machine learning methods.

Input data: This dataset contains mainly classical piano music pieces. There are two datasets in different formats, one MIDI and one MP3, so you can choose to work with music pieces in either format. However, you are welcome to use whatever genre of music you are able to get your hands on.

Data included in this project:
Classical MP3s
Classical MIDI files

Relevant papers:
Deep Jazz
Deep Learning for Music
Music Generation Using Deep Learning
Deep Learning Techniques for Music Generation --- A Survey
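As a concrete starting point for preparing MIDI data, the 2^88 figure above corresponds to encoding each time step as an 88-dimensional binary vector (one slice of a "piano roll"), a representation commonly used when feeding note data to RBM or RNN models. A minimal sketch in plain Python (the helper names here are illustrative, not from any specific MIDI library):

```python
# Encode piano chords as 88-dimensional binary vectors (one time step of a piano roll).
# On a standard piano, MIDI pitch 21 (A0) is the lowest key and 108 (C8) the highest.

PIANO_LOW, PIANO_HIGH = 21, 108
NUM_KEYS = PIANO_HIGH - PIANO_LOW + 1  # 88 keys

def chord_to_vector(midi_pitches):
    """Map a collection of MIDI pitches to an 88-dim 0/1 vector."""
    vec = [0] * NUM_KEYS
    for p in midi_pitches:
        if not PIANO_LOW <= p <= PIANO_HIGH:
            raise ValueError(f"pitch {p} is outside the piano range")
        vec[p - PIANO_LOW] = 1
    return vec

def vector_to_chord(vec):
    """Inverse mapping: recover the sorted MIDI pitches from a binary vector."""
    return [i + PIANO_LOW for i, bit in enumerate(vec) if bit]

# Example: a C-major triad (C4=60, E4=64, G4=67) round-trips through the encoding.
c_major = [60, 64, 67]
assert vector_to_chord(chord_to_vector(c_major)) == c_major

# Each key is independently on or off, hence 2^88 possible key combinations per step.
print(2 ** NUM_KEYS)
```

A sequence of such vectors (one per time step) forms the training input for a sequence model; extracting the pitch lists from actual MIDI files would require a parsing library of your choice.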