Conference Paper: Coarse-To-Fine Framework For Music Generation via Generative Adversarial Networks
Title | Coarse-To-Fine Framework For Music Generation via Generative Adversarial Networks |
---|---|
Authors | Ma, D; Bin, L; Qiao, X; Cao, D; Yin, G |
Issue Date | 2020 |
Publisher | Association for Computing Machinery. |
Citation | Proceedings of the 2020 4th High Performance Computing and Cluster Technologies Conference & 2020 3rd International Conference on Big Data and Artificial Intelligence (HPCCT & BDAI 2020), Qingdao, China, 3-6 July 2020, p. 192-198 |
Abstract | Automatic music generation is closely related to Natural Language Processing (NLP): a note in a melody depends on its context, much as a word does in a sentence. The difference is that music is built upon a set of special chords that forms the skeleton of the melody. To improve automatic music generation, we propose a two-step adversarial procedure: step 1 learns to generate chords with a chord generative adversarial network (GAN), and step 2 trains a melody GAN whose input is conditioned on the chords produced in the first step. Under this two-step procedure, the chords generated in the first step form a basic framework for the music, which improves melody generation in the second step both theoretically and in practice. Experiments demonstrate that this cascading process generates high-quality music samples with both acoustic and music-theoretical guarantees. |
Persistent Identifier | http://hdl.handle.net/10722/294829 |
ISBN | 9781450375603 |
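
The abstract above outlines a coarse-to-fine cascade: a chord GAN first produces a harmonic skeleton, and a melody GAN then generates notes conditioned on those chords. Below is a minimal, hypothetical PyTorch sketch of such a cascade; the module names, tensor shapes, GRU-based networks, and piano-roll-style encodings are illustrative assumptions and are not taken from the paper.

```python
# Hypothetical sketch only: names, dimensions, and encodings below are assumptions,
# not the architecture used in the paper.
import torch
import torch.nn as nn

SEQ_LEN, CHORD_DIM, NOTE_DIM, NOISE_DIM = 32, 24, 128, 100  # assumed sizes


class ChordGenerator(nn.Module):
    """Step 1 (coarse): map noise to a chord sequence that acts as the skeleton."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM, 256), nn.ReLU(),
            nn.Linear(256, SEQ_LEN * CHORD_DIM),
        )

    def forward(self, z):
        return self.net(z).view(-1, SEQ_LEN, CHORD_DIM)


class MelodyGenerator(nn.Module):
    """Step 2 (fine): generate a melody conditioned on the chords from step 1."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(CHORD_DIM + NOISE_DIM, 256, batch_first=True)
        self.out = nn.Linear(256, NOTE_DIM)

    def forward(self, chords, z):
        z_seq = z.unsqueeze(1).expand(-1, chords.size(1), -1)  # repeat noise per time step
        h, _ = self.rnn(torch.cat([chords, z_seq], dim=-1))
        return self.out(h)


class SequenceDiscriminator(nn.Module):
    """Generic real/fake critic reusable for both the chord GAN and the melody GAN."""
    def __init__(self, feat_dim):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, 128, batch_first=True)
        self.out = nn.Linear(128, 1)

    def forward(self, x):
        h, _ = self.rnn(x)
        return self.out(h[:, -1])  # logit from the last hidden state


# Cascaded sampling: chords first, then a melody conditioned on them.
chord_g, melody_g = ChordGenerator(), MelodyGenerator()
z1, z2 = torch.randn(4, NOISE_DIM), torch.randn(4, NOISE_DIM)
chords = chord_g(z1)                    # (4, SEQ_LEN, CHORD_DIM)
melody = melody_g(chords.detach(), z2)  # (4, SEQ_LEN, NOTE_DIM)

# The melody critic sees chords and melody together (an assumption, so that the
# adversarial signal also checks chord/melody consistency).
melody_d = SequenceDiscriminator(CHORD_DIM + NOTE_DIM)
logit = melody_d(torch.cat([chords, melody], dim=-1))
```

In a setup like this, the chord GAN would typically be trained first against its own discriminator, and the melody GAN trained afterwards with the chord inputs held fixed (hence the `detach()`), mirroring the coarse-to-fine ordering described in the abstract.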
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Ma, D | - |
dc.contributor.author | Bin, L | - |
dc.contributor.author | Qiao, X | - |
dc.contributor.author | Cao, D | - |
dc.contributor.author | Yin, G | - |
dc.date.accessioned | 2020-12-21T11:49:09Z | - |
dc.date.available | 2020-12-21T11:49:09Z | - |
dc.date.issued | 2020 | - |
dc.identifier.citation | Proceedings of the 2020 4th High Performance Computing and Cluster Technologies Conference & 2020 3rd International Conference on Big Data and Artificial Intelligence (HPCCT & BDAI 2020), Qingdao, China, 3-6 July 2020, p. 192-198 | - |
dc.identifier.isbn | 9781450375603 | - |
dc.identifier.uri | http://hdl.handle.net/10722/294829 | - |
dc.description.abstract | Automatic music generation is closely related to Natural Language Processing (NLP): a note in a melody depends on its context, much as a word does in a sentence. The difference is that music is built upon a set of special chords that forms the skeleton of the melody. To improve automatic music generation, we propose a two-step adversarial procedure: step 1 learns to generate chords with a chord generative adversarial network (GAN), and step 2 trains a melody GAN whose input is conditioned on the chords produced in the first step. Under this two-step procedure, the chords generated in the first step form a basic framework for the music, which improves melody generation in the second step both theoretically and in practice. Experiments demonstrate that this cascading process generates high-quality music samples with both acoustic and music-theoretical guarantees. | -
dc.language | eng | - |
dc.publisher | Association for Computing Machinery. | - |
dc.relation.ispartof | Proceedings of the 2020 4th High Performance Computing and Cluster Technologies Conference & 2020 3rd International Conference on Big Data and Artificial Intelligence (HPCCT & BDAI 2020) | - |
dc.rights | Proceedings of the 2020 4th High Performance Computing and Cluster Technologies Conference & 2020 3rd International Conference on Big Data and Artificial Intelligence (HPCCT & BDAI 2020). Copyright © Association for Computing Machinery. | - |
dc.title | Coarse-To-Fine Framework For Music Generation via Generative Adversarial Networks | - |
dc.type | Conference_Paper | - |
dc.identifier.email | Yin, G: gyin@hku.hk | - |
dc.identifier.authority | Yin, G=rp00831 | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1145/3409501.3409534 | - |
dc.identifier.scopus | eid_2-s2.0-85090919747 | - |
dc.identifier.hkuros | 320601 | - |
dc.identifier.spage | 192 | - |
dc.identifier.epage | 198 | - |
dc.publisher.place | New York, NY | - |