Draw and Listen! A Sketch-Based System for Music Inpainting
Open Access | Nov 2022

References

  1. Adler, A., Emiya, V., Jafari, M. G., Elad, M., Gribonval, R., and Plumbley, M. D. (2012). Audio inpainting. IEEE Transactions on Audio, Speech and Language Processing, 20(3):922–932. DOI: 10.1109/TASL.2011.2168211
  2. Arpege-Music (2013). Pizzicato notation software. http://www.arpegemusic.com/manual36/EN855.htm. Online; accessed 9 December 2021.
  3. Benetatos, C., VanderStel, J., and Duan, Z. (2020). BachDuet: A deep learning system for human–machine counterpoint improvisation. In Proceedings of the International Conference on New Interfaces for Musical Expression, pages 635–640.
  4. Berg, T., Chattopadhyay, D., Schedel, M., and Vallier, T. (2012). Interactive music: Human motion initiated music generation using skeletal tracking by Kinect. In Proceedings of the Conference of the Society for Electro-Acoustic Music in the United States.
  5. Chen, K., Wang, C.-i., Berg-Kirkpatrick, T., and Dubnov, S. (2020). Music SketchNet: Controllable music generation via factorized representations of pitch and rhythm. In Proceedings of the 21st International Society for Music Information Retrieval Conference, pages 77–84. ISMIR.
  6. Coduys, T. and Ferry, G. (2004). IanniX: aesthetical/symbolic visualisations for hypermedia composition. In Journées d'Informatique Musicale.
  7. Cuthbert, M. S. and Ariza, C. (2010). Music21: A toolkit for computer-aided musicology and symbolic music data. In Downie, J. S. and Veltkamp, R. C., editors, Proceedings of the International Society for Music Information Retrieval Conference, pages 637–642.
  8. Dannenberg, R. B. and Raphael, C. (2006). Music score alignment and computer accompaniment. Communications of the ACM, 49(8):38–43. DOI: 10.1145/1145287.1145311
  9. Donahue, C., Simon, I., and Dieleman, S. (2019). Piano Genie. In Proceedings of the 24th International Conference on Intelligent User Interfaces, pages 160–164, New York, NY, USA. Association for Computing Machinery. DOI: 10.1145/3301275.3302288
  10. Dowling, W. J., Barbey, A., and Adams, L. (1999). Melodic and rhythmic contour in perception and memory. In Yi, S., editor, Music, Mind, and Science, pages 166–188. Seoul National University Press.
  11. Farbood, M. M., Pasztor, E., and Jennings, K. (2004). Hyperscore: A graphical sketchpad for novice composers. IEEE Computer Graphics and Applications, 24(1):50–54. DOI: 10.1109/MCG.2004.1255809
  12. Greshler, G., Shaham, T. R., and Michaeli, T. (2021). Catch-A-Waveform: Learning to generate audio from a single short example. arXiv preprint arXiv:2106.06426.
  13. Hadjeres, G. and Nielsen, F. (2020). Anticipation-RNN: Enforcing unary constraints in sequence generation, with application to interactive music generation. Neural Computing and Applications, 32(4):995–1005. DOI: 10.1007/s00521-018-3868-4
  14. Higgins, I., Matthey, L., Pal, A., Burgess, C., Glorot, X., Botvinick, M., Mohamed, S., and Lerchner, A. (2016). beta-VAE: Learning basic visual concepts with a constrained variational framework. In 5th International Conference on Learning Representations.
  15. Huang, A., Hawthorne, C., Roberts, A., Dinculescu, M., Wexler, J., Hong, L., and Howcroft, J. (2019). Bach Doodle: Approachable music composition with machine learning at scale. In Proceedings of the 20th International Society for Music Information Retrieval Conference (ISMIR).
  16. Kingma, D. and Ba, J. (2015). Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations (ICLR).
  17. Kingma, D. P. and Welling, M. (2014). Auto-encoding variational Bayes. In Proceedings of the 2nd International Conference on Learning Representations.
  18. Kitahara, T., Giraldo, S., and Ramirez, R. (2018). JamSketch: Improvisation support system with GA-based melody creation from user's drawing. In Aramaki, M., Davies, M. E. P., Kronland-Martinet, R., and Ystad, S., editors, Music Technology with Swing, pages 509–521. Springer International Publishing. DOI: 10.1007/978-3-030-01692-0_34
  19. Krumhansl, C. L. and Kessler, E. J. (1982). Tracing the dynamic changes in perceived tonal organization in a spatial representation of musical keys. Psychological Review, 89(4):334. DOI: 10.1037/0033-295X.89.4.334
  20. Lewis, G. E. (2000). Too many notes: Computers, complexity and culture in Voyager. Leonardo Music Journal, pages 33–39. DOI: 10.1162/096112100570585
  21. Mao, H. H., Shin, T., and Cottrell, G. (2018). DeepJ: Style-specific music generation. In 2018 IEEE 12th International Conference on Semantic Computing (ICSC), pages 377–382. IEEE. DOI: 10.1109/ICSC.2018.00077
  22. Marafioti, A., Majdak, P., Holighaus, N., and Perraudin, N. (2020). GACELA: A generative adversarial context encoder for long audio inpainting of music. IEEE Journal of Selected Topics in Signal Processing, 15(1):120–131. DOI: 10.1109/JSTSP.2020.3037506
  23. Pati, A., Lerch, A., and Hadjeres, G. (2019). Learning to traverse latent spaces for musical score inpainting. In Proceedings of the 20th International Society for Music Information Retrieval Conference, pages 343–351. ISMIR.
  24. Sturm, B. L., Santos, J. F., Ben-Tal, O., and Korshunova, I. (2016). Music transcription modelling and composition using deep learning. Conference on Computer Simulation of Musical Creativity.
  25. Thiebaut, J.-B., Healey, P. G., and Bryan-Kinns, N. (2008). Drawing electroacoustic music. In International Computer Music Conference.
  26. U&I-Software (1997). MetaSynth + Xx. https://uisoftware.com/metasynth/. Online; accessed 9 December 2021.
  27. Wuerkaixi, A., Benetatos, C., Duan, Z., and Zhang, C. (2021). CollageNet: Fusing arbitrary melody and accompaniment into a coherent song. In Proceedings of the 22nd International Society for Music Information Retrieval Conference.
  28. Xenakis, I. (1977). Upic. https://en.wikipedia.org/wiki/UPIC. Online; accessed 9 December 2021.
  29. Yang, R., Wang, D., Wang, Z., Chen, T., Jiang, J., and Xia, G. (2019). Deep music analogy via latent representation disentanglement. In Proceedings of the 20th International Society for Music Information Retrieval Conference, pages 596–603. ISMIR.
  30. Yasuhara, A., Fujii, J., and Kitahara, T. (2019). Extending JamSketch: An improvisation support system. In 16th Sound and Music Computing Conference, pages 289–290.
DOI: https://doi.org/10.5334/tismir.128 | Journal eISSN: 2514-3298
Language: English
Submitted on: Dec 22, 2022
Accepted on: Aug 4, 2022
Published on: Nov 2, 2022
Published by: Ubiquity Press
In partnership with: Paradigm Publishing Services
Publication frequency: 1 issue per year

© 2022 Christodoulos Benetatos, Zhiyao Duan, published by Ubiquity Press
This work is licensed under the Creative Commons Attribution 4.0 License.