New interfaces for popular music performance

2007, Proceedings of the 7th International Conference on New Interfaces for Musical Expression (NIME '07)

https://doi.org/10.1145/1279740.1279764

Abstract

Augmenting performances of live popular music with computer systems poses many new challenges. Here, "popular music" is taken to mean music with a mostly steady tempo, some improvisational elements, and largely predetermined melodies, harmonies, and other parts. The overall problem is studied by developing a framework consisting of constraints and subproblems that any solution should address. These problems include beat acquisition, beat phase, score location, sound synthesis, data preparation, and adaptation. A prototype system is described that offers a set of solutions to the problems posed by the framework, and future work is suggested.

FAQs

What are the key challenges in augmenting popular music performances?

The paper identifies beat acquisition, maintaining score location, and sound synthesis as critical challenges in augmenting popular music performances. In addition, adapting to the flexible structure of live performances raises issues not found in more standardized music formats.

How does the PMA framework address human-computer interaction issues?

The PMA framework focuses on maintaining low cognitive load on musicians while ensuring reliable communication of beats and score locations. This approach allows musicians to engage fully in performance without becoming overwhelmed by the technology.

What role does computer accompaniment play in popular music according to this research?

While computer accompaniment systems have been explored, they often fail to adapt to the improvisational nature of popular music. As noted, popular music's reliance on live spontaneity contrasts with the rigid structures of traditional accompaniment.

What technological innovations are proposed for real-time sound synthesis?

Innovative sound synthesis techniques involve pre-recording trumpet lines and using time-stretching methods to adjust tempo without artifacts. This enables real-time performance without sacrificing sonic quality or requiring click tracks.
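The paper does not give code for its time-stretching method; as an illustration of the general idea, here is a minimal overlap-add (OLA) time-stretch sketch in Python. The function name and parameters are hypothetical, and a production system like the one described would use a higher-quality algorithm to avoid the artifacts OLA introduces on pitched material.

```python
import numpy as np

def time_stretch_ola(signal, rate, frame_len=1024, hop_out=256):
    """Toy overlap-add time stretch: rate < 1 lengthens the signal,
    rate > 1 shortens it. Illustrative only -- not the paper's method."""
    hop_in = int(round(hop_out * rate))     # analysis hop scaled by rate
    window = np.hanning(frame_len)
    n_frames = max(1, (len(signal) - frame_len) // hop_in + 1)
    out_len = (n_frames - 1) * hop_out + frame_len
    out = np.zeros(out_len)
    norm = np.zeros(out_len)
    for i in range(n_frames):
        a = i * hop_in                      # read position in the input
        b = i * hop_out                     # write position in the output
        out[b:b + frame_len] += signal[a:a + frame_len] * window
        norm[b:b + frame_len] += window
    norm[norm < 1e-8] = 1.0                 # avoid division by zero at edges
    return out / norm

# Stretch a 1-second 440 Hz tone to roughly twice its duration
sr = 22050
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
stretched = time_stretch_ola(tone, rate=0.5)
```

Scaling the analysis hop by `rate` while keeping the synthesis hop fixed is what changes duration without resampling; the normalization by the summed windows keeps the output amplitude consistent across overlaps.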

What experimental results support the reliability of beat detection in performance?

Preliminary tests show a standard deviation of 28 ms in beat acquisition under controlled conditions, suggesting quick and reliable synchronization. However, these results may differ when the performer is simultaneously playing the trumpet, indicating a need for further testing.
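For context, a timing figure like the 28 ms standard deviation is typically computed over detected beat times. A small sketch of that computation follows; the tap times below are made up for illustration and are not data from the paper.

```python
import statistics

def beat_interval_stddev(beat_times):
    """Standard deviation (seconds) of inter-beat intervals derived
    from a list of detected beat onset times."""
    intervals = [b - a for a, b in zip(beat_times, beat_times[1:])]
    return statistics.stdev(intervals)

# Hypothetical tap times (seconds) around a 120 BPM pulse (0.5 s period)
taps = [0.00, 0.51, 0.98, 1.50, 2.03, 2.49]
spread = beat_interval_stddev(taps)   # spread on the order of tens of ms
```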
