Hexaphonic guitar transcription and visualisation
2016
Abstract
Paper presented at: Second International Conference on Technologies for Music Notation and Representation (TENOR 2016), held 27-29 May 2016 in Cambridge, United Kingdom.
Related papers
8th International Conference on Digital Audio …, 2005
The guitar is an instrument that gives the player great control over timbre. Different plucking techniques involve varying the finger position along the string, the inclination between the finger and the string, the inclination between the hand and the string, and the degree of relaxation of the plucking finger. Guitarists perceive subtle variations of these parameters and have developed a very rich vocabulary to describe the brightness, the colour, the shape and the texture of the sounds they produce on their instrument. Dark, bright, chocolatey, transparent, muddy, wooly, glassy, buttery, and metallic are just a few of those adjectives. The aim of this research is to conceive a computer tool that produces a synthesized vocal imitation, as well as a graphical representation of the phonetic gestures, underlying the description of the timbre of the classical guitar, as a function of the instrumental gesture parameters (mainly the plucking angle and the distance from the bridge) and based on perceptual analogies between guitar and speech sounds. Similarly to the traditional teaching of tabla, which uses onomatopoeia to designate the different strokes, vocal imitation of guitar timbres could provide a common language for guitar performers, complementary to the mental imagery they commonly use to communicate about timbre, in a pedagogical context for example.
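The dependence of brightness on plucking position described above is well captured by the textbook ideal-string model, in which harmonic n of a string plucked at fractional position p has amplitude proportional to sin(nπp)/n². The sketch below is an illustration of that standard model (not the paper's tool); it shows how moving the pluck towards the bridge raises the spectral centroid, a common brightness proxy:

```python
# Minimal sketch of the textbook ideal-string model: harmonic n of a string
# plucked at fractional position p has amplitude ~ sin(n*pi*p) / n**2, so
# plucking near the bridge (small p) flattens the spectrum ("brighter").
import numpy as np

def harmonic_amplitudes(p: float, n_harmonics: int = 20) -> np.ndarray:
    """Relative harmonic amplitudes for a string plucked at position p (0..1)."""
    n = np.arange(1, n_harmonics + 1)
    return np.abs(np.sin(n * np.pi * p)) / n**2

def brightness(p: float, f0: float = 196.0) -> float:
    """Spectral centroid (Hz) of the ideal spectrum, a standard brightness proxy."""
    amps = harmonic_amplitudes(p)
    freqs = f0 * np.arange(1, len(amps) + 1)
    return float(np.sum(freqs * amps) / np.sum(amps))

for p in (0.5, 0.25, 0.1):   # mid-string, typical, near the bridge
    print(f"pluck at p={p:>4}: centroid ≈ {brightness(p):7.1f} Hz")
```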
Guitar tablature transcription (GTT) aims at automatically generating symbolic representations from real solo guitar performances. Due to its applications in education and musicology, GTT has gained traction in recent years. However, GTT robustness has been limited due to the small size of available datasets. Researchers have recently used synthetic data that simulates guitar performances using pre-recorded or computer-generated tones, allowing for scalable and automatic data generation. The present study complements these efforts by demonstrating that GTT robustness can be improved by including synthetic training data created using recordings of real guitar tones played with different audio effects. We evaluate our approach on a new evaluation dataset with professional solo guitar performances that we composed and collected, featuring a wide array of tones, chords, and scales.
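The effect-based augmentation described above could, for example, be implemented with an off-the-shelf effects library. The sketch below uses Spotify's open-source pedalboard package as a stand-in; the effect types and parameter ranges are assumptions, since the study's actual rendering pipeline is not given here:

```python
# Hedged sketch of effect-based data augmentation for GTT training
# (assumed pipeline, not the study's): render a real recorded guitar
# tone through a randomly composed effect chain.
import random
import numpy as np
from pedalboard import Pedalboard, Chorus, Delay, Distortion, Reverb

def augment(dry_tone: np.ndarray, sample_rate: int) -> np.ndarray:
    """Return one augmented copy of a dry guitar tone."""
    chain = Pedalboard([
        fx for fx in (
            Distortion(drive_db=random.uniform(0, 20)),
            Chorus(rate_hz=random.uniform(0.5, 3.0)),
            Delay(delay_seconds=random.uniform(0.05, 0.3)),
            Reverb(room_size=random.uniform(0.1, 0.8)),
        ) if random.random() < 0.5   # each effect included with probability 0.5
    ])
    return chain(dry_tone, sample_rate)
```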
2013
Abstract—In this report, we present our methodology for analyzing music to extract and transcribe guitar notes. Our approach is divided into three steps: the first isolates the guitar track from the music using Independent Subspace Analysis (ISA); the second uses both frequency- and time-domain approaches to transcribe the guitar track into the corresponding notes; a duration detection algorithm is then used to determine the length of each note. We apply this methodology to a series of test data ranging from diatonic scales to segments of classic rock songs. One important distinction is that we do not attempt transcription of guitar chords, focusing only on monophonic scores.
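As a rough illustration of the second and third steps (not the report's exact algorithms), the following sketch tracks pitch with librosa's pYIN implementation and derives note durations by grouping consecutive frames with the same quantised pitch; the input filename is hypothetical:

```python
# Minimal monophonic transcription sketch: pYIN pitch tracking plus a naive
# duration detector that merges consecutive voiced frames of equal pitch.
import librosa
import numpy as np

y, sr = librosa.load("guitar_track.wav")          # hypothetical input file
f0, voiced, _ = librosa.pyin(y, fmin=80, fmax=1000, sr=sr)
midi = np.where(voiced, np.round(librosa.hz_to_midi(f0)), -1)

hop_s = 512 / sr                                  # pyin's default hop length
notes, start = [], 0
for i in range(1, len(midi) + 1):
    if i == len(midi) or midi[i] != midi[start]:  # pitch changed: close the note
        if midi[start] >= 0:                      # skip unvoiced segments
            notes.append((librosa.midi_to_note(int(midi[start])),
                          start * hop_s, (i - start) * hop_s))
        start = i
print(notes)   # [(note name, onset seconds, duration seconds), ...]
```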
Musical transcription is a real challenge, and more so in the case of folk music. Signal visualization tools may be of interest for this kind of music. The present paper is a comparison between a musical transcription and two signal representations (pitch and rhythm) applied to a song taken from the Gwoka repertoire. The study aims at finding similarities and differences in pitch, rhythm and performance features between the transcription and the signal visualization. Signal visualization is based on vowel segmentation, and on extraction of pitch and duration information. Transcription provides general characteristics about the music (harmony, tonality and rhythmic structure), while signal visualization provides performance-related characteristics. The main conclusion is that both approaches are of great interest for understanding folk music.
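A minimal sketch of the pitch-extraction-and-visualisation idea (vowel segmentation omitted, filename hypothetical, not the paper's actual toolchain) might look like this:

```python
# Plot an f0 contour so it can be read side by side with a transcription.
import librosa
import matplotlib.pyplot as plt
import numpy as np

y, sr = librosa.load("gwoka_song.wav")            # hypothetical recording
f0, voiced, _ = librosa.pyin(y, fmin=60, fmax=600, sr=sr)
t = librosa.times_like(f0, sr=sr)

plt.figure(figsize=(10, 3))
plt.plot(t, np.where(voiced, f0, np.nan))         # gaps where unvoiced
plt.yscale("log")
plt.xlabel("time (s)")
plt.ylabel("f0 (Hz)")
plt.title("Pitch contour for comparison with the transcription")
plt.tight_layout()
plt.show()
```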
2017
Computational modelling of expressive music performance has been widely studied in the past. While previous work in this area has mainly focused on classical piano music, there has been very little work on guitar music, and such work has focused on monophonic guitar playing. In this work, we present a machine learning approach to automatically generate expressive performances from non-expressive music scores for polyphonic guitar. We treated the guitar as a hexaphonic instrument, obtaining a polyphonic transcription of the performed musical pieces. Features were extracted from the scores, and performance actions were calculated from the deviations between the score and the performance. Machine learning techniques were used to train computational models to predict the aforementioned performance actions. Qualitative and quantitative evaluations of the models and the predicted pieces were performed.
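Schematically, the pipeline described above maps per-note score features to performance actions measured as score-performance deviations. The sketch below illustrates the idea with scikit-learn regressors; the feature set, action set and toy values are assumptions, not the paper's:

```python
# Assumed schematic of the approach: score features in, per-note
# performance actions (deviations) out, one regressor per action.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical per-note score features: [pitch, duration, metrical position, string]
X = np.array([[64, 0.5, 0.0, 1],
              [67, 0.5, 0.5, 2],
              [71, 1.0, 0.0, 3]])

# Performance actions measured from the performance/score deviations:
# onset shift (s), duration ratio, normalised energy
Y = np.array([[ 0.02, 1.10, 0.8],
              [-0.01, 0.95, 0.6],
              [ 0.03, 1.20, 0.9]])

models = [RandomForestRegressor(n_estimators=100).fit(X, Y[:, k])
          for k in range(Y.shape[1])]

new_note = [[69, 0.5, 0.5, 2]]
predicted_actions = [m.predict(new_note)[0] for m in models]
print(predicted_actions)   # apply these deviations to render an expressive note
```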
Musical Instruments in the 21st Century, 2016
This chapter explores the role of visual representation of sound in music software. Software design often remediates older technologies, such as common music notation, the analogue tape and outboard studio equipment, as well as applying metaphors from acoustic and electric instruments. In that context, the aim here will be to study particular modes in which abstract shapes, symbols and innovative notations can be applied in systems for composition and live performance. Considering the practically infinite possibilities of representing sound in digital systems, both in terms of visual display and of mapping gestural controllers to sound, the concepts of graphic design, notation and performance will be discussed in relation to four systems created by the author: ixi software, ixiQuarks, ixi lang, and the Threnoscope live coding environment. These will be presented as examples of limited systems that frame the musician's compositional thoughts by providing a constrained palette of musical possibilities. What this software has in common is the integral use of visual elements in musical composition, equally as prescriptive and representative notation for musical processes. The chapter presents the development of musical software as a form of composition: it is an experimental activity that goes hand in hand with sound and music research, where the musician-programmer has to gain a formal understanding of diverse domains that might previously have been tacit knowledge. The digital system's requirements for abstractions of the source domain, specifications of material, and completeness of definitions are all features that inevitably require a very strong understanding of the source domain.
This paper examines a range of methods of exploiting the inherent semantic qualities of graphical symbols, colour and visual communication. Moody's Notations Theory is used as a starting point in the discussion of expanding the range of techniques for visualizing sound and instrumental notation. Recent findings in the understanding of semantic primes, visual language, perceptual metaphors and "weak synaesthesia" are examined, and connections are drawn to existing sound-based fields such as spectromorphology, action-based scores, and graphical and animated notation. The potential of colour to represent timbre, both as a descriptive analytical tool and as a prescriptive compositional tool in electroacoustic music, is explored. Works by Cathy Berberian, Luciano Berio, Aaron Cassidy, Vinko Globokar, Juraj Kojs, Helmut Lachenmann, Ryan Ross Smith and the author are discussed.
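One simple way to realise the colour-for-timbre idea discussed here (my illustration, not a scheme from the paper) is to map a brightness feature such as the spectral centroid onto hue:

```python
# Map the spectral centroid of each analysis frame to a hue, so brighter
# timbres get "hotter" colours. Filename and mapping are assumptions.
import colorsys
import librosa
import numpy as np

y, sr = librosa.load("phrase.wav")                 # hypothetical excerpt
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]

lo, hi = np.percentile(centroid, [5, 95])
norm = np.clip((centroid - lo) / (hi - lo), 0, 1)  # 0 = dull, 1 = bright
hues = 0.66 * (1 - norm)                           # blue (dull) -> red (bright)
colors = [colorsys.hsv_to_rgb(h, 1.0, 1.0) for h in hues]  # one RGB per frame
```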
2006
Sonic Visualiser is the name for an implementation of a system to assist study and comprehension of the contents of audio data, particularly of musical recordings. It is a C++ application with a Qt4 GUI that runs on Windows, Mac, and Linux. It embodies a number of concepts which are intended to improve interaction with audio data and features, most notably with respect to the representation of time-synchronous information. The architecture of the application allows for easy integration of third party algorithms for the extraction of low and mid-level features from musical audio data. This paper describes some basic principles and functionalities of Sonic Visualiser.
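The third-party feature extractors that Sonic Visualiser integrates are Vamp plugins, and the same extractors can be run outside the GUI. A hedged sketch using the vamp Python host follows; the plugin key is from the Vamp SDK example set and is assumed to be installed:

```python
# Run a Vamp feature extractor on an audio file from Python.
import librosa
import vamp

y, sr = librosa.load("recording.wav", sr=None)     # hypothetical recording
# "vamp-example-plugins:amplitudefollower" ships with the Vamp SDK examples.
result = vamp.collect(y, sr, "vamp-example-plugins:amplitudefollower")
step, amplitudes = result["vector"]                # step duration + one value/frame
print(step, amplitudes[:10])
```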