Ontological Representation of Audio Features
2016, Lecture Notes in Computer Science
Abstract
Feature extraction algorithms in Music Informatics aim at deriving statistical and semantic information directly from audio signals. These range from energies in several frequency bands to musical information such as key, chords or rhythm. Given the increasing diversity and complexity of features and algorithms in this domain, applications call for a common structured representation to facilitate interoperability, reproducibility and machine interpretability. We propose a solution relying on Semantic Web technologies that is designed to serve a dual purpose: (1) to represent computational workflows of audio features, and (2) to provide a common structure for feature data to enable the use of Linked Open Data principles and technologies in Music Informatics. The Audio Feature Ontology is based on an analysis of existing tools and the music informatics literature, which was instrumental in guiding the ontology engineering process. The ontology provides a descriptive framework for expressing different conceptualisations of the audio feature extraction domain and enables the design of linked data formats for representing feature data. In this paper, we discuss important modelling decisions and introduce a harmonised ontology library consisting of modular interlinked ontologies that describe the different entities and activities involved in music creation, production and publishing.
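As a rough illustration of what such a linked data representation of feature data might look like, the following Python sketch uses rdflib to describe a feature extraction result and link it to the analysed signal via the Music Ontology. The afo namespace URI and the term names used here (Feature, computed_on, value) are illustrative assumptions for the purpose of the example, not the ontology's published vocabulary.

```python
# Minimal sketch: publishing an audio feature as RDF with rdflib.
# The afo namespace and its terms below are assumed for illustration only.
from rdflib import Graph, Namespace, Literal, URIRef, RDF

AFO = Namespace("https://example.org/afo#")      # assumed feature ontology namespace
MO = Namespace("http://purl.org/ontology/mo/")   # Music Ontology namespace

g = Graph()
g.bind("afo", AFO)
g.bind("mo", MO)

signal = URIRef("http://example.org/signal/track-42")
feature = URIRef("http://example.org/feature/track-42/onsets")

# Type the analysed signal and the extracted feature, then link them.
g.add((signal, RDF.type, MO.Signal))
g.add((feature, RDF.type, AFO.Feature))              # hypothetical class
g.add((feature, AFO.computed_on, signal))            # hypothetical property
g.add((feature, AFO.value, Literal("0.52 1.04 1.57")))  # placeholder onset times

# Serialise as Turtle; JSON-LD output would work the same way via format="json-ld".
print(g.serialize(format="turtle"))
```

Because the graph is plain RDF, the same data could equally be serialised as JSON-LD and exchanged between feature extraction tools, which is the kind of interoperability the proposed representation targets.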