BEADS: A dataset of Binaural Emotionally Annotated Digital Sounds
IISA 2014: The 5th International Conference on Information, Intelligence, Systems and Applications, 2014
Abstract
Emotion recognition from generalized sounds is an emerging interdisciplinary field of research. A vital requirement for such investigations is the availability of ground-truth datasets. Currently, there are two freely available datasets of emotionally annotated sounds; however, neither includes sound events (SEs) that convey the spatial location of the source. Spatial location is an inherent natural component of SEs, since all sound sources in real-world conditions are physically located and perceived somewhere in the listener's surrounding space. In this work we present a novel emotionally annotated sound dataset consisting of 32 SEs that are spatially rendered using appropriate binaural processing. All SEs in the dataset are available in 5 spatial positions corresponding to source/receiver angles of 0, 45, 90, 135 and 180 degrees. We used the IADS dataset as the initial collection of SEs prior to binaural processing. The annotation measures obtained for the novel binaural dataset demonstrate significant accordance with the existing IADS dataset, while small rating deviations illustrate a perceptual adaptation induced by the more realistic spatial representation of the SEs.
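The binaural rendering described above amounts to convolving each mono sound event with the left- and right-ear head-related impulse responses (HRIRs) measured at the desired azimuth. A minimal sketch of that step is shown below; the placeholder HRIRs are random arrays standing in for a measured set such as the MIT KEMAR database cited in the references, and all names here are illustrative, not from the paper.

```python
import numpy as np

def binaural_render(mono, hrir_left, hrir_right):
    """Spatialize a mono sound event by convolving it with an
    HRIR pair measured at the target source/receiver azimuth."""
    left = np.convolve(mono, hrir_left)    # left-ear channel
    right = np.convolve(mono, hrir_right)  # right-ear channel
    return np.stack([left, right], axis=0)  # shape: (2, n_samples)

# Placeholder signals for illustration only; a real pipeline would
# load a sound event and the HRIR pair for one of the five azimuths
# (0, 45, 90, 135, 180 degrees) from a measured HRTF database.
rng = np.random.default_rng(0)
mono = rng.standard_normal(44100)   # 1 s of noise at 44.1 kHz
hrir_l = rng.standard_normal(512)   # stand-in left-ear HRIR
hrir_r = rng.standard_normal(512)   # stand-in right-ear HRIR

stereo = binaural_render(mono, hrir_l, hrir_r)
print(stereo.shape)  # (2, 44611): full convolution length 44100+512-1
```

Repeating this for every sound event and every azimuth yields the five spatially rendered variants per SE that make up the dataset.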
References (18)
- K. Drossos, A. Floros, and N.-G. Kanellopoulos, "Affective acoustic ecology: Towards emotionally enhanced sound events," in Proceedings of the 7th Audio Mostly Conference: A Conference on Interaction with Sound. ACM, 2012, pp. 109-116.
- M. Marcell, M. Malatanos, C. Leahy, and C. Comeaux, "Identifying, rating, and remembering environmental sound events," Behavior Research Methods, vol. 39, no. 3, pp. 561-569, 2007. [Online]. Available: http://dx.doi.org/10.3758/BF03193026
- W. W. Gaver, "What in the world do we hear? an ecological approach to auditory event perception," Ecological Psychology, vol. 5, no. 1, pp. 1-29, 1993.
- B. Y. Newman, "And now, acoustic ecology," Optometry -Journal of the American Optometric Association, vol. 76, no. 11, pp. 629-631, Nov. 2013.
- M. M. Bradley and P. J. Lang, "The international affective digitized sounds (2nd edition; IADS-2): Affective ratings of sounds and instruction manual," NIMH Center for the Study of Emotion and Attention, Gainesville, FL, Tech. Rep. B-3, 2007.
- K. Drossos, R. Kotsakis, G. Kalliris, and A. Floros, "Sound events and emotions: Investigating the relation of rhythmic characteristics and arousal," in Information, Intelligence, Systems and Applications (IISA), 2013 Fourth International Conference on, July 2013, pp. 1-6.
- F. Weninger, F. Eyben, B. W. Schuller, M. Mortillaro, and K. R. Scherer, "On the acoustics of emotion in audio: What speech, music and sound have in common," Frontiers in Psychology, vol. 4, May 2013.
- B. Schuller, S. Hantke, F. Weninger, W. Han, Z. Zhang, and S. Narayanan, "Automatic recognition of emotion evoked by general sound events," in Acoustics, Speech and Signal Processing (ICASSP), 2012 IEEE International Conference on, March 2012, pp. 341-344.
- I. Ekman and R. Kajastila, "Localization cues affect emotional judgments: Results from a user study on scary sound," in Audio Engineering Society Conference: 35th International Conference: Audio for Games, Feb 2009.
- B. Gardner and K. Martin, "HRTF measurements of a KEMAR dummy-head microphone," MIT Media Lab Perceptual Computing, Tech. Rep. 280, 1994.
- M. M. Bradley and P. J. Lang, "Measuring emotion: The self-assessment manikin and the semantic differential," Journal of Behavior Therapy and Experimental Psychiatry, vol. 25, no. 1, pp. 49-59, 1994.
- The center for the study of emotion and attention. [Online]. Available: http://csea.phhp.ufl.edu/media/iadsmessage.html
- M. M. Bradley and P. J. Lang, "Affective reactions to acoustic stimuli," Psychophysiology, vol. 37, no. 2, pp. 204-215, 2000.
- J. A. Russell and A. Mehrabian, "Evidence for a three-factor theory of emotions," Journal of Research in Personality, vol. 11, no. 3, pp. 273-294, 1977.
- FindSounds: Search the web for sounds. [Online]. Available: www.findsounds.com
- M. Grimm, K. Kroschel, E. Mower, and S. Narayanan, "Primitives-based evaluation and estimation of emotions in speech," Speech Commun., vol. 49, no. 10-11, pp. 787-800, Oct. 2007. [Online]. Available: http://dx.doi.org/10.1016/j.specom.2007.01.010
- C. Stickel et al., "Emotion detection: Application of the valence arousal space for rapid biological usability testing to enhance universal access," in Universal Access in Human-Computer Interaction. Addressing Diversity, ser. Lecture Notes in Computer Science, C. Stephanidis, Ed. Springer Berlin Heidelberg, 2009, vol. 5614, pp. 615-624.
- Auditory list home page. [Online]. Available: http://www.auditory.org