Papers by Sylvain Le Groux
Situated, perceptual, emotive and cognitive music systems: a psychologically grounded approach to interactive music composition
TDX (Tesis Doctorals en Xarxa), May 19, 2011
International Computer Music Conference, 2009
It is now widely acknowledged that music can deeply affect humans' emotional, cerebral and physiological states (20). Yet, the relationship between musical features and emotional and physiological responses is still not clear. In this context, we introduce SiMS, a Situated Interactive Music System designed for the artistic and scientific exploration of the relationships between brain activity, physiology, emotion and musical features. SiMS provides the user with tools for the acquisition and processing of electroencephalogram (EEG) and heart-rate (HR) signals. We propose an original perceptually grounded synthesis model, the real-time generation of interesting musical structures, and a mapping scheme based on psychophysiological evidence for generating affective music from physiological input.
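The abstract does not detail the mapping itself; as a rough illustration of the kind of psychophysiology-to-music scheme SiMS describes, the sketch below maps a heart-rate-derived arousal estimate and an EEG-derived relaxation estimate to tempo, mode and loudness. All function names, signal ranges and musical targets here are hypothetical assumptions, not SiMS's actual API.

# Hypothetical sketch of a physiology-to-music mapping in the spirit of SiMS.
# The signal ranges and musical targets are illustrative assumptions only.

def normalize(value, lo, hi):
    """Clamp and rescale a raw measurement to [0, 1]."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def map_physiology_to_music(heart_rate_bpm, eeg_alpha_power):
    """Map arousal (heart rate) and relaxation (EEG alpha power) to music parameters."""
    arousal = normalize(heart_rate_bpm, 50.0, 120.0)   # assumed resting-to-excited range
    relaxation = normalize(eeg_alpha_power, 0.0, 1.0)  # assumed pre-normalized alpha power
    return {
        "tempo_bpm": 60 + 80 * arousal,                    # faster music for higher arousal
        "mode": "major" if relaxation > 0.5 else "minor",  # crude valence proxy
        "velocity": 40 + 60 * arousal,                     # louder notes for higher arousal
    }

if __name__ == "__main__":
    print(map_physiology_to_music(heart_rate_bpm=85, eeg_alpha_power=0.7))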

Music appears to deeply affect emotional, cerebral and physiological states, and its effect on stress and anxiety has been established using a variety of self-report, physiological, and observational means. Yet, the relationship between specific musical parameters and emotional responses is still not clear. One issue is that precise, replicable and independent control of musical parameters is often difficult to obtain from human performers. However, it is now possible to generate expressive musical material such as pitch, velocity, articulation, tempo, scale, mode, harmony and timbre using synthetic music systems. In this study, we use a synthetic music system called the SMuSe to generate a set of well-controlled musical stimuli, and analyze the influence of musical structure, performance variations and timbre on emotional responses. The subjective emotional responses we obtained from a group of 13 participants on the scales of valence, arousal and dominance were similar to those of previous studies that used human-produced musical excerpts. This validates the use of a synthetic music system to evoke and study emotional responses in a controlled manner.
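A fully crossed design over structure, performance and timbre is one natural way to obtain the "well-controlled stimuli" the abstract mentions. The sketch below enumerates such a factorial grid; the concrete factor levels are assumptions for illustration, not the study's actual conditions.

# Illustrative factorial stimulus design (structure x performance x timbre).
# The levels below are assumed, not taken from the paper.
from itertools import product

structures = ["ascending_phrase", "descending_phrase"]      # assumed structure levels
performances = [{"tempo": 80, "articulation": "legato"},    # assumed performance levels
                {"tempo": 140, "articulation": "staccato"}]
timbres = ["sine", "sawtooth", "marimba"]                   # assumed timbre levels

stimuli = [{"structure": s, **p, "timbre": t}
           for s, p, t in product(structures, performances, timbres)]
print(f"{len(stimuli)} controlled stimuli")  # 2 x 2 x 3 = 12 conditions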
The eXperience Induction Machine: A New Paradigm for Mixed-Reality Interaction Design and Psychological Experimentation
Springer eBooks, Oct 15, 2009

Nearest-Neighbor Automatic Sound Classification with a WordNet Taxonomy
Journal of Intelligent Information Systems, 2005
Sound engineers need to access vast collections of sound effects for their film and video productions. Sound effects providers rely on text-retrieval techniques to offer their collections. Currently, annotation of audio content is done manually, which is an arduous task. Automatic annotation methods, normally fine-tuned to reduced domains such as musical instruments or small sound-effects taxonomies, are not mature enough to label any possible sound in great detail. A general sound recognition tool would require, first, a taxonomy that represents the world and, second, thousands of classifiers, each specialized in distinguishing small details. We report experimental results on a general sound annotator. To tackle the taxonomy-definition problem we use WordNet, a semantic network that organizes real-world knowledge. To overcome the need for a huge number of classifiers to distinguish many different sound classes, we use a nearest-neighbor classifier with a database of isolated sounds unambiguously linked to WordNet concepts. A concept-prediction rate of 30% is achieved on a database of over 50,000 sounds and over 1,600 concepts.
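The core idea is compact: label an unknown sound with the WordNet concepts attached to its acoustically closest neighbor in a labeled database. A minimal sketch follows; the toy feature vectors and concept links are stand-ins, since the real system uses audio descriptors over roughly 50,000 sounds.

# Minimal nearest-neighbor concept annotation over a toy database.
# Feature vectors and WordNet links are hypothetical placeholders.
import math

DATABASE = [
    ([0.9, 0.1, 0.3], ["dog.n.01", "bark.n.04"]),
    ([0.2, 0.8, 0.5], ["rain.n.01", "water.n.06"]),
    ([0.4, 0.4, 0.9], ["engine.n.01", "car.n.01"]),
]

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(features):
    """Return the WordNet concepts of the acoustically nearest database sound."""
    _, concepts = min(DATABASE, key=lambda entry: euclidean(entry[0], features))
    return concepts

print(classify([0.85, 0.15, 0.35]))  # -> ['dog.n.01', 'bark.n.04']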
The SMuSe: An Embodied Cognition Approach to Interactive Music Composition
International Computer Music Conference, 2012
Knowledge and Perceptual Sound Effects Asset Management
Effects of Sound Features on the Affective State of Dementia Patients
Towards Emotion-Driven Interactive Sound Design: Bridging the Gaps Between Affect, Physiology and Sound Generation
The Pictogram Room is a set of educational video games for children and adults with autistic spectrum disorder (ASD). Aspects like music and structured learning have been taken into account in the game designs, because many studies have indicated that such aspects improve learning outcomes among people with ASD. To define the educational goals of the project, specific difficulties in key developmental areas have been considered: body language, attention and imitation. There is already extensive knowledge on how to provide effective support to people with ASD through visual structure and music. Based on this knowledge, we have created a pedagogical proposal aimed at overcoming their difficulties while building on their personal strengths and taking advantage of new technologies.
Nearest-Neighbor Generic Sound Classification with a WordNet-Based Taxonomy

Situated aesthetics: art beyond the skin
Externalism considers the situatedness of the subject as a key ingredient in the construction of experience. In this respect, with the development of novel real-time, real-world expressive and creative technologies, the potential for externalist aesthetic experiences is enhanced. Most research in music perception and cognition has focused on the tonal concert music of Western Europe and given birth to formal information-processing models inspired by linguistics (Lerdahl and Jackendoff 1983, Narmour 1990, Meyer 1956). These models do not take into account the situated aspect of music, although recent developments in cognitive science and situated robotics have emphasized its fundamental role in the construction of representations in complex systems (Varela et al. 1991). Furthermore, although music is widely perceived as the "language of emotions" and appears to deeply affect emotional, cerebral and physiological states (Sacks 2008), emotional reactions to music are in fact rarely included as a component of music modeling. With the advent of new interactive and sensing technologies, computer-based music systems have evolved from sequencers to algorithmic composers to complex interactive systems that are aware of their environment and can automatically generate music. Consequently, the frontiers between composers, computers and autonomous creative systems have become more and more blurred, and the concepts of musical composition and creativity are being put into a new perspective. The use of sensate synthetic interactive music systems allows for the direct exploration of a situated approach to music composition. Inspired by evidence from situated robotics and neuroscience, we believe that in order to improve our understanding of compositional processes and to foster the expressivity and creativity of musical machines, it is important to take into consideration the principles of parallelism, emergence, embodiment and emotional feedback. We provide an in-depth description of the evolution of interactive music systems, and propose a novel situated and interactive approach to music composition. This approach is illustrated by a sensate interactive music system called the SMuSe (Situated Music Server).

This paper presents a new system that allows for intuitive control of an additive sound synthesis model through perceptually relevant high-level sonic features. We suggest a general framework for the extraction, abstraction, reproduction and transformation of timbral characteristics of a sound analyzed from recordings. We propose a method to train, tune and evaluate our system in an automatic, consistent and reproducible fashion, and show that this system yields various original audio and musical applications.
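To make the idea of high-level perceptual control concrete, here is a hedged sketch in which a single "brightness" parameter tilts the amplitude rolloff of the partials of an additive tone. This illustrates the general principle only; the paper's system is trained from analyzed recordings rather than using this fixed mapping.

# Additive synthesis with one high-level perceptual control ("brightness").
# The brightness-to-rolloff mapping is an illustrative assumption.
import math

def additive_tone(freq=220.0, brightness=0.5, n_partials=16,
                  sr=44100, duration=0.5):
    """Sum of harmonic sinusoids; higher brightness -> slower amplitude rolloff."""
    rolloff = 2.5 - 2.0 * brightness  # assumed: 0 -> steep (dull), 1 -> shallow (bright)
    n = int(sr * duration)
    samples = []
    for i in range(n):
        t = i / sr
        s = sum(math.sin(2 * math.pi * freq * k * t) / (k ** rolloff)
                for k in range(1, n_partials + 1))
        samples.append(s)
    peak = max(abs(s) for s in samples)
    return [s / peak for s in samples]  # normalize to [-1, 1]

tone = additive_tone(brightness=0.9)    # a brighter variant of the same tone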
Until recently, the sonification of Virtual Environments had often been reduced to its simplest expression. Too often, soundscapes and background music are predetermined, repetitive and somewhat predictable. Yet there is room for more complex and interesting sonification schemes that can improve the sensation of presence in a Virtual Environment. In this paper we propose VR-RoBoser, a system that automatically generates original background music in real time. As a test case, we present the application of VR-RoBoser to a dynamic avatar that explores its environment. We show that the musical events are directly and continuously generated and influenced by the behavior of the avatar in three-dimensional virtual space, yielding a context-dependent sonification.
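As an illustration of how musical events can follow an avatar continuously, the sketch below derives pitch, tempo and stereo position from an avatar's 3-D position and velocity. The specific mapping (height to pitch, speed to tempo, lateral position to pan) is an assumption for illustration, not VR-RoBoser's actual scheme.

# Hypothetical avatar-to-music mapping in the spirit of VR-RoBoser.

def sonify_avatar(position, velocity):
    """Derive musical parameters from an avatar's 3-D position and velocity."""
    x, y, z = position
    speed = sum(v * v for v in velocity) ** 0.5
    return {
        "pitch": 48 + int(max(0.0, min(1.0, y / 10.0)) * 36),  # MIDI note from height
        "tempo_bpm": 60 + min(120.0, speed * 20.0),            # faster movement -> faster music
        "pan": max(-1.0, min(1.0, x / 10.0)),                  # stereo position from x
    }

print(sonify_avatar(position=(2.0, 5.0, -1.0), velocity=(1.0, 0.0, 0.5)))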
The human aural system is arguably one of the most refined senses we possess. It is sensitive to highly complex stimuli such as conversations or musical pieces. Be it a speaking voice or a band playing live, we can easily perceive relaxed or agitated states in an auditory stream. In turn, our own state of agitation can now be detected via electroencephalography technologies. In this paper we explore both ideas in the form of a framework for the conscious learning of relaxation through sonic feedback. After presenting the general paradigm of neurofeedback, we describe a set of tools to analyze electroencephalogram (EEG) data in real time, and we introduce a carefully designed, perceptually grounded interactive music feedback system that helps the listener keep track of and modulate her agitation state as measured by EEG.
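A common convention in such neurofeedback loops is to read relative alpha-band power as relaxation; the sketch below estimates a relaxation index from EEG band powers via an FFT. The alpha/beta ratio used here is an assumption borrowed from general neurofeedback practice, not necessarily the paper's exact measure.

# Relaxation index from EEG band power (assumed alpha-vs-beta convention).
import numpy as np

def band_power(signal, sr, lo, hi):
    """Mean power of an EEG signal in the [lo, hi] Hz band, via FFT."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    mask = (freqs >= lo) & (freqs < hi)
    return spectrum[mask].mean()

def relaxation_index(eeg, sr=256):
    """Higher alpha power relative to beta is read as a more relaxed state."""
    alpha = band_power(eeg, sr, 8.0, 12.0)
    beta = band_power(eeg, sr, 13.0, 30.0)
    return alpha / (alpha + beta)

# Toy signal: a dominant 10 Hz (alpha) oscillation plus noise.
sr = 256
t = np.arange(0, 4, 1 / sr)
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)
print(f"relaxation index: {relaxation_index(eeg, sr):.2f}")  # close to 1.0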

Recent investigations aiming to identify the most influential parameters of graphical representations on human emotion have presented mixed results. In this study, we manipulated four emotionally relevant geometric and kinematic characteristics of non-symbolic bidimensional shapes and animations, and evaluated their specific influence on the affective state of human observers. The controlled modification of basic geometric and kinematic features of such shapes (i.e., angles, curvature, symmetry and motion) led to the generation of a variety of forms and animations that elicited significantly different self-reported affective states on the axes of valence and arousal. Curved shapes evoked more positive and less arousing emotional states than edgy shapes, while figures translating slowly were perceived as less arousing and more positive than those translating fast. In addition, we found significant interactions between the angle and curvature factors in both the valence and the arousal dimensions.
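The study hinges on parametric control of shape geometry; the sketch below shows one way such control could look, blending the same closed contour between an edgy polygon and a smooth, rounded blob via a single curvature parameter. The parameterization is an illustrative assumption, not the study's stimulus-generation code.

# Parametric contour: curvature=0 gives sharp corners, curvature=1 a smooth blob.
import math

def contour(n_points=8, curvature=0.0, radius=1.0, samples_per_edge=20):
    """Closed 2-D shape interpolating between a polygon and a circle."""
    corners = [(radius * math.cos(2 * math.pi * k / n_points),
                radius * math.sin(2 * math.pi * k / n_points))
               for k in range(n_points)]
    points = []
    for k in range(n_points):
        (x0, y0), (x1, y1) = corners[k], corners[(k + 1) % n_points]
        for s in range(samples_per_edge):
            u = s / samples_per_edge
            lx, ly = x0 + u * (x1 - x0), y0 + u * (y1 - y0)  # straight polygon edge
            angle = 2 * math.pi * (k + u) / n_points
            cx, cy = radius * math.cos(angle), radius * math.sin(angle)  # circular arc
            points.append(((1 - curvature) * lx + curvature * cx,
                           (1 - curvature) * ly + curvature * cy))
    return points

edgy = contour(curvature=0.0)    # sharp-angled stimulus
curved = contour(curvature=1.0)  # smooth, rounded stimulus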

It is generally admitted that music is a powerful carrier of emotions [4, 21], and that audition can play an important role in enhancing the sensation of presence in Virtual Environments [5, 22]. In mixed-reality environments and interactive multimedia systems such as Massively Multiplayer Online Role-Playing Games (MMORPGs), improving the user's perception of immersion is crucial. Nonetheless, the sonification of those environments is often reduced to its simplest expression, namely a set of prerecorded soundtracks. Background music often relies on repetitive, predetermined and somewhat predictable musical material. Hence, there is a need for a sonification scheme that can generate context-sensitive, adaptive, rich and consistent music in real time. In this paper we introduce a framework for the sonification of the spatial behavior of multiple human and synthetic characters in a Mixed-Reality environment. Previously, we have used RoBoser [1] to sonify different interactive installations.
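One way to sonify the spatial behavior of several characters at once is to give each agent a musical voice whose parameters track its position relative to the others. The sketch below assigns pitch from height and note density from proximity; this proximity-to-density mapping is an illustrative assumption, not the framework's actual scheme.

# Hypothetical multi-agent spatial sonification: one voice per character.
import math

def sonify_agents(agents):
    """agents: dict of name -> (x, y) position. Returns one voice per agent."""
    voices = {}
    names = list(agents)
    for name in names:
        x, y = agents[name]
        others = [agents[n] for n in names if n != name]
        # Mean distance to the other agents: crowding modulates note density.
        mean_dist = (sum(math.dist(agents[name], o) for o in others) / len(others)
                     if others else 0.0)
        voices[name] = {
            "pitch": 36 + int(max(0.0, min(1.0, y / 20.0)) * 48),  # height -> MIDI pitch
            "density": 1.0 / (1.0 + mean_dist),                    # closer agents -> denser texture
        }
    return voices

print(sonify_agents({"human": (3.0, 10.0), "synthetic": (4.5, 2.0)}))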