Papers by Nadia Magnenat-Thalmann
User centric media in the future internet: trends and challenges
Proceedings of the 3rd International Conference on Digital Interactive Media in Entertainment and Arts, Jan 1, 2008
Abstract: The evolution of the Internet is being led by two main threads: one driven by industry through the evolution of networking infrastructures, and the other driven by producers of content, whether professional or non-professional. Moreover, due to ...

International Journal of Computer Assisted Radiology and Surgery, 2015
Thinning of cartilage is a common manifestation of osteoarthritis. This study addresses the need to measure focal femoral cartilage thickness at the weight-bearing regions of the knee by developing a reproducible, automatic method from MR images. Methods: 3D models derived from semi-automatic MR image segmentations were used in this study. Two different methods were examined for identifying the mechanical loading of the knee articulation. The first was based on a generic definition of the weight-bearing regions, derived from gait characteristics and cadaver studies. The second used a physically based simulation to identify the patient-specific stress distribution of the femoral cartilage, taking into account the forces and movements of the knee. For this purpose, four different scenarios were defined in our 3D finite element (FE) simulations. The radial method was used to calculate the cartilage thickness in stress-based regions of interest, and a study was performed to validate the accuracy and suitability of the radial thickness measurements.
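The radial method lends itself to a compact illustration. Below is a minimal sketch, assuming the inner and outer cartilage surfaces are available as point clouds around a common center; the function name, the nearest-sample strategy, and the 1 mm ray tolerance are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def radial_thickness(inner_pts, outer_pts, center, directions):
    """For each unit direction, estimate local cartilage thickness as the
    difference between the outer and inner surface radii along the ray
    cast from `center`."""
    thickness = []
    for d in directions:
        def radius_along(points):
            v = points - center
            r = v @ d                          # signed distance along the ray
            perp = np.linalg.norm(v - np.outer(r, d), axis=1)
            hit = (r > 0) & (perp < 1.0)       # 1 mm tolerance (assumption)
            return r[hit].min() if hit.any() else np.nan
        thickness.append(radius_along(outer_pts) - radius_along(inner_pts))
    return np.asarray(thickness)
```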

The Visual Computer, 2008
In this paper, we introduce a European research project, Interactive Media with Personal Networked Devices (INTERMEDIA), in which we seek to progress beyond home- and device-centric convergence toward truly user-centric convergence of multimedia. Our vision is to make the user the multimedia center: the user as the point at which multimedia services and the means for interacting with them converge. This paper proposes the main research goals in providing users with a personalized interface and content independent of physical networked devices, space, and time. As a case study, we describe an indoor, mobile mixed reality guide system: Chloe@University. With a see-through head-mounted display (HMD) connected to a small wearable computing device, Chloe@University provides an efficient way to guide someone in a building: a 3D virtual character in front of the user guides him/her to the required destination.

lrec-conf.org
The present paper reports on a recent effort that resulted in the establishment of a unique multimodal affect database, referred to as the PlayMancer database. This database was created in support of the research and development activities taking place within the PlayMancer project, which aim at the development of a serious-game environment in support of the treatment of patients with behavioural and addictive disorders, such as eating disorders and gambling addictions. Specifically, for the purpose of data collection, we designed and implemented a pilot trial with healthy test subjects. Speech, video, and bio-signals (pulse rate, SpO2) were captured synchronously during the interaction of healthy people with a number of video games. The collected data were annotated by the test subjects themselves (self-annotation), targeting proper interpretation of the underlying affective states. The broad design of the PlayMancer database allows its use for the needs of research on multimodal affect/emotion recognition and multimodal human-computer interaction in serious-game environments.
This paper reports the preliminary results of the architectural design of the HAPTEX system that will be developed in the framework of the IST FET (Future and Emerging Technologies) initiative. The aim of the EU-funded RTD project is to realize a virtual reality system able to render, both visually and haptically, the behavior of fabrics. The integration of force-feedback devices ...

Computers & Graphics, 1994
An interactive tool is proposed for the visualization, editing, and manipulation of multiple track sequences. Multiple track sequences can be associated with an articulated figure and may retain motion issued from different motion generators, such as walking, inverse kinematics, and keyframing, within a unified framework. The TRACK system provides a large set of tools for track-space manipulations and goal-oriented corrections. This approach allows an incremental refinement design combining information and constraints from both the track space (usually joints) and the Cartesian space. We have dedicated this system to the design and evaluation of human motions for the purpose of animation. For this reason, we also ensure real-time display of the 3D figure motion. The interface design and the interaction device integration are realized with the Fifth Dimension Toolkit.
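As a rough illustration of the multiple-track idea, the sketch below blends two per-joint angle tracks sampled on a common time base; the linear-weight scheme and all names are assumptions for illustration, not the TRACK system's actual correction tools.

```python
import numpy as np

def blend_tracks(tracks, weights):
    """tracks: list of (T,) joint-angle arrays on a common time base
    (e.g. one from walking, one from keyframing); weights: per-track
    scalars summing to 1. Returns the blended joint-angle track."""
    stacked = np.stack(tracks)            # (n_tracks, T)
    w = np.asarray(weights)[:, None]      # (n_tracks, 1)
    return (w * stacked).sum(axis=0)

# Example: overlay a keyframed correction on a generated walking track.
walk = np.sin(np.linspace(0, 2 * np.pi, 100))   # stand-in walking motion
keyed = np.zeros(100)                           # stand-in correction track
blended = blend_tracks([walk, keyed], [0.7, 0.3])
```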

Computer Graphics Forum, 1995
We describe the HUMANOID environment dedicated to human modeling and animation for general multimedia, VR, and CAD applications integrating virtual humans. We present the design of the system and the integration of the various features: generic modeling of a large class of entities with the BODY data structure (sketched below), realistic skin deformation for body and hands, facial animation, collision detection, integrated motion control, and parallelization of computation-intensive tasks. The environment provides:

* a flexible design and management of multiple humanoid entities
* skin deformation of a human body, including the hands and the face
* a multi-layer facial animation module
* collision detection and correction between multiple humanoid entities
* several motion generators and their blending: keyframing, inverse kinematics, dynamics, walking, and grasping

Keywords: articulated figure modeling, animated deformation, collision detection, parallelization.
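As a toy illustration of a generic articulated-entity structure in the spirit of BODY, here is a minimal joint-hierarchy sketch; all names and fields are hypothetical, not the HUMANOID system's actual data structure.

```python
from dataclasses import dataclass, field

@dataclass
class Joint:
    name: str
    angle: float = 0.0                  # current joint value (radians)
    children: list = field(default_factory=list)

    def traverse(self):
        """Depth-first walk, as a skeleton update pass would perform."""
        yield self
        for child in self.children:
            yield from child.traverse()

# Example: a two-joint arm chain.
shoulder = Joint("shoulder", children=[Joint("elbow")])
print([j.name for j in shoulder.traverse()])    # ['shoulder', 'elbow']
```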
The Visual Computer, 1991
This paper presents a human walking model built from experimental data covering a wide range of normalized velocities. The model is structured in two levels. At the first level, global spatial and temporal characteristics (normalized step length and step duration) are generated. At the second level, a set of parameterized trajectories produces both the position of the body in space and the internal body configuration. This is performed for a standard structure and an average configuration of the human body.
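To make the two-level structure concrete, here is a minimal sketch of the first level, mapping a normalized velocity to global gait parameters; the linear relation and its coefficients are placeholders, not the paper's experimentally fitted values.

```python
def gait_parameters(v_norm):
    """Level 1 of the model: from velocity normalized by leg length
    (in 1/s) to (normalized step length, step duration in s). The
    linear relation and coefficients are placeholders."""
    step_length = 0.5 + 0.4 * v_norm            # placeholder relation
    step_duration = step_length / max(v_norm, 1e-6)
    return step_length, step_duration

# Level 2 would then evaluate parameterized joint trajectories over
# each step of the computed duration.
print(gait_parameters(1.2))
```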
A system for the parallel integrated motion of multiple deformable human characters with collision detection
Computer Graphics Forum, 1995
Human Free-Walking Model for a Real-Time Interactive Design of Gaits
New Trends in Animation and Visualization

Proc. Computer …, 1998
In this paper, we propose a multi-sensor-based method for automatic grasping motion control of multiple synthetic actors. Although it is described and implemented in our specific model, the method is general and applicable to other models. A heuristic method decides among different grasping strategies based on object geometry, hand geometry, and observation of real grasping. Inverse kinematics derives the final posture of the arms in order to bring the hands around the object. Multi-sensor object detection determines the finger contact points on the object, along with their position and orientation. Then, a group of polynomials derived from the Euler-Lagrange equation is used to interpolate between the initial and final arm postures, resulting in more realistic real-time motion than linear interpolation. We also present 3D interactive grasping examples involving multiple synthetic actors.
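One classical polynomial family consistent with this description is the minimum-jerk quintic, which solves the Euler-Lagrange equation of a squared-jerk cost under zero boundary velocity and acceleration; whether the paper uses exactly this family is an assumption, so treat the sketch below as illustrative.

```python
import numpy as np

def quintic_interp(q0, q1, t, T):
    """Minimum-jerk interpolation of joint angles from q0 to q1 over
    duration T, with zero boundary velocity and acceleration."""
    s = np.clip(t / T, 0.0, 1.0)
    blend = 10 * s**3 - 15 * s**4 + 6 * s**5    # minimum-jerk profile
    return q0 + (q1 - q0) * blend

# Example: halfway through the reach, joints are 50% of the way there.
q_start, q_goal = np.zeros(7), np.full(7, 0.8)  # 7-DOF arm, radians
print(quintic_interp(q_start, q_goal, t=0.5, T=1.0))
```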

While the animation and rendering techniques used in the domain of textile simulation have evolved dramatically over the last two decades, the ability to manipulate and modify virtual textiles intuitively, using dedicated ergonomic devices, has been largely neglected. The HAPTEX project combines research in the fields of textile simulation and haptic interfaces. HAPTEX aims to provide a virtual reality system allowing multipoint haptic interaction with a piece of virtual fabric simulated in real time. The fundamental research undertaken by the project ranges from the physics-based simulation of textiles to the design and development of novel tactile and force-feedback rendering strategies and interfaces.
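For intuition about force-feedback rendering, the sketch below computes a simple penalty-based contact force, one common strategy for haptic rendering of deformable surfaces; HAPTEX's actual multipoint tactile and force-feedback pipeline is more elaborate, and the stiffness value here is a placeholder.

```python
import numpy as np

def contact_force(finger_pos, surface_point, surface_normal, k=300.0):
    """Spring-like reaction force (N) if the finger penetrates the
    fabric surface, else zero. k is a placeholder stiffness in N/m."""
    penetration = np.dot(surface_point - finger_pos, surface_normal)
    if penetration <= 0.0:
        return np.zeros(3)
    return k * penetration * surface_normal
```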
Coordinating the Generation of Signs in Multiple Modalities in an Affective Agent
Emotion-Oriented Systems, 2011
In order to be believable, embodied conversational agents (ECAs) must show expression of emotions in a consistent and natural-looking way across modalities. The ECA has to be able to display coordinated signs of emotion during realistic emotional behaviour. Such a capability requires one to study and represent emotions and the coordination of modalities during non-basic, realistic human behaviour; to define languages for representing such behaviours to be displayed by the ECA; and to have access to mono-modal representations ...
Realistic Emotional Gaze and Head Behavior Generation Based on Arousal and Dominance Factors
Lecture Notes in Computer Science, 2010
Current state-of-the-art virtual characters fall far short of characters produced by skilled animators in terms of behavioral adequacy, due in large part to the lack of emotional expressivity in physical behaviors. Our approach is to develop emotionally expressive gaze and head-movement models that are driven parametrically, in real time, by the instantaneous mood of an embodied conversational agent (ECA). A user study was conducted to test the perceived emotional expressivity of the facial animation sequences generated by these models. The results showed that changes in gaze and head behavior combined can successfully express changes in the arousal and/or dominance level of the ECA.
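A minimal sketch of how such a parametric drive could look, assuming the ECA's mood is summarized by arousal and dominance values in [-1, 1]; the specific gains and parameter names are hypothetical, not the model validated in the user study.

```python
def gaze_head_params(arousal, dominance):
    """arousal, dominance in [-1, 1]; returns parameters an animation
    layer could consume each frame. All gains are hypothetical."""
    return {
        "head_speed":     0.5 + 0.5 * arousal,        # faster when aroused
        "head_amplitude": 0.3 + 0.4 * arousal,
        "gaze_aversion":  max(0.0, -0.6 * dominance), # submissive agents avert gaze
        "mutual_gaze":    0.4 + 0.5 * dominance,
    }
```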
A model for personality and emotion simulation
... The mood is updated by a function Ψ_m(p, ω_t, σ_t, a) that calculates the mood change based on the personality, the emotional-state history, the mood history, and the emotion influence. The mood is internally updated using a function Ω_m(p, ω_t, σ_t). ...
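Only the signatures Ψ_m(p, ω_t, σ_t, a) and Ω_m(p, ω_t, σ_t) are given in the excerpt; the sketch below fills in decay-plus-influence internals purely as an assumption, to show how such update functions might compose.

```python
def psi_m(p, omega_t, sigma_t, a):
    """Mood change Ψ_m from personality p, emotional-state history
    omega_t, mood history sigma_t, and emotion influence a.
    The weights are assumptions, not the paper's."""
    return 0.1 * a + 0.05 * p * omega_t[-1]

def omega_m(p, omega_t, sigma_t):
    """Internal mood update Ω_m: decay the latest mood toward a
    personality-dependent baseline (an assumed form)."""
    baseline = 0.2 * p
    return sigma_t[-1] + 0.1 * (baseline - sigma_t[-1])
```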

2006 International Conference on Cyberworlds, 2006
In this paper, we present a simple and robust Mixed Reality (MR) framework that allows real-time interaction with virtual humans in real and virtual environments under consistent illumination. We look at three crucial parts of this system: interaction, animation, and global illumination of virtual humans, for an integrated and enhanced presence. The interaction system comprises a dialogue module interfaced with a speech recognition and synthesis system. Next to speech output, the dialogue system generates face and body motions, which are in turn managed by the virtual-human animation layer. Our fast animation engine can handle various types of motion, such as normal keyframe animations or motions generated on the fly by adapting previously recorded clips. All these different motions are generated and blended online, resulting in flexible and realistic animation. Our robust rendering method operates in accordance with the animation layer and is based on a Precomputed Radiance Transfer (PRT) illumination model extended for virtual humans, resulting in a realistic display of such interactive virtual characters in mixed reality environments. Finally, we present a scenario that illustrates the interplay and application of our methods under a unified framework for presence and interaction in MR.
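At its core, PRT shading reduces to a dot product between precomputed per-vertex transfer coefficients and the spherical-harmonic projection of the environment light; the sketch below shows that standard static-geometry core, while the paper's extension to animated virtual humans goes beyond it.

```python
import numpy as np

def prt_shade(transfer, light_sh):
    """transfer: (n_vertices, n_coeffs) precomputed transfer vectors;
    light_sh: (n_coeffs,) spherical-harmonic coefficients of the
    environment light. Returns per-vertex outgoing radiance."""
    return transfer @ light_sh

# Example with 4 vertices and 9 SH coefficients (3 bands).
radiance = prt_shade(np.random.rand(4, 9), np.random.rand(9))
```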