abstract_bell_prize.pdf
Abstract
The following project concerns a new way human beings can communicate feelings without the filter of words. Our idea is based on information theory and data mining approaches. Thinking of a person as a point in a multidimensional space, we would like to develop a program consisting of two main steps: the first deals with the choice of the dimensions best able to represent a person; the second concerns a classifier that can handle the multidimensional space previously chosen and give information about the current emotional state of a person. The result will be a device able to interpret in real time, after appropriate training, the emotions experienced by a person who receives an external input, measuring quantities related to spontaneous and unconscious brain and body reactions to the stimulus, such as heart rate, arterial pressure, facial expression, and so on. The aim is not only to give people a better and deeper understanding of themselves and their loved ones, but also a completely new way to communicate which only requires the act of "feeling" and the willingness to share. Moreover, if users' data are collected in a large anonymous database, the device could help detect personality disorders at an early stage, thanks to the predictive power of machine learning techniques.
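As a concrete illustration of the two-step design described in the abstract, the following is a minimal sketch in Python. It assumes scikit-learn as the toolkit and uses invented placeholder data; neither the library, the feature set, nor the classifier is specified by the abstract.

    # Illustrative sketch of the two-step pipeline: dimension selection,
    # then classification of the reduced space into emotional states.
    # All data and feature counts here are hypothetical placeholders.
    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import Pipeline

    # Each row is one "person as a point in a multidimensional space":
    # e.g. heart rate, arterial pressure, facial-expression features, ...
    X = np.random.rand(200, 12)            # 200 training samples, 12 raw dimensions
    y = np.random.randint(0, 4, size=200)  # 4 hypothetical emotion labels

    pipeline = Pipeline([
        # Step 1: choose the dimensions best able to represent a person.
        ("select", SelectKBest(score_func=f_classif, k=5)),
        # Step 2: classify the reduced space into emotional states.
        ("classify", KNeighborsClassifier(n_neighbors=5)),
    ])
    pipeline.fit(X, y)

    # After training, a new physiological reading can be classified in real time.
    new_reading = np.random.rand(1, 12)
    print(pipeline.predict(new_reading))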
Related papers
Information …, 2010
This study aims to predict different affective states via physiological measures with three types of computational models. An experiment was designed to elicit affective states with standardized affective pictures while multiple physiological signals were measured. Three data mining methods (i.e., decision rules, k-nearest neighbours, and decomposition tree) based on the rough set technique were then applied to construct prediction models from the extracted physiological features. We created three types of prediction models, i.e., gender-specific (male vs. female), culture-specific (Chinese vs. Indian vs. Western), and general models (participants with different genders and cultures as samples), and made direct comparisons among them. The best average prediction accuracies in terms of the F1 measure (the harmonic mean of precision and recall) were 60.2%, 64.9%, and 63.5% for the general models with 14, 21, and 42 samples, respectively; 78.0% for the female models; 75.1% for the male models; 72.0% for the Chinese models; 73.0% for the Indian models; and 76.5% for the Western models. These results suggest that the specific models performed better than the general models.
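The group-specific vs. general comparison above can be sketched as follows. Plain k-nearest neighbours is substituted here for the paper's rough-set methods (an acknowledged substitution), and the data and group labels are hypothetical.

    # Minimal sketch: train a general model and gender-specific models,
    # then compare macro-averaged F1 (harmonic mean of precision and recall).
    import numpy as np
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    X = np.random.rand(300, 10)            # physiological features (hypothetical)
    y = np.random.randint(0, 3, size=300)  # affective-state labels
    gender = np.random.choice(["male", "female"], size=300)

    def evaluate(X, y):
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        pred = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr).predict(X_te)
        return f1_score(y_te, pred, average="macro")

    print("general model:", evaluate(X, y))
    for g in ("male", "female"):
        mask = gender == g
        print(f"{g}-specific model:", evaluate(X[mask], y[mask]))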
2005 IEEE International Conference on Multimedia and Expo
Little attention has been paid so far to physiological signals for emotion recognition compared to audiovisual emotion channels, such as facial expressions or speech. In this paper, we discuss the most important stages of a fully implemented emotion recognition system including data analysis and classification. For collecting physiological signals in different affective states, we used a music induction method which elicits natural emotional reactions from the subject. Four-channel biosensors are used to obtain electromyogram, electrocardiogram, skin conductivity and respiration changes. After calculating a sufficient amount of features from the raw signals, several feature selection/reduction methods are tested to extract a new feature set consisting of the most significant features for improving classification performance. Three well-known classifiers, linear discriminant function, k-nearest neighbour and multilayer perceptron, are then used to perform supervised classification.
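The last two stages of that pipeline, feature selection followed by supervised classification with the three named classifiers, can be sketched as below. This assumes scikit-learn and random placeholder features; it mirrors the stages described, not the authors' implementation.

    # Sketch: select the most significant features, then compare the three
    # classifiers named above (LDA, k-nearest neighbour, multilayer perceptron).
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.neural_network import MLPClassifier

    X = np.random.rand(240, 32)            # placeholder features from EMG/ECG/SC/respiration
    y = np.random.randint(0, 4, size=240)  # music-induced emotion labels

    # Feature selection: keep the most significant features.
    X_sel = SelectKBest(score_func=f_classif, k=10).fit_transform(X, y)

    classifiers = {
        "LDA": LinearDiscriminantAnalysis(),
        "kNN": KNeighborsClassifier(n_neighbors=5),
        "MLP": MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000),
    }
    for name, clf in classifiers.items():
        scores = cross_val_score(clf, X_sel, y, cv=5)
        print(f"{name}: mean accuracy {scores.mean():.2f}")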
This commentary unveils over a century of literature that touches the core of affective computing but appears to be unknown to its community. For affective computing to flourish, knowledge of human physiology, of concepts such as emotions and moods, and of methods and techniques from signal processing and machine learning needs to blend together. Two books that appeared approximately 50 years ago were selected, which originate from completely distinct branches of science but each mark a branch of research crucial to affective computing: i) Flanders Dunbar's "Emotions and Bodily Changes: A Survey of Literature on Psychosomatic Interrelationships 1910-1953" (1954) and ii) Satosi Watanabe's "Methodologies of Pattern Recognition" (1969). When this vast and rich body of relevant scientific literature is cherished and the lessons learned so long ago are embraced, affective computing can (finally) make its leap forward and will probably have a bright future.
2008 International Conference on Computational Intelligence for Modelling Control & Automation, 2008
One of the most important fields of affective computing is related to the hard problem of emotion recognition. At present, there are several approaches to the problem of automatic emotion recognition based on different methods, like Bayesian classifiers, Support Vector Machines, Linear Discriminant Analysis, Neural Networks or k-Nearest Neighbors, which classify emotions using several features obtained from facial expressions, body gestures, speech or different physiological signals. In this paper, we propose a Semantic Classifier as a new, simple and efficient approach to the problem of automatic emotion recognition. The implementation of the Semantic Classifier is based on the basic, natural principles used to decrease the complexity of problems found in n-dimensional spaces: discretization, structure identification and semantic optimization. The proposed classifier exhibits some self-organizing features and supports learning by repetition, generalization and specialization. It will be used to implement a distributed and robust system for emotion recognition.
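The Semantic Classifier itself is not public, so the sketch below only illustrates its first principle, discretization: mapping a continuous n-dimensional feature vector onto a coarse grid cell whose label can then be learned by repetition. The bin count, feature ranges, and labels are all assumptions.

    # Toy discretization step: continuous feature vectors are mapped to
    # integer grid cells; each cell accumulates labels seen during training.
    import numpy as np

    def discretize(x, low, high, bins=4):
        """Map a continuous feature vector to integer grid coordinates."""
        x = np.clip(x, low, high)
        cell = ((x - low) / (high - low + 1e-12) * bins).astype(int)
        return tuple(np.minimum(cell, bins - 1))

    memory = {}  # grid cell -> observed emotion labels (learning by repetition)

    def learn(x, label, low, high):
        memory.setdefault(discretize(x, low, high), []).append(label)

    def classify(x, low, high):
        votes = memory.get(discretize(x, low, high), [])
        return max(set(votes), key=votes.count) if votes else None

    low, high = np.zeros(3), np.ones(3)  # hypothetical 3-D feature ranges
    learn(np.array([0.1, 0.2, 0.9]), "joy", low, high)
    print(classify(np.array([0.15, 0.22, 0.88]), low, high))  # -> "joy"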
In: The National Seminar of Science …, 2004
A current goal of state-of-the-art computer science is to build systems that understand us. Affective computing is one such attempt: building an information system that can detect, classify, and respond to human emotion. Affective computing combines artificial intelligence and cognitive science, and has inspired researchers to build computer systems and robots similar to Commander Data in the Star Trek science-fiction series. This paper discusses the general architecture and applications of affective computing.
2007
This paper describes ongoing work towards building a multimodal computer system capable of sensing the affective state of a user. Two major problem areas exist in the affective communication research. Firstly, affective states are defined and described in an inconsistent way. Secondly, the type of training data commonly used gives an oversimplified picture of affective expression. Most studies ignore the dynamic, versatile and personalised nature of affective expression and the influence that social setting, context and culture have on its rules of display. We present a novel approach to affective sensing, using a generic model of affective communication and a set of ontologies to assist in the analysis of concepts and to enhance the recognition process. Whilst the scope of the ontology provides for a full range of multimodal sensing, this paper focuses on spoken language and facial expressions as examples.
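How an ontology-style structure could link multimodal cues to affective concepts can be illustrated with a toy sketch. All concepts, cue names, and links below are invented examples, not the paper's ontology.

    # Toy "ontology": affective concepts linked to observable cues per
    # modality; recognition picks the concept with the largest cue overlap.
    from dataclasses import dataclass, field

    @dataclass
    class AffectConcept:
        name: str
        cues: dict = field(default_factory=dict)  # modality -> set of cues

    ontology = [
        AffectConcept("joy", {"speech": {"rising pitch"}, "face": {"smile"}}),
        AffectConcept("anger", {"speech": {"loudness"}, "face": {"frown"}}),
    ]

    def recognise(observed: dict):
        """Return the concept whose cues overlap most with the observations."""
        def overlap(c):
            return sum(len(c.cues.get(m, set()) & cues) for m, cues in observed.items())
        best = max(ontology, key=overlap)
        return best.name if overlap(best) > 0 else None

    print(recognise({"speech": {"rising pitch"}, "face": {"smile"}}))  # -> "joy"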
2007
This study aims at developing methods for recognising the affective user state with physiological signals in near real time. A multimodal database has been collected in a simulated driving context. Relaxed and stressed states are elicited by giving the participant different tasks. The structured design of the experiment can be used to obtain a preliminary "ground truth"; a fine-grained manual annotation of the perceived stress level is currently being conducted. A data-driven, multi-resolution approach to feature extraction is taken. The classification module can deal with a dynamically varying number of input channels in the case of corrupted signals. For online, user-independent recognition of a relaxed or stressed state during the most clearly defined segments of the experiments, an accuracy of 88.8% has been obtained using six physiological signals. Current work focuses on reliable artefact detection, unsupervised user adaptation, and methods for evaluating the real-time properties of the classification.
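One way to tolerate a dynamically varying number of input channels, as that classification module must, is sketched below: one classifier per physiological channel, fusing only the channels currently flagged as uncorrupted. This fusion scheme is an assumption for illustration, not the paper's method.

    # Per-channel models let any subset of valid channels still vote.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    N_CHANNELS = 6
    X = np.random.rand(200, N_CHANNELS, 4)  # 4 features per channel (hypothetical)
    y = np.random.randint(0, 2, size=200)   # 0 = relaxed, 1 = stressed

    models = [LogisticRegression().fit(X[:, c, :], y) for c in range(N_CHANNELS)]

    def predict(sample, valid):
        """Average class probabilities over the channels flagged as valid."""
        probs = [models[c].predict_proba(sample[c].reshape(1, -1))[0]
                 for c in range(N_CHANNELS) if valid[c]]
        return int(np.mean(probs, axis=0).argmax())

    sample = np.random.rand(N_CHANNELS, 4)
    valid = [True, True, False, True, False, True]  # two channels corrupted
    print("stressed" if predict(sample, valid) else "relaxed")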
International Journal on Computational Science & Applications, 2014
Emotions are an unstoppable and uncontrollable aspect of the human mental state. Stressful situations can lead to various forms of suffering; one cannot always avoid such situations, but one can become aware of when the body experiences stress or another emotion. This is especially valuable for doctors whose patients are in no condition to speak: in that case, the patient's physiological parameters are measured to determine emotional status. While experiencing different emotions, physiological changes take place in the human body, such as variations in heart rate (ECG/HRV), skin conductance (GSR), breathing rate (BR), blood volume pulse (BVP), brain waves (EEG), temperature, and muscle tension; these are some of the metrics used to sense emotion. The objective of this paper is to design and develop a portable, cost-effective, and low-power embedded system that can predict different emotions using Naïve Bayes classifiers, which are probability models that incorporate class-conditional independence assumptions. The inputs to this system are various physiological signals extracted using different sensors. The portable microcontroller used in this embedded system is the MSP430F2013, which automatically monitors stress levels. This paper reports on the hardware and software instrumentation development and the signal processing approach used to detect the stress level of a subject. To check the device's performance, experiments were conducted in which 20 adults (ten women and ten men) completed different tests requiring a certain degree of effort, such as facing intense interviews in an office setting.
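The Naïve Bayes stage described above can be sketched as follows. It is shown in Python for clarity, although the paper targets an MSP430 microcontroller; the Gaussian class-conditional densities and feature names are assumptions.

    # Naive Bayes: P(class | x) is proportional to P(class) * prod_i P(x_i | class),
    # i.e. features are assumed conditionally independent given the class.
    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    # Hypothetical features: heart rate, GSR, breathing rate, temperature.
    X = np.random.rand(150, 4)
    y = np.random.randint(0, 3, size=150)  # e.g. calm / moderate / high stress

    model = GaussianNB().fit(X, y)
    reading = np.array([[0.7, 0.9, 0.6, 0.4]])
    print("predicted stress level:", model.predict(reading)[0])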
2014
Emotion recognition through computational modeling and analysis of physiological signals has been widely investigated in the last decade. Most of the proposed emotion recognition systems require relatively long time series of multivariate records and do not provide accurate real-time characterizations using short time series. To overcome these limitations, we propose a novel personalized probabilistic framework able to characterize the emotional state of a subject through the analysis of heartbeat dynamics exclusively. The study includes thirty subjects presented with a set of standardized images gathered from the International Affective Picture System, alternating levels of arousal and valence. Due to the intrinsic nonlinearity and nonstationarity of the RR interval series, a specific point-process model was devised for instantaneous identification, considering autoregressive nonlinearities up to the third order according to the Wiener-Volterra representation, thus tracking very fast stimulus-response changes. Features from the instantaneous spectrum and bispectrum, as well as the dominant Lyapunov exponent, were extracted and used as input features to a support vector machine for classification. Results, estimating emotions every 10 seconds, achieve an overall accuracy of 79.29% in recognizing four emotional states based on the circumplex model of affect, with 79.15% on the valence axis and 83.55% on the arousal axis.
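The final classification stage can be sketched as below: an SVM mapping heartbeat-derived features (random placeholders here standing in for the paper's spectral, bispectral, and Lyapunov features) onto the four circumplex quadrants, from which valence and arousal labels follow. The quadrant encoding is an assumption for illustration.

    # Sketch: SVM over features from one 10-second heartbeat window.
    import numpy as np
    from sklearn.svm import SVC

    X = np.random.rand(400, 8)                    # placeholder window features
    quadrant = np.random.randint(0, 4, size=400)  # circumplex quadrants 0..3

    clf = SVC(kernel="rbf").fit(X, quadrant)

    window = np.random.rand(1, 8)
    q = int(clf.predict(window)[0])
    # Assumed encoding: bit 0 = positive valence, bit 1 = high arousal.
    print("valence:", "positive" if q & 1 else "negative",
          "| arousal:", "high" if q & 2 else "low")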
