Towards a neural based theory of emotional dispositions
1999
Abstract
We present neural structures proposed to underpin various components of emotional dispositions. After describing simple models of the dynamics, we present an even simpler multi-layer perceptron model with three outputs. This model achieves a high recognition rate for emotional dispositions on a database of faces and admits an interpretation in terms of the underlying neural system. By extending the discrete classification approach to continuous variables in a three-dimensional state space, the recognition performance can be improved and the causes of classification errors can be studied.
Key-Words: facial analysis, emotion recognition
IMACS/IEEE CSCC'99 Proceedings, Pages: 5341-5346
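A minimal sketch may make the setup concrete: a multi-layer perceptron with three outputs whose activation vector can be read either as a discrete disposition label (arg-max) or as a point in a three-dimensional state space. The feature dimensionality, hidden-layer size, and softmax readout below are illustrative assumptions, not details taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, W1, b1, W2, b2):
    """One hidden layer with tanh units, three-way softmax output."""
    h = np.tanh(x @ W1 + b1)
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max())
    return e / e.sum()

n_features, n_hidden = 64, 16          # assumed sizes, not from the paper
W1 = rng.normal(0, 0.1, (n_features, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, 3));          b2 = np.zeros(3)

x = rng.normal(size=n_features)        # stand-in for extracted facial features
p = mlp_forward(x, W1, b1, W2, b2)
# Discrete reading: arg-max over the three disposition classes.
# Continuous reading: p itself is a point in a 3-dimensional state space,
# so near-boundary points expose likely classification errors.
print(p, p.argmax())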
Related papers
Journal of cognitive …, 2002
There are two competing theories of facial expression recognition. Some researchers have suggested that facial expression recognition is an example of categorical perception. In this view, expression categories are considered to be discrete entities with sharp boundaries between them, and discrimination of similar pairs of expressive faces is enhanced when the faces are near those boundaries. Other researchers, however, suggest that facial expression perception is more graded and that facial expressions are best thought of as points in a continuous, low-dimensional space, where, for example, "Surprise" expressions are between "Happiness" and "Fear" expressions, due to their perceived similarity. In this paper, we show that a simple yet biologically plausible neural network model, trained to classify facial expressions into six basic emotions, can fit data used to support both theories. Without any attempt to fit the model to the data, the model matches a variety of psychological data on categorization, similarity, reaction times, discrimination, and recognition difficulty, both qualitatively and quantitatively. We thus explain many of the seemingly complex psychological phenomena related to facial expression perception as natural consequences of the task's implementation in the brain.
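The dual reading described above can be illustrated with a hedged sketch: a six-way softmax classifier (the labels are assumed here to be the six basic emotions) whose graded output supports both the categorical interpretation (arg-max) and the continuous one (a point in a low-dimensional space, where probability mass shared across categories encodes perceived similarity). Nothing below reproduces the authors' trained network.

import numpy as np

EMOTIONS = ["happiness", "surprise", "fear", "sadness", "disgust", "anger"]

def readings(logits):
    p = np.exp(logits - logits.max())
    p /= p.sum()
    category = EMOTIONS[int(p.argmax())]   # categorical-perception view
    # Continuous view: p is a point on the 5-simplex; "Surprise" falling
    # between "Happiness" and "Fear" corresponds to mass shared across
    # those neighbouring categories.
    return category, p

category, p = readings(np.array([0.2, 2.1, 1.8, -0.5, -1.0, 0.1]))
print(category, np.round(p, 3))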
Pattern Recognition and Image Analysis, 2015
Emotion categorization has become an increasingly important area of research due to the rising number of intelligent systems. Artificial classifiers have demonstrated limited competency in classifying different emotions and have been widely used in recent years to facilitate the task of emotion categorization. At the same time, human classifiers need time and often find it hard to agree with one another on facial expression categorization tasks. Hence, this thesis considers how the combination of human and artificial classifiers can lead to improvements in emotion classification. Further, as emotions are communicative tools reflected not only on the face, this thesis also investigates how emotions are reflected in the body and how that can affect the decision-making process. Existing methods of emotion categorization from visual data using deep learning algorithms analyze the emotion by representing knowledge in a homogeneous way. As a result, a small change ...
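As a loose illustration of human-machine combination (an assumption on our part, not the thesis's actual method), one simple fusion scheme is a confidence-weighted vote over the two sources of labels:

from collections import Counter

def fuse(human_labels, machine_label, machine_confidence):
    """Majority vote over human labels, with the machine label counted
    in proportion to its confidence."""
    votes = Counter(human_labels)
    votes[machine_label] += machine_confidence * len(human_labels)
    return votes.most_common(1)[0][0]

# Two humans say "anger", one says "disgust"; a confident machine
# vote for "disgust" tips the decision.
print(fuse(["anger", "disgust", "anger"], "disgust", 0.9))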
This paper investigates dimensional emotion prediction and classification from naturalistic facial expressions. As with many pattern recognition problems, dimensional emotion classification requires generating multi-dimensional outputs. To date, classification for the valence and arousal dimensions has been done separately, assuming that they are independent. However, various psychological findings suggest that these dimensions are correlated. We therefore propose a novel, multi-layer hybrid framework for emotion classification that is able to model inter-dimensional correlations. Firstly, we derive a novel geometric feature set based on the (a)symmetric spatio-temporal characteristics of facial expressions. Subsequently, we use the proposed feature set to train a multi-layer hybrid framework composed of a temporal regression layer for predicting emotion dimensions, a graphical model layer for modeling valence-arousal correlations, and a final classification and fusion layer exploiting informative statistics extracted from the lower layers. This framework (i) introduces the Auto-Regressive Coupled HMM (ACHMM), a graphical model tailored not only to accommodate inter-dimensional correlations but also to exploit the internal dynamics of the actual observations, and (ii) replaces the commonly used Maximum Likelihood principle with a more robust final classification and fusion layer. Subject-independent experimental validation, performed on a naturalistic set of facial expressions, demonstrates the effectiveness of the derived feature set and the robustness and flexibility of the proposed framework.
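A structural sketch of the layered organisation may help; the components below are placeholders (neither the paper's geometric feature set nor the ACHMM is reproduced), showing only how a temporal regression layer, a correlation-modeling layer, and a final fusion layer compose.

import numpy as np

def temporal_regression(features):
    """Layer 1 stand-in: predict valence and arousal per frame."""
    valence = features @ np.full(features.shape[1], 0.01)   # toy weights
    arousal = features @ np.full(features.shape[1], -0.01)
    return np.stack([valence, arousal], axis=1)

def correlation_layer(va):
    """Layer 2 stand-in: smooth both dimensions jointly, mimicking the
    role of a graphical model coupling valence and arousal over time."""
    kernel = np.ones(5) / 5.0
    return np.stack([np.convolve(va[:, i], kernel, mode="same")
                     for i in range(2)], axis=1)

def fusion_layer(va):
    """Layer 3 stand-in: classify the sequence from summary statistics."""
    mean_v, mean_a = va.mean(axis=0)
    return ("positive" if mean_v > 0 else "negative",
            "active" if mean_a > 0 else "passive")

frames = np.random.default_rng(1).normal(size=(100, 20))  # toy feature frames
print(fusion_layer(correlation_layer(temporal_regression(frames))))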
2006
Automatic facial expression analysis is an important aspect of Human Machine Interaction, as the face is an important communicative medium. We use our face to signal interest, disagreement, intentions or mood through subtle facial motions and expressions. Work on automatic facial expression analysis can roughly be divided into the recognition of prototypic facial expressions, such as the six basic emotional states, and the recognition of atomic facial muscle actions (Action Units, AUs). Detection of AUs rather than emotions makes facial expression detection independent of culture-dependent interpretation, reduces the dimensionality of the problem and reduces the amount of training data required. Classic psychological studies suggest that humans consciously map AUs onto the basic emotion categories using a finite number of rules. On the other hand, recent studies suggest that humans recognize emotions unconsciously, with a process that is perhaps best modeled by artificial neural networks (ANNs). This paper investigates these two claims. A comparison is made between detecting emotions directly from features and a two-step approach in which we first detect AUs and use them as input to either a rule base or an ANN to recognize emotions. The results suggest that the two-step approach is possible with a small loss of accuracy, and that biologically inspired classification techniques outperform those that approach the classification problem from a logical perspective, suggesting that biologically inspired classifiers are more suitable for computer-based analysis of facial behaviour than logic-inspired methods.
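The two-step route can be illustrated with a toy rule base mapping detected AUs to emotion labels. The AU combinations below are common textbook simplifications (e.g., AU6 + AU12 for happiness), not the rule set used in the paper.

RULES = {
    "happiness": {6, 12},        # cheek raiser + lip corner puller
    "surprise":  {1, 2, 5, 26},  # brow raisers + upper lid raiser + jaw drop
    "sadness":   {1, 4, 15},     # inner brow raiser + brow lowerer + lip corner depressor
    "anger":     {4, 5, 7, 23},  # brow lowerer + lid raiser/tightener + lip tightener
}

def emotion_from_aus(detected_aus: set[int]) -> str:
    """Return the emotion whose rule overlaps most with the detected AUs;
    'neutral' if nothing matches."""
    best, score = "neutral", 0.0
    for emotion, aus in RULES.items():
        overlap = len(aus & detected_aus) / len(aus)
        if overlap > score:
            best, score = emotion, overlap
    return best

print(emotion_from_aus({6, 12}))        # -> happiness
print(emotion_from_aus({1, 2, 5, 26}))  # -> surprise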
Proceedings of the Twentieth Annual Cognitive …, 1998
The performance of a neural network that categorizes facial expressions is compared with human subjects over a set of experiments using interpolated imagery. The experiments for both the human subjects and the neural networks make use of interpolations of facial expressions from the Pictures of Facial Affect Database. The only difference between the materials used in the human subjects experiments and ours is the manner in which the interpolated images are constructed: image-quality morphs versus pixel averages. Nevertheless, the neural network accurately captures the categorical nature of the human responses, showing sharp transitions in the labeling of images along the interpolated sequence. Crucially for a demonstration of categorical perception, the model shows the highest discrimination between transition images at the crossover point. The model also captures the shape of the human subjects' reaction time curves along the sequences. Finally, the network matches human subjects' judgements of which expressions are being mixed in the images. The main failing of the model is that there are intrusions of "neutral" responses in some transitions, which are not seen in the human subjects. We attribute this difference to the difference between the pixel-average stimuli and the image-quality morph stimuli. These results show that a simple neural network classifier, with no access to the biological constraints presumably imposed on the human emotion processor, and whose only access to the surrounding culture is the category labels placed by American subjects on the facial expressions, can nevertheless simulate human responses to emotional expressions fairly well.
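The core analysis lends itself to a short sketch: generate a pixel-average morph sequence between two expression images, probe a classifier along it, and check that neighbour-to-neighbour discrimination peaks at the category boundary. The classify() function below is a dummy stand-in for any trained expression model; nothing here reproduces the authors' network.

import numpy as np

def interpolate(img_a, img_b, steps=9):
    """Pixel-average morph sequence from img_a to img_b."""
    alphas = np.linspace(0.0, 1.0, steps)
    return [(1 - a) * img_a + a * img_b for a in alphas]

def discrimination_curve(sequence, classify):
    """Distance between classifier outputs of neighbouring morphs; under
    categorical perception this should peak at the category boundary."""
    outputs = [classify(img) for img in sequence]
    return [float(np.linalg.norm(outputs[i + 1] - outputs[i]))
            for i in range(len(outputs) - 1)]

def classify(img):
    """Dummy 2-way classifier: a sharp sigmoid on mean intensity, giving
    the abrupt labeling transition characteristic of categorical responses."""
    p = 1.0 / (1.0 + np.exp(-10 * (float(img.mean()) - 0.5)))
    return np.array([p, 1.0 - p])

seq = interpolate(np.zeros((48, 48)), np.ones((48, 48)))
print(np.round(discrimination_curve(seq, classify), 3))  # peaks mid-sequence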
International Conference on Software and Data Technologies, 2007
Automated facial expression classification is very important in the design of new human-computer interaction modes and multimedia interactive services, and it arises as a difficult yet crucial pattern recognition problem. Recently, we have been building such a system, called NEU-FACES, which processes multiple camera images of computer users' faces with the ultimate goal of determining their affective state. Here, we present results from an empirical study of how humans classify facial expressions, the corresponding error rates, and the degree to which a face image can support emotion recognition from the perspective of a human observer. This study establishes related system design requirements, quantifies the statistical expression recognition performance of humans, and identifies quantitative facial features with high expression discrimination and classification power.
Psychological, Cognitive and Neuroscientific Perspectives, 2011
The objective of this chapter is to introduce the reader to recent advances in the computer processing of facial expressions and communicated affect. Human facial expressions have evolved in tandem with human face recognition abilities and show remarkable consistency across cultures. Consequently, it is rewarding to review the main traits of face recognition in humans, as well as consolidated research on the categorization of facial expressions. The bulk of the chapter focuses on the main trends in computer analysis of facial expressions, sketching out the main algorithms and exposing computational considerations for different settings. We then look at some recent applications and promising new projects to give the reader a realistic view of what to expect from this technology now and in the near future.
2010
The interpretation of user facial expressions is a very useful method for emotional sensing, and it constitutes an indispensable part of affective human-computer interface designs. Facial expressions are often classified into one of several basic emotion categories. This categorical approach is poorly suited to faces with blended emotions, as well as to measuring the intensity of a given emotion. This paper presents an effective system for facial emotion classification in which facial expressions are evaluated with a psychological two-dimensional continuous affective approach. At its output, an expressive face is represented as a point in a 2D space characterized by evaluation and activation factors. The proposed system starts with a classification method over discrete categories, based on a novel combination of classifiers, whose output is subsequently mapped into the 2D space so that intermediate emotional states can be considered. The system has been tested with an extensive universal database, and human assessment has been taken into consideration in the evaluation of the results.
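The category-to-dimension mapping can be sketched as a probability-weighted average of prototype coordinates in the evaluation-activation plane. The prototype placements below are illustrative circumplex-style values, not the paper's calibrated mapping.

import numpy as np

PROTOTYPES = {              # (evaluation, activation), both in [-1, 1]
    "happiness": ( 0.8,  0.5),
    "surprise":  ( 0.3,  0.9),
    "fear":      (-0.6,  0.7),
    "anger":     (-0.7,  0.6),
    "sadness":   (-0.7, -0.4),
    "disgust":   (-0.6,  0.2),
}

def to_affect_space(probabilities: dict[str, float]) -> np.ndarray:
    """Probability-weighted average of prototype coordinates, so blended
    or intermediate expressions land between the pure categories."""
    point = np.zeros(2)
    for emotion, p in probabilities.items():
        point += p * np.array(PROTOTYPES[emotion])
    return point

# A half-happy, half-surprised face falls between the two prototypes:
print(to_affect_space({"happiness": 0.5, "surprise": 0.5}))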
