There is much interest in the idea that musicians perform better than non-musicians in understanding speech in background noise. Research in this area has often used energetic maskers, which have their effects primarily at the auditory periphery. However, masking interference can also occur at more central auditory levels, known as informational masking. This experiment extends existing research by using multiple maskers that vary in their informational content and similarity to speech, in order to examine differences in perception of masked speech between trained musicians (n = 25) and non-musicians (n = 25). Although musicians outperformed non-musicians on a measure of frequency discrimination, they showed no advantage in perceiving masked speech. Further analysis revealed that non-verbal IQ, rather than musicianship, significantly predicted speech reception thresholds in noise. The results strongly suggest that the contribution of general cognitive abilities needs to be taken into account in any investigations of individual variability for perceiving speech in noise.
Humans can generate mental auditory images of voices or songs, sometimes perceiving them almost as vividly as perceptual experiences. The functional networks supporting auditory imagery have been described, but less is known about the systems associated with interindividual differences in auditory imagery. Combining voxel-based morphometry and fMRI, we examined the structural basis of interindividual differences in how auditory images are subjectively perceived, and explored associations between auditory imagery, sensory-based processing, and visual imagery. Vividness of auditory imagery correlated with gray matter volume in the supplementary motor area (SMA), parietal cortex, medial superior frontal gyrus, and middle frontal gyrus. An analysis of functional responses to different types of human vocalizations revealed that the SMA and parietal sites that predict imagery are also modulated by sound type. Using representational similarity analysis, we found that higher representational specificity of heard sounds in SMA predicts vividness of imagery, indicating a mechanistic link between sensory- and imagery-based processing in sensorimotor cortex. Vividness of imagery in the visual domain also correlated with SMA structure, and with auditory imagery scores. Altogether, these findings provide evidence for a signature of imagery in brain structure, and highlight a common role of perceptual–motor interactions for processing heard and internally generated auditory information.
It is well established that categorising the emotional content of facial expressions may differ depending on contextual information. Whether this malleability is observed in the auditory domain and in genuine emotion expressions is poorly explored. We examined the perception of authentic laughter and crying in the context of happy, neutral and sad facial expressions. Participants rated the vocalisations on separate unipolar scales of happiness and sadness and on arousal. Although they were instructed to focus exclusively on the vocalisations, consistent context effects were found: For both laughter and crying, emotion judgements were shifted towards the information expressed by the face. These modulations were independent of response latencies and were larger for more emotionally ambiguous vocalisations. No effects of context were found for arousal ratings. These findings suggest that the automatic encoding of contextual information during emotion perception generalises across modalities, to purely non-verbal vocalisations, and is not confined to acted expressions.
We investigated how age and musical expertise influence emotion recognition in music. Musically trained and untrained participants from two age cohorts, young and middle-aged adults (N = 80), were presented with music excerpts expressing happiness, peacefulness, sadness, and fear/threat. Participants rated how much each excerpt expressed the four emotions on 10-point scales. The intended emotions were consistently perceived, but responses varied across groups. Advancing age was associated with selective decrements in the recognition of sadness and fear/threat, a finding consistent with previous research (Lima & Castro, 2011a); the recognition of happiness and peacefulness remained stable. Years of music training were associated with enhanced recognition accuracy. These effects were independent of domain-general cognitive abilities and personality traits, but they were echoed in differences in how efficiently music structural cues (e.g., tempo, mode) were relied upon. Thus, age and musical expertise are experiential factors explaining individual variability in emotion recognition in music.
Background: The Institute of Cognitive Neurology (INECO) Frontal Screening (IFS) is a brief neuropsychological tool recently devised for the evaluation of executive dysfunction in neurodegenerative conditions.
Objective: In this study we present a cross-cultural validation of the IFS for the Portuguese population, provide normative values from a healthy sample, determine how age and education affect performance, and inspect its clinical utility in the context of Alzheimer’s disease (AD). A comparison with the Frontal Assessment Battery (FAB) was undertaken, and correlations with other well-established executive functions measures were examined.
Methods: The normative sample included 204 participants varying widely in age (20–85 years) and education (3–21 years). The clinical sample (n = 21) was compared with a sample of age- and education-matched controls (n = 21). Healthy participants completed the IFS and the Mini-Mental State Examination (MMSE). In addition to these, the patients (and matched controls) completed the FAB and a battery of other executive tests.
Results: IFS scores were positively affected by education and MMSE, and negatively affected by age. Patients underperformed controls on the IFS, and correlations were found with the Clock Drawing Test, Stroop test, and the Zoo Map and Rule Shift Card tests of the Behavioral Assessment of the Dysexecutive Syndrome. A cut-off of 17 optimally differentiated patients from controls. While 88% of the IFS sub-tests discriminated patients from controls, only 67% of the FAB sub-tests did so. Conclusion: Age and education should be taken into account when interpreting performance on the IFS. The IFS is useful to detect executive dysfunction in AD, showing good discriminant and concurrent validities.
Humans express emotions in many different ways. Facial expressions, body postures, or vocal cues, for instance, communicate a wealth of information about others' emotional states. While the adaptive significance of these multiple cues has long been acknowledged (Darwin, 1872/2009), facial expressions have historically received more research attention than expressions via other channels. The interest in vocal emotions is increasing, but mostly focused on speech prosody, i.e., voice modulations in speech. Vocal communication does, however, additionally encompass diverse nonverbal vocalizations, such as laughter, sobs, or screams. Accounting for these signals is crucial for a complete understanding of vocal emotions.
It is well established that emotion recognition of facial expressions declines with age, but evidence for age-related differences in vocal emotions is more limited. This is especially true for nonverbal vocalizations such as laughter, sobs, or sighs. In this study, 43 younger adults (M = 22 years) and 43 older ones (M = 61.4 years) provided multiple emotion ratings of nonverbal emotional vocalizations. Contrasting with previous research, which often includes only one positive emotion (happiness) versus several negative ones, we examined 4 positive and 4 negative emotions: achievement/triumph, amusement, pleasure, relief, anger, disgust, fear, and sadness. We controlled for hearing loss and assessed general cognitive decline, cognitive control, verbal intelligence, working memory, current affect, emotion regulation, and personality. Older adults were less sensitive than younger ones to the intended vocal emotions, as indicated by decrements in ratings on the intended emotion scales and accuracy. These effects were similar for positive and negative emotions, and they were independent of age-related differences in cognitive, affective, and personality measures. Regression analyses revealed that younger and older participants' responses could be predicted from the acoustic properties of the temporal, intensity, fundamental frequency, and spectral profile of the vocalizations. The two groups were similarly efficient in using the acoustic cues, but there were differences in the patterns of emotion-specific predictors. This study suggests that ageing produces specific changes in the processing of nonverbal vocalizations. That decrements were not attenuated for positive emotions indicates that they cannot be explained by a positivity effect in older adults.
ABSTRACT This study explores the conceptions that primary-school teachers (1st cycle of Basic Education) hold about dyslexia and compares them with what research has established. Twenty teachers participated in the study. A self-report questionnaire with open and closed questions was constructed. The results show that 45% of the teachers have already dealt with cases of formally diagnosed dyslexia and that only 15% report having specific training in the area.
Abstract: In the present study, thirty-five young adults (M = 20.9 years, SD = 1.69), native speakers of European Portuguese, were tested in an auditory lexical decision task to assess the effects of word frequency, neighbourhood density, and neighbourhood frequency on spoken word recognition. A total of 160 disyllabic stimuli stressed on the first syllable and with a CV.CV structure (80 words and 80 nonwords) was presented through headphones.
Journal of Clinical and Experimental Neuropsychology
Does emotion processing in music and speech prosody recruit common neurocognitive mechanisms? To examine this question, we implemented a cross-domain comparative design in Parkinson’s disease (PD). Twenty-four patients and 25 controls performed emotion recognition tasks for music and spoken sentences. In music, patients had impaired recognition of happiness and peacefulness, and intact recognition of sadness and fear; this pattern was independent of general cognitive and perceptual abilities. In speech, patients had a small global impairment, which was significantly mediated by executive dysfunction. Hence, PD affected musical and prosodic emotions differently. This dissociation indicates that the mechanisms underlying the two domains are partly independent.
When voices get emotional: A corpus of nonverbal vocalizations for research on emotion processing
Behavior Research Methods
Nonverbal vocal expressions, such as laughter, sobbing, and screams, are an important source of emotional information in social interactions. However, the investigation of how we process these vocal cues entered the research agenda only recently. Here, we introduce a new corpus of nonverbal vocalizations, which we recorded and submitted to perceptual and acoustic validation. It consists of 121 sounds expressing four positive emotions (achievement/triumph, amusement, sensual pleasure, and relief) and four negative ones (anger, disgust, fear, and sadness), produced by two female and two male speakers. For perceptual validation, a forced-choice task was used (n = 20), and ratings were collected for the eight emotions, valence, arousal, and authenticity (n = 20). We provide these data, detailed for each vocalization, for use by the research community. High recognition accuracy was found for all emotions (86%, on average), and the sounds were reliably rated as communicating the intended expressions. The vocalizations were measured for acoustic cues related to temporal aspects, intensity, fundamental frequency (f0), and voice quality. These cues alone provide sufficient information to discriminate between emotion categories, as indicated by statistical classification procedures; they are also predictors of listeners’ emotion ratings, as indicated by multiple regression analyses. This set of stimuli seems a valuable addition to currently available expression corpora for research on emotion processing. It is suitable for behavioral and neuroscience research and might also be used in clinical settings for the assessment of neurological and psychiatric patients. The corpus can be downloaded from the Supplementary Materials.
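The idea that acoustic cues alone can discriminate emotion categories can be illustrated with a toy sketch. The code below is purely hypothetical: it uses invented, synthetic feature values (stand-ins for duration, mean intensity, mean f0, and a spectral measure) and a simple nearest-centroid classifier, not the corpus's actual measurements or statistical procedures.

```python
import numpy as np

rng = np.random.default_rng(0)

# Eight emotion categories, as in the corpus described above.
emotions = ["achievement", "amusement", "pleasure", "relief",
            "anger", "disgust", "fear", "sadness"]
n_per = 30   # synthetic vocalizations per emotion
n_feat = 4   # invented stand-ins for duration, intensity, f0, spectral cue

# Synthetic acoustic profiles: each emotion gets its own mean vector.
means = rng.normal(0.0, 2.0, size=(len(emotions), n_feat))
X = np.vstack([rng.normal(means[i], 1.0, size=(n_per, n_feat))
               for i in range(len(emotions))])
y = np.repeat(np.arange(len(emotions)), n_per)

# Shuffle, then hold out 25% of the sounds for testing.
order = rng.permutation(len(y))
X, y = X[order], y[order]
split = int(0.75 * len(y))
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

# Nearest-centroid classification: assign each held-out sound to the
# emotion whose training centroid is closest in feature space.
centroids = np.array([X_tr[y_tr == i].mean(axis=0)
                      for i in range(len(emotions))])
dists = ((X_te[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
pred = dists.argmin(axis=1)

accuracy = (pred == y_te).mean()
print(f"classification accuracy: {accuracy:.2f} "
      f"(chance = {1 / len(emotions):.3f})")
```

Because the synthetic category means are well separated relative to the within-category noise, accuracy lands far above the 12.5% chance level, mirroring the logic, though not the numbers, of the classification analyses reported for the corpus.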
Language and music are closely related in our minds. Does musical expertise enhance the recognition of emotions in speech prosody? Forty highly trained musicians were compared with 40 musically untrained adults (controls) in the recognition of emotional prosody. For purposes of generalization, the participants were from two age groups, young (18–30 years) and middle adulthood (40–60 years). They were presented with short sentences expressing six emotions—anger, disgust, fear, happiness, sadness, surprise—and neutrality, by prosody alone. In each trial, they performed a forced-choice identification of the expressed emotion (reaction times, RTs, were collected) and an intensity judgment. General intelligence, cognitive control, and personality traits were also assessed. A robust effect of expertise was found: musicians were more accurate than controls, similarly across emotions and age groups. This effect cannot be attributed to socioeducational background, general cognitive or personality characteristics, because these did not differ between musicians and controls; perceived intensity and RTs were also similar in both groups. Furthermore, basic acoustic properties of the stimuli like fundamental frequency and duration were predictive of the participants’ responses, and musicians and controls were similarly efficient in using them. Musical expertise was thus associated with cross-domain benefits to emotional prosody. These results indicate that emotional processing in music and in language engages shared resources.
In comparison with other modalities, the recognition of emotion in music has received little attention. An unexplored question is whether and how emotion recognition in music changes as a function of ageing. In the present study, healthy adults aged between 17 and 84 years (N = 114) judged the magnitude to which a set of musical excerpts (Vieillard et al., 2008) expressed happiness, peacefulness, sadness and fear/threat. The results revealed emotion-specific age-related changes: advancing age was associated with a gradual decrease in responsiveness to sad and scary music from middle age onwards, whereas the recognition of happiness and peacefulness, both positive emotional qualities, remained stable from young adulthood to older age. Additionally, the number of years of music training was associated with more accurate categorisation of the musical emotions examined here. We argue that these findings are consistent with two accounts of how ageing might influence the recognition of emotions: motivational changes towards positivity and, to a lesser extent, selective neuropsychological decline.
A set of semantically neutral sentences and derived pseudosentences was produced by two native European Portuguese speakers varying emotional prosody in order to portray anger, disgust, fear, happiness, sadness, surprise, and neutrality. Accuracy rates and reaction times in a forced-choice identification of these emotions as well as intensity judgments were collected from 80 participants, and a database was constructed with the utterances reaching satisfactory accuracy (190 sentences and 178 pseudosentences). High accuracy (mean correct of 75% for sentences and 71% for pseudosentences), rapid recognition, and high-intensity judgments were obtained for all the portrayed emotional qualities. Sentences and pseudosentences elicited similar accuracy and intensity rates, but participants responded to pseudosentences faster than they did to sentences. This database is a useful tool for research on emotional prosody, including cross-language studies and studies involving Portuguese-speaking participants, and it may be useful for clinical purposes in the assessment of brain-damaged patients. The database is available for download from http://brm.psychonomic-journals.org/content/supplemental.
Background
The Frontal Assessment Battery (FAB) is a short tool for the assessment of executive functions consisting of six subtests that explore different abilities related to the frontal lobes. Several studies have indicated that executive dysfunction is the main neuropsychological feature in Parkinson’s disease (PD).
Goals
To evaluate the clinical usefulness of the FAB in identifying executive dysfunction in PD; to determine if FAB scores in PD are correlated with formal measures of executive functions; and to provide normative data for the Portuguese version of the FAB.
Methods
The study involved 122 healthy participants and 50 idiopathic PD patients. We compared FAB scores in normal controls and in PD patients matched for age, education and Mini-Mental State Examination (MMSE) score. In PD patients, FAB results were compared to the performance on tests of executive functioning.
Results
In the healthy subjects, FAB scores varied as a function of age, education and MMSE. In PD, FAB scores were significantly decreased compared to normal controls, and correlated with measures of executive functions such as phonemic and semantic verbal fluency tests, Wisconsin Card Sorting Test and Trail Making Test Part A and Part B.
Conclusion
The FAB is a useful tool for the screening of executive dysfunction in PD, showing good discriminant and concurrent validities. Normative data provided for the Portuguese version of this test improve the accuracy and confidence in the clinical use of the FAB.
This paper examines the role of grapheme-phoneme conversion for skilled reading in an orthography of intermediate depth, Portuguese. The effects of word length in number of letters were determined in two studies. Mixed lists of five- and six-letter words and nonwords were presented to young adults in lexical decision and reading aloud tasks in the first study; in the second one, the length range was increased from four to six letters and an extra condition was added where words and nonwords were presented in separate, or blocked, lists. Reaction times were longer for longer words and nonwords in lexical decision, and in reading aloud mixed lists, but no effect of length was observed when reading words in blocked lists. The effect of word length is thus modulated by list composition. This is evidence that grapheme-phoneme conversion is not as predominant for phonological recoding in intermediate orthographies as it is in shallow ones, and suggests that skilled reading in those orthographies is highly responsive to task conditions because readers may switch from smaller segment-by-segment decoding to larger unit or lexicon-related processing.
Papers by César Lima