MoodyLyrics: A Sentiment Annotated Lyrics Dataset
ACM 2017 International Conference on Intelligent Systems, Metaheuristics & Swarm Intelligence, Hong Kong, Mar 2017
Music emotion recognition and recommendation today are changing the way people find and listen to their preferred musical tracks. Emotion recognition of songs is mostly based on feature extraction and learning from available datasets. In this work we take a different approach, utilizing only the content words of lyrics and their valence and arousal norms in affect lexicons. We use this method to annotate each song with one of the four emotion categories of Russell's model, and also to construct MoodyLyrics, a large dataset of lyrics that will be available for public use. For evaluation we utilized another lyrics dataset as ground truth and achieved an accuracy of 74.25%. Our results confirm that valence is a better discriminator of mood than arousal. They also show that music mood recognition and annotation can be achieved with good accuracy even when subjective human feedback or user tags are not available.
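The quadrant-assignment idea described in this abstract can be sketched in a few lines. The Python snippet below is a minimal illustration, not the authors' implementation: the toy lexicon contents, the 1-9 rating scale with a midpoint of 5, and the helper names are all assumptions made for the example.

```python
# Illustrative sketch (not the authors' code): assign a lyric to one of
# Russell's four quadrants from the average valence/arousal of its words.
# Assumes a lexicon mapping word -> (valence, arousal) on a 1-9 scale,
# e.g. loaded from an affect lexicon (assumption for this example).

def average_norms(words, lexicon):
    """Average valence and arousal over the words found in the lexicon."""
    hits = [lexicon[w] for w in words if w in lexicon]
    if not hits:
        return None
    valence = sum(v for v, _ in hits) / len(hits)
    arousal = sum(a for _, a in hits) / len(hits)
    return valence, arousal

def russell_quadrant(valence, arousal, midpoint=5.0):
    """Map mean valence/arousal to one of the four Russell quadrants."""
    if valence >= midpoint:
        return "happy" if arousal >= midpoint else "relaxed"
    return "angry" if arousal >= midpoint else "sad"

# Toy usage example (lexicon values are made up for illustration).
lexicon = {"love": (8.0, 5.5), "dark": (3.1, 4.2), "dance": (7.6, 6.8)}
words = "i love to dance in the dark".split()
norms = average_norms(words, lexicon)
if norms is not None:
    print(russell_quadrant(*norms))  # -> "happy" for this toy example
```

In practice the method relies on the norms of an established affect lexicon rather than a toy dictionary, and the threshold separating quadrants depends on the rating scale of the chosen lexicon.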
Crowdsourcing approaches, such as the design of collaborative games and web services or the use of Mechanical Turk, are becoming popular in the literature. They provide a cheap, quick and efficient alternative to professional labeling of songs, which is expensive and does not scale to large datasets. In this paper we discuss the viability of various crowdsourcing instruments, providing examples from research works. We also share our own experience, illustrating the steps we followed to create two public music mood datasets from tags collected from Last.fm. While processing the affect tags of Last.fm, we observed that they tend to be biased towards positive emotions; the resulting datasets thus contain more positive songs than negative ones.
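The positivity bias mentioned above can be illustrated with a small sketch. The tag groupings and the polarity rule below are hypothetical examples chosen for this illustration; they do not reproduce the paper's actual tag processing.

```python
# Illustrative sketch (assumed tag groupings, not the paper's actual lists):
# given per-song affect tags collected from Last.fm, count how many songs
# fall on the positive vs. the negative side of the valence axis.
from collections import Counter

POSITIVE_TAGS = {"happy", "upbeat", "cheerful", "fun", "relaxing"}  # assumption
NEGATIVE_TAGS = {"sad", "melancholic", "angry", "depressive"}       # assumption

def song_polarity(tags):
    """Label a song positive/negative/unknown from its affect tags."""
    pos = sum(t in POSITIVE_TAGS for t in tags)
    neg = sum(t in NEGATIVE_TAGS for t in tags)
    if pos == neg:
        return "unknown"
    return "positive" if pos > neg else "negative"

# Toy example: tag lists as they might be collected per song (made up).
songs = {
    "song_a": ["happy", "fun", "pop"],
    "song_b": ["sad", "melancholic"],
    "song_c": ["upbeat", "cheerful", "rock"],
}
print(Counter(song_polarity(t) for t in songs.values()))
# -> Counter({'positive': 2, 'negative': 1}): more positive than negative songs
```

A skew like the one in this toy output mirrors the bias the authors observed in Last.fm affect tags.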