
Nuanced Music Emotion Recognition via a Semi‑Supervised Multi‑Relational Graph Neural Network
Abstract
Music emotion recognition (MER) seeks to understand the complex emotional landscapes elicited by music, acknowledging music’s profound social and psychological roles beyond traditional tasks such as genre classification or content similarity. MER relies heavily on high‑quality emotional annotations, which serve as the foundation for training models to recognize emotions. However, collecting these annotations is both complex and costly, leading to limited availability of large‑scale datasets for MER. Recent efforts to automatically extract emotion in MER have focused on learning track representations in a supervised manner. These approaches, however, mainly use simplified emotion models, due to limited datasets or a lack of need for more sophisticated emotion models, and ignore hidden inter‑track relations, which are beneficial in a semi‑supervised learning setting. This paper proposes a novel approach to MER by constructing a multi‑relational graph that encapsulates different facets of music. We leverage graph neural networks to model intricate inter‑track relationships and capture structurally induced representations from user data, such as listening histories, genres, and tags. Our model, the semi‑supervised multi‑relational graph neural network for emotion recognition (SRGNN‑Emo), innovates by combining graph‑based modeling with semi‑supervised learning, using rich user data to extract nuanced emotional profiles from music tracks. Through extensive experimentation, SRGNN‑Emo achieves significant improvements in R² and root mean squared error for predicting the intensity of nine continuous emotions (Geneva Emotional Music Scale), demonstrating its superior capability in capturing and predicting complex emotional expressions in music.
© 2025 Andreas Peintner, Marta Moscati, Yu Kinoshita, Richard Vogl, Peter Knees, Markus Schedl, Hannah Strauss, Marcel Zentner, Eva Zangerle, published by Ubiquity Press
This work is licensed under the Creative Commons Attribution 4.0 License.