


Song lyrics make an important contribution to the musical experience, providing us with rich stories and messages that artists want to convey through their music. They influence the perceived mood of a song, alongside the acoustic contents of the song, including its rhythm, harmony, and melody. In some cases these two components, lyrics and acoustics, work together to establish a cohesive mood; in others, each component provides its own contribution to the overall mood of the song. Imagine, for example, a song whose lyrics talk about the end of a relationship, suggesting moods related to sadness, longing, and heartbreak, while its acoustics carry a familiar chord progression and a somewhat high tempo, suggesting calm and upbeat moods. This scenario is not an exception; in fact, a recent analysis of the lyrics and acoustics of popular music identifies a trend where song lyrics have been getting sadder over the last three decades, while at the same time the songs have become more “danceable” and “relaxed”.

In our recent ICWSM paper, we set out to investigate the association between song lyrics and mood descriptors, i.e., the terms that describe the affectual qualities of a song. To this end, we conduct a data-driven analysis using state-of-the-art machine learning (ML) and natural language processing (NLP) techniques to compare how lyrics contribute to the understanding of mood, as defined collaboratively by the playlisting behavior of Spotify users.

This work is motivated by our desire to improve the Spotify experience, specifically in relation to music search, discovery, and recommendations. From the search and discovery perspective, we want to enable search based on mood descriptors in the Spotify app, for example by allowing users to search for “happy songs”. Additionally, from the recommendations side, we want to be able to recommend to users new songs that provide sets of moods similar to ones they already like. At the same time, this work is driven by the research question, “How much do the lyrics and acoustics of a song each contribute to the understanding of the song’s mood?”

In this work we used a set of just under 1 million songs. The mood descriptors for this set of songs included terms like “chill”, “sad”, “happy”, “love”, and “exciting”. They are not limited to a specific part of speech, covering adjectives (“sad”, “somber”, etc.), nouns (“motivation”, “love”, etc.), and verbs (“reminisce”, “fantasize”, etc.). The association between a song and a mood descriptor was calculated from collaborative data, by “wisdom of the crowd”: more specifically, these relationships were derived from Spotify playlists’ titles and descriptions, by measuring the co-occurrence of a given song in a playlist and the target mood descriptor in the playlist’s title or description.
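To make the co-occurrence idea concrete, here is a minimal sketch of how such song-mood associations could be computed. The playlist structure, the tiny mood vocabulary, and the frequency normalization are illustrative assumptions for this post, not the exact procedure from the paper.

```python
from collections import Counter, defaultdict

# Illustrative mood vocabulary; the real descriptor set is far larger and
# is itself mined from playlist text.
MOOD_DESCRIPTORS = {"chill", "sad", "happy", "love", "exciting"}

def song_mood_associations(playlists):
    """Estimate song-mood associations from playlist co-occurrence.

    `playlists` is assumed to be an iterable of dicts shaped like
    {"title": str, "description": str, "track_ids": [str, ...]}.
    """
    cooccur = defaultdict(Counter)  # song -> mood -> co-occurrence count
    song_freq = Counter()           # song -> number of playlists containing it

    for pl in playlists:
        text = f"{pl['title']} {pl.get('description', '')}".lower()
        moods = MOOD_DESCRIPTORS.intersection(text.split())
        for song in pl["track_ids"]:
            song_freq[song] += 1
            for mood in moods:
                cooccur[song][mood] += 1

    # Normalize by how often each song is playlisted at all, so heavily
    # playlisted songs don't end up weakly associated with every mood.
    return {
        song: {mood: count / song_freq[song] for mood, count in counts.items()}
        for song, counts in cooccur.items()
    }

# Tiny usage example: "t1" only ever appears in a sad playlist.
playlists = [
    {"title": "sad songs for rainy days", "description": "", "track_ids": ["t1", "t2"]},
    {"title": "happy hits", "description": "exciting feel-good tracks", "track_ids": ["t2", "t3"]},
]
print(song_mood_associations(playlists))
```

Normalizing by playlist frequency is one simple way to keep very popular songs from accumulating weak associations with every descriptor; a production pipeline would also need smoothing and minimum-count thresholds.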
We tackled a number of experiments aimed at studying the contribution of lyrics and acoustics to the mood of a song. In this blog post we summarize some of the most relevant ones; for more details, we invite you to read the full paper linked at the end of this post.

Lyrics and mood descriptors: We start by studying the relationship between song lyrics and mood descriptors. To this end, we train several models that leverage song lyrics alone, and not audio. These models can be broadly categorized into two distinct learning paradigms: zero-shot learning and fine-tuned models.
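As an illustration of the zero-shot paradigm, the snippet below uses an off-the-shelf NLI-based zero-shot classifier from Hugging Face transformers to score a lyric fragment against candidate mood labels, with no mood-specific training at all. The model choice and the made-up lyric line are placeholders; the paper’s actual zero-shot setup may differ.

```python
from transformers import pipeline

# Zero-shot route: a pretrained NLI model scores text against arbitrary
# labels with no task-specific training. A fine-tuned model would instead
# be trained directly on the collaboratively derived mood labels.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

lyrics = "We said goodbye at the station and I still call out your name"
result = classifier(
    lyrics,
    candidate_labels=["chill", "sad", "happy", "love", "exciting"],
    multi_label=True,  # a song's lyrics can evoke several moods at once
)

for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")
```

The appeal of the zero-shot route is that it needs no labeled examples; fine-tuning trades that convenience for task-specific accuracy by learning from the mood labels themselves.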
Then, we performed an analysis of the different models and compared their results.

To understand the contribution of lyrics, acoustics, or a combination of the two to the mood of a song, we used several ML classifiers to predict mood descriptors, each trained on features extracted from a different modality: acoustic, lyrics, or hybrid (lyrics and acoustics combined).
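A sketch of this comparison, using scikit-learn, might look like the following. The random placeholder features and the logistic regression are assumptions for illustration; the paper evaluates its own feature extractors and classifiers.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_songs = 1000

# Placeholder features: in practice these would be embeddings extracted
# from the audio signal and from the lyrics text.
X_acoustic = rng.normal(size=(n_songs, 128))
X_lyrics = rng.normal(size=(n_songs, 300))
X_hybrid = np.hstack([X_acoustic, X_lyrics])  # hybrid = simple concatenation

# Binary target: is the song collaboratively associated with, say, "sad"?
y = rng.integers(0, 2, size=n_songs)

for name, X in [("acoustic", X_acoustic),
                ("lyrics", X_lyrics),
                ("hybrid", X_hybrid)]:
    clf = LogisticRegression(max_iter=1000)
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name:8s} mean AUC = {auc:.3f}")
```

With random features all three scores hover around 0.5; with real embeddings, the gap between the lyrics, acoustic, and hybrid rows is exactly the kind of per-modality contribution the comparison is meant to surface.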
