The Machine that Listens to Classical Music

Machine Learning is now being used to analyse modality in Western Classical Music

The École Polytechnique Fédérale de Lausanne (EPFL), one of Switzerland’s leading research institutions in the natural sciences and engineering, has been at the forefront of technological breakthroughs for many years. Recently, researchers at EPFL’s Digital and Cognitive Musicology Lab (DCML) in the College of Humanities developed a machine learning method that helps us ‘retrace’ the evolution of classical music over several centuries. An unsupervised ML model that ‘listened to’ and analysed over 13,000 pieces of Western Classical music has provided new insight into the evolution of ‘modes’ (such as major and minor) in music, and how their use has changed over the course of history.

The Major and the Minor of it

“Many people may not be able to define what a minor mode is in music, but most would almost certainly recognize a piece played in a minor key. That’s because we intuitively differentiate the set of notes belonging to the minor scale – which tend to sound dark, tense, or sad – from those in the major scale, which more often connote happiness, strength, or lightness. But throughout history, there have been periods when multiple other modes were used in addition to major and minor – or when no clear separation between modes could be found at all.” (EPFL)

Researchers Daniel Harasim, Matthias Ramirez, Martin Rohrmeier and Fabian Moss of the Digital and Cognitive Musicology Lab set out in a recent study to understand just this: how the number and character of these modes have changed over time. In an article published in Humanities and Social Sciences Communications, they describe a machine learning-based model that analysed over 13,000 pieces of classical music ranging from the 15th to the 19th century, spanning the Renaissance, Baroque, Classical, early-Romantic and late-Romantic musical periods.

According to Harasim, “We already knew that in the Renaissance [1400-1600], for example, there were more than two modes. But for periods following the Classical era [1750-1820], the distinction between the modes blurs together. We wanted to see if we could nail down these differences more concretely.”

Learning through ‘Listening’

Using mathematical modelling based on machine learning, the researchers inferred both the number and the characteristics of the modes in each of the five selected periods of Western Classical music. Stark stylistic differences, reflected in the use of different modes, were observed between works of the Renaissance period by composers such as Giovanni Pierluigi da Palestrina (who tended to use four modes) and those of Baroque composers such as Johann Sebastian Bach.

Image: Visualisation of the four distinct ‘modes’ in Western Classical Music, depicted by the four colours;
Source: DCML/EPFL
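To get a feel for the kind of representation such models typically work with, a piece can be reduced to a pitch-class histogram: a 12-bin distribution over the notes C through B, which mode-finding models can then compare across pieces. The note list and helper function below are purely illustrative inventions, not material from the study’s corpus or code.

```python
from collections import Counter

# Toy note list for a short passage, as pitch classes (0=C, 1=C#, ... 11=B).
# Illustrative data only, not drawn from the EPFL corpus.
notes = [0, 4, 7, 0, 2, 4, 5, 7, 11, 0, 4, 7, 9, 7, 4, 0]

def pitch_class_histogram(notes, num_classes=12):
    """Summarize a piece as a normalized 12-bin pitch-class distribution."""
    counts = Counter(n % num_classes for n in notes)
    total = len(notes)
    return [counts.get(pc, 0) / total for pc in range(num_classes)]

hist = pitch_class_histogram(notes)
print(hist)
```

A histogram like this discards rhythm and note order but keeps the tonal material, which is what distinguishes one mode from another.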

The DCML’s approach was unique in that it was the first to analyse modes using unlabelled data. Previously, every piece of music in a dataset had its modes categorised in advance by humans. This may also be why the complex music of late-Romantic composers such as Franz Liszt could not previously be clearly separated into modes as we know them.

According to Harasim, the computer was allowed to analyse the data without any human bias: unsupervised machine learning methods were used, in which the computer ‘listened’ to the music and classified modes on its own, without any metadata labels. Although this makes the analysis more complex, the results are regarded as more ‘cognitively plausible’ than analyses based on human-assigned labels.

“We know that musical structure can be very complex, and that musicians need years of training. But at the same time, humans learn about these structures unconsciously, just as a child learns a native language. That’s why we developed a simple model that reverse engineers this learning process, using a class of so-called Bayesian models that are used by cognitive scientists, so that we can also draw on their research.”
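The general idea can be sketched with a toy example: a simple mixture model fitted by expectation-maximization can recover mode-like clusters from unlabelled pitch-class counts, with no human labels involved. Everything below is an invented illustration under assumed data — the two pitch-class profiles are made up, and this plain multinomial-mixture EM is a stand-in, not the DCML’s actual Bayesian model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy corpus: pitch-class count vectors for 40 pieces drawn from two
# hypothetical "modes" (the profiles are fabricated for illustration).
major_profile = np.array([5, 1, 3, 1, 4, 3, 1, 4, 1, 3, 1, 2], float)
minor_profile = np.array([5, 1, 3, 4, 1, 3, 1, 4, 3, 1, 2, 1], float)
profiles = [p / p.sum() for p in (major_profile, minor_profile)]
X = np.array([rng.multinomial(100, profiles[i % 2]) for i in range(40)])

def em_multinomial_mixture(X, k=2, iters=50):
    """Fit a k-component multinomial mixture with EM; no labels are used."""
    n, d = X.shape
    theta = rng.dirichlet(np.ones(d), size=k)   # random initial mode profiles
    pi = np.full(k, 1.0 / k)                    # mixture weights
    for _ in range(iters):
        # E-step: responsibilities from per-component log-likelihoods
        logp = X @ np.log(theta).T + np.log(pi)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate profiles and weights from responsibilities
        theta = (r.T @ X) + 1e-6
        theta /= theta.sum(axis=1, keepdims=True)
        pi = r.mean(axis=0)
    return theta, pi, r.argmax(axis=1)

theta, pi, labels = em_multinomial_mixture(X)
```

The model never sees which profile generated which piece; it discovers the grouping from the counts alone, which is the sense in which the DCML’s analysis is unsupervised.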

Given the study’s interesting findings, the researchers now hope to apply similar analyses to jazz, where tonality is regarded as far more complex than a two-mode distinction between major and minor.

Reference: Harasim, D., Moss, F.C., Ramirez, M., Rohrmeier, M. Exploring the foundations of tonality: statistical cognitive modeling of modes in the history of Western classical music. Humanit Soc Sci Commun 8, 5 (2021). https://doi.org/10.1057/s41599-020-00678-6

Data and code: https://github.com/DCMLab/HistoryModes_DataCode
