The AIM Sept 2021 entry call is now open!

The Centre for Doctoral Training in AI and Music (AIM CDT) has opened its call for September 2021 entry.

Application deadline: Wednesday 27 January 2021

At least 14 new PhD students will be selected to join the AIM 2021 cohort. If you are willing to move to London and work with us at the AIM CDT, which is part of the Centre for Digital Music (C4DM), a world-leading research group in music and audio technology, we welcome your application.

A leading PhD research programme aimed at the Music/Audio Technology and Creative Industries, the UKRI Centre for Doctoral Training in Artificial Intelligence and Music (AIM CDT) will train a new generation of researchers in the areas of Music Understanding, Intelligent Instruments and Interfaces, and Computational Creativity.

The AIM CDT takes a cohort-based approach: all students start in September and undertake a bespoke programme of taught modules in years 1-2 (six modules totalling 90 credits). Another pillar of the programme, and of cohort building, is researcher skills development training throughout years 1-4, aimed at addressing students' academic and industry professional needs.

You will be able to select your supervisory team from a list of over 30 academics based at C4DM and as a PhD student you will undertake a personalised programme of research; there will also be opportunities for industry and other placements, and international exchanges.

Who should apply?

You are willing to pursue and complete a PhD at the intersection of AI and music, you have shown engagement and hard work in achieving strong results, you are committed to attaining top marks in your studies, and you want to develop the critical thinking skills needed to undertake research. Programming skills are highly desirable, but not essential if you can show complementary strengths. Equally, musical training of any kind is desirable, but not a prerequisite.

You must hold, or be completing, a Master's degree at distinction or first-class level, or equivalent, in Computer Science, Electronic Engineering, Music/Audio Technology, Physics, Mathematics, or Psychology.

Visit our website and contact us

For more information about the application process, please visit: https://www.aim.qmul.ac.uk/about/

For any enquiries, contact us at aim-enquiries@qmul.ac.uk. Alternatively, feel free to contact any of the supervisors or C4DM academics with any questions you may have.

Current AIM supervisors list: https://www.aim.qmul.ac.uk/supervisors/
2021 AIM PhD research topics: https://www.aim.qmul.ac.uk/phd-topics/

We look forward to receiving your application and hope you will join us next September!


Sonification of Air Pollution Data In Times of Covid-19

Amidst the recent pandemic, I came across several works that tried to translate into sound, via a technique called sonification, data concerning the number of deaths and active cases related to Covid-19. Personally, the whole period was already proving deeply saddening and depressing, and emphasising morbid figures through sound seemed to intensify that feeling. Nonetheless, I still wanted to contribute to this corpus of projects, which is closely related to my PhD topic on sonifying smart city data. I therefore decided to address one of the so-called “positive” impacts of the lockdown imposed in response to the virus: the mitigation of air pollution. Recent reports point towards a positive impact on air pollution levels due to Covid-19 lockdown policies, and I wanted to examine whether information about such impacts could be conveyed through the audio modality. It is important to note that causal relations between the lockdown and air quality cannot be assessed within the scope of this work: it is solely an auditory depiction of data, whose values, differences and variations might also be due to other external factors.

Specifically, this project entails an aural comparison of hourly air pollution levels (NO2 readings) on Mile End Road, London, from a week in April of 2019 and 2020, retrieved from London Air, a tool developed at King’s College London.

Hourly mean NO2 readings on Mile End Road from a week of April 2019 (left) and 2020 (right).

Sonification is carried out by applying a spectral delay effect to Air on the G String, Wilhelmj’s arrangement of a Bach composition. The spectral delay is built from a total of 10 bandpass filters with different cutoff frequencies, whose outputs are delayed in time by different amounts and mixed back with different gains. NO2 levels are mapped to the feedback and gain of each delay line, as well as to the movement of the cutoff frequencies.
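To make the structure concrete, here is a minimal sketch in C++ of one band of such a spectral delay: a biquad bandpass filter feeding a delay line with its own feedback and gain. This is not the project’s actual Bela code; the struct name SpectralDelayBand and all parameter values are placeholders for illustration.

  #include <algorithm>
  #include <cmath>
  #include <cstddef>
  #include <vector>

  // One band of a spectral delay: a bandpass biquad (RBJ cookbook,
  // constant 0 dB peak gain) followed by a delay line with feedback.
  struct SpectralDelayBand {
      // Biquad coefficients and state (direct form I)
      float b0 = 0, b1 = 0, b2 = 0, a1 = 0, a2 = 0;
      float x1 = 0, x2 = 0, y1 = 0, y2 = 0;
      // Delay line
      std::vector<float> buffer;
      std::size_t writeIndex = 0;
      float feedback = 0.3f; // later driven by the NO2 level
      float gain = 0.5f;     // later driven by the NO2 level

      SpectralDelayBand(float sampleRate, float centreHz, float q, float delaySeconds)
          : buffer(std::max<std::size_t>(1, static_cast<std::size_t>(delaySeconds * sampleRate)), 0.0f) {
          setCentreFrequency(sampleRate, centreHz, q);
      }

      void setCentreFrequency(float sampleRate, float centreHz, float q) {
          const float pi = 3.14159265f;
          float omega = 2.0f * pi * centreHz / sampleRate;
          float alpha = std::sin(omega) / (2.0f * q);
          float a0 = 1.0f + alpha;
          b0 = alpha / a0;
          b1 = 0.0f;
          b2 = -alpha / a0;
          a1 = -2.0f * std::cos(omega) / a0;
          a2 = (1.0f - alpha) / a0;
      }

      float process(float in) {
          // Bandpass-filter the input sample
          float filtered = b0 * in + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2;
          x2 = x1; x1 = in;
          y2 = y1; y1 = filtered;

          // Read the delayed sample, write the filtered input plus feedback back in
          float delayed = buffer[writeIndex];
          buffer[writeIndex] = filtered + feedback * delayed;
          writeIndex = (writeIndex + 1) % buffer.size();
          return gain * delayed;
      }
  };

In the full effect, ten such bands with different centre frequencies, delay times, gains and feedbacks would be processed sample by sample (on Bela, typically inside the render() callback) and summed with the dry recording.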
Higher values of NO2 correspond to:

  • Higher delay feedback for each line/filter band;
  • Higher gain of delayed content for each line/filter band;
  • Lower cutoff frequencies for each line/filter band.

In summary, low pollution/NO2 levels keep the output close to a “clean” rendition of the piece, whilst high pollution/NO2 levels “pollute” it with delay. Holding a button allows switching from the 2020 scenario (no button pressed) to the 2019 scenario (button pressed and held). The whole implementation was done using the Bela board.
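The mapping itself can be sketched as follows. Again, this is only an illustration: the function names (mapNO2ToBand, currentNO2), the parameter ranges, the band spacing and the normalisation of the NO2 readings to the 0-1 range are assumptions, not the values used in the project.

  #include <cstddef>
  #include <vector>

  struct BandParams {
      float feedback; // delay feedback for this band
      float gain;     // gain of the delayed content for this band
      float cutoffHz; // cutoff/centre frequency for this band's filter
  };

  // Map a normalised NO2 reading (0 = cleanest hour, 1 = most polluted hour)
  // to the parameters of one band. Higher NO2 means more feedback, louder
  // delayed content and lower cutoff frequencies, as listed above.
  BandParams mapNO2ToBand(float no2, int band) {
      BandParams p;
      p.feedback = 0.1f + 0.8f * no2;                       // placeholder range
      p.gain     = 0.2f + 0.7f * no2;                       // placeholder range
      float baseHz = 200.0f * static_cast<float>(band + 1); // placeholder spacing
      p.cutoffHz = baseHz * (1.0f - 0.5f * no2);            // slides downwards
      return p;
  }

  // Pick the NO2 reading for the current hour: 2020 data by default,
  // 2019 data while the button is held down.
  float currentNO2(const std::vector<float>& week2019,
                   const std::vector<float>& week2020,
                   std::size_t hour, bool buttonHeld) {
      const std::vector<float>& week = buttonHeld ? week2019 : week2020;
      return week[hour % week.size()];
  }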

Further information, including code, is available here.

This work was done as a final assignment for the module Music and Audio Programming (ECS7012P), at Queen Mary University of London.

*******************

Pedro Pereira Sarmento is currently based at the Centre for Digital Music (C4DM), Queen Mary University of London. He is part of the AIM CDT programme, doing his PhD on the topic of Musical Smart City, in which he is studying new ways of interpreting city data through music.


AIM Research Posters – DMRN 2019

The first cohort to enter the AIM programme presented their research directions in a poster session at the Digital Music Research Network (DMRN) 2019 workshop. The workshop is hosted at QMUL on an annual basis by the Centre for Digital Music, bringing together digital music researchers from across the world to discuss a diverse range of topics within sound and music computing.

Generating Emotionally Responsive Music using Artificial Intelligence – Berker Banar

Automatic music transcription with end-to-end deep neural networks – Lele Liu

Deep learning and multi-modal models for the music industry – Ilaria Manco

Real-Time Gesture Classification on an Augmented Acoustic Guitar using Deep Learning to Improve Extended-Range and Percussive Solo Playing – Andrea Martelloni

Polyphonic Music Transcription using Deep Learning – Mary Pilataki-Manika

New perspectives in instrument-based audio source separation – Saurjya Sarkar

Musical Smart City – Pedro Sarmento

Optical music recognition using deep learning – Elona Shatri

Perceptual end to end learning for music understanding – Cyrus Vahidi