First AIM cohort students:

PhD Student / Project title

Adan Benito
Beyond the fret: gesture analysis on fretted instruments and its applications to instrument augmentation

Berker Banar
Generating emotional music using AI

Marco Comunità
Machine learning applied to sound synthesis models

David Foster
Modelling the Creative Process of Jazz Improvisation

Lele Liu
Automatic music transcription with end-to-end deep neural networks

Ilaria Manco
Deep learning and multi-modal models for the music industry
in collaboration with Universal Music Group

Andrea Martelloni
Real-Time Gesture Classification on an Augmented Acoustic Guitar using Deep Learning to Improve Extended-Range and Percussive Solo Playing

Mary Pilataki-Manika
Polyphonic Music Transcription using Deep Learning
in collaboration with Apple

Saurjya Sarkar
New perspectives in instrument-based audio source separation

Pedro Sarmento
Musical smart city
in collaboration with Holonic Systems Oy.

Elona Shatri
Optical music recognition using deep learning
in collaboration with Steinberg Media Technologies GmbH

Cyrus Vahidi
Perceptual end-to-end learning for music understanding
in collaboration with MUSIC Tribe Brands UK Limited

Second AIM cohort students:

PhD Student / Project title

Benjamin Hayes
Perceptually motivated deep learning approaches to creative sound synthesis

Christian Steinmetz
End-to-end generative modeling of multitrack mixing with non-parallel data and adversarial networks

Corey Ford
Artificial Intelligence for Supporting Musical Creativity and Engagement in Child-Computer Interaction

Eleanor Row
Automatic micro-composition for professional/novice composers using generative models as creativity support tools

Harnick Khera
Informed source separation for multi-mic production
in collaboration with the BBC

Jiawen Huang
Lyrics Alignment For Polyphonic Music

Jingjing Tang
End-to-End System Design for Music Style Transfer with Neural Networks

Lewis Wolstanholme
Real-time instrument transformation and augmentation with deep learning

Luca Marinelli
Gender-coded sound: A multimodal data-driven analysis of gender encoding strategies in sound and music for advertising

Madeline Hamilton
Improving AI-generated Music with Pleasure Models

Max Graf
PERFORM-AI (Provide Extended Realities for Musical Performance using AI)

Shubhr Singh
Audio Applications of Novel Mathematical Methods in Deep Learning

Xavier Riley
Digging Deeper - expanding the “Dig That Lick” corpus with new sources and techniques

Yixiao Zhang
Machine Learning Methods for Artificial Musicality
in collaboration with Apple

Yin-Jyun Luo
Industry-scale Machine Listening for Music and Audio Data
in collaboration with Spotify