AIM CDT Cohorts
First Cohort - 2019-2024
Second Cohort - 2020-2025
Third Cohort - 2021-2026
Fourth Cohort - 2022-2027
Fifth Cohort - 2023-2028

First AIM cohort students (2019-2024):

PhD Student - Project title

Adan Benito
Beyond the fret: gesture analysis on fretted instruments and its applications to instrument augmentation

Berker Banar
Towards Composing Contemporary Classical Music using Generative Deep Learning

Marco Comunità
Machine learning applied to sound synthesis models

David Foster
Modelling the Creative Process of Jazz Improvisation

Lele Liu
Automatic music transcription with end-to-end deep neural networks

Ilaria Manco
Deep learning and multi-modal models for the music industry
in collaboration with Universal Music Group

Andrea Martelloni
Real-Time Gesture Classification on an Augmented Acoustic Guitar using Deep Learning to Improve Extended-Range and Percussive Solo Playing

Mary Pilataki-Manika
Polyphonic Music Transcription using Deep Learning
in collaboration with Apple

Saurjya Sarkar
New perspectives in instrument-based audio source separation

Pedro Sarmento
Guitar-Oriented Neural Music Generation in Symbolic Format
in collaboration with Holonic Systems Oy.

Elona Shatri
Optical music recognition using deep learning
in collaboration with Steinberg Media Technologies GmbH

Cyrus Vahidi
Perceptual end-to-end learning for music understanding
in collaboration with MUSIC Tribe Brands UK Limited

Second AIM cohort students (2020-2025):

PhD Student - Project title

Corey Ford
Artificial Intelligence for Supporting Musical Creativity and Engagement in Child-Computer Interaction

Max Graf
AI-Based Musical Co-Creation in Extended Realities: PERFORM-AI

Madeline Hamilton
Improving AI-generated Music with Pleasure Models

Benjamin Hayes
Perceptually motivated deep learning approaches to creative sound synthesis

Jiawen Huang
Lyrics Alignment For Polyphonic Music

Harnick Khera
Informed source separation for multi-mic production
in collaboration with BBC

Yin-Jyun Luo
Industry-scale Machine Listening for Music and Audio Data
in collaboration with Spotify

Luca Marinelli
Gender-coded sound: A multimodal data-driven analysis of gender encoding strategies in sound and music for advertising

Xavier Riley
Digging Deeper - expanding the “Dig That Lick” corpus with new sources and techniques

Eleanor Row
Automatic micro-composition for professional/novice composers using generative models as creativity support tools

Shubhr Singh
Audio Applications of Novel Mathematical Methods in Deep Learning

Christian Steinmetz
Deep learning for high-fidelity audio and music production

Jingjing Tang
End-to-End System Design for Music Style Transfer with Neural Networks

Lewis Wolstanholme
Real-time instrument transformation and augmentation with deep learning

Yixiao Zhang
Machine Learning Methods for Artificial Musicality
in collaboration with Apple

Third AIM cohort students (2021-2026):

PhD Student - Project title

Katarzyna Adamska
Predicting hit songs: multimodal and data-driven approach

Sara Cardinale
Character-based adaptive generative music for film and video games using Deep Learning and Hidden Markov Models

Franco Caspe
AI-assisted FM synthesis for sound design and control mapping

Ruby Crocker
Time-based mood recognition in film music

Carlos De La Vega Martin
Neural Drum Synthesis

Bleiz MacSen Del Sette
The Sound of Care: researching the use of deep learning and sonification for the daily support of people with Chronic Pain

Rodrigo Mauricio Diaz Fernandez
Hybrid Neural Methods for Sound Synthesis

Andrew Edwards
Computational Models for Jazz Piano: Transcription, Analysis, and Generative Modeling

Oluremi Samuel Oladotun Falawo
Embodiment in Intelligent Musical Systems

Mariam Fayaz Torshizi
Music mood modelling using Knowledge graphs and Graph Neural Nets

Yazhou Li
Virtual Placement of Objects in Acoustic Scenes
in collaboration with Sonos

Jackson Loth
Time to vibe together: cloud-based guitar and intelligent agent
in collaboration with Hyvibe

Teresa Pelinski Ramos
Sensor mesh as performance interface
in collaboration with Bela

Soumya Sai Vanka
Smart Channel strip using Neural audio processing
in collaboration with Steinberg

Chris Winnard
Music Interestingness in the Brain

Xiaowan Yi
Composition-aware music recommendation system for music production
in collaboration with Focusrite

Huan Zhang
Computational Modeling of Expressive Piano Performance

Fourth AIM cohort students (2022-2027):

PhD Student - Project title

James Bolt
Intelligent audio and music editing with deep learning

Carey Bunks
Cover Song Identification
in collaboration with Apple

Adam Garrow
A computational model of music cognition using statistical learning of structures

Ashley Noel-Hirst
Latent Spaces for Human-AI music generation

Alexander Williams
User-driven deep music generation in digital audio workstations
in collaboration with Sony

Yinghao Ma
Self-supervision in machine listening
in collaboration with Bytedance

Jordan Shier
Real-time timbral mapping for synthesized percussive performance
in collaboration with Ableton

David Südholt
Machine learning of physical models for voice synthesis
in collaboration with Nemisindo

Tyler McIntosh
Expressive Performance Rendering for Music Generation Systems
in collaboration with DAACI

Christopher Mitcheltree
Deep Learning for Time-varying Audio and Parameter Modulations

Ioannis Vasilakis
Active learning for interactive music transcription

Chin-Yun Yu
Neural audio synthesis with expressiveness control

Ningzhi Wang
Generative models for music audio representation and understanding
in collaboration with Spotify

Aditya Bhattacharjee
Self-supervision in Audio Fingerprinting

Fifth AIM cohort students (2023-2028):

PhD Student - Project title

Bradley Aldous
Advancing music generation via accelerated deep learning

Keshav Bhandari
Neuro-Symbolic Automated Music Composition

Louis Bradshaw
Neuro-symbolic music models

Julien Guinot
Beyond Supervised Learning for Musical Audio
in collaboration with Universal Music Group

Zixun (Nicolas) Guo
Tonality-Aware Music Understanding: Modeling Complex Tonal Harmony

Adam He
Neuro-evolved Heuristics for Meta-composition
in collaboration with DAACI

Gregor Meehan
Representation learning for musical audio using graph neural network-based recommender engines

Marco Pasini
Fast and Controllable Music Generation
in collaboration with Sony SCL

Christos Plachouras
Deep learning for low-resource music

Haokun Tian
Timbre Tools for the Digital Instrument Maker

Qing Wang
Multi-modal Learning for Music Understanding

Yifan Xie
Film score composer AI assistant: generating expressive mockups
in collaboration with Spitfire Audio

Ece Yurdakul
Emotion-based Personalised Music Recommendation
in collaboration with Deezer

Farida Yusuf
Information-theoretic neural networks for online perception of auditory objects

Qiaoxi Zhang
Multimodal AI for musical collaboration in immersive environments
in collaboration with PatchXR

Shuoyang Zheng
Explainability of AI Music Generation