Our current supervisors (in alphabetical order):
Mathieu Barthet
Emmanouil Benetos
Nick Bryan-Kinns
Simon Colton
Simon Dixon
George Fazekas
Andrew McPherson
Josh Reiss
Mark Sandler


Mathieu Barthet

Mathieu Barthet (PhD) is a Lecturer in Digital Media at Queen Mary University of London (QMUL) and oversees Industry Partnerships for the Centre for Doctoral Training in AI & Music. He also acts as Programme Coordinator of the MSc in Media and Arts Technology by Research (http://www.mat.qmul.ac.uk/) and Technical Director of the qMedia Studios. He received MSc degrees in Electronics and Computer Science (Paris VI University/Ecole Polytechnique de Montréal, 2003) and in Acoustics (Aix-Marseille II University/Ecole Centrale Marseille, 2004). He was awarded a PhD in Acoustics, Signal Processing and Computer Science applied to Music from Aix-Marseille II University and CNRS-LMA in 2008, and joined the Centre for Digital Music at QMUL in 2009. He has published over 100 academic papers in the fields of Music Information Research, New Interfaces for Musical Expression and Music Perception. He was Co-Investigator of the EU H2020 project ‘The Audio Commons Initiative’ (http://www.audiocommons.org/) and Principal Investigator of the EU H2020 project ‘Towards the Internet of Musical Things’ (http://www.iomut.eu/). He was General Chair of the Computer Music Modeling and Retrieval symposium in 2012 and Program and Paper Chair of the Audio Mostly conference in 2017. He is a Guest Editor of the Journal of the Audio Engineering Society. His research on interactive musical experience has led to performances at venues and events such as the Barbican Arts Centre, Wilton’s Music Hall, Strasbourg Cathedral, CHI, ACII, CMMR, and the BBC’s ‘Sound: Now and Next’ conference. He is a guitar player and has performed with the (Im)Possibilities ensemble (Guildhall School of Music & Drama) at EFG London Jazz Festivals.



Emmanouil Benetos

Research interests: Machine listening, music information retrieval, computational sound scene analysis, machine learning for audio analysis, language models for music and audio, computational musicology

Emmanouil Benetos is a Senior Lecturer and Royal Academy of Engineering Research Fellow at the School of Electronic Engineering and Computer Science of Queen Mary University of London (QMUL), and a Turing Fellow at The Alan Turing Institute, UK. His research interests include signal processing and machine learning methods for audio analysis, as well as applications of these methods to music information retrieval, environmental sound analysis, and computational musicology; he has authored or co-authored over 100 papers in these fields. He has been principal investigator and co-investigator for several audio-related research projects funded by the EPSRC, AHRC, RAEng, European Commission, SFI, and industry. Website: http://www.eecs.qmul.ac.uk/~emmanouilb/



Nick Bryan-Kinns

Research interests in key areas of AI and music: AI and human co-creativity, AI and human music improvisation and live performance, AI applied to understanding human creativity.

Nick Bryan-Kinns’ research explores new approaches to the creation of interactive technologies for the Creative Industries through Interaction Design techniques. It impacts the areas of Human-Computer Interaction, Creative Computing, and Design. Nick’s research has made audio engineering more accessible and inclusive, championed the design of sustainable and ethical IoT and wearables, and engaged rural and urban communities with physical computing through craft and cultural heritage.

Nick has led major projects which contribute to national and EU research priorities by applying Interaction Design techniques to explore creative practice and address societal challenges. Products of his research have been exhibited internationally and reported in public media outlets including New Scientist and the BBC World Service.

For more details, see: http://eecs.qmul.ac.uk/~nickbk/



Simon Colton

Simon Colton is a Professor of Computational Creativity, AI and Games in EECS at Queen Mary University of London, and also holds a part-time Professorship at SensiLab, Monash University. He was previously an EPSRC Leadership Fellow at Imperial College and Goldsmiths College, and held an ERA Chair at Falmouth University. He is an AI researcher with around 200 publications; his work has won national and international awards, and he has led numerous EPSRC and EU-funded projects. He focuses specifically on questions of Computational Creativity, where researchers study how to engineer systems to take on creative responsibilities in arts and science projects. Prof. Colton has built and experimented with systems such as the HR automated software engineer, The Painting Fool, The WhatIf Machine and the Wevva iOS app, applying these systems creatively to graphic design, the visual arts, fictional ideation, mathematical discovery and videogame design. This has enabled him to take a holistic view of the field, and he has written about the philosophy of Computational Creativity and led numerous public engagement projects. He is currently branching out into generative music, while still focusing on automated code generation and the building of casual creators for game design and the visual arts.



Simon Dixon

Prof. Simon Dixon (PI) is Director of Graduate Studies (since 2013) in EECS, a member of QMUL’s Research Degrees and Programmes Examination Board (2015-2018), and Deputy Director of C4DM. He coordinates the MSCA Innovative Training Network “New Frontiers in Music Information Processing”, is PI for the Trans-Atlantic Programme Digging into Data Challenge project “Dig that lick: Analysing large-scale data for melodic patterns in jazz performances” (2017-19), and has led a range of other RCUK, EU, Innovate UK, JISC, and industry-funded projects. He was President of ISMIR, the International Society for Music Information Retrieval (2014-15), is the founding Editor of the Transactions of ISMIR, and is a member of the EPSRC Peer Review College and the Editorial Board of the Journal of New Music Research. He was Programme Chair for ISMIR 2007 and 2013, and General Co-chair of the 2011 Dagstuhl Seminar on Multimodal Music Processing and the CMMR 2012 conference.

Research Interests: Simon Dixon’s research spans music informatics (music information retrieval), computer music, music signal processing, digital audio, artificial intelligence and music cognition. In recent years he has mainly worked on the extraction of musical content from audio signals, with a focus on rhythm, harmony and intonation. For example, he has worked on beat tracking, audio alignment, chord and note transcription, and singing intonation, using signal processing approaches, probabilistic models, and deep learning.



George Fazekas

Dr George Fazekas is a Senior Lecturer at QMUL in the Centre for Digital Music. He is a co-investigator of the AIM CDT, responsible for researcher development training and the taught curriculum. His research focuses on semantic audio technologies at the confluence of audio signal processing, machine learning and ontology/knowledge-based reasoning, as well as their applications to creative music production, the Semantic Web and music data science. He has published over 130 papers in the fields of Music Information Retrieval, Semantic Audio, Semantic Web and Deep Learning, including an award-winning ISMIR paper on transfer learning. He was QMUL’s Principal Investigator on the €2.9M EU/H2020-funded AudioCommons project, developing technologies to facilitate the use of Creative Commons audio in music, media and games production. He was Co-Investigator of three Semantic Media-funded projects looking at, for instance, the use of semantic audio technologies in intelligent music production; co-author of the £5.2M ‘FAST IMPACt’ (Fusing Audio and Semantic Technologies for Intelligent Music Production and Consumption) Programme Grant; and worked on the £245k Making Musical Mood Metadata project (Innovate UK), collaborating with BBC R&D to create mood-based music recommendation systems.

His work has been demonstrated at large public events including BBC Sound: Now and Next, Digital Shoreditch 2015 and the SONAR+D 2016 music festival in Barcelona, Spain. He ran trials of mood-driven musical improvisation at the exhibitronic#2 festival in France (2012) and at two events at the Barbican Centre in 2014, with over 150 and 70 active participants respectively and hundreds more in the audience. He was Chair of the ISWC SAAM 2018 and IEEE ISAI 2018 workshops, General Chair of the international Audio Mostly conference (in cooperation with ACM SIGCHI, 2017), Papers Co-chair of the AES 53rd International Conference on Semantic Audio (2014), guest editor of two Journal of the Audio Engineering Society (JAES) issues, and guest editor for Applied Sciences and the Journal of Web Semantics. He is a member of the IEEE, ACM and AES, serving on the Technical Committee on Semantic Audio Analysis (TC-SAA) as well as the Audio Mostly steering committee. More information about his research, projects and teaching can be found at: http://semanticaudio.net



Andrew McPherson

Andrew McPherson is a Reader in Digital Media in the Centre for Digital Music. A composer (PhD, University of Pennsylvania) and electronic engineer (MEng, MIT), he leads the Augmented Instruments Laboratory (http://instrumentslab.org), a research team working on digital musical instruments, embodied interaction and embedded hardware systems. He holds an EPSRC Fellowship (“Design for virtuosity”) exploring the creation of new instruments that repurpose the existing expert skills of trained performers. Commercial spinouts from his lab include Bela (http://bela.io), an open-source embedded hardware platform for creating interactive audio systems, and TouchKeys (http://touchkeys.co.uk), an augmented keyboard that senses finger position on the key surfaces to enable novel forms of musical control.



Josh Reiss

Josh Reiss is a Professor of Audio Engineering with the Centre for Digital Music at Queen Mary University of London. He has published more than 200 scientific papers (including over 50 in premier journals, with 5 best paper awards), and co-authored the book Intelligent Music Production and the textbook Audio Effects: Theory, Implementation and Application. His research has been featured in dozens of original articles and interviews, including in Scientific American, New Scientist, the Guardian, Forbes, La Presse, and on BBC Radio 4, the BBC World Service, Channel 4, Radio Deutsche Welle, LBC and ITN, among others.

He is a former Governor of the Audio Engineering Society (AES), chair of its Publications Policy Committee, and co-chair of its Technical Committee on High-resolution Audio. His Royal Academy of Engineering Enterprise Fellowship resulted in the founding of the high-tech spin-out company LandR, which currently has over 1.5 million subscribers and is valued at over £30M. He also recently co-founded FXive, a company delivering procedurally generated sound effects.

He has investigated psychoacoustics, sound synthesis, multichannel signal processing, intelligent music production, and digital audio effects. His primary research focus, which ties together many of these topics, is the use of state-of-the-art signal processing techniques for sound engineering and design. He maintains a popular blog, YouTube channel and Twitter feed for scientific education and the dissemination of his research.



Mark Sandler

Mark Sandler is Professor of Signal Processing at Queen Mary University of London and Director of the Centre for Digital Music. He gained a BSc (Hons) in Electronic Engineering from the University of Essex in 1978, followed by a PhD, also from Essex, in 1983, on digital audio power amplification.

Research Interests: Digital Signal Processing, especially for Audio and Music; Digital Music; Semantic Web for Music and Audio; Digital Power Amplifiers; Fine Grain Scalable Audio Coding; Audio Segmentation; Music and Audio Ontologies; Audio Features; Semantic Audio; Semantic Media.