Our current supervisors’ list for the 2019/20, 2020/21, 2021/22, 2022/23 and 2023/24 academic years includes (in alphabetical order):

Supervisor
Mathieu Barthet
Emmanouil Benetos
Nick Bryan-Kinns
Simon Colton
Simon Dixon
George Fazekas
Ekaterina Ivanova
Andrew McPherson
Johan Pauwels
Marcus Pearce
Huy Phan
Josh Reiss
Mark Sandler
Charalampos Saitis
Ahmed Sayed
Tony Stockman
Lin Wang
Geraint Wiggins
Anna Xambó Sedó
Shanxin Yuan


Mathieu Barthet

Mathieu Barthet (PhD) is a Lecturer in Digital Media at Queen Mary University of London (QMUL) and oversees the Industry Partnerships for the Centre for Doctoral Training in AI & Music. He also acts as Programme Coordinator of the MSc in Media and Arts Technology by Research (http://www.mat.qmul.ac.uk/) and Technical Director of the qMedia Studios. He received MSc degrees in Electronics and Computer Science (Paris VI University/Ecole Polytechnique de Montréal, 2003) and in Acoustics (Aix-Marseille II University/Ecole Centrale Marseille, 2004). He was awarded a PhD in Acoustics, Signal Processing and Computer Science applied to Music from Aix-Marseille II University and CNRS-LMA in 2008, and joined the Centre for Digital Music at QMUL in 2009. He has published over 100 academic papers in the fields of Music Information Research, New Interfaces for Musical Expression and Music Perception. He was Co-Investigator of the EU H2020 project ‘The Audio Commons Initiative’ (http://www.audiocommons.org/) and Principal Investigator of the EU H2020 project ‘Towards the Internet of Musical Things’ (http://www.iomut.eu/). He was General Chair of the Computer Music Modeling and Retrieval symposium in 2012 and Program and Paper Chair of the Audio Mostly conference in 2017. He is a Guest Editor of the Journal of the Audio Engineering Society. His research on interactive musical experience has led to performances in venues such as the Barbican Arts Centre, Wilton’s Music Hall, Strasbourg Cathedral, CHI, ACII, CMMR, and BBC’s ‘Sound: Now and Next’ conference. He is a guitar player and has performed with the (Im)Possibilities ensemble (Guildhall School of Music & Drama) at EFG London Jazz Festivals.



Emmanouil Benetos

Research interests: Machine listening, music information retrieval, computational sound scene analysis, machine learning for audio analysis, language models for music and audio, computational musicology

Emmanouil Benetos is Reader in Machine Listening at the School of Electronic Engineering and Computer Science of Queen Mary University of London (QMUL) and Turing Fellow at The Alan Turing Institute, UK. His research interests include signal processing and machine learning methods for audio analysis, as well as applications of these methods to music information retrieval, environmental sound analysis, and computational musicology; he has authored or co-authored over 150 papers in these fields. He has been principal investigator and co-investigator for several audio-related research projects funded by the EPSRC, AHRC, Royal Academy of Engineering, Royal Society, European Commission, and industry. Website: http://www.eecs.qmul.ac.uk/~emmanouilb/



Nick Bryan-Kinns

Research interests in key areas of AI and music: AI and human co-creativity, AI and human music improvisation and live performance, AI applied to understanding human creativity.

Nick Bryan-Kinns’ research explores new approaches to the creation of interactive technologies for the Creative Industries through Interaction Design techniques. It impacts the areas of Human-Computer Interaction, Creative Computing, and Design. Nick’s research has made audio engineering more accessible and inclusive, championed the design of sustainable and ethical IoT and wearables, and engaged rural and urban communities with physical computing through craft and cultural heritage.

Nick has led major projects which contribute to national and EU research priorities by applying Interaction Design techniques to explore creative practice and addressing societal challenges. Products of his research have been exhibited internationally and reported in public media outlets including the New Scientist and the BBC World Service.

For more details, see: http://eecs.qmul.ac.uk/~nickbk/



Simon Colton

Simon Colton is a Professor of Computational Creativity, AI and Games in EECS at Queen Mary University of London, and also holds a part-time Professorship in the SensiLab of Monash University. He was previously an EPSRC Leadership Fellow at Imperial College and Goldsmiths College, and held an ERA Chair at Falmouth University. He is an AI researcher with around 200 publications whose work has won national and international awards, and he has led numerous EPSRC and EU-funded projects. He focuses specifically on questions of Computational Creativity, where researchers study how to engineer systems to take on creative responsibilities in arts and science projects. Prof. Colton has built and experimented with systems such as the HR automated software engineer, The Painting Fool, The WhatIf Machine and the Wevva iOS app, applying these systems creatively to graphic design, the visual arts, fictional ideation, mathematical discovery and videogame design. This has enabled him to take a holistic view of the field, and he has written about the philosophy of Computational Creativity and led numerous public engagement projects. He is currently branching out into generative music, while still focusing on automated code generation and the building of casual creators for game design and the visual arts.



Simon Dixon

Prof. Simon Dixon (PI) is Director of the UKRI Centre for Doctoral Training in Artificial Intelligence and Music and Deputy Director of the Centre for Digital Music (C4DM). He coordinates the MSCA Innovative Training Network “New Frontiers in Music Information Processing”, is PI for the Trans-Atlantic Programme Digging into Data Challenge project “Dig that lick: Analysing large-scale data for melodic patterns in jazz performances” (2017-19), and has led a range of other RCUK, EU, Innovate UK, JISC, and industry-funded projects. He was President of ISMIR, the International Society for Music Information Retrieval (2014-15), is founding Editor of the Transactions of ISMIR, and is a member of the EPSRC Peer Review College and the Editorial Board of the Journal of New Music Research. He was Programme Chair for ISMIR 2007 and 2013, and General Co-chair of the 2011 Dagstuhl Seminar on Multimodal Music Processing and the CMMR 2012 conference.

Research Interests: Simon Dixon’s research interests are in the fields of music informatics (music information retrieval), computer music, music signal processing, digital audio, artificial intelligence and music cognition. In recent years he has mainly worked on the extraction of musical content from audio signals, with a focus on rhythm, harmony and intonation. For example, he has worked on beat tracking, audio alignment, chord and note transcription, and singing intonation, using signal processing approaches, probabilistic models, and deep learning.



George Fazekas

Dr George Fazekas is a Senior Lecturer at QMUL in the Centre for Digital Music. He is a co-investigator of the AIM CDT responsible for researcher development training and the taught curriculum. His research focuses on semantic audio technologies at the confluence of audio signal processing, machine learning and ontology/knowledge-based reasoning, as well as their applications to creative music production, the semantic web and music data science. He has published over 130 papers in the fields of Music Information Retrieval, Semantic Audio, Semantic Web and Deep Learning, including an award-winning ISMIR paper on transfer learning. He was QMUL’s Principal Investigator of the €2.9M EU/H2020 funded AudioCommons project, developing technologies to facilitate the use of Creative Commons audio in music, media and games production. He was co-investigator of three Semantic Media funded projects looking at, for instance, the use of semantic audio technologies in intelligent music production, and co-author of the £5.2M ‘FAST IMPACt’ (Fusing Audio and Semantic Technologies for Intelligent Music Production and Consumption) Programme Grant and the £245k Making Musical Mood Metadata (InnovateUK) project, where he worked with BBC R&D to create mood-based music recommendation systems. His work has been demonstrated at large public events including BBC Sound Now and Next, Digital Shoreditch 2015 and the SONAR+D 2016 music festival in Barcelona, Spain. He has run trials of mood-driven musical improvisation at the exhibitronic#2 festival in France (2012) and two events at the Barbican Centre in 2014 with over 150 and 70 active participants respectively, and hundreds more in the audience. He was Chair of the ISWC SAAM 2018 and IEEE ISAI 2018 workshops, General Chair of the international Audio Mostly conference (in cooperation with ACM SIGCHI, 2017), Papers Co-chair of the AES 53rd Semantic Audio conference (2014), guest editor of two Journal of the Audio Engineering Society (JAES) issues, and guest editor for Applied Sciences and the Journal of Web Semantics. He is a member of the IEEE, ACM and AES, serving on the Semantic Audio Analysis Technical Committee (TC-SAA) as well as the Audio Mostly steering committee. More information about his research, projects and teaching can be found at: http://semanticaudio.net



Ekaterina Ivanova

Research Interests: Developing assistive multimodal robotic systems and communication strategies with a focus on human users, considering and integrating factors from robotics, clinical rehabilitation and neuroscience. Ekaterina’s human-centred robotics approach is based on experimental and data-driven methods to quantify users’ abilities and develop new technologies. She aims to develop user-friendly systems that provide users with optimal perception, learning and performance in various applications.

Ekaterina (Katja) Ivanova is a Lecturer (Assistant Professor) in Human-Computer Interaction. Her main research interest is in multimodal human-robot interaction and haptic communication between agents as part of user-centred robotics. Ekaterina’s long-term research goal is to develop robotic systems for medical applications and solutions for robot-assisted motor learning in diverse fields that are designed with humans and for humans. To achieve this goal, she focuses on human users by considering and integrating factors from robotics, data science, neuroscience, psychology and clinical expertise. In her research, Ekaterina follows an experimental, data-driven approach to developing new technology and quantifying user ability.


 


Andrew McPherson

Andrew McPherson is a Reader in Digital Media in the Centre for Digital Music. A composer (PhD U.Penn.) and electronic engineer (MEng MIT), he leads the Augmented Instruments Laboratory (http://instrumentslab.org), a research team working on digital musical instruments, embodied interaction and embedded hardware systems. He holds an EPSRC Fellowship (“Design for virtuosity”) which explores creating new instruments that repurpose the existing expert skills of trained performers. Commercial spinouts from his lab include Bela (http://bela.io), an open-source embedded hardware platform for creating interactive audio systems, and TouchKeys (http://touchkeys.co.uk), an augmented keyboard using finger position on the key surfaces to enable novel forms of musical control.

 



Johan Pauwels

Johan Pauwels is a Lecturer with the Centre for Digital Music at Queen Mary University of London. He received Master of Science degrees in Electrical/Electronics Engineering (KU Leuven ’06) and Artificial Intelligence (KU Leuven ’07). In 2016, he obtained a PhD from Ghent University on the topic of automatic harmony recognition from audio. He has also held research positions at IRCAM and Imperial College London, and has taught at City, University of London and the University of West London.

Johan’s aim is to make machines understand music to the level of a trained professional, such that new tools can be developed to assist musicians performing live or in the studio, listeners navigating large music collections and learners studying music. To that end, he uses a combination of machine learning, signal processing, data science and music theory. In recent years, he has been working on narrowing the gap between academic research and user-centric applications, web-based music services and the personalisation of spatial and immersive audio.

 



Marcus Pearce

Educated in experimental psychology and artificial intelligence at the Universities of Oxford and Edinburgh, Marcus Pearce received his PhD from City, University of London, before continuing as a post-doctoral research fellow at Goldsmiths and University College London (UCL). He is currently Senior Lecturer in Sound and Music Processing at Queen Mary University of London where he is the leader of the Music Cognition Lab and co-director of the EEG Laboratory. He has published widely on computational, psychological and neuroscientific approaches to music cognition, with a focus on learning the syntax of musical styles, predictive processing of musical structure, and aesthetic experiences of music.




Josh Reiss

Josh Reiss is a Professor of Audio Engineering with the Centre for Digital Music at Queen Mary University of London. He has published more than 200 scientific papers (including over 50 in premier journals and 5 best paper awards), and co-authored the book Intelligent Music Production and the textbook Audio Effects: Theory, Implementation and Application. His research has been featured in dozens of original articles and interviews, including in Scientific American, New Scientist, The Guardian, Forbes and La Presse, and on BBC Radio 4, BBC World Service, Channel 4, Radio Deutsche Welle, LBC and ITN, among others.

He is a former Governor of the Audio Engineering Society (AES), chair of their Publications Policy Committee, and co-chair of the Technical Committee on High-resolution Audio. His Royal Academy of Engineering Enterprise Fellowship resulted in founding the high-tech spin-out company, LandR, which currently has over a million and a half subscribers and is valued at over £30M. He also recently co-founded the company FXive, delivering procedurally generated sound effects.

He has investigated psychoacoustics, sound synthesis, multichannel signal processing, intelligent music production, and digital audio effects. His primary research focus, which ties together many of the above topics, is the use of state-of-the-art signal processing techniques for sound engineering and design. He maintains a popular blog, YouTube channel and Twitter feed for scientific education and dissemination of research activities.



Mark Sandler

Mark Sandler is Professor of Signal Processing at Queen Mary University of London, and is also Director of the Centre for Digital Music. He gained a BSc (Hons) in Electronic Engineering from the University of Essex in 1978, followed by a PhD, also from Essex, in 1983 on digital audio power amplification.

Research Interests: Digital Signal Processing, especially for Audio and Music; Digital Music; Semantic Web for Music and Audio; Digital Power Amplifiers; Fine Grain Scalable Audio Coding; Audio Segmentation; Music and Audio Ontologies; Audio Features; Semantic Audio; Semantic Media.



Charalampos Saitis

Charalampos (Charis) Saitis is Lecturer in Digital Music Processing at the School of Electronic Engineering and Computer Science (EECS) of Queen Mary University of London (QMUL), where he studies audio psychoacoustics and crossmodal perception, and their application to digital music/audio interface design. His research has received funding from the Austrian Science Fund, British Academy and Leverhulme Trust. Within EECS, Charis is primary faculty at the Centre for Digital Music (C4DM) and associate member of the Music Cognition Lab and the Cognitive Science Research Group. He coordinates the C4DM Special Interest Group in Neural Audio Synthesis (SIGNAS), a research forum at the crossroads of timbre perception, sound synthesis, and deep learning, and is co-investigator and deputy director of the QMUL & BBC sponsored CDT in Data-informed Audience-centric Media Engineering (2021–2025). He is also a member of the EECS Equality, Diversity & Inclusion Committee and the QMUL Racial Equality Action Group.

Charis is active in the SIG for Communication and Room Acoustics of the EPSRC funded UK Acoustics Network and an external collaborator of the Analysis, Creation, and Teaching of ORchestration (ACTOR) international partnership led by Prof Stephen McAdams at McGill University.



Ahmed Sayed

Ahmed Sayed is a Lecturer (Assistant Professor) and the Director of the MSc Big Data Science Programme at the School of EECS, Queen Mary University of London, UK. He leads the SAYED Systems Group, which strives to design and build Scalable Adaptive Yet Efficient Distributed systems of the future. He is the Principal Investigator of a grant funded by a UKRI-EPSRC New Investigator Award for Project KUber, in partnership with major industrial players (Nokia Bell Labs, Samsung AI, IBM Research). Ahmed has a PhD in Computer Science and Engineering from the Hong Kong University of Science and Technology (HKUST), where he was advised by Brahim Bensaou. He has held the positions of Senior Researcher at the Future Networks Lab, Huawei Research, Hong Kong, and Research Scientist at the SANDS Lab, KAUST, Saudi Arabia, working with Marco Canini. His early research involved optimizing networked systems to improve the performance of applications in both wireless and data-center networks, and proposing efficient and practical systems for distributed machine learning. His current research focuses on designing and prototyping networked and distributed systems of the future; in particular, he is interested in developing methods and techniques to enhance their performance. He is currently focusing on developing scalable and efficient systems supporting distributed machine learning, especially distributed privacy-preserving ML, also known as federated learning.

 



Tony Stockman

Tony Stockman is a Senior Lecturer, teaching Database Systems, Interaction Design, Semi-structured Data and Advanced Data Modelling. His research interests include Human-Computer Interaction, Auditory Displays and data sonification. He has over 30 years of experience with Assistive Technology as a user, developer and evaluator of new assistive products.

 



Lin Wang

Lin Wang is a Lecturer in Applied Data Science and Signal Processing at QMUL. He is a member of the Centre for Intelligent Sensing (CIS) and the Institute of Coding (IoC), and an associate member of the Machine Listening Lab and the Centre for Advanced Robotics (ARQ). Previously he was a Postdoc at QMUL (2014-17) and at the University of Sussex (2017-18), and was an Alexander von Humboldt Fellow at the University of Oldenburg, Germany (2011-13). He obtained his PhD in Signal Processing from the Dalian University of Technology, China (2010). He has been an Associate Editor of IEEE Access since 2018 and is a Fellow of the Higher Education Academy (FHEA), UK.

Lin’s research focuses on audio and visual signal processing, robotic perception and machine learning. He has developed microphone array techniques for sound enhancement, source localization and blind source separation, as well as audio-visual signal processing techniques for acoustic sensing from flying robots (mini-drones). He has also applied machine learning techniques to human activity and context recognition from wearable sensors (motion, GPS, sound and image).

 



Geraint Wiggins

Geraint is Professor of Computational Creativity at the VUB and at Queen Mary University of London, UK. He was one of the founders of the research field of computational creativity, which provides an alternative approach to the simulation of intelligent and creative human activities on computers. He has worked in creative AI and cognitive science for around 30 years, serving as chair of the SSAISB from 2000 to 2004, and then chair of the Association for Computational Creativity (ACC) from 2007 to 2014; he is now chair of the General Assembly of the ACC. He is editor-in-chief of the new Journal for Computational Creativity, and an editorial board member (or equivalent) of Musicae Scientiae, Music Perception and the Journal of New Music Research. He moved to the VUB in January 2018.


 



Anna Xambó Sedó

Anna’s research contributes to the fields of HCI and sound & music computing (SMC) and has three foci: Technology, Design and Experience.

Anna is currently a Senior Lecturer in Sound and Music Computing at the Centre for Digital Music (C4DM), and the principal investigator (PI) for the AHRC Early Career Research project “Sensing the Forest”.

For four years (2020-2023), Anna was a Senior Lecturer in Music and Audio Technology at the Leicester Media School (LMS), Faculty of Computing, Engineering and Media (CEM) at De Montfort University (DMU), and a member of the Music, Technology and Innovation – Institute of Sonic Creativity (MTI^2). She was also the PI for the EPSRC HDI Network Plus funded project “MIRLCAuto: A Virtual Agent for Music Information Retrieval in Live Coding” (04/2020-10/2021). Since 2019, Anna has been an Associate Fellow of the Higher Education Academy. She was the programme leader of the BSc Digital Music Technology programme (2020-2022) and, from 2019 to 2022, an officer of Women in New Interfaces for Musical Expression (WiNIME).

 



Shanxin Yuan

Shanxin is a Lecturer in Digital Environment at the School of Electronic Engineering and Computer Science, Queen Mary University of London. Previously, he was a Senior Research Scientist in Computer Vision at Huawei Noah’s Ark Lab, London Research Center, UK, where techniques he developed or contributed to were shipped in several products. Shanxin received his PhD from Imperial College London, where he worked on hand pose estimation. His research interests are machine learning and computer vision, particularly 3D digital humans and computational photography. He is an Associate Editor for The Visual Computer journal, and regularly reviews for major computer vision conferences (CVPR, ICCV, ECCV and NeurIPS) and related journals (TPAMI, IJCV and TIP).