AIM at Sónar+D 2025

AIM at Sónar+D 2025 Project Area. Photos taken by Anna Xambó and Shuoyang Zheng

Sónar is a pioneering festival that’s reflected the evolution and expansion of electronic music and digital culture since its first edition in 1994. The interactive exhibition space, Project Area at Sónar+D, showcases state-of-the-art technology, innovative design, radical thinking, and cutting-edge research side-by-side in the heart of the music festival Sónar by Day.

At the Sónar+D Project Area, AIM members Shuoyang Zheng and Franco Caspe joined the AI & Music exhibition area powered by S+T+ARTS to present their innovative tools for AI-driven sound creation. In addition, AIM member Christopher Mitcheltree represented Neutone, presenting its cutting-edge audio plugins.

Franco Caspe presented BRAVE, a timbre transfer tool that lets performers play an AI model as an instrument, transforming timbre in real time. Shuoyang Zheng presented Latent Terrain Synthesis, an innovative method for exploring sonic landscapes dissected from the latent space of a generative AI model.

Photo by Xavi Bové

The AI Performance Playground took place from 11 to 14 June as part of Sónar+D 2025, co-organised by C4DM Senior Lecturer Anna Xambó, powered by S+T+ARTS, and supported by La Salle-URL. This collaborative hacklab brought together artists, coders, musicians, DIY creators, and creative technologists to explore and deepen their use of machine learning, AI, and related technologies for musical performance. AIM member Teresa Pelinski participated in the hacklab and joined a collaborative performance at SonarÀgora, open to the general public at Sónar by Day.

AIM members Christopher Mitcheltree and Shuoyang Zheng, together with Rebecca Fiebrink (University of the Arts London) and Nao Tokui (Neutone), joined a panel talk during the hacklab with Ben Cantil (DataMind) to discuss the challenges and opportunities of being an artist working with AI tools.


CfP: First AES International Conference on AI and Machine Learning for Audio (AIMLA 2025)

AIMLA 2025 poster: First AES International Conference on Artificial Intelligence and Machine Learning for Audio (AIMLA 2025), Queen Mary University of London, 8-10 September 2025. Call for contributions.

The Audio Engineering Society and the Centre for Digital Music invite audio researchers and practitioners from academia and industry to participate in the first AES conference dedicated to artificial intelligence and machine learning as they apply to audio. This three-day event aims to bring the community together, educate, demonstrate, and advance the state of the art. It will feature keynote speakers, workshops, tutorials, challenges, and cutting-edge peer-reviewed research.

The scope is wide: attendance is expected from all types of institutions, including academia, industry, and pure research, bringing diverse disciplinary perspectives tied together by a focus on artificial intelligence and machine learning for audio.

For more information on the Calls for Papers, Special Sessions, Tutorials, and Challenges, please visit the conference website.

Three AIM PhD students are part of the organising committee, with Soumya Sai Vanka and Franco Caspe serving as Special Sessions Co-Chairs, and Farida Yusuf serving as Sponsorship Chair.


AIM at NeurIPS 2023

From 10 to 16 December, several AIM researchers will participate in the Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS 2023), taking place in New Orleans, USA. NeurIPS is a world-leading conference on AI, machine learning, and computational neuroscience, attracting more than ten thousand attendees annually. The AI and Music Centre for Doctoral Training will have a strong presence at NeurIPS 2023.

In the Main Conference, specifically its Datasets and Benchmarks track, the following paper is authored by AIM members:

  • MARBLE: Music Audio Representation Benchmark for Universal Evaluation (Ruibin Yuan, Yinghao Ma, Yizhi Li, Ge Zhang, Xingran Chen, Hanzhi Yin, Le Zhuo, Yiqi Liu, Jiawen Huang, Zeyue Tian, Binyue Deng, Ningzhi Wang, Chenghua Lin, Emmanouil Benetos, Anton Ragni, Norbert Gyenge, Roger Dannenberg, Wenhu Chen, Gus Xia, Wei Xue, Si Liu, Shi Wang, Ruibo Liu, Yike Guo, Jie Fu)

In the NeurIPS Machine Learning for Audio Workshop:

  • AIM PhD student Ben Hayes is giving an invited talk on Differentiable digital signal processing (DDSP).

And the following papers are authored by AIM members:

See you at NeurIPS!