Call for Tutorials: AES International Conference on AI and Machine Learning for Audio (AIMLA 2025)

The AES International Conference on Artificial Intelligence and Machine Learning for Audio (AIMLA 2025), hosted at the Centre for Digital Music of Queen Mary University of London and taking place on Sept. 8-10, 2025, is calling for Tutorial submissions.

We are seeking proposals for 120-minute hands-on tutorials on the conference topics. Each proposal should include a title, an abstract (60-120 words), a list of topics, and a description (up to 500 words). Submissions should also include the presenters’ names and qualifications, and any technical requirements (for example, sound requirements during the presentation, such as stereo or multichannel playback). We encourage tutorials to be accompanied by a well-organised collection of the material covered and supporting code, to aid learning and build lasting resources for the topic. The deadline for tutorial proposals is October 25, 2024, and the tutorial proposal submission portal can be found at: https://easychair.org/conferences/?conf=2025aesaimlapaneltut

For more information on the Calls for Papers, Special Sessions, Tutorials, and Challenges, please visit the conference website: https://aes2.org/events-calendar/2025-aes-international-conference-on-artificial-intelligence-and-machine-learning-for-audio/


Call for Challenges: AES International Conference on AI and Machine Learning for Audio (AIMLA 2025)

The AES International Conference on Artificial Intelligence and Machine Learning for Audio (AIMLA 2025), hosted at the Centre for Digital Music of Queen Mary University of London and taking place on Sept. 8-10, 2025, is calling for Challenge proposals.

The conference promotes knowledge sharing among researchers, professionals, and engineers in AI and audio. Special Sessions include pre-conference challenges hosted by industry or academic teams to drive technology improvements and explore new research directions. Each team manages the organization, data provision, participation instructions, mentoring, scoring, summaries, and results presentation.

Challenges are selected based on their scientific and technological significance, data quality and relevance, and proposal feasibility. Collaborative proposals from different labs are encouraged and prioritized. We expect an initial expression of interest via email to special-sessions-aimla@qmul.ac.uk by October 15, 2024, followed by a full submission on EasyChair by the final submission deadline.

For more information on the Calls for Papers, Special Sessions, Tutorials, and Challenges, please visit the conference website: https://aes2.org/events-calendar/2025-aes-international-conference-on-artificial-intelligence-and-machine-learning-for-audio/


AIM at Interspeech 2024

From 31 August to 5 September, AIM PhD students will participate in Interspeech 2024 and its satellite events. Interspeech is the premier international conference for research on the science and technology of spoken language processing.

Chin-Yun Yu will present his paper “Differentiable Time-Varying Linear Prediction in the Context of End-to-End Analysis-by-Synthesis”. The paper introduces improvements to the GOLF voice synthesizer: it filters noise and harmonics jointly with a single LP (Linear Prediction) filter, resembling a classic source-filter model, and it replaces frame-wise approximation with sample-by-sample LP processing, implemented efficiently in C++ and CUDA. These modifications result in smoother spectral envelopes, reduced artefacts, and improved performance in listening tests compared to other baselines. More information can be found here.
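For readers unfamiliar with the source-filter view, the sketch below illustrates what sample-by-sample time-varying LP synthesis looks like in plain NumPy: each output sample is the excitation (e.g. harmonics plus noise) minus a weighted sum of past outputs, with the LP coefficients allowed to change at every sample rather than once per frame. The function name and array shapes here are illustrative assumptions, not taken from the paper’s code; the actual implementation is differentiable and written in C++ and CUDA.

```python
# Minimal NumPy sketch of sample-by-sample time-varying LP (all-pole) synthesis.
# Names and shapes are illustrative; this is not the paper's implementation.
import numpy as np

def tv_lp_synthesis(excitation: np.ndarray, lp_coeffs: np.ndarray) -> np.ndarray:
    """Filter an excitation signal with a time-varying all-pole LP filter.

    excitation: shape (T,) source signal (e.g. harmonics plus noise).
    lp_coeffs:  shape (T, P) coefficients a_1..a_P for every sample, so the
                filter can change smoothly from one sample to the next.
    Returns y with y[t] = e[t] - sum_k a_k[t] * y[t-k].
    """
    T, P = lp_coeffs.shape
    y = np.zeros(T)
    for t in range(T):
        k = min(P, t)                      # number of available past samples
        past = y[t - k:t][::-1]            # y[t-1], y[t-2], ..., most recent first
        y[t] = excitation[t] - np.dot(lp_coeffs[t, :k], past)
    return y

# Toy usage: white-noise excitation through a slowly varying one-pole filter.
rng = np.random.default_rng(0)
T = 16000
e = rng.standard_normal(T)
a = np.linspace(-0.95, -0.5, T).reshape(T, 1)   # one coefficient per sample
y = tv_lp_synthesis(e, a)
```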

Farida Yusuf is part of the programme committee for the Young Female Researchers in Speech Workshop (YFRSW). The workshop is designed for Bachelor’s and Master’s students currently engaged in speech science and technology research, aiming to promote interest in the field among those who haven’t yet committed to pursuing a PhD. It features panel discussions, student poster presentations, and mentoring sessions, providing participants with opportunities to showcase their research and engage with PhD students and senior researchers in the field.

See you at Interspeech!