Archives: 1st May 2026

Call for Papers: TISMIR Special Collection on Language-Centric Music Information Retrieval

We are pleased to announce a Call for Papers for a new Special Collection in the Transactions of the International Society for Music Information Retrieval (TISMIR) titled: “Language-Centric Music Information Retrieval”.

This special collection focuses on Music Information Retrieval (MIR) research informed by language-centered modeling. We invite contributions that explore how concepts and methods from Natural Language Processing (NLP) and large-scale language models can support the analysis, representation, retrieval, and generation of music.

Topics of interest include (but are not limited to):
– Tokenization and representations for symbolic music and audio
– NLP for music-related text (lyrics, metadata, reviews, etc.)
– Language-informed tagging, classification, and semantic understanding
– Retrieval and recommendation, including query-by-description and conversational search
– Music generation and co-creation, including text-conditioned generation and iterative editing workflows
– Language-guided audio and music production, such as mixing, mastering, and sound design
– Knowledge resources for MIR, including ontologies, knowledge graphs, and entity linking
– Evaluation and human factors, including quality assessment, human feedback, creativity, bias, and cultural representation
– Trust, ethics, and transparency, including synthetic content detection and copyright-related considerations
– Long-context modeling of musical structure and form
– Multimodal methods involving text, symbolic music, and audio (as relevant to the collection’s focus)
 
Guest Editors:
– Anna Kruspe (Lead Editor), Munich University of Applied Sciences
– SeungHeon Doh, KAIST
– Elena Epure, Idiap Research Institute
– Yinghao Ma, Queen Mary University of London
– Arthur Flexer, Johannes Kepler Universität Linz
– Li Su, Institute of Information Science
– Ruibin Yuan, Hong Kong University of Science and Technology

Submission Guidelines:
– Submission Link: https://transactions.ismir.net
– Note: Please specify in your cover letter that the submission is for the Special Collection “Language-Centric Music Information Retrieval”.
– Word Limit: Maximum 8,000 words.
– Pre-notification: If you plan to submit, please let us know via email at anna.kruspe@hm.edu to assist our planning.

For detailed formatting guidelines and information regarding extensions of previously published workshop research, please refer to the TISMIR website. We look forward to receiving your innovative contributions!

Best regards,
On behalf of the Guest Editors

AIM at ICASSP 2026

On 4-8 May 2026, several AIM researchers will participate in the 2026 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2026). ICASSP is the leading conference in the field of signal processing and the flagship event of the IEEE Signal Processing Society.

As in previous years, AIM will have a strong presence at the conference, both in terms of numbers and overall impact. The papers below, authored or co-authored by AIM members, will be presented at the main ICASSP 2026 track:

See you in Barcelona!


AIM at ICLR 2026

On 23-27 April 2026, AIM researchers will participate in the Fourteenth International Conference on Learning Representations (ICLR 2026), taking place in Rio de Janeiro, Brazil. ICLR is the premier gathering of professionals dedicated to the advancement of the branch of artificial intelligence called representation learning, generally referred to as deep learning.

AIM members will be presenting the following papers at the main track of ICLR 2026:

  • SCRAPL: scattering transform with random paths for machine learning, by Christopher Mitcheltree, Vincent Lostanlen, Emmanouil Benetos, Mathieu Lagrange
  • OmniVideoBench: towards audio-visual understanding evaluation for omni MLLMs, by Caorui Li, Yu Chen, Yiyan Ji, Jin Xu, Zhenyu Cui, Shihao Li, Yuanxing Zhang, Zhenghao Song, Dingling Zhang, Heying, Haoxiang Liu, Yuxuan Wang, Qiufeng Wang, Jiafu Tang, Zhenhe Wu, Jiehui Luo, Zhiyu Pan, Weihao Xie, Chenchen Zhang, Zhaohui Wang, Jiayi Tian, Yanghai Wang, Zhe Cao, Minxin Dai, ke wang, Runzhe Wen, Yinghao Ma, Yaning Pan, Sungkyun Chang, Termeh Taheri, Haiwen Xia, Christos Plachouras, Emmanouil Benetos, Yizhi Li, Ge Zhang, Jian Yang, Tianhao Peng, Zili Wang, Minghao Liu, Junran Peng, Zhaoxiang Zhang, Jiaheng Liu
  • YuE: scaling open foundation models for long-form music generation, by Ruibin Yuan, Hanfeng Lin, Shuyue Guo, Ge Zhang, Jiahao Pan, Yongyi Zang, Haohe Liu, Yiming Liang, Wenye Ma, Xingjian Du, Xeron Du, Zhen Ye, Tianyu Zheng, Zhengxuan Jiang, Yinghao Ma, Minghao Liu, Zeyue Tian, Ziya Zhou, Liumeng Xue, Xingwei Qu, Yizhi Li, Shangda Wu, Tianhao Shen, Ziyang Ma, Jun Zhan, Chunhui Wang, Yatian Wang, Xiaowei Chi, Xinyue Zhang, Zhenzhu Yang, Xiangzhou Wang, Shansong Liu, Lingrui Mei, Peng Li, Junjie Wang, Jianwei Yu, Guojian Pang, Xu Li, Zihao Wang, Xiaohuan Zhou, Lijun Yu, Emmanouil Benetos, Yong Chen, Chenghua Lin, Xie Chen, Gus Xia, Zhaoxiang Zhang, Chao Zhang, Wenhu Chen, Xinyu Zhou, Xipeng Qiu, Roger Dannenberg, Jiaheng Liu, Jian Yang, Wenhao Huang, Wei Xue, Xu Tan, Yike Guo

See you all at ICLR!