Accepted papers

These papers will be included in the scientific program, which will be announced in the coming weeks.

Paper Title | Authors
TriAD: Capturing harmonics with 3D Convolutions | Miguel Perez Fernandez (Universitat Pompeu Fabra; Huawei)*; Holger Kirchhoff (Huawei); Xavier Serra (Universitat Pompeu Fabra)
Data Collection in Music Generation Training Sets: A Critical Analysis | Fabio Morreale (University of Auckland)*; Megha Sharma (University of Tokyo); I-Chieh Wei (University of Auckland)
A Review of Validity and its Relationship to Music Information Research | Bob L. T. Sturm (KTH Royal Institute of Technology); Arthur Flexer (Johannes Kepler University Linz)*
Segmentation and Analysis of Taniavartanam in Carnatic Music Concerts | Gowriprasad R (IIT Madras)*; Srikrishnan Sridharan (Carnatic Percussionist); R Aravind (Indian Institute of Technology Madras); Hema A Murthy (IIT Madras)
SingStyle111: A Multilingual Singing Dataset With Style Transfer | Shuqi Dai (Carnegie Mellon University)*; Siqi Chen (University of Southern California); Yuxuan Wu (Carnegie Mellon University); Roy Huang (Carnegie Mellon University); Roger B. Dannenberg (School of Computer Science, Carnegie Mellon University)
TapTamDrum: A Dataset for Dualized Drum Patterns | Behzad Haki (Universitat Pompeu Fabra)*; Błażej Kotowski (MTG); Cheuk Lun Isaac Lee (Universitat Pompeu Fabra); Sergi Jordà (Universitat Pompeu Fabra)
Collaborative Song Dataset (CoSoD): An annotated dataset of multi-artist collaborations in popular music | Michèle Duguay (Harvard University)*; Kate Mancey (Harvard University); Johanna Devaney (Brooklyn College)
Efficient Notation Assembly in Optical Music Recognition | Carlos Penarrubia (University of Alicante); Carlos Garrido-Munoz (University of Alicante); Jose J. Valero-Mas (Universitat Pompeu Fabra); Jorge Calvo-Zaragoza (University of Alicante)*
Impact of time and note duration tokenizations on deep learning symbolic music modeling | Nathan Fradet (LIP6 - Sorbonne University)*; Nicolas Gutowski (University of Angers); Fabien Chhel (Groupe ESEO); Jean-Pierre Briot (CNRS)
Chromatic Chords in Theory and Practice | Mark R H Gotham (Durham)*
A Few-shot Neural Approach for Layout Analysis of Music Score Images | Francisco J. Castellanos (University of Alicante)*; Antonio Javier Gallego (Universidad de Alicante); Ichiro Fujinaga (McGill University)
Real-time Percussive Technique Recognition and Embedding Learning for the Acoustic Guitar | Andrea Martelloni (Queen Mary University of London)*; Andrew McPherson (QMUL); Mathieu Barthet (Queen Mary University of London)
Polyffusion: A Diffusion Model for Polyphonic Score Generation with Internal and External Controls | Lejun Min (Shanghai Jiao Tong University)*; Junyan Jiang (New York University Shanghai); Gus Xia (New York University Shanghai); Jingwei Zhao (National University of Singapore)
Introducing DiMCAT for processing and analyzing notated music on a very large scale | Johannes Hentschel (École Polytechnique Fédérale de Lausanne)*; Andrew McLeod (Fraunhofer IDMT); Yannis Rammos (EPFL); Martin A Rohrmeier (Ecole Polytechnique Fédérale de Lausanne)
Symbolic Music Representations for Classification Tasks: A Systematic Evaluation | Huan Zhang (Queen Mary University of London)*; Emmanouil Karystinaios (Johannes Kepler University); Simon Dixon (Queen Mary University of London); Gerhard Widmer (Johannes Kepler University); Carlos Eduardo Cancino-Chacón (Johannes Kepler University Linz)
A Dataset and Baselines for Measuring and Predicting the Music Piece Memorability | Li-Yang Tseng (National Yang Ming Chiao Tung University); Tzu-Ling Lin (National Yang Ming Chiao Tung University); Hong-Han Shuai (National Yang Ming Chiao Tung University)*; Jen-Wei Huang (NYCU); Wen-Whei Chang (National Yang Ming Chiao Tung University)
Human-AI Music Creation: Understanding the Perceptions and Experiences of Music Creators for Ethical and Productive Collaboration | Michele Newman (University of Washington)*; Lidia J Morris (University of Washington); Jin Ha Lee (University of Washington)
White Box Search over Audio Synthesizer Parameters | Yuting Yang (Princeton University)*; Zeyu Jin (Adobe Research); Adam Finkelstein (Princeton University); Connelly Barnes (Adobe Research)
Decoding drums, instrumentals, vocals, and mixed sources in music using human brain activity with fMRI | Vincent K.M. Cheung (Sony Computer Science Laboratories, Inc.)*; Lana Okuma (RIKEN); Kazuhisa Shibata (RIKEN); Kosetsu Tsukuda (National Institute of Advanced Industrial Science and Technology (AIST)); Masataka Goto (National Institute of Advanced Industrial Science and Technology (AIST)); Shinichi Furuya (Sony Computer Science Laboratories Inc.)
Exploring the correspondence of melodic contour with gesture in raga alap singing | Shreyas M Nadkarni (Indian Institute of Technology Bombay); Sujoy Roychowdhury (Indian Institute of Technology Bombay); Preeti Rao (Indian Institute of Technology Bombay)*; Martin Clayton (Durham University)
Dual Attention-based Multi-scale Feature Fusion Approach for Dynamic Music Emotion Recognition | Liyue Zhang (Xi’an Jiaotong University)*; Xinyu Yang (Xi’an Jiaotong University); Yichi Zhang (Xi’an Jiaotong University); Jing Luo (Xi’an Jiaotong University)
Automatic Piano Transcription with Hierarchical Frequency-Time Transformer | Keisuke Toyama (Sony Group Corporation)*; Taketo Akama (Sony CSL); Yukara Ikemiya (Sony Research); Yuhta Takida (Sony Group Corporation); WeiHsiang Liao (Sony Group Corporation); Yuki Mitsufuji (Sony Group Corporation)
A Cross-Version Approach to Audio Representation Learning for Orchestral Music | Michael Krause (International Audio Laboratories Erlangen)*; Christof Weiß (University of Würzburg); Meinard Müller (International Audio Laboratories Erlangen)
IteraTTA: An interface for exploring both text prompts and audio priors in generating music with text-to-audio models | Hiromu Yakura (University of Tsukuba)*; Masataka Goto (National Institute of Advanced Industrial Science and Technology (AIST))
Weakly Supervised Multi-Pitch Estimation Using Cross-Version Alignment | Michael Krause (International Audio Laboratories Erlangen)*; Sebastian Strahl (International Audio Laboratories Erlangen); Meinard Müller (International Audio Laboratories Erlangen)
Polyrhythmic modelling of non-isochronous and microtiming patterns | George Sioros (University of Plymouth)*
On the Performance of Optical Music Recognition in the Absence of Specific Training Data | Juan Carlos Martinez-Sevilla (University of Alicante)*; Adrián Roselló (Universidad de Alicante); David Rizo (Universidad de Alicante); Jorge Calvo-Zaragoza (University of Alicante)
Predicting Music Hierarchies with a Graph-Based Neural Decoder | Francesco Foscarin (Johannes Kepler University Linz)*; Daniel Harasim (École Polytechnique Fédérale de Lausanne); Gerhard Widmer (Johannes Kepler University)
Roman Numeral Analysis with Graph Neural Networks: Onset-wise Predictions from Note-wise Features | Emmanouil Karystinaios (Johannes Kepler University)*; Gerhard Widmer (Johannes Kepler University)
CLaMP: Contrastive Language-Music Pre-training for Cross-Modal Symbolic Music Information Retrieval | Shangda Wu (Central Conservatory of Music); Dingyao Yu (Peking University); Xu Tan (Microsoft Research Asia); Maosong Sun (Tsinghua University)*
The Batik-plays-Mozart Corpus: Linking Performance to Score to Musicological Annotations | Patricia Hu (Johannes Kepler University)*; Gerhard Widmer (Johannes Kepler University)
Musical Micro-Timing for Live Coding | Max Johnson (University of Cambridge); Mark R H Gotham (Durham)*
Optimizing Feature Extraction for Symbolic Music | Federico Simonetta (Instituto Complutense de Ciencias Musicales)*; Ana Llorens (Universidad Complutense de Madrid); Martín Serrano (Instituto Complutense de Ciencias Musicales); Eduardo García-Portugués (Universidad Carlos III de Madrid); Álvaro Torrente (Instituto Complutense de Ciencias Musicales - Universidad Complutense de Madrid)
Mono-to-stereo through parametric stereo generation | Joan Serra (Dolby Laboratories)*; Davide Scaini (Dolby Laboratories); Santiago Pascual (Dolby Laboratories); Daniel Arteaga (Dolby Laboratories); Jordi Pons (Dolby Laboratories); Jeroen Breebaart (Dolby Laboratories); Giulio Cengarle (Dolby Laboratories)
From West to East: Who can understand the music of the others better? | Charilaos Papaioannou (School of ECE, National Technical University of Athens)*; Emmanouil Benetos (Queen Mary University of London); Alexandros Potamianos (National Technical University of Athens)
The Coordinated Corpus of Popular Musics (CoCoPops): A Meta-Dataset of Melodic and Harmonic Transcriptions | Claire Arthur (Georgia Institute of Technology)*; Nathaniel Condit-Schultz (Georgia Institute of Technology)
Composer’s Assistant: An Interactive Transformer for Multi-Track MIDI Infilling | Martin E Malandro (Sam Houston State University)*
The FAV Corpus: An audio dataset of favorite pieces and excerpts, with formal analyses and music theory descriptors | Ethan Lustig (Ethan Lustig)*; David Temperley (Eastman School of Music)
LyricWhiz: Robust Multilingual Lyrics Transcription by Whispering to ChatGPT | Le Zhuo (Beihang University); Ruibin Yuan (CMU)*; Jiahao Pan (HKBU); Yinghao Ma (Queen Mary University of London); Yizhi Li (The University of Sheffield); Ge Zhang (University of Michigan); Si Liu (Beihang University); Roger B. Dannenberg (School of Computer Science, Carnegie Mellon University); Jie Fu (BAAI); Chenghua Lin (University of Sheffield); Emmanouil Benetos (Queen Mary University of London); Wenhu Chen (University of Waterloo); Wei Xue (HKUST); Yike Guo (Hong Kong University of Science and Technology)
Sounds out of place? Score independent detection of conspicuous mistakes in piano performances | Alia Morsi (Universitat Pompeu Fabra)*; Kana Tatsumi (Nagoya Institute of Technology); Akira Maezawa (Yamaha Corporation); Takuya Fujishima (Yamaha Corporation); Xavier Serra (Universitat Pompeu Fabra)
VampNet: Music Generation via Masked Acoustic Token Modeling | Hugo F Flores Garcia (Northwestern University)*; Prem Seetharaman (Northwestern University); Rithesh Kumar (Descript); Bryan Pardo (Northwestern University)
Expert and Novice Evaluations of Piano Performances: Criteria for Computer-Aided Feedback | Yucong Jiang (University of Richmond)*
Stabilizing Training with Soft Dynamic Time Warping: A Case Study for Pitch Class Estimation with Weakly Aligned Targets | Johannes Zeitler (International Audio Laboratories Erlangen)*; Simon Deniffel (International Audio Laboratories Erlangen); Michael Krause (International Audio Laboratories Erlangen); Meinard Müller (International Audio Laboratories Erlangen)
Repetition-Structure Inference with Formal Prototypes | Christoph Finkensiep (EPFL)*; Matthieu Haeberle (EPFL); Friedrich Eisenbrand (EPFL); Markus Neuwirth (Anton Bruckner Privatuniversität Linz); Martin A Rohrmeier (Ecole Polytechnique Fédérale de Lausanne)
Algorithmic Harmonization of Tonal Melodies using Weighted Pitch Context Vectors | Peter Van Kranenburg (Utrecht University; Meertens Institute)*; Eoin J Kearns (Meertens Instituut)
Text-to-lyrics generation with image-based semantics and reduced risk of plagiarism | Kento Watanabe (National Institute of Advanced Industrial Science and Technology (AIST))*; Masataka Goto (National Institute of Advanced Industrial Science and Technology (AIST))
BPS-Motif: A Dataset for Repeated Pattern Discovery of Polyphonic Symbolic Music | Yo-Wei Hsiao (Academia Sinica); Tzu-Yun Hung (National Taiwan Normal University); Tsung-Ping Chen (Academia Sinica); Li Su (Academia Sinica)*
A Repetition-based Triplet Mining Approach for Music Segmentation | Morgan Buisson (Telecom Paris)*; Brian McFee (New York University); Slim Essid (Telecom Paris - Institut Polytechnique de Paris); Helene-Camille Crayencour (CNRS)
Contrastive Learning for Cross-modal Artist Retrieval | Andres Ferraro (Pandora/SiriusXM)*; Jaehun Kim (Pandora/SiriusXM); Andreas Ehmann (Pandora); Sergio Oramas (Pandora/SiriusXM); Fabien Gouyon (Pandora/SiriusXM)
Finding Tori: Self-supervised Learning for Analyzing Korean Folk Song | Danbinaerin Han (Sogang University); Rafael Caro Repetto (Kunstuniversität Graz); Dasaem Jeong (Sogang University)*
Transformer-based beat tracking with low-resolution encoder and high-resolution decoder | Tian Cheng (National Institute of Advanced Industrial Science and Technology (AIST))*; Masataka Goto (National Institute of Advanced Industrial Science and Technology (AIST))
Towards computational music analysis for music therapy | Anja Volk (Utrecht University)*; Tinka Veldhuis (Utrecht University); Katrien Foubert (LUCA School of Arts); Jos De Backer (LUCA School of Arts)
On the effectiveness of speech self-supervised learning for music | Yinghao Ma (Queen Mary University of London)*; Ruibin Yuan (CMU); Yizhi Li (The University of Sheffield); Ge Zhang (University of Michigan); Chenghua Lin (University of Sheffield); Xingran Chen (University of Michigan); Anton Ragni (University of Sheffield); Hanzhi Yin (Carnegie Mellon University); Emmanouil Benetos (Queen Mary University of London); Norbert Gyenge (University of Sheffield); Ruibo Liu (Dartmouth College); Gus Xia (New York University Shanghai); Roger B. Dannenberg (School of Computer Science, Carnegie Mellon University); Yike Guo (Hong Kong University of Science and Technology); Jie Fu (BAAI)
Adding Descriptors to Melodies Improves Pattern Matching: A Study on Slovenian Folk Songs | Vanessa Nina Borsan (Université de Lille)*; Mathieu Giraud (CNRS, Université de Lille); Richard Groult (Université de Rouen Normandie); Thierry Lecroq (Université de Rouen Normandie)
The Games We Play: Exploring The Impact of ISMIR on Musicology | Vanessa Nina Borsan (Université de Lille)*; Mathieu Giraud (CNRS, Université de Lille); Richard Groult (Université de Rouen Normandie)
How Control and Transparency for Users Could Improve Artist Fairness in Music Recommender Systems | Karlijn Dinnissen (Utrecht University)*; Christine Bauer (Paris Lodron University Salzburg)
MoisesDB: A Dataset For Source Separation Beyond 4 Stems | Igor G. Pereira (Moises.AI)*; Felipe Araujo (Moises.AI); Filip Korzeniowski (Moises.AI); Richard Vogl (Moises.AI)
Sequence-to-Sequence Network Training Methods for Automatic Guitar Transcription with Tokenized Outputs | Sehun Kim (Nagoya University)*; Kazuya Takeda (Nagoya University); Tomoki Toda (Nagoya University)
Towards a New Interface for Music Listening: A User Experience Study on YouTube | Ahyeon Choi (Seoul National University)*; Eunsik Shin (Seoul National University); Haesun Joung (Seoul National University); Joongseek Lee (Seoul National University); Kyogu Lee (Seoul National University)
Modeling Bends in Popular Music Guitar Tablatures | Alexandre D’Hooge (Université de Lille)*; Louis Bigo (Université de Lille); Ken Déguernel (CNRS)
Comparing Texture in Piano Scores | Louis Couturier (MIS, Université de Picardie Jules Verne)*; Louis Bigo (Université de Lille); Florence Leve (Université de Picardie Jules Verne - Lab. MIS - Algomus)
Supporting musicological investigations with information retrieval tools: an iterative approach to data collection | David Lewis (University of Oxford eResearch Centre)*; Elisabete Shibata (Beethoven-Haus Bonn); Andrew Hankinson (RISM Digital); Johannes Kepper (Paderborn University); Kevin R Page (University of Oxford); Lisa Rosendahl (Paderborn University); Mark Saccomano (Paderborn University); Christine Siegert (Beethoven-Haus Bonn)
Similarity evaluation of violin directivity patterns for musical instrument retrieval | Mirco Pezzoli (Politecnico di Milano)*; Raffaele Malvermi (Politecnico di Milano); Fabio Antonacci (Politecnico di Milano); Augusto Sarti (Politecnico di Milano)
Carnatic Singing Voice Separation Using Cold Diffusion on Training Data with Bleeding | Genís Plaja-Roglans (Music Technology Group)*; Marius Miron (Universitat Pompeu Fabra); Adithi Shankar (Universitat Pompeu Fabra); Xavier Serra (Universitat Pompeu Fabra)
Unveiling the Impact of Musical Factors in Judging a Song on First Listen: Insights from a User Survey | Kosetsu Tsukuda (National Institute of Advanced Industrial Science and Technology (AIST))*; Tomoyasu Nakano (National Institute of Advanced Industrial Science and Technology (AIST)); Masahiro Hamasaki (National Institute of Advanced Industrial Science and Technology (AIST)); Masataka Goto (National Institute of Advanced Industrial Science and Technology (AIST))
Towards Building a Phylogeny of Gregorian Chant Melodies | Jan Hajič, jr. (Charles University)*; Gustavo Ballen (dos Reis research group, School of Biological and Behavioural Sciences, Queen Mary University of London); Klára Mühlová (Institute of Musicology, Faculty of Arts, Masaryk University); Hana Vlhová-Wörner (Masaryk Institute and Archives, Czech Academy of Sciences)
Audio Embeddings as Teachers for Music Classification | Yiwei Ding (Georgia Institute of Technology)*; Alexander Lerch (Georgia Institute of Technology)
ScorePerformer: Expressive Piano Performance Rendering with Fine-Grained Control | Ilya Borovik (Skolkovo Institute of Science and Technology)*; Vladimir Viro (Peachnote)
Timbre Transfer using Image-to-Image Denoising Diffusion Implicit Models | Luca Comanducci (Politecnico di Milano)*; Fabio Antonacci (Politecnico di Milano); Augusto Sarti (Politecnico di Milano)
Semi-Automated Music Catalog Curation Using Audio and Metadata | Brian Regan (Spotify)*; Desislava Hristova (Spotify); Mariano Beguerisse-Díaz (Spotify)
Crowd’s Performance on Temporal Activity Detection of Musical Instruments in Polyphonic Music | Ioannis Petros Samiotis (Delft University of Technology)*; Alessandro Bozzon (Delft University of Technology); Christoph Lofi (TU Delft)
Singer Identity Representation Learning using Self-Supervised Techniques | Bernardo Torres (Telecom Paris, Institut polytechnique de Paris)*; Stefan Lattner (Sony CSL); Gaël Richard (Telecom Paris, Institut polytechnique de Paris)
PESTO: Pitch Estimation with Self-supervised Transposition-equivariant Objective | Alain Riou (Télécom Paris, IP Paris, Sony CSL)*; Stefan Lattner (Sony CSL); Gaëtan Hadjeres (Sony CSL); Geoffroy Peeters (LTCI - Télécom Paris, IP Paris)
Music as flow: a formal representation of hierarchical processes in music | Zeng Ren (EPFL)*; Wulfram Gerstner (EPFL); Martin A Rohrmeier (Ecole Polytechnique Fédérale de Lausanne)
Online Symbolic Music Alignment with Offline Reinforcement Learning | Silvan Peter (JKU)*
InverSinthII: Sound matching via self-supervised synthesizer-proxy and inference-time finetuning | Oren Barkan (Microsoft); Shlomi Shvartzamn (Tel Aviv University); Noy Uzrad (Tel Aviv University); Moshe Laufer (Tel Aviv University); Almog Elharar (Tel Aviv University); Noam Koenigstein (Tel Aviv University)*
A Semi-Supervised Deep Learning Approach to Dataset Collection for Query-by-Humming Task | Amantur Amatov (Higher School of Economics)*; Dmitry Lamanov (Huawei Noah’s Ark Lab); Maksim Titov (Huawei Noah’s Ark Lab); Ivan Vovk (Huawei Noah’s Ark Lab); Ilya Makarov (AI Center, NUST MISiS); Mikhail Kudinov (Huawei Noah’s Ark Lab)
Towards Improving Harmonic Sensitivity and Prediction Stability for Singing Melody Extraction | Keren Shao (UCSD)*; Ke Chen (University of California San Diego); Taylor Berg-Kirkpatrick (UCSD); Shlomo Dubnov (UC San Diego)
A Dataset and Baseline for Automated Assessment of Timbre Quality in Trumpet Sound | Ninad Puranik (McGill University); Alberto Acquilino (McGill University)*; Ichiro Fujinaga (McGill University); Gary Scavone (McGill University)
Visual Overviews for Sheet Music Structure | Frank Heyen (VISUS, University of Stuttgart)*; Quynh Quang Ngo (VISUS, University of Stuttgart); Michael Sedlmair (University of Stuttgart)
Passage Summarization with recurrent models for Audio–Sheet Music Retrieval | Luis Carvalho (Johannes Kepler University)*; Gerhard Widmer (Johannes Kepler University)
Predicting performance difficulty from piano sheet music images | Pedro Ramoneda (Universitat Pompeu Fabra)*; Dasaem Jeong (Sogang University); Jose J. Valero-Mas (Universitat Pompeu Fabra); Xavier Serra (Universitat Pompeu Fabra)
LP-MusicCaps: LLM-Based Pseudo Music Captioning | Seungheon Doh (KAIST)*; Keunwoo Choi (Gaudio Lab, Inc.); Jongpil Lee (Neutune); Juhan Nam (KAIST)
Singing voice synthesis using differentiable LPC and glottal-flow inspired wavetables | Chin-Yun Yu (Queen Mary University of London)*; George Fazekas (QMUL)
High-Resolution Violin Transcription using Weak Labels | Nazif Can Tamer (Universitat Pompeu Fabra)*; Yigitcan Özer (International Audio Laboratories Erlangen); Meinard Müller (International Audio Laboratories Erlangen); Xavier Serra (Universitat Pompeu Fabra)
Quantifying the Ease of Playing Song Chords on the Guitar | Marcel A Vélez Vásquez (University of Amsterdam)*; Mariëlle Baelemans (University of Amsterdam); Jonathan Driedger (Chordify); Willem Zuidema (ILLC, UvA); John Ashley Burgoyne (University of Amsterdam)
FlexDTW: Dynamic Time Warping With Flexible Boundary Conditions | Irmak Bukey (Pomona College); Jason Zhang (University of Michigan); Timothy Tsai (Harvey Mudd College)*
FiloBass: A Dataset and Corpus Based Study of Jazz Basslines | Xavier Riley (C4DM)*; Simon Dixon (Queen Mary University of London)
Gender-coded sound: Analysing the gendering of music in toy commercials via multi-task learning | Luca Marinelli (Queen Mary University of London)*; George Fazekas (QMUL); Charalampos Saitis (Queen Mary University of London)
Modeling Harmonic Similarity for Jazz Using Co-occurrence Vectors and the Membrane Area | Carey Bunks (Queen Mary University of London)*; Simon Dixon (Queen Mary University of London); Tillman Weyde (City, University of London); Bruno Di Giorgi (Apple)
Efficient Supervised Training of Audio Transformers for Music Representation Learning | Pablo Alonso-Jiménez (Universitat Pompeu Fabra)*; Xavier Serra (Universitat Pompeu Fabra); Dmitry Bogdanov (Universitat Pompeu Fabra)
A Computational Evaluation Framework for Singable Lyric Translation | Haven Kim (KAIST)*; Kento Watanabe (National Institute of Advanced Industrial Science and Technology (AIST)); Masataka Goto (National Institute of Advanced Industrial Science and Technology (AIST)); Juhan Nam (KAIST)
Chorus-Playlist: Exploring the Impact of Listening to Only Choruses in a Playlist | Kosetsu Tsukuda (National Institute of Advanced Industrial Science and Technology (AIST))*; Masahiro Hamasaki (National Institute of Advanced Industrial Science and Technology (AIST)); Masataka Goto (National Institute of Advanced Industrial Science and Technology (AIST))
Correlation of EEG responses reflects structural similarity of choruses in popular music | Neha Rajagopalan (Stanford University)*; Blair Kaneshiro (Stanford University)
Harmonic Analysis with Neural Semi-CRF | Qiaoyu Yang (University of Rochester)*; Frank Cwitkowitz (University of Rochester); Zhiyao Duan (University of Rochester)
Exploring Sampling Techniques for Generating Melodies with a Transformer Language Model | Mathias Rose Bjare (Johannes Kepler University Linz)*; Stefan Lattner (Sony CSL); Gerhard Widmer (Johannes Kepler University)
Measuring the Eurovision Song Contest: A Living Dataset for Real-World MIR | John Ashley Burgoyne (University of Amsterdam)*; Janne Spijkervet (University of Amsterdam); David J Baker (University of Amsterdam)
Music source separation with MLP mixing of time, frequency, and channel | Tomoyasu Nakano (National Institute of Advanced Industrial Science and Technology (AIST))*; Masataka Goto (National Institute of Advanced Industrial Science and Technology (AIST))
Self-Similarity-Based and Novelty-based loss for music structure analysis | Geoffroy Peeters (LTCI - Télécom Paris, IP Paris)*
Transfer Learning and Bias Correction with Pre-trained Audio Embeddings | Changhong Wang (Telecom Paris, Institut polytechnique de Paris)*; Gaël Richard (Telecom Paris, Institut polytechnique de Paris); Brian McFee (New York University)
The Music Meta Ontology: a flexible semantic model for the interoperability of music metadata | Valentina Carriero (University of Bologna); Jacopo de Berardinis (King’s College London); Albert Meroño-Peñuela (King’s College London); Andrea Poltronieri (University of Bologna)*; Valentina Presutti (University of Bologna)
Self-Refining of Pseudo Labels for Music Source Separation with Noisy Labeled Data | Junghyun Koo (Seoul National University); Yunkee Chae (Seoul National University)*; Chang-Bin Jeon (Seoul National University); Kyogu Lee (Seoul National University)
Polar Manhattan Displacement: measuring tonal distances between chords based on intervallic content | Jeffrey K Miller (Queen Mary University of London)*; Johan Pauwels (Queen Mary University of London); Mark B Sandler (Queen Mary University of London)