These papers will be placed within the scientific program, which will be announced in the coming weeks.
Paper Title | Authors |
---|---|
TriAD: Capturing harmonics with 3D Convolutions | Miguel Perez Fernandez (Universitat Pompeu Fabra; Huawei)*; Holger Kirchhoff (Huawei); Xavier Serra (Universitat Pompeu Fabra) |
Data Collection in Music Generation Training Sets: A Critical Analysis | Fabio Morreale (University of Auckland)*; Megha Sharma (University of Tokyo); I-Chieh Wei (University of Auckland) |
A Review of Validity and its Relationship to Music Information Research | Bob L. T. Sturm (KTH Royal Institute of Technology); Arthur Flexer (Johannes Kepler University Linz)* |
Segmentation and Analysis of Taniavartanam in Carnatic Music Concerts | Gowriprasad R (IIT Madras)*; Srikrishnan Sridharan (Carnatic Percussionist); R Aravind (Indian Institute of Technology Madras); Hema A Murthy (IIT Madras) |
SingStyle111: A Multilingual Singing Dataset With Style Transfer | Shuqi Dai (Carnegie Mellon University)*; Siqi Chen (University of Southern California); Yuxuan Wu (Carnegie Mellon University); Roy Huang (Carnegie Mellon University); Roger B. Dannenberg (School of Computer Science, Carnegie Mellon University) |
TapTamDrum: A Dataset for Dualized Drum Patterns | Behzad Haki (Universitat Pompeu Fabra)*; Błażej Kotowski (MTG); Cheuk Lun Isaac Lee (Universitat Pompeu Fabra); Sergi Jordà (Universitat Pompeu Fabra) |
Collaborative Song Dataset (CoSoD): An annotated dataset of multi-artist collaborations in popular music | Michèle Duguay (Harvard University)*; Kate Mancey (Harvard University); Johanna Devaney (Brooklyn College) |
Efficient Notation Assembly in Optical Music Recognition | Carlos Penarrubia (University of Alicante); Carlos Garrido-Munoz (University of Alicante); Jose J. Valero-Mas (Universitat Pompeu Fabra); Jorge Calvo-Zaragoza (University of Alicante)* |
Impact of time and note duration tokenizations on deep learning symbolic music modeling | Nathan Fradet (LIP6 - Sorbonne University)*; Nicolas Gutowski (University of Angers); Fabien Chhel (Groupe ESEO); Jean-Pierre Briot (CNRS) |
Chromatic Chords in Theory and Practice | Mark R H Gotham (Durham)* |
A Few-shot Neural Approach for Layout Analysis of Music Score Images | Francisco J. Castellanos (University of Alicante)*; Antonio Javier Gallego (Universidad de Alicante); Ichiro Fujinaga (McGill University) |
Real-time Percussive Technique Recognition and Embedding Learning for the Acoustic Guitar | Andrea Martelloni (Queen Mary University of London)*; Andrew McPherson (QMUL); Mathieu Barthet (Queen Mary University of London) |
Polyffusion: A Diffusion Model for Polyphonic Score Generation with Internal and External Controls | Lejun Min (Shanghai Jiao Tong University)*; Junyan Jiang (New York University Shanghai); Gus Xia (New York University Shanghai); Jingwei Zhao (National University of Singapore) |
Introducing DiMCAT for processing and analyzing notated music on a very large scale | Johannes Hentschel (École Polytechnique Fédérale de Lausanne)*; Andrew McLeod (Fraunhofer IDMT); Yannis Rammos (EPFL); Martin A Rohrmeier (Ecole Polytechnique Fédérale de Lausanne) |
Symbolic Music Representations for Classification Tasks: A Systematic Evaluation | Huan Zhang (Queen Mary University of London)*; Emmanouil Karystinaios (Johannes Kepler University); Simon Dixon (Queen Mary University of London); Gerhard Widmer (Johannes Kepler University); Carlos Eduardo Cancino-Chacón (Johannes Kepler University Linz) |
A Dataset and Baselines for Measuring and Predicting the Music Piece Memorability | Li-Yang Tseng (National Yang Ming Chiao Tung University); Tzu-Ling Lin (National Yang Ming Chiao Tung University); Hong-Han Shuai (National Yang Ming Chiao Tung University)*; Jen-Wei Huang (NYCU); Wen-Whei Chang (National Yang Ming Chiao Tung University) |
Human-AI Music Creation: Understanding the Perceptions and Experiences of Music Creators for Ethical and Productive Collaboration | Michele Newman (University of Washington)*; Lidia J Morris (University of Washington); Jin Ha Lee (University of Washington) |
White Box Search over Audio Synthesizer Parameters | Yuting Yang (Princeton University)*; Zeyu Jin (Adobe Research); Adam Finkelstein (Princeton University); Connelly Barnes (Adobe Research) |
Decoding drums, instrumentals, vocals, and mixed sources in music using human brain activity with fMRI | Vincent K.M. Cheung (Sony Computer Science Laboratories, Inc.)*; Lana Okuma (RIKEN); Kazuhisa Shibata (RIKEN); Kosetsu Tsukuda (National Institute of Advanced Industrial Science and Technology (AIST)); Masataka Goto (National Institute of Advanced Industrial Science and Technology (AIST)); Shinichi Furuya (Sony Computer Science Laboratories Inc.) |
Exploring the correspondence of melodic contour with gesture in raga alap singing | Shreyas M Nadkarni (Indian Institute of Technology Bombay); Sujoy Roychowdhury (Indian Institute of Technology Bombay); Preeti Rao (Indian Institute of Technology Bombay)*; Martin Clayton (Durham University) |
Dual Attention-based Multi-scale Feature Fusion Approach for Dynamic Music Emotion Recognition | Liyue Zhang (Xi’an Jiaotong University)*; Xinyu Yang (Xi’an Jiaotong University); Yichi Zhang (Xi’an Jiaotong University); Jing Luo (Xi’an Jiaotong University) |
Automatic Piano Transcription with Hierarchical Frequency-Time Transformer | Keisuke Toyama (Sony Group Corporation)*; Taketo Akama (Sony CSL); Yukara Ikemiya (Sony Research); Yuhta Takida (Sony Group Corporation); WeiHsiang Liao (Sony Group Corporation); Yuki Mitsufuji (Sony Group Corporation) |
A Cross-Version Approach to Audio Representation Learning for Orchestral Music | Michael Krause (International Audio Laboratories Erlangen)*; Christof Weiß (University of Würzburg); Meinard Müller (International Audio Laboratories Erlangen) |
IteraTTA: An interface for exploring both text prompts and audio priors in generating music with text-to-audio models | Hiromu Yakura (University of Tsukuba)*; Masataka Goto (National Institute of Advanced Industrial Science and Technology (AIST)) |
Weakly Supervised Multi-Pitch Estimation Using Cross-Version Alignment | Michael Krause (International Audio Laboratories Erlangen)*; Sebastian Strahl (International Audio Laboratories Erlangen); Meinard Müller (International Audio Laboratories Erlangen) |
Polyrhythmic modelling of non-isochronous and microtiming patterns | George Sioros (University of Plymouth)* |
On the Performance of Optical Music Recognition in the Absence of Specific Training Data | Juan Carlos Martinez-Sevilla (University of Alicante)*; Adrián Roselló (Universidad de Alicante); David Rizo (Universidad de Alicante); Jorge Calvo-Zaragoza (University of Alicante) |
Predicting Music Hierarchies with a Graph-Based Neural Decoder | Francesco Foscarin (Johannes Kepler University Linz)*; Daniel Harasim (École Polytechnique Fédérale de Lausanne); Gerhard Widmer (Johannes Kepler University) |
Roman Numeral Analysis with Graph Neural Networks: Onset-wise Predictions from Note-wise Features | Emmanouil Karystinaios (Johannes Kepler University)*; Gerhard Widmer (Johannes Kepler University) |
CLaMP: Contrastive Language-Music Pre-training for Cross-Modal Symbolic Music Information Retrieval | Shangda Wu (Central Conservatory of Music); Dingyao Yu (Peking University); Xu Tan (Microsoft Research Asia); Maosong Sun (Tsinghua University)* |
The Batik-plays-Mozart Corpus: Linking Performance to Score to Musicological Annotations | Patricia Hu (Johannes Kepler University)*; Gerhard Widmer (Johannes Kepler University) |
Musical Micro-Timing for Live Coding | Max Johnson (University of Cambridge); Mark R H Gotham (Durham)* |
Optimizing Feature Extraction for Symbolic Music | Federico Simonetta (Instituto Complutense de Ciencias Musicales)*; Ana Llorens (Universidad Complutense de Madrid); Martín Serrano (Instituto Complutense de Ciencias Musicales); Eduardo García-Portugués (Universidad Carlos III de Madrid); Álvaro Torrente (Instituto Complutense de Ciencias Musicales - Universidad Complutense de Madrid) |
Mono-to-stereo through parametric stereo generation | Joan Serra (Dolby Laboratories)*; Davide Scaini (Dolby Laboratories); Santiago Pascual (Dolby Laboratories); Daniel Arteaga (Dolby Laboratories); Jordi Pons (Dolby Laboratories); Jeroen Breebaart (Dolby Laboratories); Giulio Cengarle (Dolby Laboratories) |
From West to East: Who can understand the music of the others better? | Charilaos Papaioannou (School of ECE, National Technical University of Athens)*; Emmanouil Benetos (Queen Mary University of London); Alexandros Potamianos (National Technical University of Athens) |
The Coordinated Corpus of Popular Musics (CoCoPops): A Meta-Dataset of Melodic and Harmonic Transcriptions | Claire Arthur (Georgia Institute of Technology)*; Nathaniel Condit-Schultz (Georgia Institute of Technology) |
Composer’s Assistant: An Interactive Transformer for Multi-Track MIDI Infilling | Martin E Malandro (Sam Houston State University)* |
The FAV Corpus: An audio dataset of favorite pieces and excerpts, with formal analyses and music theory descriptors | Ethan Lustig (Ethan Lustig)*; David Temperley (Eastman School of Music) |
LyricWhiz: Robust Multilingual Lyrics Transcription by Whispering to ChatGPT | Le Zhuo (Beihang University); Ruibin Yuan (CMU)*; Jiahao Pan (HKBU); Yinghao MA (Queen Mary University of London); Yizhi Li (The University of Sheffield); Ge Zhang (University of Michigan); Si Liu (Beihang University); Roger B. Dannenberg (School of Computer Science, Carnegie Mellon University); Jie Fu (BAAI); Chenghua Lin (University of Sheffield); Emmanouil Benetos (Queen Mary University of London); Wenhu Chen (University of Waterloo); Wei Xue (HKUST); Yike Guo (Hong Kong University of Science and Technology) |
Sounds out of place? Score-independent detection of conspicuous mistakes in piano performances | Alia Morsi (Universitat Pompeu Fabra)*; Kana Tatsumi (Nagoya Institute of Technology); Akira Maezawa (Yamaha Corporation); Takuya Fujishima (Yamaha Corporation); Xavier Serra (Universitat Pompeu Fabra) |
VampNet: Music Generation via Masked Acoustic Token Modeling | Hugo F Flores Garcia (Northwestern University)*; Prem Seetharaman (Northwestern University); Rithesh Kumar (Descript); Bryan Pardo (Northwestern University) |
Expert and Novice Evaluations of Piano Performances: Criteria for Computer-Aided Feedback | Yucong Jiang (University of Richmond)* |
Stabilizing Training with Soft Dynamic Time Warping: A Case Study for Pitch Class Estimation with Weakly Aligned Targets | Johannes Zeitler (International Audio Laboratories Erlangen)*; Simon Deniffel (International Audio Laboratories Erlangen); Michael Krause (International Audio Laboratories Erlangen); Meinard Müller (International Audio Laboratories Erlangen) |
Repetition-Structure Inference with Formal Prototypes | Christoph Finkensiep (EPFL)*; Matthieu Haeberle (EPFL); Friedrich Eisenbrand (EPFL); Markus Neuwirth (Anton Bruckner Privatuniversität Linz); Martin A Rohrmeier (Ecole Polytechnique Fédérale de Lausanne) |
Algorithmic Harmonization of Tonal Melodies using Weighted Pitch Context Vectors | Peter Van Kranenburg (Utrecht University; Meertens Institute)*; Eoin J Kearns (Meertens Instituut) |
Text-to-lyrics generation with image-based semantics and reduced risk of plagiarism | Kento Watanabe (National Institute of Advanced Industrial Science and Technology (AIST))*; Masataka Goto (National Institute of Advanced Industrial Science and Technology (AIST)) |
BPS-Motif: A Dataset for Repeated Pattern Discovery of Polyphonic Symbolic Music | Yo-Wei Hsiao (Academia Sinica); Tzu-Yun Hung (National Taiwan Normal University); Tsung-Ping Chen (Academia Sinica); Li Su (Academia Sinica)* |
A Repetition-based Triplet Mining Approach for Music Segmentation | Morgan Buisson (Telecom-Paris)*; Brian McFee (New York University); Slim Essid (Telecom Paris - Institut Polytechnique de Paris); Helene-Camille Crayencour (CNRS) |
Contrastive Learning for Cross-modal Artist Retrieval | Andres Ferraro (Pandora/SiriusXM)*; Jaehun Kim (Pandora / SiriusXM); Andreas Ehmann (Pandora); Sergio Oramas (Pandora/SiriusXM); Fabien Gouyon (Pandora/SiriusXM) |
Finding Tori: Self-supervised Learning for Analyzing Korean Folk Song | Danbinaerin Han (Sogang University); Rafael Caro Repetto (Kunstuniversität Graz); Dasaem Jeong (Sogang University)* |
Transformer-based beat tracking with low-resolution encoder and high-resolution decoder | Tian Cheng (National Institute of Advanced Industrial Science and Technology (AIST))*; Masataka Goto (National Institute of Advanced Industrial Science and Technology (AIST)) |
Towards computational music analysis for music therapy | Anja Volk (Utrecht University)*; Tinka Veldhuis (Utrecht University); Katrien Foubert (LUCA School of Arts); Jos De Backer (LUCA School of Arts) |
On the effectiveness of speech self-supervised learning for music | Yinghao MA (Queen Mary University of London)*; Ruibin Yuan (CMU); Yizhi Li (The University of Sheffield); Ge Zhang (University of Michigan); Chenghua Lin (University of Sheffield); Xingran Chen (University of Michigan); Anton Ragni (University of Sheffield); Hanzhi Yin (Carnegie Mellon University); Emmanouil Benetos (Queen Mary University of London); Norbert Gyenge (Sheffield University); Ruibo Liu (Dartmouth College); Gus Xia (New York University Shanghai); Roger B. Dannenberg (School of Computer Science, Carnegie Mellon University); Yike Guo (Hong Kong University of Science and Technology); Jie Fu (BAAI) |
Adding Descriptors to Melodies Improves Pattern Matching: A Study on Slovenian Folk Songs | Vanessa Nina Borsan (Université de Lille)*; Mathieu Giraud (CNRS, Université de Lille); Richard Groult (Université de Rouen Normandie); Thierry Lecroq (Université de Rouen Normandie) |
The Games We Play: Exploring the Impact of ISMIR on Musicology | Vanessa Nina Borsan (Université de Lille)*; Mathieu Giraud (CNRS, Université de Lille); Richard Groult (Université de Rouen Normandie) |
How Control and Transparency for Users Could Improve Artist Fairness in Music Recommender Systems | Karlijn Dinnissen (Utrecht University)*; Christine Bauer (Paris Lodron University Salzburg) |
MoisesDB: A Dataset For Source Separation Beyond 4 Stems | Igor G. Pereira (Moises.AI)*; Felipe Araujo (Moises.AI); Filip Korzeniowski (Moises.AI); Richard Vogl (Moises.AI) |
Sequence-to-Sequence Network Training Methods for Automatic Guitar Transcription with Tokenized Outputs | Sehun Kim (Nagoya University)*; Kazuya Takeda (Nagoya University); Tomoki Toda (Nagoya University) |
Towards a New Interface for Music Listening: A User Experience Study on YouTube | Ahyeon Choi (Seoul National University)*; Eunsik Shin (Seoul National University); Haesun Joung (Seoul National University); Joongseek Lee (Seoul National University); Kyogu Lee (Seoul National University) |
Modeling Bends in Popular Music Guitar Tablatures | Alexandre D’Hooge (Université de Lille)*; Louis Bigo (Université de Lille); Ken Déguernel (CNRS) |
Comparing Texture in Piano Scores | Louis Couturier (MIS, Université de Picardie Jules Verne)*; Louis Bigo (Université de Lille); Florence Leve (Université de Picardie Jules Verne - Lab. MIS - Algomus) |
Supporting musicological investigations with information retrieval tools: an iterative approach to data collection | David Lewis (University of Oxford eResearch Centre)*; Elisabete Shibata (Beethoven-Haus Bonn); Andrew Hankinson (RISM Digital); Johannes Kepper (Paderborn University); Kevin R Page (University of Oxford); Lisa Rosendahl (Paderborn University); Mark Saccomano (Paderborn University); Christine Siegert (Beethoven-Haus Bonn) |
Similarity evaluation of violin directivity patterns for musical instrument retrieval | Mirco Pezzoli (Politecnico di Milano)*; Raffaele Malvermi (Politecnico di Milano); Fabio Antonacci (Politecnico di Milano); Augusto Sarti (Politecnico di Milano) |
Carnatic Singing Voice Separation Using Cold Diffusion on Training Data with Bleeding | Genís Plaja-Roglans (Music Technology Group)*; Marius Miron (Universitat Pompeu Fabra); Adithi Shankar (Universitat Pompeu Fabra); Xavier Serra (Universitat Pompeu Fabra) |
Unveiling the Impact of Musical Factors in Judging a Song on First Listen: Insights from a User Survey | Kosetsu Tsukuda (National Institute of Advanced Industrial Science and Technology (AIST))*; Tomoyasu Nakano (National Institute of Advanced Industrial Science and Technology (AIST)); Masahiro Hamasaki (National Institute of Advanced Industrial Science and Technology (AIST)); Masataka Goto (National Institute of Advanced Industrial Science and Technology (AIST)) |
Towards Building a Phylogeny of Gregorian Chant Melodies | Jan Hajič, jr. (Charles University)*; Gustavo Ballen (dos Reis research group, School of Biological and Behavioural Sciences, Queen Mary University of London); Klára Mühlová (Institute of Musicology, Faculty of Arts, Masaryk University); Hana Vlhová-Wörner (Masaryk Institute and Archives, Czech Academy of Sciences) |
Audio Embeddings as Teachers for Music Classification | Yiwei Ding (Georgia Institute of Technology)*; Alexander Lerch (Georgia Institute of Technology) |
ScorePerformer: Expressive Piano Performance Rendering with Fine-Grained Control | Ilya Borovik (Skolkovo Institute of Science and Technology)*; Vladimir Viro (Peachnote) |
Timbre Transfer using Image-to-Image Denoising Diffusion Implicit Models | Luca Comanducci (Politecnico di Milano)*; Fabio Antonacci (Politecnico di Milano); Augusto Sarti (Politecnico di Milano) |
Semi-Automated Music Catalog Curation Using Audio and Metadata | Brian Regan (Spotify)*; Desislava Hristova (Spotify); Mariano Beguerisse-Díaz (Spotify) |
Crowd’s Performance on Temporal Activity Detection of Musical Instruments in Polyphonic Music | Ioannis Petros Samiotis (Delft University of Technology)*; Alessandro Bozzon (Delft University of Technology); Christoph Lofi (TU Delft) |
Singer Identity Representation Learning using Self-Supervised Techniques | Bernardo Torres (Telecom Paris, Institut polytechnique de Paris)*; Stefan Lattner (Sony CSL); Gaël Richard (Telecom Paris, Institut polytechnique de Paris) |
PESTO: Pitch Estimation with Self-supervised Transposition-equivariant Objective | Alain Riou (Télécom Paris, IP Paris, Sony CSL)*; Stefan Lattner (Sony CSL); Gaëtan Hadjeres (Sony CSL); Geoffroy Peeters (LTCI - Télécom Paris, IP Paris) |
Music as flow: a formal representation of hierarchical processes in music | Zeng Ren (EPFL)*; Wulfram Gerstner (EPFL); Martin A Rohrmeier (Ecole Polytechnique Fédérale de Lausanne) |
Online Symbolic Music Alignment with Offline Reinforcement Learning | Silvan Peter (JKU)* |
InverSynth II: Sound matching via self-supervised synthesizer-proxy and inference-time finetuning | Oren Barkan (Microsoft); Shlomi Shvartzman (Tel Aviv University); Noy Uzrad (Tel Aviv University); Moshe Laufer (Tel Aviv University); Almog Elharar (Tel Aviv University); Noam Koenigstein (Tel Aviv University)* |
A Semi-Supervised Deep Learning Approach to Dataset Collection for Query-by-Humming Task | Amantur Amatov (Higher School of Economics)*; Dmitry Lamanov (Huawei Noah’s Ark Lab); Maksim Titov (Huawei Noah’s Ark Lab); Ivan Vovk (Huawei Noah’s Ark Lab); Ilya Makarov (AI Center, NUST MISiS); Mikhail Kudinov (Huawei Noah’s Ark Lab) |
Towards Improving Harmonic Sensitivity and Prediction Stability for Singing Melody Extraction | Keren Shao (UCSD)*; Ke Chen (University of California San Diego); Taylor Berg-Kirkpatrick (UCSD); Shlomo Dubnov (UC San Diego) |
A Dataset and Baseline for Automated Assessment of Timbre Quality in Trumpet Sound | Ninad Puranik (McGill University); Alberto Acquilino (McGill University)*; Ichiro Fujinaga (McGill University); Gary Scavone (McGill University) |
Visual Overviews for Sheet Music Structure | Frank Heyen (VISUS, University of Stuttgart)*; Quynh Quang Ngo (VISUS, University of Stuttgart); Michael Sedlmair (Uni Stuttgart) |
Passage Summarization with Recurrent Models for Audio–Sheet Music Retrieval | Luis Carvalho (Johannes Kepler University)*; Gerhard Widmer (Johannes Kepler University) |
Predicting performance difficulty from piano sheet music images | Pedro Ramoneda (Universitat Pompeu Fabra)*; Dasaem Jeong (Sogang University); Jose J. Valero-Mas (Universitat Pompeu Fabra); Xavier Serra (Universitat Pompeu Fabra) |
LP-MusicCaps: LLM-Based Pseudo Music Captioning | Seungheon Doh (KAIST)*; Keunwoo Choi (Gaudio Lab, Inc.); Jongpil Lee (Neutune); Juhan Nam (KAIST) |
Singing voice synthesis using differentiable LPC and glottal-flow inspired wavetables | Chin-Yun Yu (Queen Mary University of London)*; George Fazekas (QMUL) |
High-Resolution Violin Transcription using Weak Labels | Nazif Can Tamer (Universitat Pompeu Fabra)*; Yigitcan Özer (International Audio Laboratories Erlangen); Meinard Müller (International Audio Laboratories Erlangen); Xavier Serra (Universitat Pompeu Fabra) |
Quantifying the Ease of Playing Song Chords on the Guitar | Marcel A Vélez Vásquez (University of Amsterdam)*; Mariëlle Baelemans (University of Amsterdam); Jonathan Driedger (Chordify); Willem Zuidema (ILLC, UvA); John Ashley Burgoyne (University of Amsterdam) |
FlexDTW: Dynamic Time Warping With Flexible Boundary Conditions | Irmak Bukey (Pomona College); Jason Zhang (University of Michigan); Timothy Tsai (Harvey Mudd College)* |
FiloBass: A Dataset and Corpus Based Study of Jazz Basslines | Xavier Riley (C4DM)*; Simon Dixon (Queen Mary University of London) |
Gender-coded sound: Analysing the gendering of music in toy commercials via multi-task learning | Luca Marinelli (Queen Mary University of London)*; George Fazekas (QMUL); Charalampos Saitis (Queen Mary University of London) |
Modeling Harmonic Similarity for Jazz Using Co-occurrence Vectors and the Membrane Area | Carey Bunks (Queen Mary University of London)*; Simon Dixon (Queen Mary University of London); Tillman Weyde (City, University of London); Bruno Di Giorgi (Apple) |
Efficient Supervised Training of Audio Transformers for Music Representation Learning | Pablo Alonso-Jiménez (Universitat Pompeu Fabra)*; Xavier Serra (Universitat Pompeu Fabra); Dmitry Bogdanov (Universitat Pompeu Fabra) |
A Computational Evaluation Framework for Singable Lyric Translation | Haven Kim (KAIST)*; Kento Watanabe (National Institute of Advanced Industrial Science and Technology (AIST)); Masataka Goto (National Institute of Advanced Industrial Science and Technology (AIST)); Juhan Nam (KAIST) |
Chorus-Playlist: Exploring the Impact of Listening to Only Choruses in a Playlist | Kosetsu Tsukuda (National Institute of Advanced Industrial Science and Technology (AIST))*; Masahiro Hamasaki (National Institute of Advanced Industrial Science and Technology (AIST)); Masataka Goto (National Institute of Advanced Industrial Science and Technology (AIST)) |
Correlation of EEG responses reflects structural similarity of choruses in popular music | Neha Rajagopalan (Stanford University)*; Blair Kaneshiro (Stanford University) |
Harmonic Analysis with Neural Semi-CRF | Qiaoyu Yang (University of Rochester)*; Frank Cwitkowitz (University of Rochester); Zhiyao Duan (University of Rochester) |
Exploring Sampling Techniques for Generating Melodies with a Transformer Language Model | Mathias Rose Bjare (Johannes Kepler University Linz)*; Stefan Lattner (Sony CSL); Gerhard Widmer (Johannes Kepler University) |
Measuring the Eurovision Song Contest: A Living Dataset for Real-World MIR | John Ashley Burgoyne (University of Amsterdam)*; Janne Spijkervet (University of Amsterdam); David J Baker (University of Amsterdam) |
Music source separation with MLP mixing of time, frequency, and channel | Tomoyasu Nakano (National Institute of Advanced Industrial Science and Technology (AIST))*; Masataka Goto (National Institute of Advanced Industrial Science and Technology (AIST)) |
Self-Similarity-Based and Novelty-based loss for music structure analysis | Geoffroy Peeters (LTCI - Télécom Paris, IP Paris)* |
Transfer Learning and Bias Correction with Pre-trained Audio Embeddings | Changhong Wang (Telecom Paris, Institut polytechnique de Paris)*; Gaël Richard (Telecom Paris, Institut polytechnique de Paris); Brian McFee (New York University) |
The Music Meta Ontology: a flexible semantic model for the interoperability of music metadata | Valentina Carriero (University of Bologna); Jacopo de Berardinis (King’s College London); Albert Meroño-Peñuela (King’s College London); Andrea Poltronieri (University of Bologna)*; Valentina Presutti (University of Bologna) |
Self-Refining of Pseudo Labels for Music Source Separation with Noisy Labeled Data | Junghyun Koo (Seoul National University); Yunkee Chae (Seoul National University)*; Chang-Bin Jeon (Seoul National University); Kyogu Lee (Seoul National University) |
Polar Manhattan Displacement: measuring tonal distances between chords based on intervallic content | Jeffrey K Miller (Queen Mary University of London)*; Johan Pauwels (Queen Mary University of London); Mark B Sandler (Queen Mary University of London) |