Structured Arrangement @ NeurIPS 2024
We propose style prior modelling, a generic methodology for structured sequence generation with fine-grained control. In this paper, we focus on the task of accompaniment arrangement, and by modelling the prior of disentangled style factors given content, we build a cascaded arrangement process: from lead sheet to piano texture style, and then from piano to orchestral function style. Extensive experiments show that our system generates structured, creative, and natural multi-track arrangements with state-of-the-art quality.
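For readers curious about the mechanics, here is a minimal sketch of the style-prior cascade (module names and the Gaussian prior form are illustrative assumptions, not our released implementation): each stage samples a style latent conditioned on the content produced so far, and content plus sampled style drives the next stage.

```python
import torch
import torch.nn as nn

class StylePrior(nn.Module):
    """Toy stand-in: models p(style | content) as a diagonal Gaussian."""
    def __init__(self, content_dim, style_dim):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(content_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, style_dim)
        self.logvar = nn.Linear(128, style_dim)

    def sample(self, content):
        h = self.backbone(content)
        mu, logvar = self.mu(h), self.logvar(h)
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterise

# Cascade: lead sheet -> piano texture style -> orchestral function style.
lead = torch.randn(1, 64)                     # toy lead-sheet encoding
texture_prior = StylePrior(64, 32)            # p(texture | lead sheet)
piano = torch.cat([lead, texture_prior.sample(lead)], dim=-1)  # stand-in for a decoder
function_prior = StylePrior(96, 32)           # p(function | piano)
orchestral_style = function_prior.sample(piano)
```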
Query and re-Arrange (Q&A) @ IJCAI 2023
We propose Q&A, a unified framework for multi-track symbolic music rearrangement tasks. In this paper, we tackle rearrangement problems via self-supervised learning, treating the mapping styles as conditions that can be flexibly controlled. Q&A learns a content representation from the track mixture and a function (style) representation from each individual track; the function representations then query the content in order to rearrange a new piece. Our current model focuses on popular music and provides a controllable pathway to four scenarios: 1) re-instrumentation, 2) piano cover generation, 3) orchestration, and 4) voice separation. Also see our demo page and colab tutorial.
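As a hedged sketch of the query mechanism (the names and the cross-attention form here are illustrative assumptions, not the paper's exact architecture): a track-level function vector conditions the queries that attend over the shared content representation, producing features for the rearranged track.

```python
import torch
import torch.nn as nn

content = torch.randn(1, 100, 256)   # mixture content: (batch, time, dim)
function = torch.randn(1, 1, 256)    # one target track's function (style) vector

attn = nn.MultiheadAttention(embed_dim=256, num_heads=4, batch_first=True)
queries = content + function                  # style-conditioned queries, broadcast over time
track, _ = attn(queries, content, content)    # the style queries the content
print(track.shape)                            # (1, 100, 256): features for the new track
```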
Beat Transformer @ ISMIR 2022
We propose Beat Transformer, a novel Transformer encoder architecture for joint beat and downbeat tracking. We develop a Transformer model with both time-wise attention and instrument-wise attention to capture deeply buried metrical cues. Moreover, our model adopts a novel dilated self-attention mechanism, which achieves powerful hierarchical modelling with only linear complexity. We further discover an interpretable attention pattern that mirrors our understanding of hierarchical metrical structures. Also see our colab tutorial and code.
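The following is one illustrative reading of dilated self-attention (a sketch, not the released Beat Transformer code): at a layer with dilation d, each frame attends only to frames d steps apart, and stacking layers with growing dilations covers long contexts hierarchically.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def dilated_self_attention(x, attn, dilation):
    """x: (batch, time, dim). Frames attend only to frames `dilation` steps
    apart, by regrouping the sequence into `dilation` interleaved subsequences."""
    b, t, d = x.shape
    pad = (-t) % dilation
    x = F.pad(x, (0, 0, 0, pad))                       # pad time to a multiple of dilation
    g = x.shape[1] // dilation                         # length of each subsequence
    x = x.view(b, g, dilation, d).transpose(1, 2)      # (b, dilation, g, d)
    x = x.reshape(b * dilation, g, d)
    out, _ = attn(x, x, x)                             # attention within each subsequence
    out = out.reshape(b, dilation, g, d).transpose(1, 2).reshape(b, g * dilation, d)
    return out[:, :t]

attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
x = torch.randn(2, 1000, 64)
y = dilated_self_attention(x, attn, dilation=4)        # e.g. a layer with dilation 2**2
```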
DAT-CVAE @ ISMIR 2022
Video presentation of our paper "Domain Adversarial Training on C-VAE for Controllable Music Generation". The variational auto-encoder has become a leading framework for symbolic music generation. In this paper, we focus on the task of melody harmonization and leverage domain adversarial training for better controllability. Also see our demo page and code.
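For context, the core trick in domain adversarial training is a gradient reversal layer; below is a minimal sketch (the latent and classifier are toy stand-ins, not our model).

```python
import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips gradient sign in the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -grad

latent = torch.randn(8, 16, requires_grad=True)   # stand-in for C-VAE latent codes
domain_clf = torch.nn.Linear(16, 2)               # adversarial domain predictor
logits = domain_clf(GradReverse.apply(latent))
loss = F.cross_entropy(logits, torch.randint(0, 2, (8,)))
loss.backward()  # the encoder side receives reversed gradients, discouraging domain info
```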
AccoMontage @ Sound & Music Computing Concert 2022
Demo showcase of our latest research progress with AccoMontage, a state-of-the-art accompaniment arrangement system. In this performance, we present an AI arrangement of the Scottish folk song Auld Lang Syne, led by hulusi performer Yixin Wang.
NUS Sound and Music Computing Lab
Welcome to the Sound & Music Computing Lab at the National University of Singapore! In this video, I give you a tour of our lab, introduce our research, and present our lab members. Have fun! For more information, please feel free to visit our lab webpage.
AccoMontage @ ISMIR 2021
We propose AccoMontage, an accompaniment arrangement system that handles whole pieces of music by unifying phrase selection and neural style transfer. Our paper was accepted at ISMIR 2021, and below is its presentation video. Another performance demo by Music X Lab members can be accessed here.
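As a hedged illustration of the phrase-selection step (the scores and shapes here are toy assumptions; the real system couples selection with neural style transfer): one reference phrase is chosen per melody phrase by dynamic programming over fitness and transition scores.

```python
import numpy as np

rng = np.random.default_rng(0)
n_phrases, n_candidates = 4, 10
fitness = rng.random((n_phrases, n_candidates))        # match to each melody phrase
transition = rng.random((n_candidates, n_candidates))  # smoothness between picks

score = fitness[0].copy()
back = np.zeros((n_phrases, n_candidates), dtype=int)
for t in range(1, n_phrases):                          # Viterbi-style forward pass
    total = score[:, None] + transition + fitness[t][None, :]
    back[t] = total.argmax(axis=0)
    score = total.max(axis=0)

path = [int(score.argmax())]
for t in range(n_phrases - 1, 0, -1):                  # backtrace the best sequence
    path.append(int(back[t][path[-1]]))
path.reverse()
print("selected candidate per phrase:", path)
```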
Dancing Robot @ SJTU
Presentation of my undergraduate project "Real-Time Music-Driven Dance Generation for Humanoid Robot" (December 2019). For details about this project, please refer to my project page.