Kavli Affiliate: Dan Luo
| First 5 Authors: Yuanyuan Wang, Hangting Chen, Dongchao Yang, Weiqin Li, Dan Luo
| Summary:
We propose Universal target audio Separation (UniSep), which addresses the
separation task on arbitrary mixtures of different types of audio. In contrast
to previous studies, UniSep operates on an unlimited number of source domains
and an unlimited number of sources. We formulate the separation task as a
sequence-to-sequence problem and use a large language model (LLM) to model the
audio sequence in a discrete latent space, leveraging the power of LLMs in
handling complex audio mixtures with large-scale data. Moreover, we propose a
novel pre-training strategy that utilizes audio-only data, which reduces the
effort of large-scale data simulation and enhances the ability of LLMs to
understand the consistency and correlation of information within audio
sequences. We also demonstrate the effectiveness of scaling datasets for the
audio separation task: we use large-scale data (36.5k hours), including speech,
music, and sound, to train a universal target audio separation model that is
not limited to a specific domain. Experiments show that UniSep achieves
competitive subjective and objective evaluation results compared with
single-task models.
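
The abstract's core formulation, separation as sequence-to-sequence generation
over discrete audio tokens, can be illustrated with a minimal sketch. This is
not the paper's implementation: the class name `SeparationLM`, the model sizes,
and the use of random token IDs in place of a real neural audio codec are all
assumptions made for illustration only.

```python
import torch
import torch.nn as nn

class SeparationLM(nn.Module):
    """Hypothetical decoder-only LM over discrete audio tokens.

    Sketch of the seq2seq formulation described in the abstract: the
    mixture's token sequence serves as the prompt, and the target
    source's token sequence is predicted autoregressively after it.
    """

    def __init__(self, vocab_size=1024, d_model=512, n_layers=6, n_heads=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, mixture_tokens, target_tokens):
        # Concatenate mixture (prompt) and target tokens into one sequence.
        seq = torch.cat([mixture_tokens, target_tokens], dim=1)
        x = self.embed(seq)
        # Causal mask: each position attends only to earlier positions.
        mask = nn.Transformer.generate_square_subsequent_mask(seq.size(1))
        h = self.decoder(x, mask=mask)
        logits = self.head(h)
        # Keep only the positions that predict the target tokens: the
        # prediction for target token i comes from position (i - 1).
        t = mixture_tokens.size(1)
        return logits[:, t - 1:-1, :]

# Usage sketch: in practice a neural audio codec would supply the
# discrete token IDs; random IDs stand in here.
mix = torch.randint(0, 1024, (2, 100))   # mixture token sequence
tgt = torch.randint(0, 1024, (2, 100))   # target-source token sequence
model = SeparationLM()
logits = model(mix, tgt)                 # shape: (2, 100, 1024)
loss = nn.functional.cross_entropy(logits.reshape(-1, 1024), tgt.reshape(-1))
```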
| Search Query: ArXiv Query: search_query=au:"Dan Luo"&id_list=&start=0&max_results=3