Improving Language Model-Based Zero-Shot Text-to-Speech Synthesis with Multi-Scale Acoustic Prompts

Kavli Affiliate: Dan Luo

| First 5 Authors: Shun Lei, Yixuan Zhou, Liyang Chen, Dan Luo, Zhiyong Wu

| Summary:

Zero-shot text-to-speech (TTS) synthesis aims to clone any unseen speaker’s
voice without adaptation parameters. By quantizing the speech waveform into
discrete acoustic tokens and modeling these tokens with a language model,
recent language model-based TTS systems show zero-shot speaker adaptation
capabilities with only a 3-second acoustic prompt of an unseen speaker.
However, they are limited by the length of the acoustic prompt, which makes it
difficult to clone the personal speaking style. In this paper, we propose a
novel zero-shot TTS model with multi-scale acoustic prompts, based on the
neural codec language model VALL-E. A speaker-aware text encoder is proposed
to learn the personal speaking style at the phoneme level from a style prompt
consisting of multiple sentences. A VALL-E-based acoustic decoder is then used
to model timbre from the timbre prompt at the frame level and generate speech.
The experimental results show that our proposed method outperforms baselines
in terms of naturalness and speaker similarity, and can achieve better
performance by scaling up to a longer style prompt.
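To make the codec-language-model paradigm in the summary concrete, here is a minimal PyTorch sketch of an autoregressive transformer over discrete acoustic tokens, conditioned on phonemes plus a short acoustic prompt. This is a generic illustration of the approach, not the paper's implementation; all module names, vocabulary sizes, and dimensions are assumptions.

```python
# Sketch of the neural-codec LM paradigm: speech is quantized into discrete
# acoustic tokens, and an autoregressive transformer predicts those tokens
# conditioned on phonemes and a ~3-second acoustic prompt from the target
# speaker. All sizes below are illustrative, not the paper's settings.
import torch
import torch.nn as nn

class CodecLM(nn.Module):
    def __init__(self, n_phonemes=100, n_codes=1024, d_model=256):
        super().__init__()
        self.phone_emb = nn.Embedding(n_phonemes, d_model)
        self.code_emb = nn.Embedding(n_codes + 1, d_model)  # +1 for a BOS token
        layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=4)
        self.head = nn.Linear(d_model, n_codes)

    def forward(self, phonemes, codes):
        # phonemes: [B, Tp] phoneme IDs used as cross-attention memory.
        # codes: [B, Tc] acoustic-token IDs (speaker prompt tokens followed
        # by target tokens during training), predicted left to right.
        memory = self.phone_emb(phonemes)
        x = self.code_emb(codes)
        mask = nn.Transformer.generate_square_subsequent_mask(x.size(1))
        h = self.decoder(x, memory, tgt_mask=mask)
        return self.head(h)  # next-token logits over the codec vocabulary

model = CodecLM()
phonemes = torch.randint(0, 100, (1, 20))
prompt_and_target = torch.randint(0, 1024, (1, 150))  # e.g. prompt + target tokens
logits = model(phonemes, prompt_and_target)
print(logits.shape)  # torch.Size([1, 150, 1024])
```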
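The multi-scale idea itself could be sketched as follows: phoneme hidden states attend to a multi-sentence style prompt (phoneme-level style), and the resulting style-infused sequence conditions a decoder like the one above, whose input is prefixed with frame-level timbre-prompt tokens. Again, the module structure, names, and shapes here are invented for illustration and should not be read as the authors' architecture.

```python
# Hedged sketch of a speaker-aware text encoder: each phoneme queries a
# longer style prompt via cross-attention, so a longer prompt supplies
# richer style evidence. Shapes and names are assumptions.
import torch
import torch.nn as nn

class SpeakerAwareTextEncoder(nn.Module):
    def __init__(self, n_phonemes=100, d_model=256):
        super().__init__()
        self.phone_emb = nn.Embedding(n_phonemes, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        # Cross-attention: phoneme hiddens query the style-prompt frames.
        self.style_attn = nn.MultiheadAttention(d_model, num_heads=4,
                                                batch_first=True)

    def forward(self, phonemes, style_prompt):
        # phonemes: [B, Tp] phoneme IDs; style_prompt: [B, Ts, d] features
        # extracted from multiple reference sentences.
        h = self.encoder(self.phone_emb(phonemes))
        style, _ = self.style_attn(query=h, key=style_prompt, value=style_prompt)
        return h + style  # style-infused phoneme-level representations

encoder = SpeakerAwareTextEncoder()
phonemes = torch.randint(0, 100, (1, 20))
style_prompt = torch.randn(1, 600, 256)  # features from several sentences
cond = encoder(phonemes, style_prompt)
print(cond.shape)  # torch.Size([1, 20, 256]); would serve as decoder memory,
# with frame-level timbre-prompt tokens prepended to the decoder input.
```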

| Search Query: ArXiv Query: search_query=au:"Dan Luo"&id_list=&start=0&max_results=3