AutoStyle-TTS: Retrieval-Augmented Generation based Automatic Style Matching Text-to-Speech Synthesis

Kavli Affiliate: Dan Luo

| First 5 Authors: Dan Luo, Chengyuan Ma, Weiqin Li, Jun Wang, Wei Chen

| Summary:

With the advancement of speech synthesis technology, users have higher
expectations for the naturalness and expressiveness of synthesized speech.
However, previous research has largely ignored the importance of prompt
selection. This study
proposes a text-to-speech (TTS) framework based on Retrieval-Augmented
Generation (RAG) technology, which can dynamically adjust the speech style
according to the text content to achieve more natural and vivid communication
effects. We have constructed a speech style knowledge database containing
high-quality speech samples in various contexts and developed a style matching
scheme. This scheme uses embeddings extracted by Llama, PER-LLM-Embedder, and
Moka to match against samples in the knowledge database, selecting the most
appropriate speech style for synthesis. Furthermore, our empirical research
validates the effectiveness of the proposed method. Our demo can be viewed at:
https://thuhcsi.github.io/icme2025-AutoStyle-TTS
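
The retrieval step described above — embedding the input text and matching it against a style knowledge database — can be sketched as a simple nearest-neighbor lookup. This is a minimal illustration, not the paper's implementation: the embeddings here are toy vectors, whereas the paper obtains them from models such as Llama, PER-LLM-Embedder, and Moka, and the function name is hypothetical.

```python
import numpy as np

def retrieve_style(text_embedding, style_embeddings, top_k=1):
    """Return indices of the stored style samples most similar to the text.

    Hypothetical sketch of embedding-based retrieval: rank knowledge-base
    entries by cosine similarity to the query embedding.
    """
    query = np.asarray(text_embedding, dtype=float)
    bank = np.asarray(style_embeddings, dtype=float)
    # Cosine similarity between the query and every stored style sample.
    sims = bank @ query / (
        np.linalg.norm(bank, axis=1) * np.linalg.norm(query) + 1e-9
    )
    # Indices of the top-k most similar styles, best first.
    return np.argsort(sims)[::-1][:top_k]

# Toy usage: a query close to the first stored style retrieves index 0.
best = retrieve_style([0.9, 0.1], [[1.0, 0.0], [0.0, 1.0]])
```

In the full system, the selected sample's speech style would then condition the TTS model for synthesis.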

| Search Query: ArXiv Query: search_query=au:"Dan Luo"&id_list=&start=0&max_results=3