Composable Text Controls in Latent Space with ODEs

Kavli Affiliate: Feng Yuan

| First 5 Authors: Guangyi Liu, Zeyu Feng, Yuan Gao, Zichao Yang, Xiaodan Liang

| Summary:

Real-world text applications often involve composing a wide range of text
control operations, such as editing the text w.r.t. an attribute, manipulating
keywords and structure, and generating new text of desired properties. Prior
work typically learns/finetunes a language model (LM) to perform individual or
specific subsets of operations. Recent research has studied combining
operations in a plug-and-play manner, often with costly search or optimization
in the complex sequence space. This paper proposes a new efficient approach for
composable text operations in the compact latent space of text. The
low-dimensionality and differentiability of the text latent vector allow us to
develop an efficient sampler based on ordinary differential equations (ODEs)
given arbitrary plug-in operators (e.g., attribute classifiers). By connecting
pretrained LMs (e.g., GPT-2) to the latent space through efficient adaptation, we
then decode the sampled vectors into desired text sequences. The flexible
approach permits diverse control operators (sentiment, tense, formality,
keywords, etc.) acquired using any relevant data from different domains.
Experiments show that composing those operators within our approach manages to
generate or edit high-quality text, substantially improving over previous
methods in terms of generation quality and efficiency.
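
To make the described pipeline concrete, below is a minimal, hypothetical sketch (not the authors' released code) of the core idea: plug-in attribute classifiers define energies over a compact latent vector, their sum is followed by a simple deterministic ODE (Euler) flow to steer the latent, and the result would then be decoded by the adapted LM. The latent dimension, classifier heads, target labels, and the decoder stub are all illustrative assumptions.

```python
import torch
import torch.nn as nn

LATENT_DIM = 128  # assumed size of the compact text latent space

# Plug-in operators: lightweight attribute classifiers over the latent vector.
sentiment_clf = nn.Linear(LATENT_DIM, 2)   # e.g., negative / positive
formality_clf = nn.Linear(LATENT_DIM, 2)   # e.g., informal / formal

def composed_energy(z, targets):
    """Sum of per-operator energies; lower energy = controls better satisfied."""
    energy = 0.0
    for clf, target in targets:
        logits = clf(z)
        labels = torch.full((z.size(0),), target, dtype=torch.long)
        energy = energy + nn.functional.cross_entropy(logits, labels)
    return energy

def ode_sample(z0, targets, steps=50, step_size=0.1):
    """Euler integration of dz/dt = -grad E(z): a simple deterministic flow
    toward latents that satisfy all composed control operators."""
    z = z0.clone()
    for _ in range(steps):
        z_req = z.detach().requires_grad_(True)
        grad = torch.autograd.grad(composed_energy(z_req, targets), z_req)[0]
        z = (z_req - step_size * grad).detach()
    return z

# Usage: start from an encoded sentence (or a prior sample), steer it toward
# positive sentiment and formal style, then decode with the adapted LM.
z_init = torch.randn(1, LATENT_DIM)
z_ctrl = ode_sample(z_init, [(sentiment_clf, 1), (formality_clf, 1)])
# text = adapted_gpt2_decoder(z_ctrl)  # hypothetical decoder head on GPT-2
```

Because each operator only needs a small classifier over the low-dimensional latent, new controls can in principle be trained independently on whatever labeled data is available and composed at sampling time without touching the LM.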

| Search Query: ArXiv Query: search_query=au:"Feng Yuan"&id_list=&start=0&max_results=3
