Intermittent Semi-working Mask: A New Masking Paradigm for LLMs

Kavli Affiliate: Feng Wang

| First 5 Authors: Mingcong Lu, Jiangcai Zhu, Wang Hao, Zheng Li, Shusheng Zhang

| Summary:

Multi-turn dialogue is a key interaction method between humans and Large
Language Models (LLMs). As conversations extend over multiple rounds,
maintaining LLMs' high generation quality and low latency becomes a challenge.
Mainstream LLMs can be grouped into two categories by masking strategy: causal
LLMs and prefix LLMs. Several works have demonstrated that prefix LLMs tend to
outperform causal ones in scenarios that depend heavily on historical context,
such as multi-turn dialogue or in-context learning, thanks to their
bidirectional attention over the prefix sequence. However, prefix LLMs suffer
from inherently inefficient training on multi-turn dialogue datasets. In
addition, their attention mechanism prevents them from reusing the Key-Value
Cache (KV Cache) across dialogue rounds to reduce generation latency. In this
paper, we propose a novel masking scheme called Intermittent Semi-working Mask
(ISM) to address these problems. Specifically, we apply alternating
bidirectional and unidirectional attention to the queries and answers in the
dialogue history. In this way, ISM simultaneously maintains the high generation
quality of prefix LLMs and the low generation latency of causal LLMs. Extensive
experiments illustrate that our ISM achieves significant performance gains.
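Below is a minimal, illustrative sketch of what such an alternating mask could look like, based only on the abstract's description: query segments receive bidirectional attention while answer segments keep standard causal (unidirectional) attention. The `segments` format, the function name `build_ism_mask`, and the assumption that bidirectionality applies within each query segment are hypothetical choices for illustration, not the paper's reference implementation.

```python
# Hedged sketch of an ISM-style attention mask (assumptions noted above).
import numpy as np

def build_ism_mask(segments):
    """segments: list of (length, kind) pairs with kind in {"query", "answer"}.

    Returns a (T, T) boolean mask where mask[i, j] is True if position i
    may attend to position j.
    """
    T = sum(length for length, _ in segments)
    # Causal base: every token attends to itself and all earlier tokens.
    mask = np.tril(np.ones((T, T), dtype=bool))

    start = 0
    for length, kind in segments:
        end = start + length
        if kind == "query":
            # Assumed scope: bidirectional attention within each query segment.
            mask[start:end, start:end] = True
        start = end
    return mask

if __name__ == "__main__":
    # Two dialogue rounds: (query, answer), (query, answer).
    mask = build_ism_mask([(3, "query"), (2, "answer"), (3, "query"), (2, "answer")])
    print(mask.astype(int))
```

Because answer segments stay strictly causal under this construction, their keys and values never depend on later tokens, which is consistent with the abstract's claim that the KV Cache can be reused across dialogue rounds.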

| Search Query: ArXiv Query: search_query=au:"Feng Wang"&id_list=&start=0&max_results=3
