Skywork: A More Open Bilingual Foundation Model

Kavli Affiliate: Cheng Peng

| First 5 Authors: Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu, Lijie Wang

| Summary:

In this technical report, we present Skywork-13B, a family of large language
models (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both
English and Chinese texts. This bilingual foundation model is the most
extensively trained and openly published LLM of comparable size to date. We
introduce a two-stage training methodology using a segmented corpus, targeting
general-purpose training first and domain-specific enhancement training second.
We show that our model not only excels on popular benchmarks, but also achieves
state-of-the-art performance in Chinese language modeling across diverse
domains. Furthermore, we propose a novel leakage detection method,
demonstrating that test data contamination is a pressing issue warranting
further investigation by the LLM community. To spur future research, we release
Skywork-13B along with checkpoints obtained during intermediate stages of the
training process. We are also releasing part of our SkyPile corpus, a
collection of over 150 billion tokens of web text, which is the largest
high-quality open Chinese pre-training corpus to date. We hope Skywork-13B and our
open corpus will serve as a valuable open-source resource to democratize access
to high-quality LLMs.
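
Below is a minimal sketch of loading one of the released checkpoints with
Hugging Face transformers. The repository id Skywork/Skywork-13B-base and the
use of trust_remote_code are assumptions based on the model's naming, not
details stated in this summary.

```python
# Minimal sketch: loading a released Skywork-13B checkpoint with Hugging Face
# transformers. The repo id "Skywork/Skywork-13B-base" and the need for
# trust_remote_code are assumptions, not confirmed by this summary.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Skywork/Skywork-13B-base"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    device_map="auto",       # shard the ~13B parameters across available GPUs
    torch_dtype="auto",      # keep the checkpoint's native precision
    trust_remote_code=True,  # the repo may ship custom modeling code
)

# Bilingual usage: the model is pre-trained on both English and Chinese text.
prompt = "中国的首都是"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```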

| Search Query: ArXiv Query: search_query=au:"Cheng Peng"&id_list=&start=0&max_results=3
