Every FLOP Counts: Scaling a 300B Mixture-of-Experts LING LLM without Premium GPUs

Kavli Affiliate: Feng Yuan

| First 5 Authors: Ling Team, Binwei Zeng, Chao Huang, Chao Zhang, Changxin Tian

| Summary:

In this technical report, we tackle the challenges of training large-scale
Mixture of Experts (MoE) models, focusing on overcoming cost inefficiency and
resource limitations prevalent in such systems. To address these issues, we
present two differently sized MoE large language models (LLMs), namely
Ling-Lite and Ling-Plus (referred to as "Bailing" in Chinese, spelled
Bǎilíng in Pinyin). Ling-Lite contains 16.8 billion parameters with 2.75
billion activated parameters, while Ling-Plus boasts 290 billion parameters
with 28.8 billion activated parameters. Both models exhibit comparable
performance to leading industry benchmarks. This report offers actionable
insights to improve the efficiency and accessibility of AI development in
resource-constrained settings, promoting more scalable and sustainable
technologies. Specifically, to reduce training costs for large-scale MoE
models, we propose innovative methods for (1) optimization of model
architecture and training processes, (2) refinement of training anomaly
handling, and (3) enhancement of model evaluation efficiency. Additionally,
leveraging high-quality data generated from knowledge graphs, our models
demonstrate superior capabilities in tool use compared to other models.
Ultimately, our experimental findings demonstrate that a 300B MoE LLM can be
effectively trained on lower-performance devices while achieving comparable
performance to models of a similar scale, including dense and MoE models.
Using a lower-specification hardware system during the pre-training phase
yields significant cost savings relative to high-performance devices,
reducing computing costs by approximately 20%. The models can be accessed at
https://huggingface.co/inclusionAI.
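
To make the distinction between total and activated parameters concrete, the following minimal top-k routing sketch (in PyTorch, not the actual Ling architecture; the layer sizes, expert count, and top-k value are illustrative assumptions) shows why an MoE model activates only a small fraction of its parameters per token:

```python
import torch
import torch.nn as nn

class TinyMoELayer(nn.Module):
    """Toy top-k Mixture-of-Experts layer: only top_k of the num_experts
    feed-forward experts run per token, so the 'activated' parameter count
    is far smaller than the total parameter count."""

    def __init__(self, d_model=64, d_ff=256, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):                       # x: (tokens, d_model)
        scores = self.router(x)                 # (tokens, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = weights.softmax(dim=-1)       # normalize over the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e        # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

layer = TinyMoELayer()
total = sum(p.numel() for p in layer.parameters())
active = sum(p.numel() for p in layer.router.parameters()) + \
         layer.top_k * sum(p.numel() for p in layer.experts[0].parameters())
print(f"total params: {total:,}  activated per token: {active:,}")
```

At the scale reported here, the same principle means Ling-Plus computes with only 28.8B of its 290B parameters per token, which is what keeps per-FLOP cost manageable on lower-specification hardware.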

| Search Query: ArXiv Query: search_query=au:"Feng Yuan"&id_list=&start=0&max_results=3