RTLCoder: Fully Open-Source and Efficient LLM-Assisted RTL Code Generation Technique

Kavli Affiliate: Jing Wang

| First 5 Authors: Shang Liu, Wenji Fang, Yao Lu, Jing Wang, Qijun Zhang

| Summary:

The automatic generation of RTL code (e.g., Verilog) using natural language
instructions and large language models (LLMs) has attracted significant
research interest recently. However, most existing approaches heavily rely on
commercial LLMs such as ChatGPT, while open-source LLMs tailored for this
specific design generation task exhibit notably inferior performance. The
absence of high-quality open-source solutions restricts the flexibility and
data privacy of this emerging technique. In this study, we present a new
customized LLM solution with a modest parameter count of only 7B, achieving
better performance than GPT-3.5 on all representative benchmarks for RTL code
generation. In particular, it outperforms GPT-4 on the VerilogEval Machine
benchmark. This balance between accuracy and efficiency is made possible by
leveraging our new RTL code dataset and a customized LLM algorithm, both of
which have been made fully open-source. Furthermore, we have successfully
quantized our LLM to 4 bits, reducing its total size to 4 GB and enabling it to
run on a single laptop with only slight performance degradation. This
efficiency allows the RTL generator to serve as a local assistant for
engineers, keeping all design data on the engineer's own machine and thereby
addressing design privacy concerns.
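Below is a minimal sketch of what running such a 4-bit quantized 7B model
locally could look like, using Hugging Face transformers with bitsandbytes NF4
quantization. The model ID, the prompt wording, and the NF4 recipe are all
assumptions for illustration; the summary only states that the model is 7B and
quantizes to 4 bits at roughly 4 GB.

```python
# Sketch: local 4-bit inference with a 7B RTL-generation model.
# The model ID below is a hypothetical placeholder, not confirmed by this post.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL_ID = "example-org/rtlcoder-7b"  # hypothetical Hugging Face repo name

# 4-bit NF4 quantization via bitsandbytes keeps a 7B model around 4 GB
# (the exact quantization scheme used by the authors is an assumption here).
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    quantization_config=quant_config,
    device_map="auto",
)

# Natural-language design instruction in, Verilog RTL out.
instruction = (
    "Write a Verilog module named counter with an active-high synchronous "
    "reset and an 8-bit output q that increments on every rising clock edge."
)
inputs = tokenizer(instruction, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because everything runs on the local machine, no design description or
generated RTL ever leaves the engineer's laptop, which is the privacy benefit
the summary highlights.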

| Search Query: ArXiv Query: search_query=au:"Jing Wang"&id_list=&start=0&max_results=3
