GenEDA: Towards Generative Netlist Functional Reasoning via Cross-Modal Circuit Encoder-Decoder Alignment

Kavli Affiliate: Jing Wang

| First 5 Authors: Wenji Fang

| Summary:

The success of foundation AI has motivated research into circuit foundation
models, which are customized to assist the integrated circuit (IC) design
process. However, existing pre-trained circuit foundation models are typically
limited to standalone encoders for predictive tasks or decoders for generative
tasks. These two model types are developed independently, operate on different
circuit modalities, and reside in separate latent spaces. This restricts their
ability to complement each other for more advanced capabilities. In this work,
we present GenEDA, the first framework that cross-modally aligns circuit
encoders with decoders within a shared latent space. GenEDA bridges the gap
between graph-based circuit representation learning and text-based large
language models (LLMs), enabling communication between their respective latent
spaces. To achieve this alignment, we propose two paradigms that support both
open-source trainable LLMs and commercial frozen LLMs. We leverage this aligned
architecture to develop the first generative foundation model for netlists,
unleashing LLMs’ generative reasoning capability on low-level, bit-blasted
netlists. GenEDA enables three unprecedented generative netlist functional
reasoning tasks, in which it works in the reverse direction of synthesis,
generating high-level functionality such as specifications and RTL code from
low-level netlists.
These tasks move beyond traditional gate function classification to direct
generation of full-circuit functionality. Experiments demonstrate that GenEDA
significantly boosts the performance of advanced LLMs (e.g., the GPT and
DeepSeek series) on all three tasks.
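The abstract does not detail the alignment mechanism, but a common way to
connect a pretrained graph encoder to a trainable LLM (as in vision-language
models) is to project the encoder's embedding into the LLM's token-embedding
space and prepend it as soft-prompt tokens. The sketch below illustrates that
idea only; every class name, dimension, and the soft-prompt design itself are
illustrative assumptions, not GenEDA's published implementation.

```python
# Hypothetical sketch of cross-modal encoder-decoder alignment (not
# GenEDA's actual code): a graph-based netlist encoder's embedding is
# projected into an LLM's token-embedding space and prepended as soft
# prompts, in the spirit of the "trainable LLM" paradigm in the abstract.
import torch
import torch.nn as nn


class NetlistEncoderStub(nn.Module):
    """Stand-in for a pretrained graph encoder over gate-level netlists."""

    def __init__(self, graph_dim: int = 256):
        super().__init__()
        self.graph_dim = graph_dim

    def forward(self, node_features: torch.Tensor) -> torch.Tensor:
        # A real encoder would run message passing over the netlist graph;
        # here we simply pool node features into one circuit embedding.
        return node_features.mean(dim=1)  # (batch, graph_dim)


class CrossModalProjector(nn.Module):
    """Maps circuit embeddings into the LLM's input-embedding space,
    producing a small number of 'soft prompt' tokens."""

    def __init__(self, graph_dim: int, llm_dim: int, num_tokens: int = 4):
        super().__init__()
        self.num_tokens = num_tokens
        self.llm_dim = llm_dim
        self.proj = nn.Sequential(
            nn.Linear(graph_dim, llm_dim * num_tokens),
            nn.GELU(),
            nn.Linear(llm_dim * num_tokens, llm_dim * num_tokens),
        )

    def forward(self, circuit_emb: torch.Tensor) -> torch.Tensor:
        out = self.proj(circuit_emb)  # (batch, llm_dim * num_tokens)
        return out.view(-1, self.num_tokens, self.llm_dim)


def build_llm_inputs(soft_tokens: torch.Tensor,
                     prompt_embs: torch.Tensor) -> torch.Tensor:
    """Concatenate projected circuit tokens with the embedded text prompt,
    e.g. 'Describe the functionality of this netlist:'."""
    return torch.cat([soft_tokens, prompt_embs], dim=1)


if __name__ == "__main__":
    batch, nodes, graph_dim, llm_dim = 2, 100, 256, 4096
    encoder = NetlistEncoderStub(graph_dim)
    projector = CrossModalProjector(graph_dim, llm_dim)

    fake_graph = torch.randn(batch, nodes, graph_dim)
    circuit_emb = encoder(fake_graph)       # (2, 256)
    soft_tokens = projector(circuit_emb)    # (2, 4, 4096)

    fake_prompt_embs = torch.randn(batch, 16, llm_dim)
    llm_inputs = build_llm_inputs(soft_tokens, fake_prompt_embs)
    print(llm_inputs.shape)  # torch.Size([2, 20, 4096])
```

For the frozen commercial-LLM paradigm the abstract mentions, embeddings
cannot be injected directly; a plausible variant instead verbalizes the
encoder's outputs into the text prompt before querying the LLM.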

| Search Query: ArXiv Query: search_query=au:"Jing Wang"&id_list=&start=0&max_results=3
