CIRCUIT: A Benchmark for Circuit Interpretation and Reasoning Capabilities of LLMs
Author(s)
Skelić, Lejla
Advisor
Han, Ruonan
Abstract
Large Language Models (LLMs) have not been extensively explored in analog circuit design, a domain that could benefit from a reasoning-based approach transcending traditional optimization techniques. In particular, despite their growing relevance, no benchmarks exist to assess LLMs' ability to reason about circuits. We therefore created the CIRCUIT dataset, consisting of 510 question-answer pairs spanning various levels of analog-circuit-related subjects. The best-performing model on our dataset, GPT-4o, achieves 48.04% accuracy when evaluated on the final numerical answer. To evaluate the robustness of LLMs on our dataset, we introduced a unique dataset design and evaluation metric that enable unit-test-like evaluation by grouping questions into unit tests. Under this metric, GPT-4o passes only 27.45% of the unit tests, highlighting that even the most advanced LLMs still struggle with understanding circuits, which requires multi-level reasoning, particularly when circuit topologies are involved. This circuit-specific benchmark introduces a scalable and reliable automatic evaluation method, transferable to other reasoning domains, and highlights LLMs' limitations, offering valuable insights for advancing their application in analog integrated circuit design.
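As a minimal illustration of the unit-test-style metric described above, the Python sketch below groups per-question results by a unit-test identifier and counts a test as passed only when every question in its group is answered correctly. The field names ("unit_test_id", "correct") and the all-correct pass criterion are assumptions made for illustration, not the thesis's exact implementation.

    from collections import defaultdict

    def unit_test_pass_rate(results):
        """Compute the fraction of unit tests passed.

        `results` is a list of dicts such as
        {"unit_test_id": "ut1", "correct": True}, each recording
        whether the model's final numerical answer to one question
        was correct. (Field names are assumed for illustration.)
        """
        # Group per-question correctness flags by unit test.
        groups = defaultdict(list)
        for r in results:
            groups[r["unit_test_id"]].append(r["correct"])

        # A unit test passes only if every question in it is answered
        # correctly (assumed criterion, in the spirit of software
        # unit tests, where one failing assertion fails the test).
        passed = sum(all(flags) for flags in groups.values())
        return passed / len(groups)

    # Example: two unit tests; only the first is fully correct.
    results = [
        {"unit_test_id": "ut1", "correct": True},
        {"unit_test_id": "ut1", "correct": True},
        {"unit_test_id": "ut2", "correct": True},
        {"unit_test_id": "ut2", "correct": False},
    ]
    print(unit_test_pass_rate(results))  # 0.5

Grouping questions this way penalizes inconsistent reasoning: a model that answers some but not all questions about the same circuit correctly fails that circuit's unit test, which is why the unit-test pass rate (27.45%) is well below the per-question accuracy (48.04%).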
Date issued
2025-02
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology