Tackling Transistor Density, ASIC Design Cost, and LLM Scaling Challenges
ACE² addresses three converging semiconductor scaling challenges through a cohesive research program spanning EDA frameworks, machine learning, and custom ASIC design.
Monolithic 3D integration stacks multiple device layers on a single die using ultra-thin inter-tier dielectrics, enabling nanoscale Metal Inter-Layer Vias (MIVs) — 1,000× smaller than TSVs. This unlocks unprecedented transistor density but demands entirely new EDA flows that conventional tools cannot provide.
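The density advantage of MIVs can be seen with a quick back-of-the-envelope calculation. The pitch values below are illustrative assumptions (micrometer-scale TSVs vs. roughly 100 nm MIVs), not measured process parameters; a linear size reduction compounds quadratically in vias per unit area.

```python
# Illustrative comparison of vertical interconnect density for TSVs vs. MIVs.
# Pitch values are assumptions for illustration, not process data.

def via_density_per_mm2(pitch_um: float) -> float:
    """Vias per mm^2 on a square grid with the given pitch (micrometers)."""
    vias_per_mm = 1000.0 / pitch_um   # 1 mm = 1000 um
    return vias_per_mm ** 2

tsv_pitch_um = 5.0   # assumed TSV pitch (micrometer scale)
miv_pitch_um = 0.1   # assumed MIV pitch (~100 nm scale)

tsv_density = via_density_per_mm2(tsv_pitch_um)
miv_density = via_density_per_mm2(miv_pitch_um)

print(f"TSV: {tsv_density:,.0f}/mm^2, MIV: {miv_density:,.0f}/mm^2, "
      f"ratio: {miv_density / tsv_density:,.0f}x")
```

Even a modest 50× pitch reduction yields a 2,500× gain in available vertical connections per unit area, which is what makes MIV-aware placement and routing a qualitatively different EDA problem.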
Our NSF CAREER-funded research (2025–2030) develops process-technology-aware EDA frameworks accounting for MIV coupling effects, back-gate transistor opportunities, and heterogeneous substrate integration.
Modern ASIC design faces an engineering cost explosion — a complex SoC requires hundreds of engineers and years of effort. ACE² harnesses graph neural networks, reinforcement learning, and large language models to dramatically accelerate the ASIC design process from RTL through physical implementation.
Our SysVCoder framework fine-tunes Qwen 7B for Verilog RTL generation, achieving state-of-the-art Pass@1 and Pass@5 rates. A reinforcement learning agent that pairs a graph attention network (GAT) with proximal policy optimization (PPO) drives timing-aware cell placement via OpenROAD.
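To make "timing-aware" concrete, the sketch below shows one plausible reward shaping for such a placement agent. The metric names and weights (half-perimeter wirelength, worst negative slack, a 10× timing penalty) are illustrative assumptions, not the actual reward used in this work.

```python
# Illustrative reward shaping for a timing-aware placement RL agent.
# Metric names and weights are assumptions; the actual GAT+PPO/OpenROAD
# flow's reward function is not reproduced here.

def placement_reward(hpwl_um: float, wns_ns: float,
                     w_wirelength: float = 1.0, w_timing: float = 10.0) -> float:
    """Higher is better: penalize half-perimeter wirelength (HPWL) and
    worst negative slack (WNS < 0 ns means a timing violation)."""
    timing_penalty = max(0.0, -wns_ns)   # only negative slack is penalized
    return -(w_wirelength * hpwl_um * 1e-3 + w_timing * timing_penalty)

# A placement that meets timing (WNS >= 0) is scored on wirelength alone;
# a slightly shorter but timing-violating placement scores worse overall.
print(placement_reward(hpwl_um=50_000, wns_ns=0.2))    # -50.0
print(placement_reward(hpwl_um=48_000, wns_ns=-0.5))   # -53.0
```

The asymmetric penalty is the key design choice: it lets the agent trade a little wirelength for closing timing, which pure wirelength-driven placers cannot do.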
GPU compute density for LLM inference has improved only ~15% over 2.5 years (H100 → B200), while model sizes have grown orders of magnitude. Purpose-built ASIC accelerators offer the only viable path to efficient LLM deployment at scale and on edge devices.
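Annualizing the ~15% figure quoted above makes the stagnation plain:

```python
# Annualized growth rate implied by a ~15% compute-density gain over
# 2.5 years (H100 -> B200, per the figure quoted in the text).

total_gain = 1.15   # ~15% total improvement
years = 2.5

annual_rate = total_gain ** (1.0 / years) - 1.0
print(f"~{annual_rate * 100:.1f}% per year")   # roughly 5.7% per year
```

At under 6% per year, GPU compute density cannot keep pace with model sizes growing by orders of magnitude, which is the gap purpose-built accelerators target.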
We design custom Compute-in-Memory (CIM) ASIC architectures for LLM acceleration, co-optimizing quantization, compression, and RAG hardware with SRAM-based dataflow to achieve significant gains in MIPS and energy efficiency.
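As a minimal sketch of the quantization side of this co-optimization, the snippet below shows symmetric per-tensor INT8 quantization of the kind a CIM accelerator might apply to LLM weights before mapping them into SRAM. It is illustrative only and does not reproduce the quantization scheme used in the ACE² designs.

```python
# Minimal symmetric per-tensor INT8 quantization (illustrative sketch,
# not the actual ACE^2 CIM quantization scheme).

def quantize_int8(weights: list[float]) -> tuple[list[int], float]:
    """Map floats to the int8 range [-127, 127] with a single scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q: list[int], scale: float) -> list[float]:
    return [v * scale for v in q]

w = [0.42, -1.27, 0.08, 0.96]
q, scale = quantize_int8(w)
print(q)                      # the largest-magnitude weight maps to -127
print(dequantize(q, scale))   # close to the original weights
```

Storing 8-bit integers instead of 16-bit floats halves SRAM footprint and lets multiply-accumulate arrays operate on narrow integers, which is where most of the energy-efficiency gain comes from.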
Over $6.97M in competitive external and internal grants supports all three research thrusts.
We gratefully acknowledge the support of the following agencies and industry partners.