Industry Forum
Time: 11:00-12:40, November 3, 2025
Place: Conference Room 101+102
Chair: Dr. Huaide Wang
Local Chair: Prof. Kwanseo Park
Advisor: Dr. Stefan Rusu
A UCIe Optical I/O Retimer Chiplet for AI Scale Up Fabrics
Abstract
We demonstrate a UCIe Optical I/O Retimer Chiplet that enables package-to-package, multi-rack system connectivity for scale-up AI fabrics. The monolithically integrated chiplet contains UCIe interfaces, high-speed optical drivers and TIAs, micro-ring-based optical devices, and protocol logic, all on a single piece of silicon, enabling seamless integration with AI compute, memory, and switching silicon. The chiplet is the first demonstration of 16-wavelength micro-ring-based optical links. Each wavelength operates at 32 Gbps NRZ, achieving a total of 1.024 Tbps of bidirectional bandwidth per optical port from one package to another and an aggregate bandwidth of 8.192 Tbps across the chip's 8 optical ports. We show error-free transmission of UCIe data between chiplets in a package, as well as error-free optical transmission of UCIe traffic between packages.
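For reference, the quoted figures compose directly from the per-wavelength rate, assuming the bidirectional per-port number sums both directions of the link:
\[
16~\text{wavelengths} \times 32~\text{Gbps} = 512~\text{Gbps per direction},
\]
\[
512~\text{Gbps} \times 2~\text{directions} = 1.024~\text{Tbps per optical port},
\]
\[
1.024~\text{Tbps} \times 8~\text{ports} = 8.192~\text{Tbps aggregate}.
\]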
Biography
Pavan Bhargava received a Ph.D. degree in Electrical Engineering from UC Berkeley in 2021. He works at Ayar Labs, where he leads Analog/Mixed-Signal design and Firmware development for high-speed, multi-wavelength optical I/O and die-to-die electrical interfaces.
Design Challenges of XO and PLL in Modern RF Applications
Abstract
The crystal oscillator (XO) and phase-locked loop (PLL) are key IPs in an RF transceiver. They dominate the error-vector-magnitude (EVM) performance of the transmitter at low output power. As modulation advances to 4K QAM, the phase noise specifications of the XO and PLL become more stringent. However, as process technology scales, supply voltages drop, which is detrimental to low-phase-noise design. Many low-noise techniques have been proposed in the literature. Nevertheless, they usually adopt a higher reference frequency, which is not suitable for commercial products. Moreover, most of them focus on integer-N PLLs, which are likewise impractical. In a modern RF transceiver, low power is also a desired feature, especially in consumer products. Low XO phase noise inevitably demands more power. The XO's effective power can be reduced by turning it off in sleep mode. This, however, requires a short startup time, and the crystal's high Q makes the startup time longer than desired. This talk introduces fast XO startup and low-phase-noise techniques for the XO and the fractional-N PLL.
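As a back-of-the-envelope illustration of why the crystal's high Q lengthens startup (a standard estimate, with example values assumed rather than taken from the talk): the oscillation envelope of a resonator builds up with time constant
\[
\tau \approx \frac{2Q}{\omega_0} = \frac{Q}{\pi f_0},
\]
so for, say, \(Q = 5\times10^{4}\) at \(f_0 = 26~\text{MHz}\), \(\tau \approx 0.6~\text{ms}\), and reaching full amplitude takes several such time constants, i.e. milliseconds, unless startup is actively accelerated.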
Biography
Chao-Ching Hung received the Ph.D. degree in electrical engineering from National Taiwan University, Taipei, Taiwan, in 2011. He is currently with MediaTek. Since joining MediaTek in 2010, he has been developing frequency synthesizer designs for various wireless standards, such as Bluetooth, Wi-Fi, and cellular. He has authored 16 IEEE conference and journal papers and holds 13 U.S. patents.
HBM: The Core Memory for AI Computation
Abstract
High bandwidth memory (HBM) has become a key enabler for AI computing, offering high bandwidth and low power consumption in a small form factor. Its 3D-stacked architecture with a wide memory interface significantly enhances performance for AI workloads. However, mitigating the risks inherent in the 3D-stacked structure involves many design techniques, such as DFT, BIST, redundancy, and power delivery for HBM. In particular, HBM demands careful consideration during design because the environment in which it is tested differs from the one in which it is actually used. This talk discusses these HBM design topics.
Biography
Jinhyung Lee received the B.S. and Ph.D. degrees in electrical engineering and computer science from Seoul National University, Seoul, South Korea, in 2014 and 2019, respectively.
He joined SK Hynix, Icheon, South Korea. His current research interests include high-speed I/O circuits, adaptive equalizers, and high bandwidth memory (HBM). Dr. Lee received the Best JSSC Paper Award from the SSCS Seoul Chapter in 2021.
HyperAccel LPU: A Purpose-Built AI Chip for LLM Inference with Breakthrough Efficiency
Abstract
The global AI industry has experienced a remarkable transformation with the advent of transformer-based large language models (LLMs), which have redefined the whole landscape of artificial intelligence. In this talk, I will delve into the novel computational challenges posed by LLMs, particularly for inference services. We will discuss the latest trends in AI chip technology that have developed in response to these challenges. Furthermore, I will provide an in-depth look at the core technologies behind HyperAccel's LPU (LLM Processing Unit), a cutting-edge semiconductor specifically designed for LLM inference workloads. By exploring the innovative features and design principles of the LPU, we will gain insight into how it addresses the unique requirements of LLMs, paving the way for more efficient and scalable AI solutions. Lastly, I will briefly report on the latest progress of our 4nm chip product code-named “Bertha” for datacenter services.
Biography
Seungjae Moon is the co-founder and AI hardware architect at HyperAccel. He co-founded HyperAccel in 2023 to build innovative AI processors and solutions for generative AI. He led the design of the LLM Processing Unit (LPU), an architecture optimized for efficient LLM inference. For the LPU technology, he received the IEEE Micro Best Paper Award and the Design Automation Conference Best Presentation Award for the most novel hardware design of 2024. He has led collaborations with global industry partners, such as AMD, and was honored with the Best Exhibitor Award at the VLSI Design Conference and recognized as the IC Taiwan Grand Challenge winner. He received the B.S. in Electrical Engineering from the University of Washington, Seattle, in 2020, and the M.S. in Electrical Engineering from the Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea, in 2023.