IT Vision in Asia

Planning for Future Semi-conductors in AI Era


Time : 11:00-12:40, November 3, 2025

Place : Conference Room 103+104

Chair : Prof. Minjae Lee

Co-Chair : Prof. Jian Zhao

Bor-Sung Liang

MediaTek Inc.
Taiwan

AI Computing Design Trends in the Generative AI Era

Abstract

Large Language Models (LLMs) have demonstrated exceptional performance across numerous generative AI applications, but they require significant computation for both AI training and inference. The growth rate of these computational requirements significantly outpaces advancements in semiconductor process technology. Consequently, innovative IC and system design techniques are essential to address challenges related to computing power, memory, bandwidth, and energy consumption to meet AI computing needs.


In this talk, we will explore the evolution of LLMs in the generative AI era and their influence on AI computing design trends. We will trace how the focus of LLMs has shifted from training to inference, and then to agentic AI and physical AI, which extend AI capability beyond a single LLM. Regarding AI computing design trends, we will discuss how scale-up and scale-out strategies affect AI computing performance and energy efficiency, and the different strategies for designing AI computing chips for data centers and edge devices. These trends will significantly shape the design of future computing architectures and influence the advancement of circuit and system designs.

Biography

Dr. Bor-Sung Liang is currently a Senior Director of Corporate Strategy & Strategic Technology at MediaTek, Hsinchu Science Park, Taiwan, and a Director of the Board of the MediaTek Foundation. He is also concurrently serving as a Visiting Professor at CSIE (Department of Computer Science and Information Engineering), GIEE (Graduate Institute of Electronics Engineering), EECS, and GSAT at National Taiwan University, as well as a Visiting Professor at the College of Electrical and Computer Engineering (ECE), National Yang Ming Chiao Tung University. He is also a Director of the IEEE Taipei Section.


He received his Ph.D. degree from the Institute of Electronics, National Chiao Tung University, and graduated from the EMBA program, College of Management, National Taiwan University. Dr. Liang has received several important awards, including the Ten Outstanding Young Persons Award, Taiwan, R.O.C.; the National Invention and Creation Award three times (one Gold Medal and two Silver Medals) from the Intellectual Property Bureau of the Ministry of Economic Affairs, Taiwan; the Outstanding Youth Innovation Award of the Industrial Technology Development Award from the Department of Industrial Technology of the Ministry of Economic Affairs, Taiwan; the Outstanding ICT Elite Award of ICT Month, R.O.C.; and the K. T. Li Young Researcher Award from the Institute of Information & Computing Machinery and the ACM Taipei/Taiwan Chapter.

Jun Makino

Preferred Networks
Japan

Processor Design in Post-Moore and Post-GPU Era

Abstract

We have finally entered the Post-Moore era, in which the shrinking of transistors no longer gives an automatic improvement in computer performance. We can now see the slowdown in the performance improvement of CPUs, GPUs, and even AI accelerators. However, from the viewpoint of computer architecture, the real problem is not the Post-Moore era itself, but the fact that the designs of these processors cannot take advantage of advances in semiconductor technology.


This is similar to what happened to shared-memory parallel-vector machines 30 years ago, when distributed-memory machines took over. In this talk, I will focus on the potential of emerging technologies such as chiplets, in particular the 3D integration of DRAM and logic, and their impact on computer architecture.

Biography

Jun Makino received his Ph.D. from the University of Tokyo. After receiving his Ph.D., he worked at the University of Tokyo, the National Astronomical Observatory of Japan, and Tokyo Institute of Technology. From 2014 to 2021, he worked at RIKEN AICS as a sub-leader of the Flagship 2020 project. Since 2016, he has worked at Kobe University, and since 2023 also at Preferred Networks as VP and CTO of computer architecture. His research interests are stellar dynamics, large-scale scientific simulation, and high-performance computing.

Xiang Qiu

MUCSE
China

AI Networking Challenges and Solutions with MUCSE

Abstract

As generative AI has advanced rapidly in recent years, the demand for computing power has increased significantly. AI infrastructure has evolved from a single server with several GPUs to huge AI clusters with hundreds of thousands of GPU servers, posing unprecedented challenges to the network infrastructure. Unlike traditional data center networks, AI networks have completely different flow patterns and performance requirements for LLM training and inference. The network must provide ultra-high bandwidth, ultra-low latency, and exceptional reliability. To address these challenges, MUCSE has developed a 400Gbps high-speed RDMA NIC with advanced congestion control capability. Furthermore, we are actively working with several GPU/accelerator vendors to provide networking solutions for advanced AI SuperPods with hundreds of GPUs.

Biography

Xiang Qiu received his Ph.D. degree in Electrical and Computer Engineering from the University of California, Santa Barbara in 2013. Dr. Qiu now serves as R&D Director at Wuxi Micro Innovation Integrated Circuit Design Co., Ltd. (MUCSE). Before that, he held several senior positions in both academia and industry, including at East China Normal University, Synopsys, and Flash Billion Inc. Dr. Qiu's research interests include high-speed networking for AI, reconfigurable computing, computer architecture, and EDA.

Jaewoong Choi

ETRI
Korea

ABS1x: PIM Processor Using Heterogeneous Integration of Chiplets

Abstract

Since the emergence of artificial neural networks, we have been experiencing a wave of innovation driven by generative AI models. In response to these changes, the demand for hyperscale accelerators capable of real-time processing of generative tasks is rapidly increasing.

In particular, chiplet-based architectures that integrate High-Bandwidth Memory (HBM) and Neural Processing Units (NPUs) are becoming the mainstream in AI processor design. This trend is not limited to industry leaders like NVIDIA and AMD but is rapidly spreading across AI hardware companies worldwide.

This talk will present the direction of AI accelerator design being carried out as part of South Korea's national semiconductor initiative on Processing-In-Memory (PIM). It will examine the key technological innovations required to realize high-performance chiplet-based architectures and discuss the challenges involved in meeting the computational and memory demands of generative AI models.

Biography

Jaewoong Choi received the B.S. degree in electrical engineering from Hanyang University, Seoul, South Korea, in 2018, and the M.S. degree in electronic engineering from the Korea Advanced Institute of Science and Technology (KAIST), South Korea, in 2020. Since 2021, he has been a researcher in the Processing-in-Memory (PIM) laboratory at the Electronics and Telecommunications Research Institute (ETRI), South Korea. At ETRI, he developed a PHY IP targeting high-bandwidth memory (HBM3) and is currently developing a 2.5D chiplet-based accelerator.

A-SSCC2025

[Address] #107-601, 57 Eoeun-ro, Yuseong-gu, Daejeon, Republic of Korea (34140)

[Tel] +82-2-757-0981

[Fax] +82-2-752-1522

[E-mail] secretary@a-sscc2025.org 

[Registration Number] 622-82-73798

[Representative] Minkyu Je