Technical Deep Dive into the UALink 200G Specification, Scale-Up, and Use Cases | Kisaco Research


As AI workload demands continue to accelerate, Cloud Service Providers, System OEMs, and IP/Silicon vendors require a scalable, high-performance solution to support advanced workloads. By enhancing performance, optimizing power and cost efficiency, and promoting interoperability and supply chain diversity, the UALink 200G 1.0 Specification delivers a low-latency, high-bandwidth interconnect designed for efficient communication between accelerators and switches within AI computing pods.

In this session, a panel of UALink experts will explore the latest developments in the UALink 200G 1.0 Specification and demonstrate how it enables scalable, multi-node AI architectures. Attendees will also have the chance to engage directly with the panel to discuss future applications and learn how UALink-enabled systems will lay the foundation for next-generation AI/ML systems.

Room 201

Sponsor(s): 
UALink Consortium
Speaker(s): 

Nafea Bshara

VP/Distinguished Engineer
AWS

Nafea Bshara is a vice president/distinguished engineer working on compute, machine learning, network, and storage architecture with Amazon Web Services.

Gaya Nagarajan

VP of Network Infrastructure
Meta

Gaya Nagarajan joined Meta in 2012 as a Network Engineer and currently serves as the Vice President of Network Infrastructure. Based in the Bay Area, he leads the team responsible for the full networking stack and end-to-end lifecycle: from designing and building to operating one of the largest networks in the world, seamlessly connecting 3+ billion users and catering to their diverse and dynamic demands. Gaya is instrumental in ensuring the performance, robustness, scalability, and efficiency of Meta's networks across its data centers and large-scale AI clusters, its global backbone (including subsea and terrestrial fiber investments), and its edge network and global CDN. Prior to his tenure at Meta, Gaya made significant contributions at Brocade, developing innovative products designed to meet the complex needs of large service providers. He holds an M.S. in Computer Science from the University of Kansas and a Bachelor of Engineering from the University Visvesvaraya College of Engineering (UVCE) in India. Beyond his professional and academic achievements, Gaya is an avid sports enthusiast and keeps an active lifestyle: he has a passion for cricket, golf, and yoga.

Amber Huffman

Principal Engineer
Google Cloud

Amber Huffman is a Principal Engineer in Google Cloud responsible for leading industry engagement efforts in the data center ecosystem across servers, storage, networking, accelerators, power, cooling, security, and more. Before joining Google, she spent 25 years at Intel, serving as an Intel Fellow and VP. Amber is the President of NVM Express, a member of the Board of Directors for the Open Compute Project Foundation (OCP) and for Ultra Accelerator Link (UALink), and the chair of the RISC-V Software Ecosystem (RISE) Project. She has led numerous industry standards to successful adoption, including NVM Express, Open NAND Flash Interface, and Serial ATA.

Moderator:

Peter Onufryk

Fellow
Intel

Peter Onufryk is a Fellow at Intel Corporation. His research interests include IP and data center architecture, along with UALink, NVMe, and Universal Chiplet Interconnect Express (UCIe). Peter received his Ph.D. in electrical and computer engineering from Rutgers University.

Session Type: 
General Session (Presentation)