In this keynote, Dr. Cédric Bourrasset, AI Distinguished Expert at Atos, will reveal how Atos pioneered the architecture, build, and delivery of large-scale AI infrastructures. He will present a live demonstration of Atos technology illustrating new AI-driven endpoints with GPU and IPU workflow capabilities, along with a global customer case study elaborating on the complex challenges involved in designing and manufacturing large-scale AI computing platforms. He will also draw on more than 15 years of personal experience in designing and building supercomputing systems.

Developer Efficiency
Edge AI
Enterprise AI
ML at Scale
Novel AI Hardware
Systems Design
Data Science
Hardware Engineering
Software Engineering
Strategy
Systems Engineering

Author:

Cedric Bourrasset

Head, High Performance AI Business Unit
Atos

Dr. Cedric Bourrasset is the AI Business Leader for the High Performance Computing Business Unit at Atos. He is also the AI product manager for the Atos Codex AI suite, software that enables AI workloads in HPC environments and integrates a computer vision solution. He joined Atos in 2016 as an expert in the HPC/AI domain.

Previously, Cedric received his Ph.D. in electronics and computer vision from Blaise Pascal University in Clermont-Ferrand, where he defended a thesis on the dataflow model of computation for FPGA high-level synthesis in embedded machine learning applications.

Chip Design
Developer Efficiency
Edge AI
Enterprise AI
ML at Scale
Novel AI Hardware
Systems Design
Data Science
Hardware Engineering
Software Engineering
Strategy
Systems Engineering

Author:

Gordon Wilson

Co-Founder & CEO
Rain Neuromorphics

The true potential of AI rests on super-human learning capacity, and on the ability to selectively draw on that learning. Both of these properties – scale and selectivity – challenge the design of AI computers and the tools used to program them. A rich pool of new ideas is emerging, driven by a new breed of computing company, according to Graphcore co-founder Simon Knowles. At the AI Hardware Summit, Phil Brown, VP, Scaled Systems Product, discusses the creation of the Intelligence Processing Unit (IPU) – a new type of processor, specifically designed for AI computation. He looks ahead towards the development of AIs with super-human cognition, and explores the nature of the computing systems needed to make powerful AI an economical, everyday reality.

Developer Efficiency
Enterprise AI
ML at Scale
Novel AI Hardware
Systems Design
Data Science
Hardware Engineering
Software Engineering
Strategy
Systems Engineering

Author:

Phil Brown

VP, Scaled Systems Product
Graphcore

Phil leads Graphcore’s efforts to build large-scale AI/ML processing capability using Graphcore’s unique Intelligence Processing Units (IPUs), IPU-Fabric, and Streaming Memory technology. He has previously held a number of roles at Graphcore, including Director of Applications, leading development of Graphcore’s flagship AI/ML models, and Director of Field Engineering, the focal point for technical engagements with customers. Prior to joining Graphcore, Phil worked for Cray Inc. in a number of roles, including as a technical architect and leading its engagement with weather forecasting and climate research customers worldwide. Phil holds a PhD in Computational Chemistry from the University of Bristol.

Developer Efficiency
Enterprise AI
ML at Scale
NLP
Novel AI Hardware
Systems Design
Data Science
Hardware Engineering
Software Engineering
Strategy
Systems Engineering

Author:

Kunle Olukotun

Chief Technologist & Co-Founder
SambaNova Systems

Kunle Olukotun is the Cadence Design Professor of Electrical Engineering and Computer Science at Stanford University. Olukotun is a renowned pioneer in multi-core processor design and the leader of the Stanford Hydra chip multiprocessor (CMP) research project.

Prior to SambaNova Systems, Olukotun founded Afara Websystems to develop high-throughput, low-power multi-core processors for server systems. The Afara multi-core processor, called Niagara, was acquired by Sun Microsystems and now powers Oracle’s SPARC-based servers.

Olukotun is the Director of the Pervasive Parallel Lab and a member of the Data Analytics for What’s Next (DAWN) Lab, developing infrastructure for usable machine learning.

Olukotun is an ACM Fellow and IEEE Fellow for contributions to multiprocessors on a chip and multi-threaded processor design. Olukotun recently won the prestigious IEEE Computer Society’s Harry H. Goode Memorial Award and was also elected to the National Academy of Engineering—one of the highest professional distinctions accorded to an engineer.

Kunle received his Ph.D. in Computer Engineering from The University of Michigan.

Author:

Rodrigo Liang

Co-Founder & CEO
SambaNova Systems

Rodrigo is CEO and co-founder of SambaNova Systems. Prior to SambaNova, Rodrigo was responsible for SPARC processor and ASIC development at Oracle, where he led the engineering organization responsible for the design of state-of-the-art processors and ASICs for Oracle's enterprise servers.

Cerebras Systems builds the fastest AI accelerators in the industry. In this talk we will review how the size and scope of massive natural language processing (NLP) models present fundamental challenges to legacy compute and to traditional cloud providers. We will explore the importance of guaranteed node-to-node latency in large clusters, why it can’t be achieved in the cloud, and how its absence prevents linear and even deterministic scaling. We will examine the complexity of distributing NLP models over hundreds or thousands of GPUs, show how quickly and easily a cluster of Cerebras CS-2s is set up, and show how linear scaling can be achieved over millions of compute cores with Cerebras technology. And finally, we will show how innovative customers are using clusters of Cerebras CS-2s to train large language models in order to solve both basic and applied scientific challenges, including understanding the COVID-19 replication mechanism, epigenetic language modelling for drug discovery, and the development of clean energy. This enables researchers to test ideas that may otherwise languish for lack of resources and, ultimately, reduces the cost of curiosity.

Chip Design
Enterprise AI
ML at Scale
Novel AI Hardware
Systems Design
Data Science
Hardware Engineering
Software Engineering
Strategy
Systems Engineering

Author:

Andy Hock

VP, Product Management
Cerebras

Dr. Andy Hock is VP of Product Management at Cerebras Systems with responsibility for product strategy. His organization drives engagement with engineering and with customers to inform the hardware, software, and machine learning technical requirements and to accelerate world-leading AI with Cerebras’ products. Prior to Cerebras, Andy held senior leadership positions with Arete Associates, Skybox Imaging (acquired by Google), and Google. He holds a PhD in Geophysics and Space Physics from UCLA.

Chip Design
Edge AI
Enterprise AI
ML at Scale
NLP
Novel AI Hardware
Data Science
Hardware Engineering
Software Engineering
Strategy
Systems Engineering
Industry & Investment

Author:

Lip-Bu Tan

CEO
Intel

Lip-Bu Tan is chief executive officer of Intel Corporation and serves on the company’s board of directors. He was appointed to his position in March 2025.

Tan is an accomplished executive with more than two decades of semiconductor and software experience and deep relationships across the technology ecosystem. He has received several accolades for his significant contributions to the industry, including the 2022 Robert N. Noyce Award, the Semiconductor Industry Association’s highest honor, and was named one of Forbes’ Top 50 Venture Capitalists.

Tan previously served as chief executive officer of Cadence Design Systems Inc. and was also a member of its board of directors. During his 12 years as Cadence’s chief executive officer, he led a reinvention of the company and drove a cultural transformation centered on customer-centric innovation that enabled Cadence to more than double its revenue, expand operating margins and significantly outperform the market.

Tan is a founding managing partner of Walden Catalyst Ventures and chairman of Walden International, a leading venture capital firm. He has also served on the boards of public companies Credo Technology Group and Schneider Electric.

Tan holds a Bachelor of Science in physics from Nanyang Technological University in Singapore, a Master of Science in nuclear engineering from the Massachusetts Institute of Technology and an MBA from the University of San Francisco.

Developer Efficiency
Enterprise AI
ML at Scale
Novel AI Hardware
Systems Design
Hardware Engineering
Software Engineering
Strategy
Systems Engineering

Author:

Alexis Black Bjorlin

VP/GM, DGX Cloud
NVIDIA

Dr. Alexis Black Bjorlin was previously VP, Infrastructure Hardware Engineering at Meta. She also serves on the board of directors at Digital Realty and Celestial AI. Prior to Meta, Dr. Bjorlin was Senior Vice President and General Manager of Broadcom’s Optical Systems Division and previously Corporate Vice President of the Data Center Group and General Manager of the Connectivity Group at Intel. Prior to Intel, she spent eight years as President of Source Photonics, where she also served on the board of directors. She earned a B.S. in Materials Science and Engineering from Massachusetts Institute of Technology and a Ph.D. in Materials Science from the University of California at Santa Barbara.

Author:

Marshall Choy

SVP, Product
SambaNova Systems

Marshall Choy is Senior Vice President of Product at SambaNova Systems, responsible for product management and go-to-market operations. Marshall has extensive experience leading global organizations to bring breakthrough products to market, establish new market presences, and grow new and existing lines of business. Until 2018, Marshall was Vice President of Product Management at Oracle, responsible for the portfolio and strategy for Oracle Systems products and solutions, where he led teams that delivered comprehensive end-to-end hardware and software solutions and product management operations. Prior to joining Oracle in 2010 when it acquired Sun Microsystems, he served as Director of Engineered Solutions at Sun. During his 11 years there, Marshall held various positions in development, information technology, and marketing.

AI Hardware Summit attendees are invited to attend an extended networking session where they can meet attendees from across both events. The Meet & Greet is a perfect opportunity to reconnect with peers, expand your network, and discuss the state of ML across the cloud-edge continuum!

Chip Design
Developer Efficiency
Edge AI
Enterprise AI
ML at Scale
NLP
Novel AI Hardware
Systems Design
Data Science
Hardware Engineering
Software Engineering
Strategy
Systems Engineering

Author:

Colin Murdoch

Chief Business Officer
DeepMind

Decades of international commercial experience and deep technical expertise mean Colin is uniquely placed to ensure DeepMind’s cutting-edge research benefits as many people as possible. As Chief Business Officer of DeepMind, he oversees a wide range of teams including Applied, which applies research breakthroughs to Google products and infrastructure used by billions of people. He also helps drive the growth of DeepMind, building and leading critical functions including finance and strategy, and leading external and commercial partnerships. Originally an electronics and software engineer, he has held senior positions at both start-ups and global companies such as Thomson Reuters, helping them solve their own complex, mission-critical, real-world challenges.

Author:

Cade Metz

Technology Correspondent
New York Times

Cade Metz is a reporter with The New York Times, covering artificial intelligence, driverless cars, robotics, virtual reality, and other emerging areas. Genius Makers is his first book. Previously, he was a senior staff writer with Wired magazine and the U.S. editor of The Register, one of Britain’s leading science and technology news sites.

A native of North Carolina and a graduate of Duke University, Metz, 48, works in The New York Times’ San Francisco bureau and lives across the bay with his wife Taylor and two daughters.
