AI Hardware & Edge AI Summit 2024 - Agenda | Kisaco Research

"I will absolutely recommend to my colleagues to attend in the coming years. I had lot of questions on the application and deployment of AI before I came, now they were all answered. There were many great speakers and many of the presentations offered very insightful information that I can take back to my work."

— Sr. Director of Engineering, Oshkosh Corporation

REGISTER YOUR PLACE HERE

Access to the agenda sessions requires a full conference ticket (not an expo ticket).

Please note, the Efficient Generative AI Summit will be held at the Westin, San Jose. 


Pre-Day: Monday, 9 Sep, 2024
9:00 AM - 10:00 AM
Registration and Networking
10:00 AM - 10:30 AM
KEYNOTE

The journey of taking a foundation model (FM) from experimentation to production is filled with choices, decisions, and pitfalls that can increase undifferentiated heavy lifting and delay time-to-market. In this session, learn how purpose-built capabilities in Amazon SageMaker can help ML practitioners pre-train, evaluate, and fine-tune FMs with advanced techniques, and deploy FMs with fine-grained controls for generative AI use cases that have stringent requirements on accuracy, latency, and cost. Join us to learn how to simplify the generative AI journey, follow best practices, save time and cost, and shorten time-to-market.

Author:

Ankur Mehrotra

Director and GM, Amazon SageMaker
AWS

Ankur is a GM at AWS Machine Learning and leads foundational SageMaker services such as SageMaker Studio, Notebooks, Training, Inference, Feature Store, MLOps, etc. Before SageMaker, he led AI services for personalization, forecasting, healthcare & life sciences, edge AI devices and SDKs, as well as thought leadership programs such as AWS DeepRacer. Ankur has worked at Amazon for over 15 years. Before joining AWS, he spent several years in Amazon’s Consumer organization, where he led the development of automated marketing/advertising systems, as well as automated pricing systems.

10:30 AM - 11:00 AM

Author:

Neeraj Kumar

Chief Data Scientist
Pacific Northwest National Laboratory

As the Chief Data Scientist at Pacific Northwest National Laboratory (PNNL), Neeraj leads a talented team of scientists and professionals in addressing critical challenges in energy, artificial intelligence, health, and biothreat sectors. With over 15 years of experience in quantitative research and data science, he specializes in developing innovative solutions and managing multidisciplinary teams focused on multimillion-dollar programs at the intersection of fundamental discovery and transformative AI-driven product development.


His expertise spans Applied Math, High-Performance Computing, Computational Chemistry and Biology, Health Science, and Medical Therapeutics, enabling him to guide his team in exploring new frontiers. He has deep understanding and applied experience in Generative AI, AI Safety and Trustworthiness, Natural Language Processing, Applied Mathematics, Software Engineering, Modeling and Simulations, Quantum Mechanics, Data Integration, Causal Inference/Reasoning, and Reinforcement Learning. These competencies are crucial in developing scalable AI/ML models and computing infrastructures that accelerate scientific discoveries, enhance computer-aided design, and refine autonomous decision-making.

11:00 AM - 11:30 AM
KEYNOTE

Author:

Sean Lie

Chief Technology Officer and Co-Founder
Cerebras

Sean is Chief Technology Officer and co-founder of Cerebras Systems. Prior to Cerebras, Sean was Lead Hardware Architect of the IO virtualization fabric ASIC at SeaMicro. After SeaMicro was acquired by AMD, Sean was made an AMD Fellow and Chief Data Center Architect. Earlier in his career, he spent five years at AMD in their advanced architecture team. He holds a BS and MEng in Electrical Engineering and Computer Science from MIT and has authored 16 patents in computer architecture.

11:30 AM - 12:00 PM
KEYNOTE

Author:

Saket Agarwal

Senior Director and Head of Engineering
Uber

Saket Agarwal is a seasoned product and engineering leader with nearly 20 years of experience across top-tier technology companies. Currently, as the Senior Director and Head of Engineering at Uber, he manages a team of full-stack software and machine learning engineers/scientists, overseeing the development of customer support & engagement platforms. His work at Uber involves leading a global team to deliver innovative AI-driven solutions that enhance customer interactions across diverse user groups, while partnering across teams at Uber to shape the vision in AI/ML.

Before joining Uber, Saket played a pivotal role at Amazon Web Services (AWS) as the Director of Software Engineering, where he led teams of over 200 engineers in creating Amazon Connect, a cutting-edge AI-powered contact center solution. His career also includes significant contributions to Amazon's Alexa conversational intelligence platform, Adobe's Correspondence Management solution, D. E. Shaw & Co., and IBM, where he honed his expertise in software development, digital customer engagement products, and AI applications. Saket’s leadership style is rooted in humility, technical excellence, and a relentless drive for operational efficiency, making him a key figure in the AI and customer service technology landscape.

12:00 PM - 1:20 PM
Lunch and Networking
1:20 PM - 1:45 PM
PRESENTATION

Rapid adoption of large language models (LLMs) has created a critical need for sharing these models while preventing users from copying, or reverse engineering, the models or the data they represent. Fully Homomorphic Encryption (FHE) provides a post-quantum approach to securing the publication of LLMs. Recent software and hardware developments will provide a way to perform this secure sharing at execution speeds comparable to inference done on unsecured LLMs. Dr. Rhines will review the emerging approaches and provide a roadmap for future developments that will make secure data sharing of LLMs pervasive.

Author:

Dr. Walden “Wally” Rhines

President & CEO
Cornami

Walden C. Rhines is President & CEO of Cornami. He is also CEO Emeritus of Mentor, a Siemens business, focusing on external communications and customer relations. He was previously CEO of Mentor Graphics for 23 years and Chairman of the Board for 17 years. During his tenure at Mentor, revenue nearly quadrupled and market value of the company increased 10X.

Prior to joining Mentor Graphics, Dr. Rhines was Executive Vice President, Semiconductor Group, responsible for TI’s worldwide semiconductor business. During his 21 years at TI, he was President of the Data Systems Group and held numerous other semiconductor executive management positions.

Dr. Rhines has served on the boards of Cirrus Logic, QORVO, TriQuint Semiconductor, and GlobalLogic, and served as Chairman of the Electronic Design Automation Consortium (five two-year terms), where he is currently a director. He is also a board member of the Semiconductor Research Corporation and First Growth Children & Family Charities. He is a Lifetime Fellow of the IEEE and has served on the Board of Trustees of Lewis and Clark College, the National Advisory Board of the University of Michigan, and industrial committees advising Stanford University and the University of Florida.

Dr. Rhines holds a Bachelor of Science degree in engineering from the University of Michigan, a Master of Science and PhD in materials science and engineering from Stanford University, a Master of Business Administration from Southern Methodist University, and Honorary Doctor of Technology degrees from the University of Florida and Nottingham Trent University.

1:45 PM - 2:10 PM
PRESENTATION

Learn the tips and tricks to become a power user of Gemini and unlock your full potential in creativity, productivity, and more. Through this talk, we’ll spark ideas and share various ways to apply AI to broad use cases, as well as techniques to generate the best response.

Author:

Lisa Cohen

Director of Data Science for Gemini, Google Assistant, and Search Platforms
Google

Lisa Cohen is Director of Data Science for Gemini (formerly "Bard"), Google Assistant, and Search Platforms. She leads an organization of data scientists at Google, responsible for using data to create excellent user experiences across these products and partnering closely with Product, Engineering, and User Experience Research. Formerly, Lisa was Head of Data Science and Engineering for Twitter, helping drive the strategy and direction of the Twitter product through machine learning, metric development, experimentation, and causal analyses. Before Twitter, Lisa led the Azure Customer Growth Analytics organization as part of Microsoft Cloud Data Sciences. Her team was responsible for analyzing OKRs, informing data-driven decisions, and developing data science models to help customers be successful on Azure. Lisa worked at Microsoft for 17 years and also helped develop multiple versions of Visual Studio. She holds bachelor's and master's degrees in Applied Mathematics from Harvard. You can follow Lisa on LinkedIn and Medium.

2:10 PM - 2:35 PM
PRESENTATION

Author:

Krishna Rangasayee

Founder and CEO
SiMa.ai

Krishna Rangasayee is Founder and CEO of SiMa.ai. Previously, Krishna was COO of Groq and spent 18 years at Xilinx, where he held multiple senior leadership roles including Senior Vice President and GM of the overall business and Executive Vice President of global sales. While at Xilinx, Krishna grew the business to $2.5B in revenue at 70% gross margin while creating the foundation for 10+ quarters of sustained sequential growth and market share expansion. Prior to Xilinx, he held various engineering and business roles at Altera Corporation and Cypress Semiconductor. He holds 25+ international patents and has served on the board of directors of public and private companies.

2:35 PM - 3:20 PM
Networking Break
3:20 PM - 4:00 PM
PANEL
Moderator

Author:

Gayathri Radhakrishnan

Partner
Hitachi Ventures

Gayathri is currently a Partner at Hitachi Ventures. Prior to that, she was with Micron Ventures, actively investing in startups that apply AI to solve critical problems in manufacturing, healthcare, and automotive. She brings over 20 years of multi-disciplinary experience across product management, product marketing, corporate strategy, M&A, and venture investments in large Fortune 500 companies such as Dell and Corning and in startups. She has also worked as an early-stage investor at Earlybird Venture Capital, a premier European venture capital fund based in Germany. She has a master's in EE from The Ohio State University and an MBA from INSEAD in France. She is also a Kauffman Fellow - Class 16.

Author:

John Wei

Venture Investment Director
Applied Ventures

John Wei is a venture investment director at Applied Ventures. He focuses on a range of deep tech areas and industry verticals, including advanced materials, semiconductor manufacturing and industrial & enterprise software. He also manages Applied Ventures’ investment activities in the Greater China region.

Prior to joining Applied, John was a key member of the SABIC Ventures investment team, where he led multiple investments in the advanced materials, energy, sustainability, manufacturing, and agriculture spaces in North America, Europe, and Greater China.

Earlier in his career, John held various commercial and technical roles at The Linde Group and General Electric, with experience mostly in the petrochemical, power generation, alternative energy, and oil & gas industries.

John has a bachelor's degree from Tsinghua University and a PhD from Rutgers University, both in Chemical Engineering. While at Rutgers, he also earned a master's degree in Computer Science. In addition, John holds an MBA from UCLA with a focus in Finance and Entrepreneurship.

Author:

Laura Swan

General Partner
Silicon Catalyst Ventures

Laura is in charge of portfolio company management at SCV. She is a Managing Partner with Silicon Catalyst and the Vice President of Operations for Silicon Catalyst Angels, as well as an investor with Sand Hill Angels and a founding partner of The Batchery, a tech incubator in Berkeley, California. Laura earned her master's degree in Electrical Engineering from the University of Wyoming (Go Pokes!).

Author:

Divya Raghavan

Principal
NGP Capital

Divya Raghavan is a venture investor and technology enthusiast with a diverse background in product management, software engineering, and venture capital. She currently serves as a Principal of the US investment team at NGP Capital, where her primary focus revolves around software, data, and edge infrastructure. Her expertise lies in identifying and nurturing cutting-edge startups poised to revolutionize these sectors. Divya brings solid technical acumen to venture investing.

Prior to her role at NGP Capital, Divya was a leader in software engineering at Citrix and led product management teams at SugarCRM.  In her journey towards venture investing, Divya gained invaluable experience at VC firms, including Costanoa Ventures and Samsung NEXT Ventures, where she further deepened her understanding of the startup ecosystem.

Within NGP Capital, Divya has led investments in notable companies such as Coda, Immuta, ArmorCode and Nova Labs. Her keen eye for innovation and a strong track record of successful investments have made her a trusted voice in the venture capital landscape. Beyond her professional endeavors, Divya is also an avid angel investor, continuously contributing to early product ideation and GTM, always ready to wear her product hat.

Divya received an MBA from MIT Sloan School of Management and a Master’s degree in Computer Science from the University of Florida, Gainesville.

4:00 PM - 4:25 PM
PRESENTATION

Author:

Ved Upadhyay

Senior Data Scientist
Walmart Global Tech

Ved Upadhyay is a seasoned professional in the realm of data science and artificial intelligence (AI). With a focus on addressing complex challenges in data science on an enterprise scale, he boasts over 7 years of hands-on experience in crafting AI-powered solutions for businesses. Ved’s expertise spans diverse industries, including retail, e-commerce, pharmaceuticals, agrotech, and socio-tech, where he has successfully productized multiple machine learning pipelines. Currently serving as a Senior Data Scientist at Walmart, Ved spearheads multiple data science initiatives centered around customer propensity and responsible AI solutions at enterprise scale. Prior to venturing into the industry, Ved earned his master’s degree in Data Science from the University of Illinois at Urbana-Champaign and contributed as a Deep Learning researcher at IIIT Hyderabad. His research contributions are reflected in multiple publications in the field of applied AI. 

4:25 PM - 4:50 PM
PRESENTATION

Author:

Nikhil Sukhtankar

Head of Product Conversational AI & Messaging Platforms
PayPal

As Head of the AI & Messaging Platforms org at PayPal, Nikhil leads the Product Management & Conversational AI Design teams. Nikhil is responsible for defining the vision & strategy to bring advancements in AI, especially Generative AI & NLU/NLP, to power intuitive & intelligent experiences for PayPal's Consumers & Small-Medium Business Merchants. He brings over two decades of experience in building and growing enterprise-scale consumer products across technology start-ups, growth companies, and large S&P 500 companies. In recent years, his focus has been on building enterprise-scale vertical Applied AI platforms that offer capabilities & services such as AI-powered chatbots, Voice/IVR applications, Semantic & Lexical Search, and Agent Assist Solutions (Smart Replies, Summarization) within the Consumer Wallet, Payments, SMB Merchants, Shopping/Rewards & Customer Success domains. Nikhil is also a Product Advisor to a handful of startup leaders and CEOs. Nikhil has a technical background in Electrical Engineering & Computer Science and has also earned an AI & ML certification from MIT.

4:50 PM - 6:00 PM
Day 1: Tuesday, 10 Sep, 2024
EFFICIENT MODEL BUILDING AND TRAINING
9:00 AM - 10:00 AM
Registration and Networking
9:55 AM - 10:00 AM
OPENING REMARKS
10:00 AM - 10:25 AM
AI LUMINARY KEYNOTE
Hardware
Systems
Infrastructure
Software

Author:

Partha Ranganathan

VP & Engineering Fellow
Google

Parthasarathy (Partha) Ranganathan is currently a VP & Engineering Fellow at Google, where he is the area technical lead for hardware and datacenters, designing systems at scale. Prior to this, he was an HP Fellow and Chief Technologist at Hewlett Packard Labs, where he led their research on systems and data centers. Partha has worked on several interdisciplinary systems projects with broad impact on both academia and industry, including widely used innovations in energy-aware user interfaces, heterogeneous multi-cores, power-efficient servers, accelerators, and disaggregated and data-centric data centers. He has published extensively (including co-authoring the popular "Datacenter as a Computer" textbook), is a co-inventor on more than 100 patents, and has been recognized with numerous awards. He has been named a top-15 enterprise technology rock star by Business Insider and one of the top 35 young innovators in the world by MIT Tech Review, and is a recipient of the ACM SIGARCH Maurice Wilkes Award, Rice University's Outstanding Young Engineering Alumni Award, and the IIT Madras Distinguished Alumni Award. He is one of few computer scientists to have his work recognized with an Emmy Award. He is a Fellow of the IEEE and ACM and has served on the board of directors for OpenCompute.

10:25 AM - 10:50 AM
KEYNOTE
Generative AI
Systems
Infrastructure

Author:

Jia Li

Co-Founder, Chief AI Officer & President
LiveX AI

Jia is Co-founder, Chief AI Officer, and President of LiveX AI. She was elected an IEEE Fellow for leadership in large-scale AI. She is co-teaching the inaugural course on Generative AI and Medicine at Stanford University, where she has served in multiple roles in the past, including Advisory Board Committee to Nourish, Chief AI Fellow, RWE for Sleep Health, and Adjunct Professor at the School of Medicine. She was the Founding Head of R&D at Google Cloud AI. At Google, she oversaw the development of the full stack of AI products on Google Cloud to power solutions for diverse industries. With a passion for making a greater impact on everyday life, she later became an entrepreneur, building and advising companies with award-winning platforms to solve today's greatest challenges. She has served as a Mentor and Professor-in-Residence at StartX, advising founders and companies from Stanford and its alumni community. She is the Co-founder and Chairperson of HealthUnity Corporation, a 501(c)(3) nonprofit organization. She served briefly at Accenture as a part-time Chief AI Fellow for its Generative AI strategy. She also serves as an advisor to the United Nations Children's Fund (UNICEF) and is a board member of the Children's Discovery Museum of San Jose. In 2018 she was selected as a World Economic Forum Young Global Leader, a recognition bestowed on 100 of the world's most promising business leaders, artists, public servants, technologists, and social entrepreneurs. Before joining Google, she was the Head of Research at Snap, leading the AI/AR innovation effort. She received her Ph.D. from the Computer Science Department at Stanford University.

10:50 AM - 11:15 AM
KEYNOTE
Generative AI
Systems
Infrastructure

Author:

Thomas Sohmers

Founder and CEO
Positron AI

Thomas Sohmers is an innovative technologist and entrepreneur, renowned for his pioneering work in the field of advanced computing and artificial intelligence. Thomas began programming at a very early age, which led him to MIT as a high school student where he worked on cutting-edge research. By the age of 18, he had become a Thiel Fellow, marking the beginning of his remarkable journey in technology and innovation. In 2013, Thomas founded Rex Computing, where he designed energy-efficient processors for high-performance computing applications. His groundbreaking work earned him numerous accolades, including a feature in Forbes' 30 Under 30. After a stint exploring the AI industry, working on scaling out GPU clouds and large language models, Thomas founded and became CEO of Positron in 2023. Positron develops highly efficient transformer inferencing systems, and under Thomas's leadership, it has quickly become one of the most creative and promising startups in the AI industry.

11:15 AM - 11:40 AM
KEYNOTE

The exponential growth in the compute demands of AI, and the move to software-defined products, means that workloads are defining semiconductor requirements more than ever. The need to hit the restrictive power, performance, area, and cost constraints of edge designs means that every element of the design needs to be optimized and co-designed with the workloads in mind. Additionally, the design needs to keep evolving even after semiconductor design is complete, as it adapts to new demands.

In this presentation we will look at how semiconductor design is changing to enable rapid development and deployment of custom, application-optimized system-on-chip designs, from concept through to in-life operation, as we chart the path to a sustainable compute future.

Data Center
Hardware
Infrastructure

Author:

Ankur Gupta

Senior Vice President and General Manager
Siemens EDA

Ankur Gupta is Senior Vice President and General Manager of Digital Design Creation at Siemens EDA. This includes Test, Embedded Analytics, Digital IC Design, Power Optimization, and Power Integrity Analysis. Formerly, he was Head of Product Management and Applications at Ansys Semiconductor and Head of Applications Engineering for Digital Implementation & Signoff at Cadence Design Systems.

Ankur has 20+ years of experience in EDA, working on some of the industry's most innovative Test, Digital Design, Implementation, and Signoff products. He holds a Master's Degree in Electrical and Computer Engineering from Iowa State University.

11:40 AM - 12:05 PM
Systems
Infrastructure
Generative AI
Hardware
Software

Author:

Lip-Bu Tan

Founder & Chairman
Walden International

Lip-Bu Tan is Founder and Chairman of Walden International ("WI"), and Founding Managing Partner of Celesta Capital and Walden Catalyst Ventures, with over $5 billion under management. He formerly served as Chief Executive Officer and Executive Chairman of Cadence Design Systems, Inc. He currently serves on the boards of Schneider Electric SE (SU: FP), Intel Corporation (NASDAQ: INTC), and Credo Semiconductor (NASDAQ: CRDO).

Lip-Bu focuses on semiconductor/components, cloud/edge infrastructure, data management and security, and AI/machine learning.

Lip-Bu received his B.S. from Nanyang University in Singapore, his M.S. in Nuclear Engineering from the Massachusetts Institute of Technology, and his MBA from the University of San Francisco. He also received an honorary Doctor of Humane Letters degree from the University of San Francisco. Lip-Bu currently serves on Carnegie Mellon University (CMU)'s Board of Trustees and the School of Engineering Dean's Council, Massachusetts Institute of Technology (MIT)'s School of Engineering Dean's Advisory Council, University of California Berkeley (UCB)'s College of Engineering Advisory Board and its Computing, Data Science, and Society Advisory Board, and University of California San Francisco (UCSF)'s Executive Council. He is also a member of the Global Advisory Board of METI Japan, The Business Council, and Committee of 100. He served on the board of the Global Semiconductor Alliance (GSA) from 2009 to 2021, and as a Trustee of Nanyang Technological University (NTU) in Singapore from 2006 to 2011. Lip-Bu has been named one of the Top 10 Venture Capitalists in China by Zero2ipo and was listed as one of the Top 50 Venture Capitalists on the Forbes Midas List. He is the recipient of imec's 2023 Lifetime of Innovation Award, the Semiconductor Industry Association (SIA) 2022 Robert N. Noyce Award, and GSA's 2016 Dr. Morris Chang Exemplary Leadership Award. In 2017, he was ranked #1 among the most well-connected executives in the technology industry by the analytics firm Relationship Science.

12:05 PM - 12:30 PM
KEYNOTE

There has been tremendous demand to deploy AI models across new and diverse hardware architectures. Many of these architectures include a variety of processing nodes and specialized hardware accelerators. The challenge is to take trained AI models developed in various open-source software frameworks and execute them efficiently on these architectures. Software tools must evolve to provide features such as AI model import, graph analysis, quantization, optimization, and code generation. Creating AI-centric software development tools is a complex undertaking that requires expertise in AI network theory and construction, high-performance computing, compilers, and embedded systems. This talk will share some of our experiences developing Cadence's NeuroWeave Software Development Kit (SDK). NeuroWeave is a collection of tools and software libraries for optimizing and compiling AI models in order to execute them efficiently on Tensilica DSPs and Neo accelerators.

Author:

Eric Stotzer

Software Engineering Group Director
Cadence

Eric Stotzer is a Software Engineering Group Director at Cadence Design Systems, where he is responsible for the NeuroWeave SDK and the Xtensa Neural Network Compiler (XNNC). Eric worked for 30 years at Texas Instruments developing system software tools for DSPs and MCUs. Before coming to Cadence, he was at Mythic working on a neural network compiler for mixed-signal AI accelerators. He is a coauthor of the book Using OpenMP - The Next Step: Affinity, Accelerators, Tasking, and SIMD (MIT Press, 2017). Eric holds a PhD in Computer Science from the University of Houston.

12:30 PM - 1:45 PM
Lunch and Networking
1:45 PM - 2:10 PM
Generative AI
Systems

Author:

Donald Thompson

Distinguished Engineer
LinkedIn

Donald is currently a Distinguished Engineer at LinkedIn, primarily overseeing the company's generative AI strategy, architecture, and technology. He has more than 35 years of hands-on experience as a technical architect and CTO, with an extensive background in designing and delivering innovative software and services on a large scale. In 2013, Donald co-founded Maana, which pioneered computational knowledge graphs and visual no-code/low-code authoring environments to address complex AI-based digital transformation challenges in Fortune 50 companies. During his 15 years at Microsoft, Donald started the Knowledge and Reasoning group within Microsoft's Bing division, where he innovated "Satori", an internet-scale knowledge graph constructed automatically from the web crawl. He co-founded a semantic computing incubation funded directly by Bill Gates, portions of which shipped as the SQL Server Semantic Engine. Additionally, he created Microsoft's first internet display ad delivery system and led numerous AI/ML initiatives in Microsoft Research across embedded systems, robotics, wearable computing, and privacy-preserving personal data services.

Data Center
Hardware
Infrastructure

Author:

Anton McGonnell

Head of SW Products
SambaNova

Anton McGonnell is SambaNova's Head of SW Products and leads the team defining their cutting-edge full-stack chips-to-model platform. Prior to SambaNova, Anton played a pivotal role in shaping the development and implementation of advanced machine learning technologies at UiPath. Anton received his Bachelor's in Computer Science from Queen's University Belfast and his M.B.A. from Harvard Business School. He is a native of County Tyrone, Ireland, and currently resides in Palo Alto, CA.

2:10 PM - 2:50 PM
Generative AI
Infrastructure

Author:

Daniel Wu

Strategic AI Leadership | Keynote Speaker | Educator | Entrepreneur | Course Facilitator
Stanford University AI Professional Program

Daniel Wu is an accomplished technical leader with over 20 years of expertise in software engineering, AI/ML, and team development. With a diverse career spanning technology, education, finance, and healthcare, he is credited with establishing high-performing AI teams, pioneering point-of-care expert systems, co-founding a successful online personal finance marketplace, and leading the development of an innovative online real estate brokerage platform. Passionate about technology democratization and ethical AI practices, Daniel actively promotes these principles through involvement in computer science and AI/ML education programs. A sought-after speaker, he shares insights and experiences at international conferences and corporate events. Daniel holds a computer science degree from Stanford University.

Author:

Arun Nandi

Senior Director and Head of Data & Analytics
Unilever

Arun is a visionary AI and Analytics expert recognized as one of the Top 100 Influential AI & Analytics leaders. He is the Head of Data & Analytics at Unilever today. With over 15 years of experience driving analytics-driven value in organizations, he has built AI practices from the ground up on several occasions. Arun advocates the adoption of AI to overcome enterprise-wide challenges and create growth. Beyond his professional achievements, Arun loves to travel, having explored over 40 countries and is passionate about adventure motorbiking.

Author:

Neeraj Kumar

Chief Data Scientist
Pacific Northwest National Laboratory

As the Chief Data Scientist at Pacific Northwest National Laboratory (PNNL), Neeraj leads a talented team of scientists and professionals in addressing critical challenges in energy, artificial intelligence, health, and biothreat sectors. With over 15 years of experience in quantitative research and data science, he specializes in developing innovative solutions and managing multidisciplinary teams focused on multimillion-dollar programs at the intersection of fundamental discovery and transformative AI-driven product development.


His expertise spans Applied Math, High-Performance Computing, Computational Chemistry and Biology, Health Science, and Medical Therapeutics, enabling him to guide his team in exploring new frontiers. He has deep understanding and applied experience in Generative AI, AI Safety and Trustworthiness, Natural Language Processing, Applied Mathematics, Software Engineering, Modeling and Simulations, Quantum Mechanics, Data Integration, Causal Inference/Reasoning, and Reinforcement Learning. These competencies are crucial in developing scalable AI/ML models and computing infrastructures that accelerate scientific discoveries, enhance computer-aided design, and refine autonomous decision-making.

Moderator

Author:

Manish Patel

Founding Partner
Nava Ventures

Manish Patel is the visionary founder of Nava Ventures, a pioneering venture capital firm headquartered in the heart of Silicon Valley. With a career spanning dozens of years at the forefront of technological innovation, Manish is renowned as a Silicon Valley Veteran with a knack for solving complex problems at the intersection of business, technology, and human experience.

Throughout his career, Manish has worn many hats – from operational maven and inventive trailblazer to astute venture capitalist. His deep-seated expertise in global business and product development, coupled with an unwavering passion for transformative technologies, positions him as a driving force behind Nava Ventures' success.

Prior to founding Nava Ventures, Manish held pivotal roles at Google and Highland Capital. At Google, he served as an early product leader, spearheading teams responsible for the design, development, and scalability of groundbreaking products such as Google Ads, Google TV, and Google Maps. Later, at Highland Capital, Manish played a key role in expanding the firm's footprint in California, further cementing his reputation as a strategic visionary in the venture capital landscape. In 2021, Manish embarked on his entrepreneurial journey by establishing Nava Ventures. With a persistent focus on unique technology and experienced management teams, Nava invests in early-stage companies across a diverse portfolio spanning AI technologies, data analytics, healthcare innovations, and financial services.

Beyond his professional pursuits, Manish's passion for knowledge sharing and mentorship is evident in his role as an educator. For years, he has served as a dedicated instructor at the Stanford School of Engineering, nurturing the next generation of tech engineers and innovators. His commitment to fostering entrepreneurship extends globally: he serves as a Fellow at the University of Toronto's Creative Destruction Lab (CDL) and as a Distinguished Fellow at IDEO, and he holds more than a dozen patents.

Hardware
Software
Systems
Infrastructure
Moderator

Author:

Karl Freund

Founder & Principal Analyst
Cambrian AI Research

Karl Freund is the founder and principal analyst of Cambrian AI Research. Prior to this, he was Moor Insights & Strategy’s consulting lead for HPC and Deep Learning. His recent experiences as the VP of Marketing at AMD and Calxeda, as well as his previous positions at Cray and IBM, position him as a leading industry expert in these rapidly evolving industries. Karl works with investment and technology customers to help them understand the emerging Deep Learning opportunity in data centers, from competitive landscape to ecosystem to strategy.

 

Karl has worked directly with datacenter end users, OEMs, ODMs and the industry ecosystem, enabling him to help his clients define the appropriate business, product, and go-to-market strategies. He is also a recognized expert on the subject of low-power servers and the emergence of ARM in the datacenter and has been a featured speaker at scores of investment and industry conferences on this topic.

Accomplishments during his career include:

  • Led the revived HPC initiative at AMD, targeting APUs at deep learning and other HPC workloads
  • Created an industry-wide thought leadership position for Calxeda in the ARM Server market
  • Helped forge the early relationship between HP and Calxeda leading to the surprise announcement of HP Moonshot with Calxeda in 2011
  • Built the IBM Power Server brand from 14% market share to over 50% share
  • Integrated the Tivoli brand into the IBM company’s branding and marketing organization
  • Co-led the integration of HP and Apollo Marketing after the Boston-based desktop company’s acquisition

 

Karl’s background includes RISC and Mainframe servers, as well as HPC (Supercomputing). He has extensive experience as a global marketing executive at IBM where he was VP Marketing (2000-2010), Cray where he was VP Marketing (1995-1998), and HP where he was a Division Marketing Manager (1979-1995).

 

Author:

Paolo Faraboschi

Vice President and HPE Fellow; Director, AI Research Lab
Hewlett Packard Labs, HPE

Paolo Faraboschi is a Vice President and HPE Fellow and directs the Artificial Intelligence Research Lab at Hewlett Packard Labs. Paolo has been at HP/HPE for three decades, and worked on a broad range of technologies, from embedded printer processors to exascale supercomputers. He previously led exascale computing research (2017-2020), and the hardware architecture of “The Machine” project (2014-2016), pioneered low-energy servers with HP’s project Moonshot (2010-2014), drove scalable system-level simulation research (2004-2009), and was the principal architect of a family of embedded VLIW cores (1994-2003), widely used in video SoCs and HP’s printers. Paolo is an IEEE Fellow (2014) for “contributions to embedded processor architecture and system-on-chip technology”, author of over 100 publications, 70 granted patents, and the book “Embedded Computing: a VLIW approach”. He received a Ph.D. in EECS from the University of Genoa, Italy.

Author:

Albert Chen

Solutions Architect
Amphenol

Author:

Syona Sarma

Head of Hardware Engineering
Cloudflare

Syona Sarma is the Senior Director, Head of Hardware Systems at Cloudflare, where she runs the engineering team that builds Cloudflare's infrastructure. Since joining Cloudflare in 2022, she has led the design of the next-generation servers that are foundational to all of Cloudflare's services, including compute and storage. More recently, she spearheaded the introduction of specialized accelerator designs at the edge, which enabled the launch of Cloudflare's inference-as-a-service product and is integral to the rapidly expanding suite of AI product offerings. Before coming to Cloudflare, Syona was at Intel, where she started her career in CPU design and held several different roles in hardware, product, and business development in Cloud Computing.

She holds a Master's in Electrical and Computer Engineering from the University of Texas at Austin and a business degree from the University of Washington.

Author:

Nitza Basoco

Technology and Market Strategist
Teradyne

2:50 PM - 3:15 PM
Generative AI
Infrastructure

Author:

Jay Dawani

CEO
Lemurian Labs

Jay Dawani is co-founder & CEO of Lemurian Labs, a startup at the forefront of general-purpose accelerated computing, on a mission to make AI development affordable and broadly accessible so that all companies and people can benefit equally. Author of the influential book "Mathematics for Deep Learning", he has held leadership positions at companies such as BlocPlay and Geometric Energy Corporation, spearheading projects involving quantum computing, the metaverse, blockchain, AI, space robotics, and more. Jay has also served as an advisor to NASA Frontier Development Lab, SiaClassic, and many leading AI firms.

Mirko Prezioso, co-founder and CEO of Mentium Technologies, will present an overview of the company: who they are, what they do, and their mission. The talk will highlight why Mentium developed a hybrid architecture for mission-critical Edge AI applications that demand the highest inference reliability and speed. Mentium plans to deliver Early Access Development Kits in 2024, a milestone made possible with the support of the Synopsys Cloud platform and team.

Software
Generative AI
Hardware

Author:

Mirko Prezioso

CEO
Mentium Technologies

Mirko Prezioso holds an M.Sc. in condensed matter physics from the University of Parma, Italy, where he also earned his PhD in advanced materials science and technology in 2008. He then worked on spintronics, memory effects, and memristors, which brought him in 2013 to the University of California, Santa Barbara, where he worked on devices and algorithms for neuromorphic computation. He is the lead author of the first demonstration of in-memory computing based on integrated memristive devices. In 2017, Dr. Prezioso became the co-founder and CEO of Mentium Technologies and has led the company since then.

 

3:15 PM - 3:40 PM
Generative AI
Infrastructure

Author:

Arun Nandi

Senior Director and Head of Data & Analytics
Unilever

Arun is a visionary AI and Analytics expert recognized as one of the Top 100 Influential AI & Analytics leaders. He currently serves as Head of Data & Analytics at Unilever. With over 15 years of experience driving analytics-driven value in organizations, he has built AI practices from the ground up on several occasions. Arun advocates the adoption of AI to overcome enterprise-wide challenges and create growth. Beyond his professional achievements, Arun loves to travel, having explored over 40 countries, and is passionate about adventure motorbiking.

Generative AI
Hardware
Infrastructure

Author:

Hyunsik Choi

Head of SW Platform
Furiosa AI

3:40 PM - 4:05 PM
Generative AI
Infrastructure

Author:

Steven Brightfield

Chief Marketing Officer
BrainChip

Steven Brightfield has over 20 years of success defining and bringing to market new semiconductor products with companies such as Qualcomm, SiMA.ai, LSI Logic, Plessey, and Zoran for mobile, AR/VR, wearable, edge ML, cable/sat set-top, and digital camera chips. He has 10 years of experience launching programmable semiconductor IP cores for CPUs/GPUs/DSPs/NPUs at LSI Logic, ARC, MIPS, Silicon Arts, Improv, and BOPS and licensing them into end products that are ubiquitous today. Steven’s technical foundation in digital signal processing led to using DSPs in innovative products that digitized the world of speech, audio, multimedia, graphics, camera, and video processing, most recently applying AI/ML in these same domains. Steven recently joined the BrainChip leadership team to further drive BrainChip’s brand recognition, go-to-market strategy, and customer acquisition as BrainChip enters a growth phase for its flagship Akida products. Steven has a Bachelor of Science in Electrical Engineering from Purdue University.

 

Systems
Hardware
Infrastructure

Author:

Matthew Burns

Technical Marketing Manager
Samtec

Matthew Burns develops go-to-market strategies for Samtec’s Silicon-to-Silicon solutions. Over the course of 20+ years, he has been a leader in design, applications engineering, technical sales and marketing in the telecommunications, medical and electronic components industries. Mr. Burns holds a B.S. in Electrical Engineering from Penn State University.

4:05 PM - 4:35 PM
Networking Break
4:35 PM - 5:00 PM
Generative AI
Infrastructure
Hardware
Software

Author:

Phil Pokorny

Chief Technology Officer
Penguin Solutions

Phil Pokorny is the Chief Technology Officer (CTO) for SGH / Penguin Solutions. He brings a wealth of engineering experience and customer insight to the design, development, support, and vision for the company’s technology solutions.

Phil joined Penguin in February of 2001 as an engineer, and steadily progressed through the organization, taking on more responsibility and influencing the direction of key technology and design decisions. Prior to joining Penguin, he spent 14 years in various engineering and system administration roles with Cummins, Inc. and Cummins Electronics. At Cummins, Phil participated in the development of internal network standards, deployed and managed a multisite network of multiprotocol routers, and supported a diverse mix of office and engineering workers with a variety of server and desktop operating systems.

He has contributed code to Open Source projects, including the Linux kernel, lm_sensors, and LCDproc.

Phil graduated from Rose-Hulman Institute of Technology with Bachelor of Science degrees in math and electrical engineering, with a second major in computer science. 

Memory
Systems
Hardware
Infrastructure

Author:

Rochan Sankar

Co-Founder & CEO
Enfabrica

Rochan is Founder, President and CEO of Enfabrica. Prior to founding Enfabrica, he was Senior Director and leader of the Data Center Ethernet switch silicon business at Broadcom, where he defined and brought to market multiple generations of Tomahawk/Trident chips and helped build industry-wide ecosystems including 25G Ethernet and disaggregated whitebox networking.

Previously, he held roles in product management, chip architecture, and applications engineering across startup and public semiconductor companies. Rochan holds a B.A.Sc. in Electrical Engineering from the University of Toronto and an MBA from the Wharton School, and has 6 issued patents.

5:00 PM - 5:40 PM
Hardware
Infrastructure
Moderator

Author:

RK Anand

Co-Founder and CPO
RECOGNI

RK Anand is the Co-founder and Chief Product Officer (CPO) of Recogni, an artificial intelligence startup based in San Jose specializing in building multimodal GenAI inference systems for data centers.

At Recogni, RK spearheads the company’s product development and Go-To-Market strategies within the data center industry.

With an unwavering commitment to customer needs and value creation, RK and the Recogni team are striving to deliver the highest performing and most cost and energy efficient multi-modal GenAI systems to the market.

RK brings over 35 years of leadership experience in data center compute systems, networking, and silicon development. His distinguished career includes engineering roles at Sun Microsystems and serving as Executive Vice President and General Manager at Juniper Networks. As one of the earliest employees at Juniper, RK played a pivotal role in the company’s growth from a startup to generating billions of dollars in revenue.

Author:

Gaia Bellone

Chief Data Scientist
Prudential Financial

Gaia is a dynamic and accomplished leader in the field of Data Science and Artificial Intelligence. In her current role at Prudential Financial, she leads Global Data and AI Governance and serves as Chief Data Officer (CDO) for Emerging Markets.

Her contributions to Prudential Financial have been significant and impactful. As the former Chief Data Scientist at Prudential, she led the Data Science team in creating innovative solutions for Digital, Marketing, Sales, and Distribution, the AI/ML Platform team, and the GenAI Enterprise Program. Her leadership and strategic vision have been instrumental in driving business growth and enhancing operational efficiency.

Prior to her tenure at Prudential, she held prominent positions at Key Bank and JPMorgan Chase. At Key Bank, she served as the Head of Data Science for the Community Bank. Her leadership and expertise in data science were crucial in optimizing the bank's operations and improving customer experience. At JPMorgan Chase, she led the data science teams for Home Lending and Auto Finance. Her strategic insights and data-driven solutions significantly improved the business performance in these sectors, contributing to the overall success of the enterprise.

Throughout her career, she has consistently demonstrated her ability to leverage data and AI to drive business growth and improve operational efficiency. Her contributions to the businesses and the enterprise have been substantial and transformative.

Author:

Michael Stewart

Partner
M12

Author:

Alex Pham

GM, Chief Architect, Head of Data Infrastructure and AI Platforms
Toyota North America

Systems
Hardware
Infrastructure
Moderator

Author:

Steven Woo

Fellow and Distinguished Inventor
Rambus

I was drawn to Rambus to focus on cutting edge computing technologies. Throughout my 15+ year career, I’ve helped invent, create and develop means of driving and extending performance in both hardware and software solutions. At Rambus, we are solving challenges that are completely new to the industry and occur as a response to deployments that are highly sophisticated and advanced.

As an inventor, I find myself approaching a challenge like a room filled with 100,000 pieces of a puzzle where it is my job to figure out how they all go together – without knowing what it is supposed to look like in the end. For me, the job of finishing the puzzle is as enjoyable as the actual process of coming up with a new, innovative solution.

For example, RDRAM®, our first mainstream memory architecture, was implemented in hundreds of millions of consumer, computing and networking products from leading electronics companies including Cisco, Dell, Hitachi, HP, and Intel. We did a lot of novel things that required inventiveness – we pushed the envelope and created state-of-the-art performance without making actual changes to the infrastructure.

I’m excited about the new opportunities as computing is becoming more and more pervasive in our everyday lives. With a world full of data, my job and my fellow inventors’ job will be to stay curious, maintain an inquisitive approach and create solutions that are technologically superior and that seamlessly intertwine with our daily lives.

After an inspiring work day at Rambus, I enjoy spending time with my family, being outdoors, swimming, and reading.

Education

  • Ph.D., Electrical Engineering, Stanford University
  • M.S. Electrical Engineering, Stanford University
  • Master of Engineering, Harvey Mudd College
  • B.S. Engineering, Harvey Mudd College

Author:

Manoj Wadekar

AI Systems Technologist
Meta

Author:

Taeksang Song

CVP
Samsung Electronics

Taeksang is a Corporate VP at Samsung Electronics, where he leads a team dedicated to pioneering cutting-edge technologies including the CXL memory expander, fabric-attached memory solutions, and processing-near-memory to meet the evolving demands of next-generation data-centric AI architecture. He has almost 20 years' professional experience in memory and sub-system architecture, interconnect protocols, and system-on-chip design, as well as in collaborating with CSPs to enable heterogeneous computing infrastructure. Prior to joining Samsung Electronics, he worked at Rambus Inc., SK hynix, and Micron Technology in lead architect roles for emerging memory controllers and systems.

Taeksang received his Ph.D. from KAIST, South Korea, in 2006. Dr. Song has authored and co-authored over 20 technical papers and holds over 50 U.S. patents.

 

 

Author:

Markus Flierl

CVP Intel Cloud Services
Intel, Corp

Markus joined Intel in early 2022 to lead Intel Cloud Services, which includes Intel Tiber Developer Cloud (ITDC / cloud.intel.com) and Intel Tiber App-Level Optimization (formerly known as Granulate). Intel Tiber Developer Cloud provides a range of cloud services based on Intel's latest pre-production and production hardware and software, with a focus on AI workloads. ITDC hosts large production workloads for companies such as seekr and Prediction Guard. Before joining Intel, Markus built out NVIDIA’s GPU cloud infrastructure services, leveraging cutting-edge NVIDIA and open source technologies. Today it is the foundation for NVIDIA’s GeForce Now cloud gaming service, which has become the leader in cloud gaming with over 25 million registered users globally, as well as NVIDIA’s DGX Cloud and edge computing workloads like NVIDIA Omniverse™. Prior to that, Markus led product strategy and product development of private and public cloud infrastructure and storage software at Oracle Corporation and Sun Microsystems.

5:40 PM - 6:05 PM

Author:

Greg Serochi

Principal AI Technical Program Manager
Intel

Greg is a Principal AI Technical Program Manager and the Ecosystem Enabling Lead for Intel Gaudi. As the Ecosystem Enabling Lead, he spearheaded the strategy and execution of Intel Gaudi's ecosystem collateral, customer training, and positioning. Greg's role is to create content and training that make it easier for customers to use Intel Gaudi.

Systems
Hardware
Infrastructure
Moderator

Author:

Drew Matter

President & CEO
Mikros Technologies

Drew Matter leads Mikros Technologies, a designer and manufacturer of best-in-class direct liquid cold plates for AI/HPC, semiconductor testing, laser & optics, and power electronics. Mikros provides industry-leading microchannel thermal solutions for single-phase, two-phase, DLC, and immersion systems to companies around the world.

Author:

Steve Mills

Mechanical Engineer
Meta

Steve Mills is a Mechanical Engineer who has dedicated over 25 years to the development of IT hardware in the enterprise and hyperscale space. After tours at Dell and Storspeed, he joined Meta in 2012 and is currently a Technical Lead for Data Center and Hardware Interfaces. He also serves on the Open Compute Project Steering Committee, representing the Cooling Environments Project. He has 48 US patents and is an author of eight papers covering the packaging and cooling of electronics.

 

 

 

Author:

Matt Archibald

Director of Technical Architecture – Data Solutions
nVent

Matt Archibald is the Director of Technical Architecture at nVent supporting the data center and networking space. Matt is deeply focused on liquid cooling (close-coupled and direct-to-chip), unified infrastructure management, data center monitoring, and automated data center infrastructure management.

Author:

Vinod Kamath

Distinguished Engineer
Lenovo Infrastructure Solutions Group

6:05 PM - 7:30 PM
Day 2: Wednesday, 11 Sep, 2024
DRIVING EFFICIENCIES AND SCALE IN AI AND INFRASTRUCTURE
9:00 AM - 10:00 AM
Registration and Networking
9:55 AM - 10:00 AM
OPENING REMARKS
10:00 AM - 10:25 AM

Building ever larger scale AI clusters hinges on addressing the challenge of creating reliable and fault-tolerant systems. This keynote will explore how Meta’s work on Llama 3: Herd of Models informs strategies for building robust AI infrastructure. We will highlight the Open AI Systems Initiative and Rack-scale Alignment for accelerator diversity, focusing on areas of power, compute, and liquid cooling. Our discussion will emphasize the critical role of community engagement in setting open standards and interoperability. This session will provide a roadmap for developing AI systems that can withstand the unpredictability of real-world applications.

Infrastructure
Hardware
Systems

Author:

Dan Rabinovitsj

VP, Infrastructure
Meta

Dan has 30+ years’ experience in developing technology that connects people, with a particular focus on market disruption and innovation. Dan has served in executive leadership roles in Silicon Labs, NXP, Atheros, Qualcomm, Ruckus Networks and Facebook/Meta.  Dan joined Meta in 2018 to lead Facebook Connectivity, a team focused on bringing more people online at faster speeds and changing the telecom industry through the Telecom Infra Project. Dan is now supporting a team developing and sustaining data center hardware and AI systems.

10:25 AM - 10:50 AM
AMD KEYNOTE

Join Vamsi Boppana, AMD Senior Vice President of AI, as he unveils the latest breakthroughs in AI technology, driving advancements across cloud, HPC, embedded, and client segments. Discover the impact of strategic partnerships and open-source innovation in accelerating AI adoption. Through real-world examples, see how AI is being developed and deployed, reshaping the global compute landscape from the cloud to the client.

Infrastructure
Hardware
Systems

Author:

Vamsi Boppana

SVP, AI
AMD

Vamsi Boppana is responsible for AMD’s AI strategy, driving the AI roadmap across the client, edge and cloud for AMD’s AI software stack and ecosystem efforts. Until 2022, he was Senior Vice President of the Central Products Group (CPG), responsible for developing and marketing Xilinx’s Adaptive and AI product portfolio. He also served as executive sponsor for the Xilinx integration into AMD. 

At Xilinx, Boppana led the silicon development of leading products such as Versal™ and Zynq™ UltraScale™+ MPSoC. Before joining the company in 2008, he held engineering management roles at Open-Silicon and Zenasis Technologies, a company he co-founded. Boppana began his career at Fujitsu Laboratories. Caring deeply about the benefits of the technology he creates, Boppana aspires both to achieve commercial success and improve lives through the products he builds. 

10:50 AM - 11:15 AM
KEYNOTE

The LLM-based Generative AI revolution is progressing from the use of language-only models to multimodal models and the transition from monolithic models to more complex Agentic AI workflows. These workflows allow AI systems to address more complex tasks essential to enterprises by doing problem decomposition, planning, self-reflection, and tool use. This talk will share how Intel is collaborating with customers and developers to advance productivity and applications of AI to such higher cognitive tasks using Intel® Gaudi® 3 AI Accelerators, including massive AI cluster buildout in Intel® Tiber™ Developer Cloud.

Infrastructure
Hardware
Systems
Software

Author:

Vasudev Lal

Principal AI Research Scientist
Intel

Vasudev Lal is a Principal AI Research Scientist at Intel Labs, where he leads the Multimodal Cognitive AI team. The Cognitive AI team develops AI systems that can synthesize concept-level understanding from multiple modalities (vision, language, video, etc.), leveraging large-scale AI clusters powered by Intel AI HW (e.g., Intel Gaudi-based AI clusters). Vasudev’s current research interests include self-supervised training at scale for continuous and high-dimensional modalities like images, video and audio; mechanisms to go beyond statistical learning in today’s AI systems by incorporating counterfactual reasoning and principles from causality; and exploring full 3D parallelism (tensor + pipeline + data) for training and inferencing large AI models on Intel AI HW (e.g., Intel Gaudi-based AI clusters in Intel Dev Cloud). Vasudev obtained his PhD in Electrical and Computer Engineering from the University of Michigan, Ann Arbor, in 2012.

11:15 AM - 11:40 AM
Moderator

Author:

Karl Freund

Founder & Principal Analyst
Cambrian AI Research

Karl Freund is the founder and principal analyst of Cambrian AI Research. Prior to this, he was Moor Insights & Strategy’s consulting lead for HPC and Deep Learning. His recent experiences as the VP of Marketing at AMD and Calxeda, as well as his previous positions at Cray and IBM, position him as a leading industry expert in these rapidly evolving industries. Karl works with investment and technology customers to help them understand the emerging Deep Learning opportunity in data centers, from competitive landscape to ecosystem to strategy.

 

Karl has worked directly with datacenter end users, OEMs, ODMs and the industry ecosystem, enabling him to help his clients define the appropriate business, product, and go-to-market strategies. He is also a recognized expert on the subject of low-power servers and the emergence of ARM in the datacenter and has been a featured speaker at scores of investment and industry conferences on this topic.

Accomplishments during his career include:

  • Led the revived HPC initiative at AMD, targeting APUs at deep learning and other HPC workloads
  • Created an industry-wide thought leadership position for Calxeda in the ARM Server market
  • Helped forge the early relationship between HP and Calxeda leading to the surprise announcement of HP Moonshot with Calxeda in 2011
  • Built the IBM Power Server brand from 14% market share to over 50% share
  • Integrated the Tivoli brand into the IBM company’s branding and marketing organization
  • Co-led the integration of HP and Apollo Marketing after the Boston-based desktop company’s acquisition

 

Karl’s background includes RISC and Mainframe servers, as well as HPC (Supercomputing). He has extensive experience as a global marketing executive at IBM where he was VP Marketing (2000-2010), Cray where he was VP Marketing (1995-1998), and HP where he was a Division Marketing Manager (1979-1995).

 

Author:

Mo Elshenawy

President & CTO
Cruise

With more than 25 years of engineering and leadership expertise, Mo is the President and CTO at Cruise, a self-driving car company. Over the last six years, he has played a pivotal role in driving Cruise's engineering advancements while scaling the team from hundreds to thousands of engineers. Mo currently leads Cruise’s engineering, operations, and product teams – those responsible for all aspects of the company’s autonomous vehicle development and deployment, including AI, robotics, simulation, product, program, data and machine learning platforms, infrastructure, security, safety, operations, and hardware.

Prior to Cruise, Mo led global technologies for Amazon ReCommerce Platform, Warehouse Deals, and Liquidations: a massive scale global business that enables Amazon to evaluate, price, sell, liquidate, and donate millions of used products daily. In addition, over the past decade, Mo was a technical co-founder and CTO for three tech startups, the latest of which is a cloud-based financial services development platform used by top financial institutions.

 

11:40 AM - 12:05 PM
KEYNOTE

Explore the critical components that are currently the driving force behind the performance of Generative AI. Learn how the focus and properties within the GenAI space will inevitably shift, reflecting the actual needs and expectations of businesses. 

Infrastructure
Hardware

Author:

Karl Harvard

COO
Nscale

Karl has over 25 years of experience in the IT, Cloud, and AI industry. He previously held senior leadership roles inside AWS and Google, as well as building and leading start-up businesses in the HPC and Generative AI cloud service provider industry.

12:05 PM - 12:30 PM
KEYNOTE

In an era of unprecedented demand for passenger and cargo aircraft, the aviation industry is focused on enhancing operations, reducing costs, ensuring reliability and safety, and achieving net-zero environmental goals -- and innovative solutions are needed to successfully accomplish this. Recent advancements in technologies like AI, ML, and computer vision powered by deep learning are revolutionizing the industry, enabling vision-based autonomous aircraft functions such as taxi, takeoff, and landing. These breakthroughs facilitate a paradigm shift from low-level aircraft control to high-level operational oversight, promising a new era of aviation efficiency. Acubed is at the forefront of this transformation, developing autonomous flight and AI solutions through rapid, data-driven software development. Leveraging large-scale data and compute infrastructure, machine learning, and simulation, our approach is meticulously guided by certifiable verification and validation to ensure safety and reliability. 

Infrastructure
Hardware
Software
Systems
Data Center

Author:

Arne Stoschek

Vice President of AI, Autonomy & Digital Information
Acubed (Airbus)

Arne is the Vice President of AI, Autonomy & Digital Information and oversees the company’s development of autonomous flight and machine learning solutions to enable future, self-piloted aircraft. In his role, he also leads the advancement of large-scale data-driven processes to develop novel aircraft functions. He is passionate about robotics, autonomy and the impact these technologies will have on future mobility. After holding engineering leadership positions at global companies such as Volkswagen/Audi and Infineon, and at aspiring Silicon Valley startups, namely Lucid Motors/Atieva, Knightscope and Better Place, Arne dared to take his unique skill set to altitude above ground inside Airbus. Arne earned a Doctor of Philosophy in Electrical and Computer Engineering from the Technical University of Munich and held a computer vision and data analysis research position at Stanford University.

 

12:30 PM - 1:45 PM
Lunch and Networking
SYSTEMS TRACK
1:45 PM - 2:10 PM

Author:

Zaid Kahn

VP, Cloud AI & Advanced Systems Engineering
Microsoft

Zaid is currently a VP in Microsoft’s Silicon, Cloud Hardware, and Infrastructure Engineering organization where he leads systems engineering and hardware development for Azure including AI systems and infrastructure. Zaid is part of the technical leadership team across Microsoft that sets AI hardware strategy for training and inference. Zaid's teams are also responsible for software and hardware engineering efforts developing specialized compute systems, FPGA network products and ASIC hardware accelerators.

 

Prior to Microsoft, Zaid was head of infrastructure at LinkedIn, where he was responsible for all aspects of architecture and engineering for Datacenters, Networking, Compute, Storage and Hardware. Zaid also led several software development teams focusing on building and managing infrastructure as code. This included zero-touch provisioning, software-defined networking, network operating systems (SONiC, OpenSwitch), self-healing networks, backbone controllers, software-defined storage and distributed host-based firewalls. The network teams Zaid led built the global network for LinkedIn, including POPs, peering for edge services, IPv6 implementation, DWDM infrastructure and datacenter network fabric. The hardware and datacenter engineering teams Zaid led were responsible for water cooling to the racks, optical fiber infrastructure and open hardware development, which was contributed to the Open Compute Project Foundation (OCP).

 

Zaid holds several patents in networking and is a sought-after keynote speaker at top tier conferences and events. Zaid is currently the chairperson for the OCP Foundation Board. He is also currently on the EECS External Advisory Board (EAB) at UC Berkeley and a board member of Internet Ecosystem Innovation Committee (IEIC), a global internet think tank promoting internet diversity. Zaid has a Bachelor of Science in Computer Science and Physics from the University of the South Pacific.


2:10 PM - 2:35 PM

Author:

Preet Virk

Co-Founder & COO
Celestial AI

2:35 PM - 3:00 PM

Author:

Hasan Siraj

Head of Software and AI Infrastructure Products
Broadcom

Hasan Siraj is the Head of Software and AI Infrastructure Products at Broadcom, focused on the company's extensive Ethernet portfolio serving broad markets including the hyperscale, service provider, data center and enterprise segments. Prior to joining Broadcom in 2018, he served in a variety of product management leadership roles at Cisco Systems, including leading the Enterprise Switching and Enterprise Routing/SD-WAN product teams, the two largest businesses at the company. Mr. Siraj earned a master's in electrical engineering from Cornell University and an M.B.A. from The Wharton School.


3:00 PM - 3:25 PM

In today’s AI infrastructure, traditional copper and pluggable optics are ineffective in scaling package-level compute advancements to the system rack and row levels, leading to low efficiency, high power consumption, and high costs. New technologies are needed to support growing model sizes and complexity. Ayar Labs' in-package optical I/O solution enables peak platform performance by providing efficient, low-cost scaling at the rack and row levels. It also offers extended accelerator memory to optimize the balance between memory and compute. In this presentation, Mark Wade will show application-level improvements in performance and TCO metrics, such as productivity, profitability, and interactivity, using optical I/O-based scale-up fabrics for inference and training.

Author:

Mark Wade

CEO and Co-Founder
Ayar Labs

Mark is the Chief Executive Officer and Co-Founder of Ayar Labs. His prior roles at Ayar Labs include Chief Technology Officer and Senior Vice President of Engineering. He is recognized as a pioneer in photonics technologies and, before founding the company, led the team that designed the optics in the world's first processor to communicate using light. He and his co-founders invented breakthrough technology at MIT and UC Berkeley from 2010-2015, which led to the formation of Ayar Labs. He holds a PhD from the University of Colorado.


3:25 PM - 4:00 PM
Networking Break
4:00 PM - 4:25 PM

The potential for chiplet technology to be a transformational paradigm is now widely recognized. The cost, time-to-market, and power consumption benefits of chiplet-based solutions are compelling the industry toward integrating multiple dies in a single package. 

AI has emerged as a primary catalyst for this trend. Custom silicon designed for AI benefits significantly from the chiplet approach, which combines dense logic and memory with the need for high-speed connectivity. The push for custom AI hardware is rapidly evolving, with a focus on energy-efficient designs. Chiplets offer the flexibility to create systems-in-package that balance cost, power, and performance for specific workloads without starting from scratch. AI's unique needs for inter-die communication make reducing latency crucial, and the rollout of larger clusters emphasizes the role of high-speed, optical interconnects. 

The application of chiplets extends beyond AI, with growing use in high-performance computing (HPC), next-generation 6G communication, and data center networking. Finding connectivity solutions that satisfy the requirements of these varied applications is essential to fulfilling the potential of chiplets and opening new avenues for innovation across the industry. 

Author:

Tony Chan Carusone

CTO
Alphawave Semi

Tony Chan Carusone was appointed Chief Technology Officer in January 2022. Tony has been a professor of Electrical and Computer Engineering at the University of Toronto since 2001. He has well over 100 publications, including 8 award-winning best papers, focused on integrated circuits for digital communication. Tony has served as a Distinguished Lecturer for the IEEE Solid-State Circuits Society and on the Technical Program Committees of the world's leading circuits conferences. He co-authored the classic textbooks "Analog Integrated Circuit Design" and "Microelectronic Circuits" and is a Fellow of the IEEE. Tony has also been a consultant to the semiconductor industry for over 20 years, working with both startups and some of the largest technology companies in the world.

Tony holds a B.A.Sc. in Engineering Science and a Ph.D. in Electrical Engineering from the University of Toronto.


4:25 PM - 4:50 PM

We at Positron set out to build a cost-effective alternative to NVIDIA for LLM inference, and after 12 months, our Florida-based head of sales made our first sale. He taught us the value of chasing our largest competitive advantages, across industries and around the globe. We also managed to build an FPGA-based hardware-and-software inference platform capable of serving monolithic and mixture-of-experts models at very competitive token rates. It wasn't easy, because the LLM landscape changes meaningfully every two weeks. Yet today we have customers both evaluating and in production, with both our physical servers and our hosted cloud service. We'll share a few of the hairy workarounds and engineering heroics that achieved equivalence with NVIDIA so quickly, and tamed the complexity of building a dedicated LLM computer from FPGAs.

Author:

Barrett Woodside

VP of Product
Positron

In developer-oriented, marketing, and product roles, Barrett has spent the past decade of his career working on AI inference, first at NVIDIA, running and profiling computer vision workloads on Jetson. After three years shoehorning models onto embedded systems powering drones, robots, and surveillance systems, he joined Google Cloud, where he experienced first-hand the incredible power of Transformer models running accurate translation workloads on third-generation TPUs. He helped launch Cloud AutoML Vision with Fei-Fei Li and announced the TPU Pod's first entry into the MLPerf benchmark. Most recently, he spent two years at Scale AI working on product strategy and go-to-market for Scale Spellbook, its first LLM inference and fine-tuning product. Today, he is Positron's co-founder and VP of Product.


4:50 PM - 5:15 PM
5:15 PM - 6:00 PM
Moderator

Author:

David McIntyre

Director, Product Planning: Samsung & Board Member: SNIA
SNIA


Author:

Preet Virk

Co-Founder & COO
Celestial AI


Author:

Jorn Smeets

Managing Director, North America
PhotonDelta

Jorn Smeets is Managing Director – North America for PhotonDelta. Based in Silicon Valley, his mission is to accelerate the photonic chip industry by building collaborations between North America and the PhotonDelta ecosystem. Prior to his role in the USA, Jorn was Chief Marketing Officer and Board Member of PhotonDelta, where he was closely involved with the organisational strategy and overall business operations. Jorn has a background in business management, with extensive international working experience throughout various industries in China, Singapore, France, Italy, and the Netherlands.


Author:

Katharine Schmidtke

Co-Founder
Eribel Systems

Dr. Katharine Schmidtke is co-founder of Eribel Systems and Adjunct Professor at the University of California, Santa Barbara. She previously directed sourcing for Meta's custom AI accelerators and next-generation optical interconnect technology. She has over 25 years of experience in the opto-electronics industry, including strategic roles at Finisar Corporation, JDS Uniphase, and New Focus. Katharine received a Ph.D. in non-linear optics from the University of Southampton and completed post-doctoral research at Stanford University.


6:00 PM - 8:00 PM

Location: SP2 Bar & Restaurant, 72 N Almaden Ave, San Jose, CA 95110, United States

Join Nscale for an exclusive cocktail event in San Jose, where AI infrastructure enthusiasts and industry professionals can gather to discuss the future of compute. Enjoy a night of networking with colleagues and like-minded individuals over delicious food and drinks. Discover what Nscale has to offer and engage in insightful conversations about the latest advancements in AI infrastructure. Don’t miss this opportunity to connect and collaborate.

Day 3: Thursday, 12 Sep, 2024
EFFICIENT INFERENCE & DEPLOYMENT
9:00 AM - 10:00 AM
Registration and Networking
9:55 AM - 10:00 AM
OPENING REMARKS
10:00 AM - 10:25 AM
AI LUMINARY KEYNOTE

Join Mark Russinovich, Azure CTO and Technical Fellow, for an in-depth exploration of Microsoft's AI architecture. Discover the technology behind our sustainable datacenter design, massive supercomputers used for foundational model training, efficient infrastructure for serving models, workload management and optimizations, AI safety, and advancements in confidential computing to safeguard data during processing.

Hardware
Systems
Infrastructure

Author:

Mark Russinovich

CTO and Technical Fellow, Azure
Microsoft

Mark Russinovich is Chief Technology Officer and Technical Fellow for Microsoft Azure, Microsoft’s global enterprise-grade cloud platform. A widely recognized expert in distributed systems, operating systems and cybersecurity, Mark earned a Ph.D. in computer engineering from Carnegie Mellon University. He later co-founded Winternals Software, joining Microsoft in 2006 when the company was acquired. Mark is a popular speaker at industry conferences such as Microsoft Ignite, Microsoft Build, and RSA Conference. He has authored several nonfiction and fiction books, including the Microsoft Press Windows Internals book series, Troubleshooting with the Sysinternals Tools, as well as fictional cyber security thrillers Zero Day, Trojan Horse and Rogue Code.


10:25 AM - 10:50 AM
KEYNOTE

In this talk, Dr. Vinesh Sukumar will explain how Qualcomm has been successful in deploying large generative AI models on the edge for a variety of use cases in consumer and enterprise markets. He will examine key challenges that must be overcome before large models at the edge can reach their full commercial potential. He’ll also highlight how the industry is addressing these challenges and explore emerging large multimodal models.

Edge
Inferencing Systems

Author:

Vinesh Sukumar

Head of AI Product Management
Qualcomm

Vinesh Sukumar currently serves as Senior Director – Head of AI/ML product management at Qualcomm Technologies, Inc (QTI).  In this role, he leads AI product definition, strategy and solution deployment across multiple business units.

He has about 20 years of industry experience spread across research, engineering and application deployment. He holds a doctorate degree specializing in imaging and vision systems and has also completed a business degree focused on strategy and marketing. He is a regular speaker in many AI industry forums and has authored several journal papers and two technical books.


10:50 AM - 11:15 AM
KEYNOTE
Hardware
Systems
Software
Infrastructure

Author:

John Overton

CEO
Kove

John Overton is the CEO of Kove IO, Inc. In the late 1980s, while at the Open Software Foundation, Dr. Overton wrote software that went on to be used by approximately two-thirds of the world's workstation market. In the 1990s, he co-invented and patented technology utilizing distributed hash tables for locality management, now widely used in storage, database, and numerous other markets. In the 2000s, he led development of the first truly capable Software-Defined Memory offering, Kove:SDM™. Kove:SDM™ enables new Artificial Intelligence and Machine Learning capabilities, while also reducing power by up to 50%. Dr. Overton has more than 65 issued patents worldwide and has peer-reviewed publications across numerous academic disciplines. He holds post-graduate and doctoral degrees from Harvard and the University of Chicago.


Author:

Bill Wright

Edge AI Technology Evangelist
Red Hat

11:15 AM - 11:40 AM
KEYNOTE
Generative AI
Hardware
Inferencing
Systems

Author:

Baskar Sridharan

Vice President, AI/ML Services & Infrastructure
AWS

Baskar Sridharan is the Vice President for AI/ML and Data Services & Infrastructure at AWS, where he oversees the strategic direction and development of key services, including Amazon Bedrock, Amazon SageMaker, and essential data platforms like Amazon EMR, Amazon Athena, and AWS Glue.


Prior to his current role, Baskar spent nearly six years at Google, where he contributed to advancements in cloud computing infrastructure. Before that, he dedicated 16 years to Microsoft, playing a pivotal role in the development of Azure Data Lake and Cosmos, which have significantly influenced the landscape of cloud storage and data management.

Baskar earned a Ph.D. in Computer Science from Purdue University and has since spent over two decades at the forefront of the tech industry.

He has lived in Seattle for over 20 years, where he, his wife, and two children embrace the beauty of the Pacific Northwest and its many outdoor activities. In his free time, Baskar enjoys practicing music and playing cricket and baseball with his kids.


11:40 AM - 12:10 PM
CLOSING KEYNOTE

Author:

Andrew Ng

Founder & CEO
LandingAI

Dr. Andrew Ng is a globally recognized leader in AI (Artificial Intelligence). He is Founder of DeepLearning.AI, Founder & CEO of Landing AI, General Partner at AI Fund, Chairman & Co-Founder of Coursera and an Adjunct Professor at Stanford University’s Computer Science Department.

In 2011, he led the development of Stanford University's main MOOC (Massive Open Online Courses) platform and taught an online Machine Learning course offered to over 100,000 students, which led to the founding of Coursera, where he is currently Chairman and Co-founder.

Previously, he was Chief Scientist at Baidu, where he led the company’s ~1300 person AI Group and was responsible for driving the company’s global AI strategy and infrastructure. He was also the founding lead of the Google Brain team.

As a pioneer in machine learning and online education, Dr. Ng has changed countless lives through his work in AI, and has authored or co-authored over 200 research papers in machine learning, robotics and related fields. In 2013, he was named to the Time 100 list of the most influential persons in the world. He holds degrees from Carnegie Mellon University, MIT and the University of California, Berkeley.


12:10 PM - 1:30 PM
Lunch and Networking
1:30 PM - 1:55 PM
PRESENTATION

Flexible and programmable solutions are the key to delivering high performance, high efficiency AI at the edge. As semiconductor technologies experience the biggest shift in decades in order to meet the requirements of the latest generation of AI models, software is set to be the true enabler of success. Optimised libraries and toolkits empower all stakeholders in the developer journey to follow the “functional to performant to optimal” workflow typical of today’s edge compute application development cycle.

This presentation will outline a software-first approach to enabling AI at the edge, touching on the importance of community-wide initiatives such as The UXL Foundation, of which Imagination is a founding member.

Imagination is a global leader in innovative edge technology, delivering landmark GPU, CPU and AI semiconductor solutions across automotive, mobile, consumer and desktop markets for over thirty years. 

Edge
Infrastructure
MLOps

Author:

Tim Mamtora

Chief of Innovation and Engineering
Imagination


The rise of generative AI has increased the size of LLMs, escalating computing costs for services. While datacenter LLM services use larger batch sizes to improve GPU efficiency, self-attention block processing still suffers from low efficiency. SK hynix's AiM device, utilizing Processing in Memory technology, offers high bandwidth and high energy efficiency, reducing operational costs significantly compared to GPUs regardless of batch size. Additionally, applying AiM in on-device services enables high performance and low energy consumption, enhancing competitiveness. SK hynix has developed the AiMX accelerator prototype for datacenters, showcasing single-batch operations last year and planning multi-batch operations with larger models this year. The AiMX structure can be similarly applied to on-device AiM implementations. SK hynix's AiM/AiMX solutions address cost, performance, and power challenges in LLM services for both datacenters and on-device AI applications.

Inferencing
Memory
Hardware
Systems

Author:

Euicheol Lim

Research Fellow, System Architect
SK Hynix

Eui-cheol Lim is a Research Fellow and leader of the Solution Advanced Technology team at SK Hynix. He received his B.S. and M.S. degrees from Yonsei University, Seoul, Korea, in 1993 and 1995, and his Ph.D. from Sungkyunkwan University, Suwon, Korea, in 2006. Dr. Lim joined SK Hynix in 2016 as a system architect in memory system R&D. Before joining SK Hynix, he worked as an SoC architect at Samsung Electronics, leading the architecture of most Exynos mobile SoCs. His recent interests are memory and storage system architecture with new memory media and new memory solutions such as CXL memory and Processing in Memory. In particular, he is proposing a new PIM-based computing architecture, more efficient and flexible than existing AI accelerators, to process the generative AI and large language models (LLMs) that are currently causing a sensation.


1:55 PM - 2:35 PM
PANEL
Infrastructure
MLOps
Generative AI
Systems
Moderator

Author:

Hira Dangol

Vice President, AI/ML & Automation
Bank Of America

Hira has industry experience in AI/ML, engineering, architecture and executive roles at leading technology companies, service providers and leading Silicon Valley organizations. Hira currently focuses on innovation, disruption, and cutting-edge technologies, working with startups and technology-driven corporations to solve the pressing problems of industry and the world.


Speakers

Author:

Puja Das

Senior Director, Personalization
Warner Bros. Entertainment

Dr. Puja Das leads the Personalization team at Warner Brothers Discovery (WBD), which includes offerings on Max, HBO, Discovery+ and many more.

Prior to WBD, she led a team of Applied ML researchers at Apple, who focused on building large scale recommendation systems to serve personalized content on the App Store, Arcade and Apple Books. Her areas of expertise include user modeling, content modeling, recommendation systems, multi-task learning, sequential learning and online convex optimization. She also led the Ads prediction team at Twitter (now X), where she focused on relevance modeling to improve App Ads personalization and monetization across all of Twitter surfaces.

She obtained her Ph.D. in Machine Learning from the University of Minnesota, where the focus of her dissertation was online learning algorithms, which work on streaming data. Her dissertation work earned her the prestigious IBM Ph.D. Fellowship Award.

She is active in the research community and serves on the program committees of ML and recommendation system conferences. She has mentored several undergrad and grad students and participated in various round-table discussions through the Grace Hopper Conference, the Women in Machine Learning program co-located with NeurIPS and AAAI, and the Computing Research Association's Women's chapter.


Author:

Logan Grasby

Senior Machine Learning Engineer
Cloudflare

Logan Grasby is a Senior Machine Learning Engineer at Cloudflare, based in Calgary, Alberta. As part of Cloudflare's Workers AI team he works on developing, deploying and scaling AI inference servers across Cloudflare's edge network. In recent work he has designed services for multi-tenant LLM LoRA inference and dynamic diffusion model pipeline servers. Prior to Cloudflare, Logan founded Azule, an LLM driven customer service and product recommendation platform for ecommerce. He also co-founded Conversion Pages and served as Director of Product at Appstle, a Shopify app development firm.


Author:

Daniel Valdivia

Engineer
MinIO

Daniel Valdivia is an engineer with MinIO where he focuses on Kubernetes, ML/AI and VMware. Prior to joining MinIO, Daniel was the Head of Machine Learning for Espressive. Daniel has held senior application development roles with ServiceNow, Oracle and Freescale. Daniel holds a Bachelor of Engineering from Tecnológico de Monterrey, Campus Guadalajara and Bachelor of Science in Computer Engineering from Instituto Tecnológico y de Estudios Superiores de Monterrey.

 


Data Center
Infrastructure
Inferencing
Moderator

Author:

Bijan Nowroozi

Chief Technical Officer
The Open Compute Project Foundation

Bijan Nowroozi is Chief Technical Officer of The Open Compute Project Foundation and has more than 30 years of experience with hardware and software development, signal processing, networking, and research at technology companies. Prior to the OCP, Bijan developed mission-critical infrastructure and worked at the leading edge of standards and technology development across multiple technology waves, including edge computing, AI/ML, optical/photonics, quantum, RF, wireless, small cells, UAVs, GIS, HPC, network security, energy and more.


Author:

Sadasivan Shankar

Research Technology Manager
SLAC National Laboratory and Stanford University

Sadasivan (Sadas) Shankar is Research Technology Manager at SLAC National Laboratory, Adjunct Professor in Materials Science and Engineering at Stanford, and Lecturer in the Stanford Graduate School of Business. He was an Associate in the Department of Physics at Harvard University, the first Margaret and Will Hearst Visiting Lecturer at Harvard, and the first Distinguished Scientist in Residence at the Harvard Institute of Applied Computational Sciences. He has co-instructed classes related to the design of materials, computing, and sustainability in materials, and has received an Excellence in Teaching award from Harvard University. He is co-instructing a class at Stanford University on Translation for Innovations. He is a co-founder of and the Chief Scientist at Material Alchemy, a "last mile" translational and independent venture recently founded to accelerate the path from materials discovery to adoption, with environmental sustainability as a key goal. In addition to research on the fundamentals of materials design, his current research on new architectures for specialized AI methods is exploring ways of bringing machine intelligence to system-level challenges in inorganic/biochemistry, materials, and physics, as well as new frameworks for computing as information processing inspired by lessons from nature.

Dr. Shankar's current research and analysis on Sustainable Computing is helping provide directions for the US Department of Energy's EES2 scaling initiatives (energy reduction in computing every generation, for a 1000X reduction in two decades) as part of the White House Plan to Revitalize American Manufacturing and Secure Critical Supply Chains in 2022 for investment in research, development, demonstration, and commercial application (RDD&CA) in conventional semiconductors.

In addition, his analysis is helping identify pathways for energy-efficient computing. While in industry, Dr. Shankar and his team enabled several critical technology decisions in semiconductor industrial applications of chemistry, materials, processing, packaging, manufacturing, and design rules for over nine generations of Moore's law, including the first advanced process control application in 300 mm wafer technology, the introduction of flip-chip packaging using electrodeposition, 100% Pb-elimination in microprocessors, and the design of new materials and devices, including nano wrap-around devices for advanced semiconductor technology manufacturing, processing methods, reactors, and more. Dr. Shankar managed team members distributed across multiple sites in the US, with collaborations in Europe. The teams won several awards from the executive management and technology organizations.

He is a co-inventor on over twenty patent filings covering new chemical reactor designs, semiconductor processes, bulk and nano materials for the sub-10-nanometer generation of transistors, device structures, and algorithms. He is also a co-author of over a hundred publications and presentations on measurements and multi-scale, multi-physics methods spanning from quantum to macroscopic scales, in the areas of chemical synthesis, plasma chemistry and processing, non-equilibrium electronic, ionic, and atomic transport, energy efficiency of information processing, machine learning methods for bridging across scales and estimating complex materials properties, and advanced process control.

Dr. Shankar was an invited speaker at the Clean-IT Conference in Germany on Revolutionize Digital Systems and AI (2023), the Telluride Quantum Inspired Neuromorphic Computing Workshop (2023) on Limiting Energy Estimates for Classical and Quantum Information Processing, and the Argonne National Laboratory Director's Special Colloquium on the Future of Computing (2022); a panelist in the Carnegie Science series on Brain and Computing (2020); a lecturer in the Winter Course on Computational Brain Research at IIT-M, India (2020); an invited participant in the Kavli Institute of Theoretical Physics program on Cellular Energetics at UCSB (2019); an invited speaker at the Camille and Henry Dreyfus Foundation meeting on Machine Learning for problems in Chemistry and Materials Science (2019); a Senior Fellow at the UCLA Institute of Pure and Applied Mathematics during the program on Machine Learning and Many-Body Physics (2016); an invitee to the White House event for the start of the Materials Genome Initiative (2012); an invited speaker at the Erwin Schrödinger International Institute for Mathematical Physics, Vienna (2007); and Intel's first Distinguished Lecturer at Caltech (1998) and MIT (1999). He has also given several colloquia and lectures at universities all over the world, and his research has been featured in Science (2012), TED (2013), Nature Machine Intelligence (2022), and Nature Physics (2022).


Author:

Ankita Singh

Investment Director
Bosch Ventures


Author:

Dil Radhakrishnan

Engineer
MinIO

Dileeshvar Radhakrishnan is an engineer at MinIO, where he focuses on AI/ML and Kubernetes. Prior to joining MinIO, Dil served as Chief Architect at ML pioneer Espressive. He previously worked in engineering roles at ServiceNow and Rewyndr. He began his career at Tata Consultancy Services.

Dil has a Bachelor of Engineering in Computer Science and Engineering from Anna University and a Master's in Computer Science from Carnegie Mellon.

 


2:35 PM - 3:00 PM
PRESENTATION

In this presentation, we will explore the advanced integration of Digital In-Memory Computing (D-IMC) and RISC-V technology by Axelera AI to accelerate AI inference workloads. Our approach uniquely combines the high energy efficiency and throughput of D-IMC with the versatility of RISC-V technology, creating a powerful and scalable platform. This platform is designed to handle a wide range of AI tasks, from advanced computer vision at the edge to emerging AI challenges.

We will demonstrate how our scalable architecture not only meets but exceeds the demands of modern AI applications. Our platform enhances performance while significantly reducing energy use and operational costs. By pushing the boundaries of Edge AI and venturing into new AI domains, Axelera AI is setting new benchmarks in AI processing efficiency and deployment capabilities.

Edge
Infrastructure
MLOps

Author:

Evangelos Eleftheriou

Co-Founder and CTO
Axelera AI

Evangelos Eleftheriou, an IEEE and IBM Fellow, is the Chief Technology Officer and co-founder of Axelera AI, a best-in-class performance company that develops a game-changing hardware and software platform for AI.

As a CTO, Evangelos oversees the development and dissemination of technology for external customers, vendors, and other clients to help improve and increase Axelera AI’s business.

Before his current role, Evangelos worked for IBM Research – Zurich, where he held various management positions for over 35 years. His outstanding achievements led him to become an IBM Fellow, which is IBM’s highest technical honour.

In 2002, Evangelos became a Fellow of the IEEE, and later in 2003, he was co-recipient of the IEEE ComS Leonard G. Abraham Prize Paper Award. He was also co-recipient of the 2005 Technology Award of the Eduard Rhein Foundation. In 2005, he was appointed an IBM Fellow and inducted into the IBM Academy of Technology. In 2009, he was co-recipient of the IEEE Control Systems Technology Award and the IEEE Transactions on Control Systems Technology Outstanding Paper Award. In 2016, Evangelos received an honoris causa professorship from the University of Patras, Greece. In 2018, he was inducted into the US National Academy of Engineering as Foreign Member. Evangelos has authored or coauthored over 250 publications and holds over 160 patents (granted and pending applications).

His primary interests lie in AI and machine learning, including emerging computing paradigms such as neuromorphic and in-memory computing.

Evangelos holds PhD and Master of Engineering degrees in Electrical Engineering from Carleton University, Canada, and a BSc in Electrical & Computer Engineering from the University of Patras, Greece.


Edge
Hardware
Systems

Author:

Jinwook Oh

Co Founder and CTO
Rebellions

Jinwook Oh is the Co-Founder and Chief Technology Officer of Rebellions, an AI chip company based in South Korea. After earning his Ph.D. from KAIST (Korea Advanced Institute of Science and Technology), he joined the IBM TJ Watson Research Center, where he contributed to several AI chip R&D projects as a Chip Architect, Logic Designer, and Logic Power Lead. At Rebellions, he has overseen the development and launch of two AI chips, with a third, REBEL, in progress. Jinwook's technical leadership has been crucial in establishing Rebellions as a notable player in AI technology within just three and a half years.


3:00 PM - 3:30 PM
Networking Break
3:30 PM - 4:10 PM
PANEL
Edge
Generative AI
Infrastructure
Moderator

Author:

Jeff White

CTO, Edge
Dell Technologies

Jeff is the Industry CTO for the Automotive sector at Dell Technologies, specifically in the area of Connected and Autonomous Vehicles, and the overall Edge Technology strategy lead. Jeff is responsible for leading the team that develops the overall Dell Technologies technology strategy, architectural direction and product requirements for the Intelligent Connected Vehicle platform.

He is also the Chairman of the Dell Automotive Design Authority Council, responsible for the technical solution design. In his role as Edge Technology Lead, he is driving the development of a Dell Technologies-wide Edge platform including the physical edge systems, heterogeneous compute, memory/storage, environment, security, data management, control plane stack and automation/orchestration.

Previously, Jeff held senior roles at an early-stage artificial intelligence/machine reasoning-based process automation technology provider and served as CTO of Elefante Group, a stratospheric wireless communications platform company. He also held senior positions at Hewlett Packard Enterprise, Ericsson and Alcatel-Lucent, where he led technology initiatives, solutions development, business development and services delivery.
Earlier in his career, White served in leadership roles at BellSouth and Cingular Wireless (now AT&T). At Cingular, he led National Transport Infrastructure Engineering with responsibility for national transport and VoIP & IMS engineering. At BellSouth (now AT&T), he led the Broadband Internet Operations & Support organization, which included broadband access tier-two technical support, the customer networking equipment business, broadband OSS and end-to-end process.

White holds a Bachelor of Science degree in Electrical Engineering from Southern Polytechnic University. He served as Chairman of the Tech Titans Technology Association of North Texas, representing over 300 technology companies in the greater North Texas community, and served on the North Texas Regional committee of the Texas Emerging Technology Fund under Governor Rick Perry.

Author:

Yvonne Lutsch

Investment Principal
Bosch Ventures

Yvonne is an accomplished Investment Principal at Bosch Ventures' affiliate office in Sunnyvale, where she sources, evaluates, and executes venture capital deals in North America. She specializes in investments in deep tech fields such as AI, edge and next-generation computing (including quantum), robotics, industrial IoT, mobility, climate tech, semiconductors, and sensors. She is an investor and non-executive board member of Bosch Ventures' portfolio companies Syntiant, Zapata AI, UltraSense Systems, Aclima, and Recogni.

Prior to this position, Yvonne was Director of Technology Scouting and Business Development, building up an Innovation Hub in Silicon Valley that spanned startup scouting and business development while advising executives of the Bosch business units on their strategy. She has more than two decades of experience in manufacturing operations and engineering in the automotive and consumer electronics space, gained through executive roles at Bosch in Germany.

Yvonne received a diploma in Experimental Physics from the University of Siegen, Germany, and holds a PhD in Applied Physics from the University of Tuebingen, Germany.

Author:

Roberto Mijat

Senior Director
Blaize

Roberto leads product marketing and strategy at Blaize. He is an AI technology and product leader with an engineering background and over 20 years of experience in developing and taking to market advanced semiconductor hardware and software solutions.

Roberto spent over 15 years at Arm, holding several senior product and business leadership positions and leading multiple global product teams. He was a member of the company’s Product Line Board and Steering board for AI on CPU. He created and architected the Compute Libraries framework, a key component of Arm’s AI software stack, deployed in billions of devices today. Roberto established the Arm GPU Compute ecosystem from scratch and led collaborations with dozens of industry leaders, including Facebook, Google, Huawei, MediaTek, and Samsung.

At Graphcore, Roberto led the launch of the Bow IPU AI accelerator, promoted the standardization of FP8, and led collaborations with storage partners.

Roberto is an advisor at Silicon Catalyst and a Mentor at London Business School.  He holds a first degree in Artificial Intelligence and Quantum Computing and an Executive MBA from London Business School.

Author:

Adam Benzion

Chief Experience Officer
Edge Impulse

Software
Hardware
Infrastructure
Systems
Moderator

Author:

Mitchelle Rasquinha

Software Engineer
MLCommons

Author:

Bing Yu

Senior Technical Director
Andes Technology

Bing Yu is a Sr. Technical Director at Andes Technology. He has over 30 years of experience in technical leadership and management, specializing in machine learning hardware, high performance CPUs and system architecture. In his current role, he is responsible for processor roadmap, architecture, and product design. Bing received his BS degree in Electrical Engineering from San Jose State University and completed the Stanford Executive Program (SEP) at the Stanford Graduate School of Business.

Author:

Thomas Sohmers

Founder and CEO
Positron AI

Thomas Sohmers is an innovative technologist and entrepreneur, renowned for his pioneering work in the field of advanced computing and artificial intelligence. Thomas began programming at a very early age, which led him to MIT as a high school student where he worked on cutting-edge research. By the age of 18, he had become a Thiel Fellow, marking the beginning of his remarkable journey in technology and innovation. In 2013, Thomas founded Rex Computing, where he designed energy-efficient processors for high-performance computing applications. His groundbreaking work earned him numerous accolades, including a feature in Forbes' 30 Under 30. After a stint exploring the AI industry, working on scaling out GPU clouds and large language models, Thomas founded and became CEO of Positron in 2023. Positron develops highly efficient transformer inferencing systems, and under Thomas's leadership, it has quickly become one of the most creative and promising startups in the AI industry.

Author:

Sree Ganesan

VP of Product
d-Matrix

Sree is responsible for product management functions and business development efforts across d-Matrix. She manages the product lifecycle and the definition and translation of customer needs to the product development function, acting as the voice of the customer. Previously, Sree led the Software Product Management effort at Habana Labs/Intel, delivering the state-of-the-art deep learning capabilities of the Habana SynapseAI® software suite to the market. Before that, she was Engineering Director in Intel’s AI Products Group, where she was responsible for AI software strategy and deep learning framework integration for the Nervana NNP AI accelerators. Sree earned a bachelor’s degree in electrical engineering from the Indian Institute of Technology Madras and a PhD in computer engineering from the University of Cincinnati, Ohio.

4:10 PM - 4:35 PM
PRESENTATION

With the ubiquitous and increasing use of computing, this talk will quantitatively demonstrate unsustainable energy and complexity trends in computing and AI across hardware, algorithms, and software. The unsustainability of these trends motivates a few exciting directions for computing, especially for applications to AI/ML. Specifically, we will touch upon the evolution of hardware in terms of energy use following Dennard scaling and the challenges posed by continuing current trends. We will illustrate opportunities suggested by these unsustainable trends, specifically for applications to machine learning and artificial intelligence, including at the edge. Given the goals of achieving AGI promised by current technologies, we will propose a modified form of Turing’s test that points to a new conceptualization of computing for applications beyond the current paradigms.

Inferencing
Systems
Infrastructure

Author:

Sadasivan Shankar

Research Technology Manager
SLAC National Laboratory and Stanford University

Sadasivan (Sadas) Shankar is Research Technology Manager at SLAC National Laboratory, adjunct Professor in Stanford Materials Science and Engineering, and Lecturer in the Stanford Graduate School of Business. He was an Associate in the Department of Physics at Harvard University, the first Margaret and Will Hearst Visiting Lecturer at Harvard, and the first Distinguished Scientist in Residence at the Harvard Institute of Applied Computational Sciences. He has co-instructed classes related to the design of materials, computing, and sustainability in materials, and has received an Excellence in Teaching award from Harvard University. He is co-instructing a class at Stanford University on Translation for Innovations. He is a co-founder of and the Chief Scientist at Material Alchemy, a “last mile” translational and independent venture recently founded to accelerate the path from materials discovery to adoption, with environmental sustainability as a key goal. In addition to research on the fundamentals of Materials Design, his current research on new architectures for specialized AI methods explores ways of bringing machine intelligence to system-level challenges in inorganic/biochemistry, materials, and physics, and new frameworks for computing as information processing inspired by lessons from nature.

Dr. Shankar’s current research and analysis on Sustainable Computing is helping provide directions for the US Department of Energy’s EES2 scaling initiatives (energy reduction in computing every generation for 1000X reduction in 2 decades) as part of the White House Plan to Revitalize American Manufacturing and Secure Critical Supply Chains in 2022 for investment in research, development, demonstration, and commercial application (RDD&CA) in conventional semiconductors.

In addition, his analysis is helping identify pathways for energy-efficient computing. While in industry, Dr. Shankar and his team enabled several critical technology decisions in semiconductor industrial applications of chemistry, materials, processing, packaging, manufacturing, and design rules for over nine generations of Moore’s law, including the first advanced process control application in 300 mm wafer technology; the introduction of flip-chip packaging using electrodeposition; 100% Pb elimination in microprocessors; and the design of new materials, devices (including nano wrap-around devices for advanced semiconductor technology manufacturing), processing methods, reactors, and more. Dr. Shankar managed team members distributed across multiple sites in the US, with collaborations in Europe. The teams won several awards from the Executive Management and technology organizations.

He is a co-inventor on over twenty patent filings covering new chemical reactor designs, semiconductor processes, bulk and nano materials for the sub-10-nanometer generation of transistors, device structures, and algorithms. He is also a co-author of over a hundred publications and presentations on measurements and on multi-scale and multi-physics methods spanning from quantum to macroscopic scales, in the areas of chemical synthesis, plasma chemistry and processing, non-equilibrium electronic, ionic, and atomic transport, the energy efficiency of information processing, and machine learning methods for bridging across scales, estimating complex materials properties, and advanced process control.

Dr. Shankar was an invited speaker at the Clean-IT Conference in Germany on Revolutionize Digital Systems and AI (2023) and at the Telluride Quantum-Inspired Neuromorphic Computing Workshop (2023) on Limiting Energy Estimates for Classical and Quantum Information Processing. He gave the Argonne National Laboratory Director’s Special Colloquium on the Future of Computing (2022), was a panelist in the Carnegie Science series on Brain and Computing (2020), a lecturer in the Winter Course on Computational Brain Research at IIT-M, India (2020), an invited participant in the Kavli Institute of Theoretical Physics program on Cellular Energetics at UCSB (2019), an invited speaker at the Camille and Henry Dreyfus Foundation meeting on Machine Learning for problems in Chemistry and Materials Science (2019), a Senior Fellow at the UCLA Institute of Pure and Applied Mathematics during the program on Machine Learning and Many-Body Physics (2016), an invitee to the White House event for the launch of the Materials Genome Initiative (2012), an invited speaker at the Erwin Schrödinger International Institute for Mathematical Physics, Vienna (2007), and Intel’s first Distinguished Lecturer at Caltech (1998) and MIT (1999). He has also given colloquia and lectures at universities all over the world, and his research has been featured in Science (2012), TED (2013), Nature Machine Intelligence (2022), and Nature Physics (2022).

Infrastructure
Data Centres

Author:

Gerald Friedland

Principal Scientist, AutoML
AWS

Dr. Gerald Friedland is a Principal Scientist at AWS working on Low-Code, No-Code Machine Learning. Before that, he was CTO and founder of Brainome, a no-code machine learning service for miniature models. Other posts include UC Berkeley, Lawrence Livermore National Lab, and the International Computer Science Institute. He was the lead figure behind the Multimedia Commons initiative, a collection of 100M images and 1M videos for research, and has published more than 200 peer-reviewed articles in conferences, journals, and books. His latest book, "Information-Driven Machine Learning", was released by Springer-Nature in Dec. 2023. He also co-authored a textbook on Multimedia Computing with Cambridge University Press. Dr. Friedland received his master's degree and his doctorate (summa cum laude) in computer science from Freie Universitaet Berlin, Germany, in 2002 and 2006, respectively.

4:35 PM - 4:55 PM
PRESENTATION
MLOps
Edge
Systems

Author:

Tom Sheffler

Solution Architect, Next Generation Sequencing
Former Roche

Tom earned his PhD in Computer Engineering from Carnegie Mellon, with a focus on parallel computing architectures and programming models. His interest in high-performance computing took him to NASA Ames, and then to Rambus, where he worked on accelerated memory interfaces for providing high bandwidth. Following that, he co-founded the cloud video analytics company Sensr.net, which applied scalable cloud computing to analyzing large streams of video data. He later joined Roche to work on next-generation sequencing and scalable genomics analysis platforms. Throughout his career, Tom has focused on the application of high-performance computer systems to real-world problems.

Edge
Inferencing

Author:

Prasad Jogalekar

Head of Global Artificial Intelligence and Accelerator Hub
Ericsson

Author:

Paul Karazuba

VP of Marketing
Expedera

Paul is Vice President of Marketing at Expedera, a leading provider of AI Inference NPU semiconductor IP. He brings a talent for transforming new technology into products that excite customers. Previously, Paul was VP of Marketing at PLDA, a company specializing in high-speed interconnect IP, until its acquisition by Rambus. Before PLDA, he was Senior Director of Marketing at Rambus. Paul brings more than 25 years of marketing experience at companies including QuickLogic, Aptina (Micron), and others. He holds a BS in Management and Marketing from Manhattan College.

Author:

Stuart Clubb

Technical Product Management Director
Siemens

Stuart has been responsible for the Catapult HLS Synthesis and Verification solutions since July 2017. Prior to this role, Stuart successfully managed the North American FAE team for Mentor/Siemens and Calypto Design Systems and was key to the growth achieved for the CSD products after the Calypto acquisition. Moving from the UK in 2001 to work at Mentor Graphics, Stuart held the position of Technical Marketing Engineer, initially on the Precision RTL synthesis product for 6 years and later on Catapult for 5 years. He has also held various engineering and application engineering roles in ASIC and FPGA RTL hardware design and verification. Stuart graduated from Brunel University, London, with a Bachelor of Science.

4:55 PM - 5:15 PM
PRESENTATION

With the rise of generative AI increasing the demand for compute power, there is a growing need to modernize manufacturing to build the infrastructure required to meet this demand. Addressing this challenge requires an integrated software and robotics solution for electronics manufacturers across the manufacturing life cycle – from product design to assembly to disassembly.

Bright Machines' differentiated software capabilities take a data-focused approach to manufacturing. The company's latest product development introduces a Design for Automated Assembly (DFAA) solution that leverages the NVIDIA Omniverse virtual simulation platform. Chief Strategy Officer Sviat Dulianinov will share Bright Machines' vision for the future of manufacturing, leveraging digital twins to lower total production costs and accelerate time to market. The session will explore the impact of software and data on the digital transformation of the manufacturing industry.

Digital Twins
Manufacturing

Author:

Sviat Dulianinov

Chief Strategy Officer
Bright Machines

Sviat is a seasoned strategy executive with 10 years of strategic management, management consulting, and startup experience across the United States and Europe. Prior to joining Bright Machines, he served as Chief Strategy Officer at the Dutch retailer SPAR. Previously, Sviat had a distinguished consulting career at Bain & Company and co-founded an airport mobility company. Sviat holds a Bachelor's degree in Economics and Foreign Languages from MGIMO University and an MBA from the Wharton School of the University of Pennsylvania.

End of Conference
