CVPR 2026 Tutorial · Denver, Colorado

The Full Stack of Physical AI

Simulation, Foundation Models, and Edge Deployment
for Next-Generation Robotics Applications

General Information

Enabling physical AI applications such as autonomous vehicles and robots is challenging due to multiple factors, including data collection, model architecture design, and real-time inference. In this half-day tutorial we focus on lessons learned and challenges encountered in developing state-of-the-art software and hardware solutions. These include simulation tools such as Isaac Sim and Isaac Lab, manipulation models such as GR00T and ACT, and hardware such as NVIDIA Jetson Thor for real-time inference when deploying models on real robots.

Through this tutorial, the speakers will demonstrate how attendees can build an end-to-end robotics pipeline covering data capture and annotation in both simulation and the real world, model fine-tuning, and finally deployment on robotic systems for real-time inference. Attendees will receive deployment guides for each component of this pipeline, along with slides, training datasets, and code repositories for take-home exercises.

Location
Denver, Colorado
Date
June 3–4, 2026 (TBC)
Format
Half-day Tutorial
Audience
Researchers & Practitioners
Physical AI · Simulation · Foundation Models · Vision-Language-Action · Edge Deployment · Robotics · Human-in-the-Loop

Motivation

Robotics and Physical AI have become a strong and growing topic at CVPR, driven by computer vision advances in vision-language models (VLMs) and vision-language-action (VLA) models, which have become key research areas in recent years. The community has developed several such models, including GR00T, π0, OpenVLA, SmolVLA, and ACT.

However, building a complete robotics pipeline, from data collection through model training to deployment, remains a challenging multi-disciplinary endeavor. Data collection often requires expensive hardware and software, which has kept many researchers from pursuing this path. Foundation models require careful architecture design and post-training strategies. Finally, deploying models on edge devices demands hardware-aware optimization to achieve real-time performance.

This tutorial bridges these gaps with a hands-on, end-to-end walkthrough of the full Physical AI stack. By the end, attendees will understand the high-level frameworks, tools, and open-source community activities around robotics, embedded devices, and model training, helping researchers, industry partners, and communities worldwide strengthen collaboration in this complex and growing field.

Tentative Schedule

The detailed schedule is being finalized. Check back soon for the full program with talk times and topics.

15 min
Opening

Opening Remarks and Motivation

60 min
Talk 1

TBA

60 min
Talk 2

TBA

15 min
Break

Coffee Break with Interactive Demos

60 min
Talk 3

TBA

30 min
Panel

Panel Discussion

Program

The detailed program is being finalized. Talk descriptions, speakers, and materials will be announced closer to the event.

1

TBA

Details coming soon.

2

TBA

Details coming soon.

3

TBA

Details coming soon.

What You'll Take Home

Slide decks
Training datasets
Full pipeline code
Open-source repos
Deployment guides
Community Discord

Organizers

Dr. Raymond Lo*

Developer Advocate Manager

NVIDIA

Raymond is the developer advocate manager at NVIDIA, focusing on robotics and embedded systems. Previously, he led Intel's global AI evangelist team and co-founded the Y Combinator-backed augmented reality company Meta, raising over $80M. He holds a PhD and has spoken at TED, SIGGRAPH, CVPR, NeurIPS, and more.

Johnny Núñez*

Developer Advocate

NVIDIA

Johnny is a developer advocate at NVIDIA focusing on Physical AI and robotics. He brings experience in computer vision, edge computing, and robotics from his research on human-robot-object interaction at the University of Barcelona. He is a key member of the Jetson Research Lab, driving AI and robotics on edge devices.

Chitoku Yato

Sr. Technical Product Marketing Manager

NVIDIA

Chitoku is Senior Technical Product Marketing Manager for the NVIDIA Jetson Edge AI platform. He works closely with the developer community to evangelize pre-trained AI models and SDKs on Jetson, including tutorials on JetBot and JetRacer. He previously worked at Sony Corporation in Tokyo.

Spencer Huang

Product Lead for Robotics

NVIDIA

Spencer is a product line manager at NVIDIA leading robotics software products. His work centers on open-source simulation frameworks for robot learning, synthetic data generation, and advancing robot autonomy from industrial mobile manipulators to generalist humanoid robots.

Dr. Mitesh Patel*

Sr. Developer Advocate Manager

NVIDIA

Mitesh is a Senior Developer Advocate Manager at NVIDIA, where his team creates workflows for GPU-accelerated data science and generative AI applications. He was previously a Senior Research Scientist at FXPAL and Yahoo! Labs, and holds a PhD in Robotics from the University of Technology Sydney.

* Equal contribution

Related Resources