Part I: Foundations of Physical AI

Welcome to Part I, where we establish the foundational concepts of Physical AI and embodied intelligence that underpin the entire course.

Part Overview

Physical AI represents a paradigm shift from traditional digital AI. While language models and image classifiers operate purely in virtual spaces, Physical AI systems must understand and interact with the real world through sensors and actuators.

This part answers fundamental questions:

  • What makes Physical AI different from digital AI?
  • Why are humanoid robots emerging now?
  • How do robots perceive their environment?
  • How does this course progress from theory to practice?

Chapters in This Part

Chapter 1: Introduction to Physical AI

Explores the concept of embodied intelligence and why AI systems need to understand physics. Covers the convergence of AI and robotics, and the recent surge in humanoid robotics development.

Key Topics:

  • Physical AI vs. digital AI
  • Embodiment and morphology
  • The physics-AI integration challenge
  • Current state of humanoid robotics

Chapter 2: Humanoid Robotics Landscape

Surveys modern humanoid robots from Tesla Optimus to Unitree G1, examining why the humanoid form factor matters and what capabilities are emerging.

Key Topics:

  • Major humanoid platforms (Tesla, Unitree, Agility, Robotis)
  • Embodiment principles and design rationale
  • Kinematics and degrees of freedom
  • Hardware-software co-design
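To preview what "kinematics and degrees of freedom" means in practice, here is a minimal sketch of forward kinematics for a hypothetical two-link planar arm (the link lengths and joint names are illustrative, not taken from any platform surveyed in this chapter):

```python
import math

def forward_kinematics(theta1, theta2, l1=0.3, l2=0.25):
    """End-effector (x, y) position of a 2-link planar arm.

    theta1, theta2: joint angles in radians (2 degrees of freedom)
    l1, l2: link lengths in meters (illustrative values)
    """
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Both joints at zero: arm points straight along the x-axis
x, y = forward_kinematics(0.0, 0.0)  # x = l1 + l2 = 0.55, y = 0.0
```

Real humanoids have dozens of degrees of freedom, but the same idea scales: joint angles map through link geometry to end-effector poses.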

Chapter 3: Sensor Foundations

Deep dive into the sensor suite that enables robot perception: cameras, depth sensors, IMUs, LiDAR, force/torque sensors, and tactile feedback.

Key Topics:

  • Vision sensors (RGB, depth, thermal)
  • Motion sensing (IMUs, encoders)
  • Environmental sensing (LiDAR, ultrasonic)
  • Multimodal sensor fusion strategies
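As a taste of sensor fusion, here is a sketch of a complementary filter, one of the simplest fusion strategies: it blends a gyroscope's rate integration (smooth but drifting) with an accelerometer's angle estimate (noisy but drift-free). The blend weight and signal values below are illustrative assumptions, not from a specific robot:

```python
def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse two estimates of a tilt angle (radians).

    angle_prev:  previous fused estimate
    gyro_rate:   angular velocity from the gyro (rad/s)
    accel_angle: angle inferred from gravity via the accelerometer
    alpha:       trust in the gyro path (0.98 is a common starting point)
    """
    return alpha * (angle_prev + gyro_rate * dt) + (1 - alpha) * accel_angle

# Stationary robot tilted 0.1 rad: gyro reads ~0, accelerometer reads 0.1
angle = 0.0
for _ in range(100):
    angle = complementary_filter(angle, gyro_rate=0.0,
                                 accel_angle=0.1, dt=0.01)
# The estimate converges toward the accelerometer's 0.1 rad reading
```

Chapter 3 covers richer fusion strategies (e.g., Kalman-style filters), but this captures the core idea: combine modalities so each compensates for the other's weakness.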

Chapter 4: Course Overview

Maps out the complete learning journey from weeks 1-13, showing how concepts build from theory through simulation to physical deployment and VLA integration.

Key Topics:

  • Weekly progression and skills development
  • Theory → Simulation → Physical → VLA lifecycle
  • Lab infrastructure and expectations
  • Assessment and project milestones

Learning Outcomes

By completing Part I, you will:

  1. Articulate the distinctions between Physical AI and traditional AI systems
  2. Identify major humanoid robot platforms and their capabilities
  3. Explain how different sensor modalities contribute to robot perception
  4. Plan your learning path through the complete course curriculum

Prerequisites

Required:

  • Basic understanding of AI/ML concepts (neural networks, training)
  • Familiarity with Python programming
  • General physics knowledge (Newton's laws, kinematics)

Helpful:

  • Computer vision basics
  • Control theory fundamentals
  • Prior robotics exposure (not required)

Time Commitment

Total for Part I: 20-30 hours

Chapter | Reading | Implementation | Labs | Total
--------|---------|----------------|------|-------
Ch 1    | 1.5 hr  | 1 hr           | 1 hr | 3.5 hr
Ch 2    | 1.5 hr  | 1 hr           | 2 hr | 4.5 hr
Ch 3    | 2 hr    | 2 hr           | 3 hr | 7 hr
Ch 4    | 1 hr    | 1 hr           | 1 hr | 3 hr

Connection to Course Arc

Part I establishes the why and what of Physical AI. Subsequent parts build on these foundations:

  • Part II (ROS 2): The software framework for implementing Physical AI
  • Part III (Simulation): Virtual environments for safe testing
  • Part IV (Isaac): Advanced simulation and perception
  • Part V (VLA): Adding language and cognitive capabilities
  • Part VI (Hardware): Physical realization of concepts
  • Part VII (Capstone): Integration of all components

Getting Started

Ready to dive in?

Start here: Chapter 1: Introduction to Physical AI

Or review: Course Requirements if you haven't already


Part I provides the conceptual foundation for everything that follows. Take time to understand these core ideas, as they'll be referenced throughout the course.