The AV Weekly: Breakthroughs in Autonomous Computing Power and AI
Welcome to The AV Weekly. This edition focuses on the computing hardware and AI breakthroughs that are accelerating autonomous driving in 2026. The shift to end-to-end neural networks has made computing power the critical bottleneck. Here is what moved this week.
Nvidia Alpamayo Gains Automaker Adoption
Nvidia's Alpamayo foundation model, unveiled at CES 2026, continues to gain traction. Mercedes-Benz confirmed the new CLA EV will be the first production vehicle to run Nvidia's full autonomous driving stack. Nvidia describes Alpamayo as the "ChatGPT moment" for autonomous driving: a large-scale AI model trained on vast driving datasets that can be fine-tuned for specific vehicles and operating environments. Nvidia expects Alpamayo-powered robotaxis, developed with partners Uber and Lucid, on the road by 2027.
The End-to-End Compute Arms Race
End-to-end neural networks require enormous compute, both for training (in the data center) and inference (in the car). Tesla continues to expand its Dojo supercomputer clusters for training, while its in-vehicle Hardware 5 chip (AI5) targets the processing power needed to run FSD without a dedicated AI server. Waymo uses custom TPUs from Google for its training pipeline. The trend is clear: companies that cannot afford massive compute infrastructure are being squeezed out.
XPeng's In-House Turing Chip
Chinese automaker XPeng has developed its own Turing AI chip, with four chips per vehicle delivering up to 3,000 TOPS (trillion operations per second) of compute. This makes XPeng one of the few automakers designing its own silicon, joining Tesla in the vertically integrated approach. XPeng plans to deploy this hardware across three new robotaxi models launching in 2026.
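To put that headline number in perspective, here is a back-of-envelope calculation (our own arithmetic, not from XPeng's spec sheets): four chips sharing 3,000 TOPS works out to roughly 750 TOPS per chip, and at an assumed 30 FPS camera rate the vehicle has on the order of 100 trillion operations available per frame. The frame rate is an illustrative assumption, not a published XPeng figure.

```python
# Back-of-envelope compute budget for a four-chip, 3,000 TOPS vehicle.
# The TOPS and chip-count figures come from the article; the 30 FPS
# camera rate is an assumed, typical value used only for illustration.

TOTAL_TOPS = 3_000          # trillion operations per second, whole vehicle
NUM_CHIPS = 4
CAMERA_FPS = 30             # assumed sensor frame rate

tops_per_chip = TOTAL_TOPS / NUM_CHIPS
ops_per_frame = TOTAL_TOPS * 1e12 / CAMERA_FPS  # raw ops available per frame

print(f"{tops_per_chip:.0f} TOPS per chip")          # 750 TOPS per chip
print(f"{ops_per_frame:.2e} ops per camera frame")   # 1.00e+14
```

Real utilization is far lower than the peak figure, of course; marketed TOPS numbers describe theoretical throughput, not sustained performance on a deployed network.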
Smart Eye Advances Interior AI at CES 2026
Smart Eye demonstrated real-time alcohol impairment detection and an integrated driver monitoring platform at CES 2026. The system runs entirely on-device for privacy, using neural networks to analyze driver eye movement, blinking speed, and head position at 60 frames per second. This addresses the critical "handoff problem" for Level 2+ and Level 3 systems: knowing whether the human is ready to take control.
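One widely used metric built from exactly these per-frame eye signals is PERCLOS, the fraction of time the eyes are mostly closed over a sliding window. The sketch below is a minimal illustration at the article's 60 FPS capture rate, not Smart Eye's implementation; the closure threshold and window length are assumed values.

```python
from collections import deque

def perclos(eye_openness, fps=60, window_s=10.0, closed_thresh=0.2):
    """Fraction of recent frames in which the eye-openness score
    (0.0 = fully closed, 1.0 = fully open) falls below the 'mostly
    closed' threshold. Threshold and window length are illustrative."""
    window = deque(maxlen=int(fps * window_s))
    scores = []
    for openness in eye_openness:
        window.append(openness < closed_thresh)
        scores.append(sum(window) / len(window))
    return scores

# One second of open eyes, then one second of closed eyes, at 60 FPS.
frames = [1.0] * 60 + [0.05] * 60
print(f"PERCLOS after 2 s: {perclos(frames)[-1]:.2f}")  # 0.50
```

A production driver-monitoring system fuses many more signals (gaze direction, head pose, blink dynamics) than this single metric, but PERCLOS shows why a high frame rate matters: blinks last only a few frames even at 60 FPS.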
Numbers to Know
- 3,000 TOPS: XPeng's per-vehicle compute with four Turing chips
- 2027: Nvidia's target for Alpamayo-powered robotaxis with Uber and Lucid
- 60 FPS: Smart Eye's DMS camera capture rate for driver monitoring
- $4 million: Aurora's full-year 2025 revenue from commercial driverless freight
Stay Ahead of Autonomous Technology
Get the latest insights on autonomous driving safety, regulations, and technological breakthroughs. Join our community of forward-thinking transportation enthusiasts.