As automation expands from controlled indoor environments into more complex and unpredictable outdoor settings, the ability of vision-based perception systems to operate reliably under harsh conditions has become a critical factor in the development of Physical AI.
Huawei on March 4 unveiled what it said is the world's first mass-produced 896-channel automotive LiDAR, a sensor designed to significantly increase perception resolution for advanced driver-assistance and autonomous driving systems.
Despite Nvidia's debut of the Alpamayo self-driving AI architecture at CES 2026, Mercedes-Benz, the architecture's earliest adopter, has instead shifted its focus to enhanced Level 2 (L2+) systems, removing L3 autonomous-driving functionality from the refreshed version of its flagship S-Class in late January.
At CES 2026, the global auto industry's conversation has shifted. The focus is no longer confined to the aspirational language of software-defined vehicles (SDVs); increasingly, it is on the physical limits those ambitions must confront. Battery-electric vehicles are often cast as the most natural embodiment of this future. Yet quietly, and perhaps more consequentially, vehicles powered by internal combustion engines are running up against a harsh and largely irreversible constraint of their own: the physics of computing.
At a media briefing on January 6, Nvidia's chief executive, Jensen Huang, offered further details on the safety design and real-world operating conditions of the company's newly unveiled autonomous-driving platform, Nvidia Alpamayo, as questions mount over how quickly such systems can move from demonstration to everyday use.


