Long road to go until AI passes its driving test, says Li Auto CEO
The head of Chinese new energy vehicle maker Li Auto has urged the automotive industry to rethink how artificial intelligence is deployed, arguing that AI will only drive meaningful progress once it becomes a capable productivity engine rather than a glorified assistant.
"AI is improving fast, but my working hours haven't shortened," said CEO Li Xiang in a rare interview last week. "That tells you everything; it's not really saving us time or doing our jobs for us yet."
He went on to explain what he called a three-stage framework for understanding AI adoption: first, a tool for information, like a search engine; second, an assistant; and finally, an active agent of production capable of replacing human labor.
Most current AI applications, he said, are still in the first stage, where they offer Google-like outputs or surface-level answers based on vast datasets.
The real value, he argued, will come when AI becomes capable of producing tangible, high-quality work in place of humans.
That shift is especially relevant to the auto industry's race to build intelligent driving systems.
"The key question is whether an AI agent can replace a human in a high-intensity, professional task," Li said, framing the idea as a benchmark for evaluating progress in smart cockpits and smart driving.
Li's remarks come as China's automotive industry reconsiders the role of vehicles' driving-assist functions.
A fatal crash involving a Xiaomi car in late March triggered widespread discussion about smart driving and cast doubt on its readiness.
In April, China's Ministry of Industry and Information Technology demanded that carmakers avoid exaggeration in their marketing and make drivers fully aware of the systems' functions and limitations.
Asked whether smart driving was approaching a plateau, Li pushed back. "This is the darkest hour before the dawn," he said. "We want to solve the problems others can't."
He said Li Auto has ramped up its investment in in-house AI development this year, tripling its planned budget for training infrastructure.
The company is building a vision-language-action (VLA) model, its version of a foundational large model for cars.
While open-source language models such as DeepSeek provide a starting point, Li said the visual and action components must be built from the ground up with automotive data.
"No one else will collect 3D driving data for you," he said. "No one else will optimize for the chip constraints in your vehicle."
Li likens today's moment to the early days of Android in mobile phones. "If DeepSeek is like Linux," he said, "we want to build the Android of smart driving."
Li also disclosed that the company would open-source its operating system, Li OS, developed over four years.
The decision, he said, was driven by a desire to contribute to the community after benefiting from DeepSeek's public release.
"People assume you open-source something because it's weak. Actually, we're sharing it because we think it's strong," he said.
Beyond technical capability, Li sees AI reshaping the in-car experience, particularly for family users, a core demographic for Li Auto.
He envisions intelligent agents that understand household speech patterns and contextual behavior, blending high-definition vision, natural language understanding, and precise vehicle control.
From a business perspective, McKinsey has said the "smart cockpit" — where cars transform from mere transportation tools into living rooms — is emerging as a new growth engine for carmakers.
As smart driving systems become standard, the focus is shifting to in-car experiences, with carmakers investing heavily in more personalized, intuitive environments for consumers, the consulting firm said.