At SEMICON China 2026, AMD VP and Head of Corporate Strategy and Partnerships Mario Morales delivered a stark warning: while the AI market is rapidly expanding toward a projected US$1.7 trillion valuation, the ultimate constraint may not be silicon capability — but electricity.
To illustrate the pace of adoption, Morales compared AI to the early internet era. While it took roughly 10 years for the internet to reach one billion users, AI achieved that milestone in a fraction of the time following its mainstream breakthrough in late 2022. Looking ahead, AMD projects AI could surpass 5 billion users by 2030.
This rapid adoption is fueling an unprecedented surge in demand for computing power across cloud service providers and the broader supply chain, with the semiconductor industry emerging as a primary beneficiary.

Credit: Digitimes
Data cited at the event shows the global semiconductor market is expected to grow from US$772 billion in 2025 to US$975 billion in 2026, representing 26.3% year-over-year growth. Morales further indicated the market could exceed US$1.2 trillion in the near term and reach US$1.7 trillion over AMD's forecast horizon.
The 700GW constraint and rising infrastructure costs
This growth trajectory, however, carries a significant physical constraint. Morales warned that meeting AI-driven compute demand by 2030 would require more than twice today's global power capacity, with total demand reaching approximately 700 gigawatts.
He underscored the scale of investment required, noting that each gigawatt of power corresponds to roughly US$25 billion in facilities — positioning energy availability as a central constraint for the next phase of AI expansion.
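Taken together, the two figures cited above imply a striking total buildout cost. A quick back-of-envelope calculation (our arithmetic, not a figure stated in the talk):

```python
# Back-of-envelope: implied infrastructure investment for the projected
# AI power demand, multiplying the two figures cited by Morales.
power_demand_gw = 700        # projected AI-driven power demand by 2030, in gigawatts
cost_per_gw_usd = 25e9       # roughly US$25 billion in facilities per gigawatt

total_investment_usd = power_demand_gw * cost_per_gw_usd
print(f"Implied buildout: ${total_investment_usd / 1e12:.1f} trillion")  # $17.5 trillion
```

At roughly US$17.5 trillion, the implied investment would be an order of magnitude larger than the projected US$1.7 trillion semiconductor market itself.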
Capex surge reflects industry urgency
Despite these constraints, major technology companies are continuing to accelerate capital spending. Global capex reached approximately US$447 billion in 2025 and is expected to rise to US$650 billion in 2026, before surpassing US$720 billion in 2027.
This investment wave is closely tied to the industry's transition from AI training to inference, where models are deployed at scale across a wide range of applications. While companies such as OpenAI and Anthropic are seeing strong revenue growth — particularly from enterprise use cases like AI coding tools — they continue to operate under heavy cost pressure from infrastructure buildout.
Efficiency gains may accelerate demand
Morales also pointed to the "Jevons paradox," noting that improvements in efficiency do not necessarily reduce resource consumption. As AI systems become more efficient and model costs fall roughly tenfold each year, usage is expected to increase rather than stabilize.
The growing availability of open-source models is further accelerating this trend. These models are reaching the market within months of proprietary alternatives, narrowing the performance gap and enabling broader adoption.
Physical AI expands the opportunity set
Looking further ahead, Morales highlighted "physical AI" as the next major growth frontier. This includes applications across the global labor market, estimated at US$40 trillion, as well as sectors such as transportation, with more than 1.2 billion vehicles worldwide.
He noted that this shift will expand demand beyond GPUs, with CPUs and NPUs playing a more significant role in supporting agentic AI systems and distributed workloads. The physical AI opportunity alone is expected to exceed US$200 billion.
Supply chain scaling under pressure
Meeting this demand will require sustained expansion across the semiconductor supply chain. The foundry segment is projected to grow at a 17% annual rate, surpassing US$350 billion. Meanwhile, the memory market is expected to rise sharply from US$151 billion in 2025 to US$464 billion in 2026, reflecting both rising bit demand and increasing architectural complexity driven by AI workloads.
The bottleneck shifts from compute to power
Morales' remarks point to a broader shift in how constraints are defined in the AI industry. While recent competition has centered on compute performance and chip capabilities, the next phase is increasingly shaped by infrastructure availability — particularly power.
The projected requirement of 700 gigawatts highlights a structural imbalance between AI demand and global energy capacity. As adoption accelerates toward billions of users, the ability to secure sufficient power and deploy infrastructure at scale may become a key differentiator across regions and companies.
At the same time, the Jevons paradox suggests that efficiency gains will not ease this pressure. Lower costs and wider accessibility are more likely to drive further expansion in usage, reinforcing demand for both compute and energy.
The implication is clear: the AI race is entering a new phase. Beyond advances in silicon, competitiveness will increasingly depend on access to power, capital investment capacity, and the ability to scale infrastructure in parallel with demand.
Article edited by Jerry Chen

