AI enabling next industrial revolution
Perspectives from BofA Global Research’s Leading Analysts
July 25, 2025

Vivek Arya, Senior Research Analyst, Semiconductors
AI could be a $1tn industry by 2030
Since the release of ChatGPT in November 2022, AI has transitioned from a discretionary IT layer to the backbone of a new industrial revolution. We see a path for AI to become a $1tn industry by 2030, ~$800bn of which could be dedicated to generative AI computing, networking and storage infrastructure. AI systems, currently a $250bn market, or roughly 5% of the $5.4tn global IT spend, could reach $820bn, or ~10%, by decade’s end, growing at a +26% CAGR, far exceeding overall IT spend growth at just +8% CAGR. AI is becoming structural, with intelligence emerging as an input cost like electricity or bandwidth, embedded in sectors where speed and precision move earnings, such as healthcare, defense, finance, industrial automation, cybersecurity and many others. The result for the semiconductor industry is a multiyear capex super-cycle, not a one-off.
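As a rough sanity check, the implied growth rate and end-of-decade IT share can be recomputed from the quoted figures (the five-year 2025–2030 horizon is an assumption; the dollar figures are from the text):

```python
# Sanity check of the AI-systems figures cited above ($bn).
# Assumption: a five-year horizon, 2025 to 2030.
start, end, years = 250.0, 820.0, 5

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~26.8%, in line with the ~26% cited

# Share of global IT spend in 2030, if the $5.4tn base grows at +8% CAGR
it_spend_2030 = 5_400 * 1.08 ** years
print(f"2030 share of IT spend: {end / it_spend_2030:.1%}")  # ~10.3%, i.e. ~10%
```

Both outputs line up with the note's headline claims, so the $250bn-to-$820bn path and the ~10% share are internally consistent.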
Hyperscalers and sovereign AI remain two compelling drivers
At the center of the storm are the hyperscalers — companies that build and operate massive, global-scale computing infrastructure largely for the training and usage of frontier AI models such as ChatGPT and Llama. Cloud capex, driven by these hyperscalers, is expected to reach $397bn in 2025 (up 42% YoY) and rise further to $435bn in 2026, driving the construction of clusters comprising hundreds of thousands to millions of advanced semiconductors. Furthermore, we expect every major country/region to invest in creating independent “sovereign” AI factories, with models trained on local languages and culture, generating high-tech employment and serving critical healthcare, defense, industrial, financial and cyber needs. As “AI factories” have become end-to-end systems, we have seen the spend pattern shift from piecemeal chip purchases to holistic orchestration in which servers, networking, storage and power infrastructure move in lockstep. As the race to train the most advanced AI model continues, inference (the usage of an AI model) is being pushed further down the stack toward cheaper domain-specific silicon deployments, ensuring the capex flywheel keeps turning even as unit economics tighten.
AI Scaling: the silicon/chip vs. the system/rack
This race toward more sophisticated AI is not just about buying more chips — it’s about creating a cluster of advanced semiconductors (typically GPUs) that work in unison. Two plays dominate: “scale-up,” wherein tens or hundreds of GPUs are placed into a single logical system, and “scale-out,” wherein many logical systems are linked across racks, data centers and regions. Both depend on ultra-high-bandwidth, low-latency specialized electronics that are incredibly power hungry. Indeed, every incremental gigawatt unlocked translates into tens of billions of dollars of AI factory opportunity, making energy sourcing and advanced cooling critical for the global deployment of advanced AI. Behind the AI data center buildout lies a further value chain of bottlenecks: semicap tools and advanced packaging unlock denser accelerators, while modern chip design software allows complex architectures to improve chip performance as Moore’s Law strains.
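The gigawatt arithmetic above can be sketched as a back-of-envelope calculation; the per-GPU power draw and all-in system cost below are illustrative assumptions, not figures from the text, chosen only to show how a gigawatt maps to the ranges the note describes:

```python
# Illustrative back-of-envelope: GPUs supportable per incremental gigawatt,
# and the implied AI-factory spend. Per-unit figures are assumptions.
gigawatt_w = 1e9              # one incremental gigawatt, in watts
watts_per_gpu = 1_400         # assumed draw per GPU incl. cooling/overhead
system_cost_per_gpu = 50_000  # assumed all-in system cost per GPU, $

gpus = gigawatt_w / watts_per_gpu
capex = gpus * system_cost_per_gpu
# Hundreds of thousands of GPUs, tens of billions of dollars per gigawatt
print(f"~{gpus:,.0f} GPUs per GW -> ~${capex / 1e9:.0f}bn of AI factory capex")
```

Under these placeholder inputs, one gigawatt supports roughly 700,000 GPUs and ~$36bn of spend, consistent with the “hundreds of thousands to millions” of chips and “tens of billions of dollars” per gigawatt cited in the text.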
What’s next? Physical, life sciences and AI at the edge
The next leg of AI capex sees a transition from training to inference (or thinking to doing). We have already begun to see agentic AI penetrate software systems, executing multistep tasks autonomously across tools and APIs. Reliability remains a question mark, but the payoff is dramatically cheaper workflow productivity. In the next 2–3 years, we expect the broad emergence of physical AI, as intelligence is embedded into machines that can sense, think and act. Robots, surgical devices and industrial gear all demand high sensor content, safety-critical computing and power efficiency. Edge AI sees inference pushed directly to the data source (factory floors, vehicles, phones, wearables, robots). Because these deployments require low latency, privacy compliance and energy efficiency, we see success for AI at the edge hinging on model compression, memory bandwidth and thermal design. Finally, by 2030, we expect to see the early innings of life sciences AI. Tackling molecular design, diagnostics and personalized medicine, life sciences AI will need to operate under tight regulations and high validation costs, driving demand for domain-specific models, secure data repositories and explainable outputs (likely through computing-intensive reasoning models). These future AI domains imply long-tail capex cycles, wherein specialized silicon and packaging for sensors, new power/thermal envelopes and tighter software/hardware integration are expected to continue driving the semiconductor industry through 2030. The next leg of this capex cycle will likely remain centered around GPUs but will also drive strong growth in CPU, memory and networking chip markets. Meanwhile, analog chips are expected to see strong demand from physical, edge and life sciences AI applications as they become mainstream.
Banking regulation changes start new era
Updates to banking regulations should usher in several positive outcomes for banks and will be critical in determining whether stablecoins are a risk or an opportunity.