AI’s insatiable demand for compute power is smashing Moore’s law
The rapid rise of artificial intelligence (AI) is driving an extraordinary surge in demand for computing power, data centers, and energy. This demand is growing at such an unprecedented pace that it will require significant strategic shifts at both national and industry levels, according to analysis by Bain & Company.
For decades, Moore’s law – the observation that the number of transistors on an integrated circuit doubles about every two years – has been the unrivaled yardstick of technological progress. Most technological innovations launched to date have kept pace with that trajectory, but with the arrival of artificial intelligence, the model is being smashed.
AI’s computational needs – the number of computations that must be performed to support evolving models – are growing more than twice as fast as Moore’s law. As these models continue to grow and enterprise adoption widens, Bain & Company’s analysis suggests that meeting total global compute demand could require 200 gigawatts of power by 2030.
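To make “more than twice as fast” concrete, the sketch below compares two exponential curves: transistor counts doubling about every two years under Moore’s law versus compute demand assumed, purely for illustration, to double every year. The doubling periods are assumptions chosen for the comparison, not figures from Bain’s analysis.

```python
# Illustrative comparison of exponential growth rates (assumed, not Bain figures).
# Assumption: Moore's law doubles roughly every 24 months; AI compute demand is
# taken to double at least every 12 months ("more than twice as fast").

MOORE_DOUBLING_MONTHS = 24   # assumed transistor-count doubling period
AI_DOUBLING_MONTHS = 12      # assumed AI compute-demand doubling period

def growth_factor(months: float, doubling_months: float) -> float:
    """Multiple of the starting value after `months` of steady doubling."""
    return 2 ** (months / doubling_months)

for years in (2, 4, 6):
    months = years * 12
    moore = growth_factor(months, MOORE_DOUBLING_MONTHS)
    ai = growth_factor(months, AI_DOUBLING_MONTHS)
    print(f"After {years} years: Moore's law x{moore:.0f}, AI demand x{ai:.0f}")

# After 6 years the gap is 8x versus 64x: efficiency gains from silicon alone
# cannot keep pace, which is why the analysis points to more data centers and
# more power, not just better chips.
```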
The impact on data centers is enormous: around $500 billion in annual spending would be needed to meet this demand. Equally large investments will be needed in power infrastructure to meet the associated electricity demand – at a time when grids have seen relatively flat load growth for the past 20 years.

“If the current scaling laws hold, AI will increasingly strain supply chains globally,” said David Crawford, senior partner at Bain & Company. “Because AI compute demand is outpacing semiconductor efficiency, the trends call for dramatic increases in power supply on grids that have not added capacity for decades. Add the arms race dynamic between nations and leading providers, and the potential for overbuild and underbuild has never been more challenging to navigate.”
“Working through the potential for innovation, infrastructure, supply shortages, and algorithmic gains is critical to navigate the next few years.”
How do you fund $500 billion every year?
Bain & Company emphasizes that funding may become a mounting issue. With the required $500 billion in annual capital investment “far exceeding any anticipated or imagined government subsidies”, the vast majority of the funds will have to come from the private sector.
The firm’s analysis of sustainable ratios of capital expenditures (capex) to revenue for cloud service providers suggests that $500 billion of annual capex corresponds to $2 trillion in annual revenue – in other words, roughly 25 cents of capital spending for every dollar of revenue.
To put that number into perspective: even if companies shifted all of their on-premises IT budgets to the cloud, and also reinvested the savings anticipated from applying AI in sales, marketing, customer support, and R&D (estimated at about 20% of those budgets) into spending on new data centers, the total would still fall $800 billion short of the revenue needed to fund the full investment.
Even with AI-related savings, $800 billion in additional revenue would need to be generated to fund the necessary data centers
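A minimal back-of-the-envelope sketch of that arithmetic, using the figures quoted above: the ~25% capex-to-revenue ratio follows from the $500 billion / $2 trillion pairing, while the roughly $1.2 trillion of redirected IT budgets and AI savings is inferred from the stated $800 billion shortfall rather than taken directly from Bain’s report.

```python
# Back-of-the-envelope reconstruction of the funding gap (figures in $ billions).
# The capex-to-revenue ratio is derived from the $500bn / $2,000bn pairing above;
# the redirected-spend figure is inferred from the stated $800bn shortfall and is
# not quoted directly by Bain.

annual_capex_needed = 500          # annual data-center capex by 2030
sustainable_capex_ratio = 0.25     # ~25 cents of capex per dollar of revenue
revenue_needed = annual_capex_needed / sustainable_capex_ratio
print(f"Revenue needed: ${revenue_needed:,.0f}bn")    # -> $2,000bn

# Assumed new cloud revenue from shifted on-premises budgets plus AI savings
# (inferred figure, not from the report):
redirected_budgets_and_savings = 1_200
gap = revenue_needed - redirected_budgets_and_savings
print(f"Remaining revenue gap: ${gap:,.0f}bn")        # -> $800bn
```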
“By 2030, technology executives will be faced with the challenge of deploying about $500 billion in capital expenditures and finding about $2 trillion in new revenue to profitably meet demand,” noted Crawford.
Overcoming capacity bottlenecks
Even if the money can be raised, there are serious risks that capacity bottlenecks in computing power and energy will hamper AI’s adoption and scaling. “It will be difficult to build data centers fast enough to meet rising demand given constraints in four areas: power supply, construction services, compute enablers (such as GPUs), and the limited supply of data center equipment,” said Crawford.
Of these four factors, energy is likely to prove the most challenging: bringing new power generation, transmission, and distribution online in a highly regulated industry can take four years or longer.
“While no single issue will solve this deep challenge, innovation, government support, and efficient markets are all factors that could help close the gap,” Crawford concluded.

