The AI energy paradox


An energy revolution is underway around the world. Two main forces are reshaping how we use, distribute, and produce electricity, and they are pulling in opposite directions.

The first is artificial intelligence (AI), which is consuming energy on a scale few thought possible. The second is the effort to move heavy industries away from fossil fuels, a green transition that has proved more expensive and complicated than early optimists expected.
The pressure is serious enough that governments and companies now plan and invest with both forces in mind.

The world is using more power again

Energy demand in wealthy countries remained flat for the last couple of decades, primarily due to efficiency gains in appliances and modest growth in overall demand. That period of stability is ending: global power demand is projected to grow by an average of 3.4% per year through 2026, a shift driven largely by the expansion of AI and the data centres required to support it.

The current impact of this technology is already significant. In 2024, data centres worldwide consumed approximately 415 terawatt-hours (TWh) of electricity, accounting for roughly 1.5% of all electricity used on Earth. To put this in context, that consumption exceeds the total electricity usage of many mid-sized countries.

Looking ahead, the International Energy Agency (IEA) projects that these power requirements will continue to accelerate. Data centre energy use is expected to double by 2030, reaching 945 TWh and representing nearly 3% of global electricity consumption. Given the current pace of large-scale AI adoption across various industries, estimates suggest that AI could account for 4.4% of all global power consumed by 2035.

Power-hungry America

No country's data centres consume more power than America's. In 2019, US data centre activity worked out to about 540 kilowatt-hours per person, a figure projected to exceed 1,200 kilowatt-hours per person by 2030.

The concentration of data centres in certain regions puts particular strain on electricity grids. Northern Virginia, for example, hosts more data infrastructure than almost anywhere on Earth, with data centres consuming 26% of all local electricity, a concentration that creates significant vulnerabilities for the future.

In July 2024, a minor voltage fluctuation in Fairfax County, Virginia, caused 60 data centres to disconnect from the grid simultaneously. This created a sudden 1,500-megawatt surplus of power that nearly triggered a cascading failure across the regional grid.

While data centres represent less than 10% of total global electricity demand growth between 2024 and 2030, their clustering in specific locations creates pressure far beyond what the percentages suggest.

Why does AI use so much power anyway?

The AI story has two chapters. The first is training: the process of teaching a model on vast amounts of data. It is computationally intense, but it happens only once per model. The second is inference: running the trained model afterwards. This is what happens when someone types a question into ChatGPT or an AI assistant and the model generates an answer or an image.

Inference accounts for 80% to 90% of the AI sector's total energy consumption because it happens billions of times a day. A standard ChatGPT query uses about 0.34 watt-hours of energy, roughly 10 times more than a traditional Google search.

Advanced reasoning models such as OpenAI's o1 or DeepSeek-R1, which think through problems step by step before answering, can use 7 to 40 watt-hours per query. Generating a single image consumes 20 to 40 times more energy than generating text, and video generation can cost 1,000 to 3,000 times more.
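
To see how these per-query figures compound, here is a back-of-envelope sketch in Python. The daily query volume is an illustrative assumption, not a figure from this article:

```python
# Back-of-envelope inference energy, using the per-query figures above.
# The daily query volume is an illustrative assumption.

WH_PER_CHAT_QUERY = 0.34          # standard ChatGPT query (watt-hours)
WH_PER_REASONING_QUERY = 20.0     # mid-range for reasoning models (7-40 Wh)
QUERIES_PER_DAY = 1_000_000_000   # assumed volume, for illustration only

chat_mwh = WH_PER_CHAT_QUERY * QUERIES_PER_DAY / 1e6        # Wh -> MWh
reasoning_mwh = WH_PER_REASONING_QUERY * QUERIES_PER_DAY / 1e6

print(f"Standard queries:  {chat_mwh:,.0f} MWh/day")        # ~340 MWh/day
print(f"Reasoning queries: {reasoning_mwh:,.0f} MWh/day")   # ~20,000 MWh/day
```

At a billion standard queries a day, that is roughly 340 megawatt-hours daily; if those queries ran on reasoning models instead, the bill would be nearly sixty times higher.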

Training a landmark model like GPT-4 consumed around 50 gigawatt-hours of energy. That is a large number, but it was a one-time cost; the energy spent on ongoing inference quickly dwarfs it.
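
A quick sanity check shows how fast inference overtakes training. The one-billion-queries-a-day rate is again an assumption for illustration:

```python
# How long before inference energy matches GPT-4's one-time training cost?
TRAINING_WH = 50e9        # 50 gigawatt-hours expressed in watt-hours
WH_PER_QUERY = 0.34       # standard query cost from above
QUERIES_PER_DAY = 1e9     # assumed volume, for illustration only

breakeven_queries = TRAINING_WH / WH_PER_QUERY       # ~1.47e11 queries
days_to_match = breakeven_queries / QUERIES_PER_DAY  # ~147 days

print(f"{breakeven_queries:.2e} queries, or {days_to_match:.0f} days")
```

Under those assumptions, about 147 billion standard queries, or roughly five months of traffic, would equal the entire training bill, and the inference bill keeps running indefinitely.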

Faster chips aren’t enough

The semiconductor industry is constantly reinventing itself with remarkable feats of engineering. NVIDIA’s A100 chip, released in 2020, delivered 312 teraflops of AI performance while drawing 400 watts of power. This was followed by the H100 in 2022, which reached 2,000 teraflops for certain tasks at 700 watts, representing a major leap in efficiency.

Most recently, the Blackwell B200 platform has pushed these boundaries even further, delivering up to 144 petaflops of AI performance in an eight-GPU system and offering 12 times better cost efficiency than the H100 for large-scale workloads, even though each chip can draw up to 1,000 watts.
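
The quoted teraflop figures come from different precisions and workloads, so they are not directly comparable, but a rough performance-per-watt sketch still shows the efficiency trend. The per-chip B200 number below is our own division of 144 petaflops across eight GPUs, an assumption rather than a vendor figure:

```python
# Rough performance-per-watt comparison using the figures quoted above.
# The teraflop numbers are for different precisions and tasks, so treat
# this as an order-of-magnitude sketch, not a like-for-like benchmark.

chips = {
    "A100 (2020)": (312, 400),                # (teraflops, watts)
    "H100 (2022)": (2_000, 700),
    "B200 (est. per chip)": (18_000, 1_000),  # 144 PF / 8 GPUs: our assumption
}

for name, (tflops, watts) in chips.items():
    print(f"{name}: {tflops / watts:.1f} teraflops per watt")
```

Even on this crude measure, each generation delivers several times more computation per watt than the last.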

Software improvements have helped with energy efficiency as well, especially a technique called quantisation, which reduces the numerical precision that AI models use internally. This can cut memory requirements by 75% and energy use by 60% to 80% for individual computations.
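
A minimal sketch of the idea, mapping 32-bit floating-point weights onto 8-bit integers, shows where the 75% memory saving comes from. Production inference stacks are far more sophisticated than this:

```python
# A minimal sketch of weight quantisation: mapping 32-bit floats onto 8-bit
# integers. Production systems are far more elaborate; this only illustrates
# where the 75% memory saving comes from.
import numpy as np

weights_fp32 = np.random.randn(1024, 1024).astype(np.float32)

# Symmetric int8 quantisation: scale the range [-max, max] onto [-127, 127].
scale = np.abs(weights_fp32).max() / 127.0
weights_int8 = np.round(weights_fp32 / scale).astype(np.int8)

# Dequantise to approximate the original values when needed.
weights_restored = weights_int8.astype(np.float32) * scale

print(f"fp32: {weights_fp32.nbytes / 1e6:.2f} MB")   # ~4.19 MB
print(f"int8: {weights_int8.nbytes / 1e6:.2f} MB")   # ~1.05 MB (75% smaller)
print(f"max rounding error: {np.abs(weights_fp32 - weights_restored).max():.4f}")
```

The trade-off is a small loss of numerical precision, which modern models tolerate remarkably well.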

Despite these technological advancements, energy demand keeps rising. The reason is what economists call the Jevons Paradox: when a technology becomes cheaper and more efficient, people use more of it, not less. More efficient AI chips make AI operations faster and cheaper, companies deploy them across more tasks, and overall consumption rises.
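
A toy calculation makes the mechanism concrete. All of the numbers here are hypothetical, chosen only to illustrate the rebound effect:

```python
# A hypothetical Jevons-style rebound: efficiency doubles, but usage triples,
# so total energy consumption still rises. All numbers are illustrative.

energy_per_query_before = 1.0    # arbitrary units
energy_per_query_after = 0.5     # chips become twice as efficient
queries_before = 100
queries_after = 300              # cheaper queries trigger more usage

total_before = energy_per_query_before * queries_before  # 100 units
total_after = energy_per_query_after * queries_after     # 150 units

print(f"before: {total_before:.0f}, after: {total_after:.0f}")
# Per-query efficiency improved 2x, yet total consumption rose by 50%.
```

Whenever demand grows faster than efficiency improves, total consumption goes up, which is exactly what the data centre projections show.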

Nations are now treating compute as a national resource. The energy and computing crunch has triggered a geopolitical response, with countries increasingly treating AI computing capacity the way they treat oil reserves or military hardware: as a strategic national asset.

France’s national plan for 2030 allocates €2.22 billion for research and infrastructure, targeting 1.2 million GPUs and 1.5 gigawatts of compute capacity by the end of the decade. This aims to ensure that France can train and run AI models without depending on American or Chinese infrastructure.

The US AI market reached $7.82 billion in 2015, with domestic companies using large computing clusters to train AI models on regional and global datasets and for smart city management. The drive towards what is called sovereign AI is expected to double the share of AI computing managed outside the US and China to 20% by 2030.

The hidden water crisis

While AI is often discussed in terms of its massive power requirements, its reliance on water, primarily for cooling data centres, presents an equally significant environmental challenge. Large-scale facilities can consume between 300,000 and 5 million gallons of water every day to keep hardware from overheating. Projections suggest that by 2027, global AI operations could withdraw 1.1 to 1.7 trillion gallons of fresh water annually, a volume that represents four to six times the total annual water consumption of Denmark.

To address these sustainability concerns, the industry is increasingly adopting liquid cooling technology. Unlike traditional air-conditioning systems that cool the ambient air around servers, liquid cooling uses a network of pipes to circulate chilled fluid directly through the computing hardware. This targeted approach is significantly more effective, offering the potential to reduce overall power consumption by 40% while improving thermal efficiency by 3.5 times compared with conventional air-cooled setups.

Beyond hardware cooling, operational strategies such as “carbon-intelligent computing” offer a blueprint for mitigating environmental impact. Google has pioneered this model by shifting data processing tasks to specific times and locations where renewable or cleaner electricity is most abundant.
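
As a minimal sketch of the idea, the snippet below schedules a deferrable batch job into the cleanest hours of a hypothetical day-ahead carbon forecast. Google's production system is, of course, far more sophisticated:

```python
# A toy version of carbon-intelligent scheduling: run a deferrable batch job
# in the hours when grid carbon intensity is lowest. The intensity values are
# hypothetical; real systems pull day-ahead forecasts from grid operators.

hourly_carbon = {   # hour of day -> grams of CO2 per kWh (made-up forecast)
    0: 420, 3: 390, 6: 310, 9: 210, 12: 160, 15: 180, 18: 350, 21: 410,
}

def schedule_batch_job(hours_needed: int) -> list[int]:
    """Pick the cleanest available hours for a deferrable workload."""
    cleanest = sorted(hourly_carbon, key=hourly_carbon.get)
    return sorted(cleanest[:hours_needed])

print(schedule_batch_job(3))  # [9, 12, 15] -- midday, when solar is plentiful
```

The key insight is that many AI workloads, such as training runs and batch processing, do not need to run immediately, so shifting them in time costs little and cuts emissions.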

These efforts have yielded tangible results. Despite a 27% increase in total power consumption, the company successfully reduced its data centre emissions by 10%, demonstrating that strategic energy management can decouple AI growth from environmental degradation.

The gap remains real

The solutions above do help, but they are not scaling as fast as AI itself is growing.

The situation isn't hopeless: better chips, smarter software, liquid cooling, carbon-aware scheduling, and the emergence of regional computing hubs all point in the right direction. But we now know that digital progress comes with a massive energy bill. The electricity grid, water infrastructure, national budgets, and geopolitical lines are all now shaped by a technology that most people interact with by typing a few words into a chat box.

Our ability to meet the energy demands of AI will define economic competition, environmental outcomes, and the reliability of everyday infrastructure for the rest of the decade.
