Artificial intelligence chatbots are known to consume massive amounts of energy. But why do they use so much electricity? The answer lies in the enormous scale of these systems.
In recent years, ChatGPT has surged in popularity, with nearly 200 million users sending more than a billion requests to the application every day, alongside other applications such as Google Gemini, Grok, and Perplexity.ai.
It may seem that the answers to these requests “appear out of nowhere.” In reality, AI chatbots consume a colossal amount of energy, enough to power entire cities for a year.
In 2023, the data centers used to train and run AI models were responsible for 4.4% of electricity consumption in the United States. Globally, these centers account for about 1.5% of the world’s energy consumption.
These numbers are expected to soar, at least doubling by 2030, as demand for AI keeps growing.
“Just three years ago, we didn’t even have ChatGPT,” says Alex de Vries-Gao, a researcher in the sustainability of emerging technologies at Vrije Universiteit Amsterdam and founder of Digiconomist, a platform dedicated to exposing the unintended consequences of digital trends.
“And now we are talking about a technology that will be responsible for almost half of the global electricity consumption of data centers,” Vries-Gao emphasizes.
But what makes AI chatbots so energy-intensive? The answer lies in the enormous scale of these systems. In particular, two phases of AI consume the most energy: training and inference, explains Mosharaf Chowdhury, a computer scientist at the University of Michigan.
To train AI chatbots, large language models (LLMs) are fed huge data sets, allowing them to learn, recognize patterns, and make predictions.
In AI training, there is a general belief that “bigger is better”: larger models, which absorb more data, tend to produce more accurate predictions.
“So what happens when you try to train is that the models are now so big they do not fit in a single graphics processing unit; they do not fit in a single server,” says Chowdhury.
To get an idea of the scale, a study by de Vries-Gao, published in 2023 in Joule, estimated that a single NVIDIA DGX A100 server requires up to 6.5 kW of power. Training an LLM typically requires multiple servers, each with an average of eight GPUs, operating for weeks or months.
In total, this consumes mountains of energy: it is estimated that training OpenAI’s GPT-4 used 50 gigawatt-hours of energy, the equivalent of powering the city of San Francisco for three days.
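To see how those two figures connect, here is a minimal back-of-envelope sketch in Python. The 6.5 kW server power and the 50 GWh total come from the estimates above; the 90-day training duration, and therefore the resulting server count, are illustrative assumptions, not reported values.

```python
# Back-of-envelope estimate linking per-server power to total training energy.
# The 6.5 kW and 50 GWh figures come from the article; the 90-day training
# duration is an illustrative assumption, not a reported value.

SERVER_POWER_KW = 6.5      # NVIDIA DGX A100 server, per the Joule estimate
TRAINING_DAYS = 90         # assumed duration ("weeks or months")
GPT4_TRAINING_GWH = 50     # estimated GPT-4 training energy

# Energy one server draws over the assumed training run, in GWh.
energy_per_server_gwh = SERVER_POWER_KW * 24 * TRAINING_DAYS / 1e6

# How many servers running in parallel would be needed to reach 50 GWh.
servers_needed = GPT4_TRAINING_GWH / energy_per_server_gwh

print(f"One server over {TRAINING_DAYS} days: {energy_per_server_gwh:.4f} GWh")
print(f"Servers needed to reach {GPT4_TRAINING_GWH} GWh: {servers_needed:,.0f}")
```

Under these assumptions, a few thousand eight-GPU servers running for about three months would account for the entire 50 GWh estimate, consistent with the “weeks or months” scale Chowdhury describes.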
Inference also consumes a lot of energy. This is the stage at which an AI chatbot draws conclusions from what it has learned and generates a response to a request.
Although running an LLM once it has been trained requires considerably fewer computational resources, inference is energy-intensive because of the sheer number of requests sent to chatbots.
According to OpenAI, as of July 2025, ChatGPT users were sending more than 2.5 billion requests daily, which means multiple servers run continuously to generate immediate responses. And that is not counting other widely used chatbots, including Google’s Gemini, which, according to company representatives, will soon become the default option when accessing Google Search.
“Even in inference, you cannot really save energy,” says Chowdhury. “It is not that the data is massive. The model itself is already huge, but we also have a gigantic number of people using it.”
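To put that request volume in perspective, here is a similar rough sketch. The 2.5 billion requests per day is the OpenAI figure cited above; the energy per request is a hypothetical assumption, since published per-query estimates vary widely.

```python
# Rough illustration of how per-request energy adds up at ChatGPT's scale.
# The 2.5 billion requests/day figure comes from the article; the energy
# per request is a hypothetical assumption (published estimates vary widely).

REQUESTS_PER_DAY = 2.5e9   # OpenAI figure cited in the article, July 2025
WH_PER_REQUEST = 0.3       # assumed energy per request, in watt-hours

daily_mwh = REQUESTS_PER_DAY * WH_PER_REQUEST / 1e6   # Wh -> MWh
annual_gwh = daily_mwh * 365 / 1e3                    # MWh -> GWh per year

print(f"Daily inference energy: {daily_mwh:,.0f} MWh")
print(f"Annual inference energy: {annual_gwh:,.0f} GWh")
```

Even under these modest assumptions, a single year of ChatGPT inference would consume several times the estimated energy of training GPT-4, which is why inference, not training, comes to dominate once a model is in wide use.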
Conclusion: the blame is (also) yours.