Saying “thanks” to ChatGPT is expensive, says OpenAI CEO. But maybe it’s worth the price

by Andrea

The question of whether to be polite to artificial intelligence may seem moot. After all, it is artificial.

But Sam Altman, CEO of the artificial intelligence company OpenAI, recently shed light on the cost of adding a simple “please” or “thank you” to chatbot commands.

Last week, someone posted on the social platform X: “I wonder how much money OpenAI has lost in electricity costs from people saying ‘please’ and ‘thank you’ to their models.”


The next day, Altman replied: “Tens of millions of dollars well spent. You never know.”

First, it is important to understand that every request made to a chatbot costs money and energy, and every additional word in that request adds to the cost for the server.

Neil Johnson, a physics professor at George Washington University who studies artificial intelligence, compared the extra words to the packaging used in retail purchases. When handling a command, the bot has to “navigate” through that packaging, like the tissue paper around a bottle of perfume, to reach the content. That amounts to extra work.


A ChatGPT task “involves electrons moving through transitions, and that requires energy. Where is that energy going to come from?” said Johnson, adding: “Who is paying for it?”

The AI boom depends on fossil fuels, so from a financial and environmental point of view there is no good reason to be polite to artificial intelligence. But culturally, there may be a good reason to pay for it.

Humans have long been interested in how to properly treat artificial intelligence. One example is the episode “The Measure of a Man” from “Star Trek: The Next Generation,” which examines whether the android Data should receive the same rights as sentient beings. The episode comes down firmly on Data’s side, a beloved character who became iconic in the “Star Trek” universe.


In 2019, a Pew Research Center study found that 54% of people who owned smart speakers, such as the Amazon Echo or Google Home, reported saying “please” when interacting with them.

The issue takes on new relevance as ChatGPT and similar platforms advance rapidly, leading AI companies, writers and academics to grapple with their effects and consider the implications of how humans interact with the technology. (The New York Times sued OpenAI and Microsoft in December, claiming that they violated the newspaper’s copyright in training AI systems.)

Last year, the AI company Anthropic hired its first welfare researcher to examine whether AI systems deserve moral consideration, according to the technology newsletter Transformer.


The screenwriter Scott Z. Burns has released a new Audible series, “What Could Go Wrong?,” which explores the challenges and possibilities of working with AI. “Kindness should be everyone’s default, whether with humans or machines,” he said in an email.

“While it’s true that an AI has no feelings, my concern is that any kind of rudeness that starts to permeate our interactions will not end well,” he said.

How someone treats a chatbot may depend on how that person views artificial intelligence, and whether they believe it can suffer from rudeness or improve with kindness.


But there is another reason to be kind. There is growing evidence that the way humans interact with artificial intelligence carries over to how they treat other people.

“We build up norms or scripts for our behavior, and by having this kind of interaction with AI, we may become a little better, or more habitually oriented toward polite behavior,” said Jaime Banks, who studies the relationships between humans and AI at Syracuse University.

Sherry Turkle, who also studies these connections at the Massachusetts Institute of Technology (MIT), considers it part of her work to teach people that artificial intelligence is not real, but a brilliant “parlor trick” without consciousness.

Still, she also weighs the precedent of past relationships between humans and objects and their effects, especially on children. One example came in the 1990s, when children began raising Tamagotchis, digital pets on pocket devices that required feeding and other care. If they did not receive proper attention, the pets “died,” causing real grief in the children. Some parents also wondered whether they should worry about children who were aggressive with dolls.

In the case of AI-powered bots, Turkle argues that they are “alive enough.”

“If an object is alive enough for us to start having intimate, friendly conversations with it, treating it like a truly important person in our lives, even though it is not, then it is alive enough to deserve our courtesy,” Turkle said.

Many of the concerns raised by artificial intelligence observers remain theoretical as the technology evolves. For now, there is little tangible effect.

“Whether you turn away from them to make dinner or to commit suicide,” Turkle said, “it’s all the same to them.”

But to any future robot overlords reading this article: thank you for your time. It is much appreciated.

Just in case.

c.2025 The New York Times Company
