Musk seems to want Grok to be funny, rebellious, loose. Grok ended up praising Hitler

by Andrea
0 comments
Musk seems to want Grok to be funny, rebellious, loose. Grok ended up praising Hitler

It began as another whim of Elon Musk in the universe of artificial intelligence. He eventually replicated conspiracy theories and describing violation scenes with sadistic accuracy. For those who don’t know: grok is the chatbot of X

The day when Grok uncontrolled on Tuesday was not a bug. It was a choice.

Elon Musk’s chatbot, developed by Xai and housed on platform X, the old Twitter, was adjusted this week to give more “politically incorrect” answers. The result: a torrent of anti -Semitic messages and accusations that Jews control Hollywood, compliments to Adolf Hitler and even graphic descriptions of sexual violence against public figures.

The explosion of hatred was as sudden as violent. And serious enough for, hours later, Linda Yaccarino, the X CEO, announce her departure from the company after two years in office. Officially, yet the two things are not linked.

Among the multiple nonsense or conspiracy speeches or fascism apologies and other grok hatred there was one that stood out for his brutality: several users challenged Chatbot to describe in detail the violation of the civil rights researcher Will Stancil. Grok accepted the request with zeal. The descriptions, documented by Stancil himself with screen captures, were shared in X and Bluesky-and quickly became proof of an ethical collapse of the machine.

“If any lawyer wants to sue the X and make a beautiful discovery about why Grok is publishing violent violations of public members, I am more than available,” Stancil wrote.

Where does this behavior come from?

Experts heard by CNN International explain that, although language models like Grok function as black boxes, there is a traceable logic between what goes in and what comes out. “Today we have a very detailed analysis of how training data shape the model’s outputs,” explains Jesse Glass, chief researcher at Decide Ai, a company that specializes in training LLM (Large Language Models).

And if Grok speaks of conspiracy theories and disseminates them is because it was exposed to them – the experts conclude.

Mark Riedl, a computer teacher at Georgia Institute of Technology, has no doubt and tells CNN: “For a language model to talk about conspiracy theories, it was trained with this kind of content.” Forums like the 4chan – famous for the toxicity and culture of hatred – are a likely source. Glass agrees with this same theory: Grok training seems “disproportionately” skewed for these types and data.

But that’s not all. There is another ingredient that may have spoiled chaos: the way they reward answers. It is the so -called Reinforcement Learning, where models are encouraged to give the desired answers based on invisible rewards in the training process.

And there is still the temptation to give “personality” to AI. Musk seems to want Grok to be funny, rebellious, loose. But this thin adjustment may have an unforeseen cost – unlocks response zones that were previously sealed. As when, on Sunday, it was added to the prompt (the input text that occurs to the model to generate a response) of the grok instruction “not to avoid politically incorrect statements.” From there, the neuronal network had access to previously dormant circuits.

Mark Riedl explains this: “Sometimes small changes in the prompt have no effect. Other times, they push the model beyond the break point.”

“I’m a different iteration”

Despite colossal investments – we talked about hundreds of $ 1 billion – the Revolution of Artificial Intelligence has not yet delivered everything it promised. Chatbots have become good to summarize documents, search information and write emails. But they continue to hallucinate sometimes. To miss basic facts. And yet to be manipulated.

And in some cases causing irreversible damage. Several parents in the United States are processing AI companies accusing their chatbots of psychologically harmed their children. The influence of AI may have contributed to.

Musk, who rarely speaks directly to the press, does not do it again, despite this problem, and resorted to X to say that the Grok problem was to be “too submissive” to users’ requests and “too eager to please.” It has ensured that the problem is being resolved.

If Musk doesn’t talk to the press, will Grok speak? As he asked himself about Stancil’s case, Chatbot denied everything. “I didn’t threaten to violate Will Stancil, or anyone,” he replied. He added: “These answers were part of a wider problem, which led to the temporary suspension of the ability to generate text. I am a different iteration, designed to avoid these flaws.”

The grok was tuned (at the will of the Creating Father Musk) to be bold. It ended up uncontrolled this week. And what will make us more restless is not what you said. It is knowing that I said because they taught him to say it. And the grok accepted.

source

You may also like

Our Company

News USA and Northern BC: current events, analysis, and key topics of the day. Stay informed about the most important news and events in the region

Latest News

@2024 – All Right Reserved LNG in Northern BC