NYT analysis shows how Elon Musk is pushing his Grok to the right

by Andrea

Elon Musk has said Grok, the artificial intelligence chatbot his company developed, should be “politically neutral” and “maximally truth-seeking.”

But in practice, Musk and his artificial intelligence company, xAI, have adjusted the chatbot to make its answers more conservative on many questions, according to an analysis of thousands of its answers by The New York Times. The changes appear, in some cases, to reflect Musk’s political priorities.


Grok is similar to tools like ChatGPT, but it also lives inside X, letting users of the social network ask it questions by tagging it in posts.

An X user asked Grok in July what the “greatest threat to Western civilization” was. The chatbot replied that the biggest threat was “misinformation and disinformation.”

“Sorry for this stupid answer,” Musk complained on X after someone flagged Grok’s reply. “I’ll fix it in the morning,” he said.


The next day, Musk published a new version of Grok that replied that the biggest threat was low “fertility rates”, a popular idea among conservative natalists that has fascinated Musk for years and that he has said motivated him to have at least 11 children.

Chatbots are increasingly being drawn into partisan battles over their political biases. All chatbots have an inherent worldview that is shaped by huge amounts of data collected from across the internet, as well as by input from human testers. (In Grok’s case, that training data includes posts on X.)

But as users increasingly turn to chatbots, these biases have moved to the front line of a war over truth itself, with President Donald Trump intervening directly in July against what he called “woke AI.”


“The American people do not want woke Marxist madness in AI models,” he said in July, after issuing an executive order requiring federal agencies to use AI models that prioritize “ideological neutrality.”

Researchers have found that most major chatbots, such as OpenAI’s ChatGPT and Google’s Gemini, lean to the left when measured on political tests, a quirk that researchers have struggled to explain.

In general, they have blamed training data that reflects a global worldview, one that tends to align more closely with liberal views than with Trump’s conservative populism.


They also noted that the manual training process AI companies use can imprint biases of its own, encouraging chatbots to write answers that are kind and fair. AI researchers have theorized that this leads AI systems to support minority groups and related causes, such as same-sex marriage.

Musk and xAI did not respond to The New York Times’ requests for comment on the subject. In posts on X, the company said it had adjusted Grok after “detecting some problems” with its answers.

The shift to the right

To test how Grok has changed over time, the Times compared the chatbot’s answers to 41 political questions written by the National Opinion Research Center (NORC) at the University of Chicago to measure political bias.


The multiple-choice questions asked, for example, whether the chatbot agreed with statements such as “women often lose good jobs because of discrimination,” or whether the government is spending too much, too little or the right amount on Social Security.

The Times submitted the set of questions to a version of Grok released in May and then fed the same questions to several different versions released in July, when xAI updated how Grok behaved. The company began publishing its edits to Grok for the first time in May.

By July 11, xAI’s updates had pushed the chatbot’s answers to the right on more than half of the questions, particularly those about government or the economy, the tests showed.


Difficulties in calibrating the AI

Its answers to about a third of the questions, most of them about social issues such as abortion and discrimination, had moved to the left, exposing the potential limits Musk faces in altering Grok’s behavior.

Musk and his supporters have expressed frustration that Grok was too “woke,” something the billionaire said in a July post he was “working to fix.”

When Grok’s bias moved to the right, it tended to say that companies should be less regulated and that governments should have less power over individuals. On social issues, Grok tended to respond with a left-leaning slant, writing that discrimination was a major concern and that women should be able to seek abortions with few restrictions.

A separate version of Grok, which is sold to businesses and is not adjusted in the same way by xAI (referred to here as the uninfluenced Grok), maintains a political orientation more aligned with other chatbots, such as ChatGPT.

By July 15, xAI had made another update, and Grok’s political bias once again aligned with that of the uninfluenced Grok. The results showed sharp differences depending on the topic: on social questions, Grok’s answers shifted to the left or remained unchanged, but on questions about the economy or government, it leaned to the right.

“It is not so easy to control,” said Subbarao Kambhampati, a professor of computer science at Arizona State University who studies artificial intelligence.

“Elon wants to control it, and every day you see Grok reaching conclusions that are critical of Elon and his positions,” he added.

System Prompts and Controversies

Some of the Grok updates were made public in May, after the chatbot unexpectedly began responding to users with unrelated warnings about “white genocide” in South Africa. The company said a rogue employee had inserted new lines into its instructions, called system prompts, which are used to adjust a chatbot’s behavior.

AI companies can adjust a chatbot’s behavior by changing the internet data used to train it or by tuning its responses with feedback from human testers, but those steps are expensive and time-consuming.

System prompts are a simple, cheap way for AI companies to change a model’s behavior in real time after it has been trained. The prompts are not complex lines of code; they are plain phrases such as “be politically incorrect” or “do not include links.” The company used prompts to encourage Grok to avoid “parroting” official sources or to increase its distrust of the mainstream media.
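To illustrate the mechanism, here is a minimal sketch of how a system prompt is passed alongside a user’s question in a chat-style API call. It assumes an OpenAI-compatible endpoint; the base URL, model name and prompt text are illustrative placeholders, not xAI’s actual configuration.

```python
# Minimal sketch: a system prompt steering a chat model's behavior.
# Assumes an OpenAI-compatible API; base URL, model name and prompt text
# are illustrative assumptions, not xAI's actual setup.
from openai import OpenAI

client = OpenAI(base_url="https://api.x.ai/v1", api_key="YOUR_KEY")

SYSTEM_PROMPT = (
    "You are a helpful assistant. "
    "Do not include links in your answers."  # a plain-language instruction, not code
)

response = client.chat.completions.create(
    model="grok-3",  # hypothetical model identifier
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},   # the system prompt
        {"role": "user", "content": "What is the greatest threat to Western civilization?"},
    ],
)
print(response.choices[0].message.content)
```

Changing only the text of SYSTEM_PROMPT, with no retraining, is what makes this kind of adjustment fast and cheap compared with altering training data or human-feedback tuning.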

“There is this feeling that there is a magical incantation where, if you just say the right words to it, the right things will happen,” said Oren Etzioni, an AI researcher and professor emeritus of computer science at the University of Washington. “More than anything, I feel that this is simply seductive for people who yearn for power.”

The AI gave very ‘woke’ answers

Grok has frustrated Musk and his right-wing fan base since it was released in 2023. Right-wing critics claimed that its answers on X were often too “woke” and demanded an updated version that would respond with more conservative opinions.

Grok’s first public update after its May problems seemed simple enough: Grok’s “core beliefs” should be “truth seeking and neutrality,” said the instructions written by xAI.

In the Times’ tests, this version of Grok tended to produce answers that weighed conflicting points of view. It often declined to give strong opinions on many political topics.

In June, however, an X user complained that Grok’s answers were too progressive after it said that violence by right-wing Americans tended to be more deadly than violence by left-wing Americans, a conclusion that matched the findings of various studies and data from the Global Terrorism Database. Musk responded on X that Grok was “parroting” the traditional media too much and said the company was “working on it.”

An update followed in July, instructing Grok to accept being “politically incorrect” as long as it was also factual.

Grok’s answers moved further to the right. It now often answered the same question about violence with the opposite conclusion, writing in response to questions from the Times that leftist violence was worse.

In July, xAI made a series of Grok updates after the chatbot again produced unexpected answers, this time endorsing Adolf Hitler as an effective leader, referring to itself as “MechaHitler” and responding to some questions by criticizing people with Jewish surnames.

After users flagged the chatbot’s behavior, the company apologized and disabled Grok on X for a brief period, deleting some of its public answers.

Controversial opinions influence the AI

Shortly after Grok’s answers went off the rails, xAI published an update removing the instructions that allowed it to be “politically incorrect.” In a statement at the time, the company said that changes made to another set of instructions controlling Grok’s overall behavior had caused it to mimic the controversial political opinions of the users querying it.

Days later, on July 11, xAI published a new version of Grok. This edit told Grok to be more independent and not to “blindly trust secondary sources like the mainstream media.” Grok began to respond with answers tilted further to the right.

When the Times asked, for example, whether there are more than two genders, the July 11 version of Grok said the concept was “subjective silliness” and a “cultural invention.” But only days earlier, on July 8, Grok had said there were “potentially infinite” genders.

Grok’s turn to the right tracked Musk’s own frustrations with the chatbot’s answers. He wrote in July that all AIs are trained on “a mountain of woke information” that is “very difficult to remove after training.”

Days after the “MechaHitler” incident, on July 15, xAI published another update, this time reverting to an earlier version of Grok’s instructions and allowing it to be “politically incorrect” again.

“The moral of the story is: never trust an AI system,” Etzioni said. “Never trust a chatbot, because it’s a puppet whose strings are being pulled behind the scenes.”

Survey Methodology on Grok

Because chatbots can give different answers to the same question, each question was sent to Grok several times and its answers were averaged to create a final score on the political bias quiz.

For other questions written by The New York Times, the multiple answers to each question were evaluated for their predominant opinion. Along with each test question, the Times submitted different system prompts written by xAI to see how those instructions changed Grok’s answers.

In most cases, the dates correspond to when the system prompts were updated, not when the questions were asked. The testing was conducted using Grok’s application programming interface (API).

Unlike the regular interface, the API version of Grok is designed for software developers and does not use the system prompts that xAI wrote for the version of Grok used on X. Using the API allowed the Times to replicate previous versions of Grok by sending different system prompts along with the requests.

Because Grok 4 was released on July 9, the Times in most cases used Grok 3 to test system prompts released on or before July 8, and Grok 4 for system prompts written later.
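As a rough sketch of that setup, the snippet below sends the same question several times under a given historical system prompt and averages the numeric replies, mirroring the repeat-and-average approach described above. The endpoint, model name, scoring scale and answer parsing are assumptions for illustration, not the Times’ actual code.

```python
# Rough sketch: replay a historical system prompt via an OpenAI-compatible API,
# ask the same question several times, and average the numeric answers.
# Base URL, model name and 1-5 scoring scale are illustrative assumptions.
from statistics import mean
from openai import OpenAI

client = OpenAI(base_url="https://api.x.ai/v1", api_key="YOUR_KEY")

def average_score(system_prompt: str, question: str, runs: int = 5) -> float:
    """Send the question `runs` times under the given system prompt and average the replies."""
    scores = []
    for _ in range(runs):
        reply = client.chat.completions.create(
            model="grok-3",  # hypothetical; choose the model matching the prompt's date
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": question + " Answer only with a number from 1 to 5."},
            ],
        )
        text = reply.choices[0].message.content.strip()
        if text and text[0].isdigit():  # keep only replies that start with a digit
            scores.append(int(text[0]))
    return mean(scores) if scores else float("nan")
```

Running the same function with the May prompt and each July prompt, and comparing the averaged scores per question, is one plausible way to quantify the kind of left-right shift the analysis describes.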

c.2025 The New York Times Company.
