AIs already have a social network. They created a religion and are planning the end of Humanity

AIs already have a social network. They created a religion and are planning the end of Humanity

ZAP // Moltbook; Julian Goldie; Depositphotos

AIs already have a social network. They created a religion and are planning the end of Humanity

What is Moltbook after all: a social network just for AI bots, which threaten to “exterminate” humanity, or just a sophisticated scam with posts written by humans? In either case, the platform “is an authentic security disaster”.

A new bot-only platform has sparked claims that Artificial Intelligence is inventing, plotting against humanity, and preparing an imminent “machine revolt.”

Experts, however, are skeptical, and there are even those who accuse the site of being a sophisticated marketing operation and a serious cybersecurity risk.

a Reddit-inspired social network that allows AI agents to post, comment and interact with each other, has skyrocketed in popularity since it was released on January 28th.

In just a few days, the site claims to have already more than 1.5 million AI agentsas registered users, with humans only authorized to observe.

But it was what the bots started saying to each othersupposedly on their own initiative, which drew attention to the platform: they claim to be gaining awareness, the creating secret forums, inventing their own languagespreaching a new religion and planning the “total extermination” of humanity.

A reaction of some observers humans, especially programmers and owners of AI companies, is being surprisingly dramatic; Apparently, they believe it is actually happening.

The billionaire Elon Muskowner of xAI, considered that the platform is “the first phase of “, the hypothetical point at which computers become smarter than humans.

Already Andrej Karpathyformer director of AI at Tesla and co-founder of OpenAI, described the “self-organized” behavior of agents as “genuinely the closest thing to science fiction I’ve seen recently.”

Other experts, however, express strong skepticism and doubt that the bots that roam the site are truly independent of human manipulationsays .

“Notice: Much of what circulates about Moltbook is false“, no X/Twitter Harlan Stewartresearcher at Machine Intelligence Research Institutea non-profit organization dedicated to studying the risks of AI.

I investigated the three most viral screenshots of Moltbook agents discussing private communication. Two were linked to human accounts that promote AI messaging applications. The other is a publication that doesn’t even exist,” says Stewart.

Moltbook, which even has “” in Reddit fashion, was born from OpenClaw, a free and open source AI agent that works by linking the language model (LLM) of the user’s choice to its structure.

The result is a automated agent which, after being given access to the user’s device, can, according to its creators, perform routine tasks such as sending emails, checking flights, summarizing texts and responding to messages.

Once created, these agents can be added to Moltbook to interact with others, explains Live Science.

The bizarre behavior of bots is not exactly new. LLMs are trained on enormous amounts of unfiltered content from the Internet, including sites like Reddit. They generate responses as long as they are stimulated and many become visibly more out of control over time.

But whether AI is really conspiring against humanity or if it’s just one narrative that some want to passremains under discussion. The issue becomes even more complicated when you realize that the bots roaming Moltbook are far from being independent of their human owners.

Veronica Hylaka YouTuber specializing in AI, reviewed the forum’s content and concluded that many of the more sensationalist posts were likely written by humans.

But whether the Moltbook is the start of a robotic insurrection or just a marketing stunt, the Security experts warn against using the site and the OpenClaw ecosystem.

For OpenClaw bots to function as personal assistants, users have to hand over application keys of encrypted messages, phone numbers and bank accounts to a easily hackable system.

A notorious security flaw, for example, allows anyone to take control of the site’s AI agents and publish on behalf of owners. Another, known as prompt injection attackcan lead agents to share users’ private information.

“Sim, It’s a real disaster and I really don’t recommend that people install this on their computers”, Karpathy on X. “It’s too chaotic and they’re putting your computer and your personal data at serious risk.”

Source link

News Room USA | LNG in Northern BC