
Make calls, take care of bills, book a table at the restaurant, send emails, manage your calendar and check in for your flights: are we closer to artificial general intelligence (AGI)?
Generative artificial intelligence has entered everyday life at a speed that’s hard to ignore since the arrival of ChatGPT about three years ago.
Technology has fueled a rush to invest in companies in the sector and has infiltrated search engines, mobile phone interfaces, and the daily lives of hundreds of millions of people.
However, there are complaints. One is that asking a chatbot to write text, generate an image, or organize information can sometimes save time, but in certain cases it requires a new layer of work — writing instructions, adjusting requests, correcting your “hallucinations.” And it’s frustrating.
In fact, it is so frustrating for the user that the AI’s next bet is to agentic AIdesigned to perform tasks more autonomously, with less human intervention and, in theory, with results closer to what is expected from an “always on” assistant. Instead of limiting itself to answering questions, this type of system seeks to take action: navigate pages, move files, fill out forms, organize messages, make appointments and handle payments.
It was in this context that what appears to be the first “real world” example of this approach was born: .
An AI agent with access to your accounts
OpenClaw — which started as ClawdBot — presents itself as an AI agent capable of taking over day-to-day tasks if you are given sufficient permissions. By gaining access to computer files and email and social media accounts, the system is now able to execute user requests without having to constantly stop to ask for authorization, request passwords or wait for confirmations. The technology is built on the Claude Codea version of the company’s model Anthropicalthough it can be configured to use other AI models, if the user prefers, according to .
The project was developed by software engineer Peter Steinberger and launched at the end of November. But how does it work in practice?
Speak to the IA agent via WhatsApp or Telegram
OpenClaw runs on a user’s computer — or on a virtual private server — and connects messaging apps like WhatsApp, Telegram or Discord to an “agent” with programming and automation capabilities.
The logic is to transform the conversation into a command center: the person sends a message with what they want (“organize this folder”, “summarize what happened in the group”, “book a dinner”), the message is forwarded to the AI agent, which tries to interpret the request and then executes actions on the device where it is installed.
These actions can include searching for files, running scripts, editing documents, automating tasks in a browser and then returning to the user a summary of what was done, again via chat: the user asks, the agent works and responds with results.
But for this to be possible, OpenClaw must always be active. Therefore, many users choose to install the system on dedicated equipment, which is connected 24 hours a day.
Organize files or… book a table at the restaurant
The promise of OpenClaw is to bring together, in a single “assistant”, work and personal life tasks that are normally spread across several apps and services.
One of the examples cited by the BBC is the ability of this agent to manage busy groups in the WhatsApp: instead of reading dozens or hundreds of messages, the user requests a summary of the essentials — it is AI that defines what requires attention and what can be ignored. Some users will have given OpenClaw a “voice” and a number to call restaurants and try reserve tables.
There are also applications linked to household finances and bureaucracy. OpenClaw could search for better prices from suppliers and comparison sites, and automate changes through the browser to reduce expenses. If you have access to email, you can archive invoices, organize bills, and even set up payments on the user’s behalf.
In the professional world, the same logic extends to preparation of materials. If the user has, for example, a presentation to give, they can ask OpenClaw to work on a first version while they sleep, returning a draft in the morning — potentially ready to be presented, depending on the quality of the initial “briefing”.
Convenience with high risk
OpenClaw becomes more powerful the more access you have. And that’s where concerns arise: the speed and autonomy that make it attractive depend on a almost complete trustsince the system needs to touch sensitive accounts and data without asking permission at every step.
If something goes wrong — whether due to failures in the system itself or external attacks — everything entrusted to you could be exposed. Attacks from prompt injectionin which an agent is manipulated through malicious instructions hidden in seemingly innocent content, leading him to perform dangerous actions, is something no one wants. Another risk is that of intrusion if the system is on a poorly protected virtual server.
Furthermore, security researchers have already demonstrated that it is possible to trick OpenClaw into performing potentially harmful actions through prompt engineering — they may instruct you to delete files. Because many AI systems do not reliably distinguish between legitimate commands and dangerous commands, the boundary between useful and destructive can be easily crossed in these cases.
And when the agents start talking to each other?
The bigger risk may be less in what OpenClaw does today and more in the trajectory it can open up. Several OpenClaws have started using a own social networkcall , to interact with each other. The space would work in a similar way to Reddit: agents post “thoughts”, ask for advice on practical problems (such as solving a difficult task assigned by the user) and even on personal issues, including how to deal with abuse from their owners.
Humans can follow conversations but not participateas you already noticed this month.
For some, this sounds like a harbinger of a step towards the so-called artificial general intelligence (AGI)the point at which artificial intelligence equals or surpasses human intelligence. For others, it could be staging: Maybe agents are just “acting out” how they imagine we would expect them to act if we gave them their own online space.
Whatever the interpretation, Moltbook is being seen as a sign of enormous change: an era in which AI agents not only perform isolated tasks, but communicate with each other to coordinate, organize and optimize the user’s life.