“Herd effect” is leading AI to a dead end, technology pioneer warns

SAN FRANCISCO — Over a 40-year career as a computer scientist, Yann LeCun became a global authority on artificial intelligence, and one known for never shying away from provocation.

He is one of three pioneering researchers to receive the Turing Award, often called the “Nobel of computing,” for work that underpinned modern AI. For more than a decade, he also served as chief AI scientist at Meta, owner of Facebook and Instagram.

Since leaving Meta in November, LeCun has been increasingly harsh in his criticism of what he sees as Silicon Valley’s obsession with a single path to building intelligent machines. In his view, the technology industry is heading towards a dead end in the development of AI — after years of research and hundreds of billions of dollars invested.


His central point, he says, is the same one he has argued for years: large language models, the technology behind tools like ChatGPT, have a limit to how far they can evolve. Even so, companies are betting everything on projects that, he argues, will not lead to the goal of creating computers as intelligent as humans, or even smarter. LeCun adds a further provocation: Chinese companies, more open to trying other approaches, may get there sooner.

“There is this herd effect where everyone in Silicon Valley needs to work on the same thing,” he said, in a recent interview from his home in Paris. “That doesn’t leave much room for other approaches that could be much more promising in the long term.”

The criticism goes to the heart of a debate that has shaken the sector since OpenAI set off the AI boom in 2022 with the launch of ChatGPT: is it possible to create so-called artificial general intelligence, or even a superintelligence more powerful than human intelligence? And can companies get there with current technologies and concepts?

Few researchers have as long a history with this topic as LeCun, 65. Much of what the industry is trying to do today grows out of an idea he has cultivated since the 1970s. While still an engineering student in Paris, he embraced the concept of neural networks, at a time when most scientists considered that line of research a lost cause.

Neural networks are mathematical systems that learn tasks by analyzing large volumes of data. At that time, there was no obvious practical application. But about a decade later, as a researcher at Bell Labs, LeCun and colleagues showed that these systems could learn to read handwriting on envelopes and checks.

In the early 2010s, researchers began demonstrating that neural networks could power a range of technologies — from facial recognition systems to digital assistants and self-driving cars. As Google, Microsoft and other tech giants raised the stakes in this field, Facebook brought in LeCun to set up an AI research lab.

Shortly after the launch of ChatGPT, the two researchers who shared the 2018 Turing Award with LeCun began to warn that AI was becoming too powerful — to the point, they said, of threatening the future of humanity. LeCun considered this view exaggerated.

“There was a lot of noise around the idea that AI systems were inherently dangerous and that putting them in everyone’s hands was a mistake,” he said. “I never believed that.”

He was also one of the voices that pressured Meta and other companies to share their research more openly, through scientific articles and open source technologies.

As talk grew that AI could pose some kind of threat to humanity, several companies began to put the brakes on open source initiatives. Meta, however, maintained its strategy. LeCun has always argued that open source is the safest path, because it prevents a single company from concentrating control over the technology and allows more people to participate in identifying and mitigating risks.

Now, with several companies — including Meta itself — showing signs of retreating from this stance, in search of competitive advantage and concerned about malicious uses, LeCun warns that American companies could lose ground to Chinese rivals that continue to invest in open source.

“This is a disaster,” he says. “If everyone is open, the field as a whole moves faster.”

Meta’s AI work has been in turmoil over the past year. After outside researchers criticized the company’s newest generation of technology, Llama 4, accusing Meta of exaggerating the system’s capabilities, CEO Mark Zuckerberg decided to invest billions in a new laboratory dedicated to the pursuit of “superintelligence,” a hypothetical form of AI that would surpass the human brain.

Six months after creating the new laboratory, LeCun left the company to found his own startup, Advanced Machine Intelligence Labs, or AMI Labs.

Although his research helped pave the way for LLMs, or large language models, LeCun insists they are not the final step in the evolution of AI. For him, the big problem with current systems is that they cannot plan ahead. Trained only on digital data, they have no solid way of grasping the complexity of the physical world.

“LLMs are not a path to superintelligence, nor to human-level intelligence. I’ve said that from the beginning,” he said. “The whole industry has become ‘LLM-pilled’” — a reference to the idea that the sector has been “indoctrinated” or “addicted” to LLMs.

In his final years at Meta, LeCun worked on technologies that try to predict the outcome of a machine’s own actions. That kind of approach, he says, can allow AI to move beyond its current level. His new startup intends to pursue precisely this line of research.

“This type of system can plan what it is going to do,” he says. “The current systems — the LLMs — just can’t do it.”

Subbarao Kambhampati, a professor at Arizona State University and an AI researcher for almost as long as LeCun, agrees that current technologies alone do not point to intelligence truly comparable to that of humans. But he points out that they have already proven to be extremely useful in highly profitable areas, such as software programming. The newer methods advocated by LeCun, he considers, still need to be proven in practice.

For LeCun, this is precisely where the relevance of his new company lies. The past few decades, he says, are littered with AI projects that seemed promising but lost steam along the way. And, in his view, there is no guarantee that Silicon Valley will win this global race.

“The good ideas are coming from China,” he says. “But Silicon Valley has a certain superiority complex and can’t imagine that good ideas can come from elsewhere.”

c.2026 The New York Times Company