Centaur, the “digital brain”, will be subjected to the most ethically questionable experiments

“Forbidden” experiments will generate new hypotheses about how we think. The ambition is to arrive at “a model that captures the complexity and variety of human cognition”.

A new computational model, Centaur, is causing controversy in the scientific community for promising to reproduce human decision patterns and, thus, allowing the simulation of psychological experiments that would be too expensive, time-consuming or, above all, ethically questionable to carry out with real participants.

The work, described in a recent article in Nature, was developed by a team from the Institute for Human-Centred AI at the Helmholtz Center in Germany, in collaboration with around 40 researchers from several countries, including the United Kingdom, the United States and Switzerland. Centaur is presented by the scientists as a kind of “digital brain”: an artificial intelligence (AI)-driven model trained on more than 10 million choices made by more than 60,000 participants, collected from 160 psychology experiments.

The authors believe that, by capturing regularities in human behavior in experimental tasks, the system can become a useful tool for predicting how people decide, solve problems and remember information.

“Our ultimate goal is to understand human cognition,” explained the lead author, Marcel Binz, deputy director of the institute. In his view, one possible way to get there is to build computational models capable of artificially reproducing what happens in the human mind.

A more comprehensive system

Until now, many computational models in psychology and neuroscience have tended to focus on specific processes, for example how someone responds to a particular type of stimulus or problem, precisely because the human brain is notoriously complex. Centaur seeks to overcome this limitation by functioning as a single system applicable to different types of tasks.

The ambition, says Binz, is to arrive at “a model that captures the complexity and variety of human cognition” and that, over time, could help generate new hypotheses about how we think.

The construction of Centaur

The team started from a large-scale language model of the Llama family. They then assembled a large database called Psych-101, made up of records of classic and modern experiments in areas such as memory, decision making and problem solving.

The next step was the so-called “fine-tuning”: instead of asking the model just to produce text, the researchers adjusted it to behave like a participant in a psychology study, producing choices comparable to human ones.

The result was Centaur, which, according to the authors, can replicate human responses in a variety of tasks — especially when they resemble those in the data set used in training.
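For readers curious about what this kind of “participant fine-tuning” might look like in practice, here is a minimal sketch assuming a Hugging Face-style workflow with LoRA adapters. The base model name, the psych101_trials.jsonl file, the trial text format and the hyperparameters are illustrative assumptions, not the setup actually used in the paper.

```python
# Sketch: fine-tune a causal language model so it completes experiment
# transcripts with a participant's choice (assumed data format, not the paper's).
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-7b-hf"  # placeholder; the paper fine-tunes a larger Llama model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Attach LoRA adapters so only a small fraction of the weights is updated.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16,
                                         target_modules=["q_proj", "v_proj"],
                                         task_type="CAUSAL_LM"))

# Hypothetical Psych-101-style records: each experimental trial rendered as text
# that ends with the participant's recorded choice, e.g. "... You press <<B>>."
data = load_dataset("json", data_files="psych101_trials.jsonl")["train"]
data = data.map(lambda ex: tokenizer(ex["trial_text"], truncation=True, max_length=512),
                remove_columns=data.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="centaur-sketch",
                           per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=data,
    # Standard causal-LM objective: predict the next token, including the choice token.
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
).train()
```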

Binz himself recognizes that the model’s capacity is not unlimited: as tasks move away from the conditions represented in Psych-101, performance may degrade, and sooner or later the model fails.

What can it be used for

One of the applications most highlighted by the authors is the possibility of using Centaur as a prediction machine, capable of anticipating how real people would react in certain situations, including scenarios in which it would be impractical to recruit participants or in which the ethical risks would be high.

In experimental psychology, there are studies that cannot be conducted for ethical reasons: for example, invasive testing in children or experiments that could harm the mental health of volunteers.

In a computational model it is possible to simulate these conditions without exposing anyone to harm, also reducing the cost and time associated with traditional experiments.
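To make the idea of a simulated participant concrete, the following sketch shows how such a fine-tuned model might be queried for a single choice. The model path and prompt wording are hypothetical; the actual prompt conventions used for Centaur are described in the paper and may differ.

```python
# Sketch: ask the fine-tuned model to act as a simulated participant in one trial.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "centaur-sketch"  # placeholder path to the fine-tuned model above
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical trial description, phrased the way training transcripts were phrased.
prompt = ("You are choosing between two gambles.\n"
          "Option A: 50% chance of $10, otherwise $0.\n"
          "Option B: $4 for sure.\n"
          "You press <<")

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=2, do_sample=True, temperature=1.0)

# The tokens generated after the prompt stand in for the simulated choice.
print(tokenizer.decode(out[0, inputs["input_ids"].shape[1]:]))
```

Sampling many such completions would give a distribution of simulated choices that can be compared against how real participants behave in the same task.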

Is imitating responses the same as explaining the mind?

Centaur’s promise comes with a crucial controversy: to what extent is a system that provides “human-like” responses reproducing the internal mechanisms of human thought?

The study authors suggest that, as the model was adjusted to behave like a participant, its internal representations began to more closely resemble patterns of human brain activity.

If this is confirmed, it would open a door to “look inside” the model and learn something about cognition.

“It may be that what happens within the model is, to some extent, capturing real cognitive processes,” Binz said. This hypothesis would be particularly exciting because it would allow researchers to look inside the system and use that information to better understand mental functioning.

Centaur doesn’t convince everyone

Samuel Forbes, associate professor of developmental science at Durham University, United Kingdom, who was not involved in the work, disputes the equivalence between performance and mechanism: obtaining human-like responses “does not guarantee” that the underlying processes are similar to those in the brain.

The researcher used an analogy: it would be like teaching a robot to play the cello and then trying to learn about cellists from the robot. Even if the music sounds convincing, it would not reveal how a human musician plays, nor their emotional or cognitive processes.

A similar criticism was advanced by Di Fu, professor of cognitive neuroscience at the University of Surrey, who was also not involved in the study. In Fu’s reading, Centaur helps predict “what humans can do” but does not explain “how the brain does it.”

There are still concerns about the scale of Centaur itself. Some scientists warn that the model may be too large and complex to be usefully analyzed. Binz accepts the criticism, but considers it an evolving challenge. The researcher also argues that complexity is, paradoxically, part of the value of the system: the human brain is equally complex and, yet, neuroscience has developed methods to study it. The difference, he adds, is that in a computational model it would be possible to measure everything that happens internally, something that is not possible to do with the human brain with the same degree of detail.

For now, Binz stresses that more research is needed to understand the extent to which Centaur and similar models reproduce thought patterns and what this means for the science of the mind.
