Creating with AI isn’t about pushing a button, and this chair shows why

Creating a chair with artificial intelligence may sound automatic. But in the experiment conducted by British designer Ross Lovegrove in partnership with Google DeepMind, that is not what happened.

The process was marked by constant adjustments, failed attempts and human decisions at every stage, because the goal was never to have AI invent an object from scratch. Instead, the team trained a generative model on Lovegrove’s signature sketches, known for the organic, biomorphic forms that run through his work.

The idea was to observe how far the technology could assimilate this visual language and generate coherent variations. The end result is striking: the chair was 3D printed in metal, is functional and can be used like any other.

But the path to getting there revealed something important: creating with AI requires mediation, curation and far more human work than people usually imagine.

🔎 The project in a few lines
AI trained with designer’s original sketches
Common design terms were not immediately understood
Abstract commands helped avoid predictable solutions
Hundreds of variations until choosing a single model
Final object produced by 3D metal printing

Teaching design to AI was more difficult than imagined

The first challenge arose in communication: even though the model was trained on Lovegrove’s drawings, it did not understand technical terms common in the studio. Recurring concepts from the design vocabulary had to be reformulated several times until they made sense to the system.

To give an idea, the word “chair” itself became a problem. Whenever it appeared in a prompt, the model tended to reproduce obvious, predictable solutions. To escape this, the team switched to more abstract descriptions, such as “continuous single-surface extension”, “biomorphic shape” and “lateral flows”.


At this point, the experiment stopped being just technological and became linguistic. It was no longer a question of asking the AI for something, but of learning how to phrase requests so that it did not repeat already-known patterns.
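The reformulation loop described above can be sketched in a few lines of code. This is purely illustrative: the blocked term and the abstract descriptors are taken from the article, but the actual prompts, model and tooling used by the Lovegrove/DeepMind team are not public, so the function below is a hypothetical reconstruction of the idea, not their method.

```python
# Hypothetical sketch of the prompt-reformulation strategy described above:
# strip terms that steer the model toward predictable shapes, then append
# abstract descriptors that encourage less obvious variations.

BLOCKED = {"chair"}  # per the article, this word produced predictable results

ABSTRACT_DESCRIPTORS = [
    "continuous single-surface extension",
    "biomorphic shape",
    "lateral flows",
]

def build_prompt(base: str, descriptors: list[str]) -> str:
    """Drop blocked words from the base idea, then add abstract descriptors."""
    kept = [w for w in base.split() if w.lower().strip(",.") not in BLOCKED]
    return ", ".join([" ".join(kept)] + descriptors)

prompt = build_prompt("seating object chair", ABSTRACT_DESCRIPTORS)
print(prompt)
# → seating object, continuous single-surface extension, biomorphic shape, lateral flows
```

In practice, each reformulated prompt would feed a new round of image generation, with the human team curating which results to keep.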

Hundreds of images were generated before one proposal stood out. Named Seed 6143, it advanced to the technical refinement stage. The final visualizations were supported by Gemini before refinement in industrial software and three-dimensional modeling.

When AI surprises and starts to slip out of control

Not all of the generated variations looked familiar. Some drew attention precisely because they deviated from Lovegrove’s usual repertoire. According to the designer himself, certain results were more reminiscent of the universe of Swiss artist H. R. Giger, known for the dark aesthetic of the Alien franchise, than of his own work.


These deviations were not treated as errors, but they demanded choices. In each round, the team had to decide what was worth developing further and what should be discarded. In other words, the AI generated possibilities, but the curation remained entirely human.

Once Seed 6143 was selected, the project underwent structural simulations and adjustments to become viable as a physical object. Final manufacturing was done by 3D metal printing with a robotic arm, marking the definitive transition from the digital environment to the real world.

The experiment does not resolve debates about authorship, nor does it point to a definitive new model of creation. It leaves something simpler and more concrete: even with artificial intelligence, creating remains a process of choices, limits and human interpretation.
