Last week, DeepSeek challenged conventional wisdom on artificial intelligence (AI). Until now, many assumed that training state-of-the-art models required more than $1 billion and thousands of the latest chips. That AI had to be proprietary. That only a handful of companies had the talent to build it, so secrecy was essential.
DeepSeek proved the opposite. Reports suggest the company trained its latest model with only 2,000 Nvidia chips, at a fraction of the expected cost: about $6 million. This reinforces what we have always said: smaller, efficient models can deliver real results without massive, proprietary systems.
But the breakthrough from China raises a bigger question: who will shape the future of artificial intelligence? AI development cannot be controlled by a handful of competitors, especially when some may not share fundamental values such as business data protection, privacy, and transparency.
The answer is not to restrict progress – it is to ensure that AI is built by a wide coalition of universities, companies, research laboratories and civil society organizations.
What is the alternative? Letting the lead slip to those with different values and priorities. That would mean ceding control of a technology that will reshape every industry and every part of society. True innovation and progress can only come from the democratization of AI.
The era of hype is over. I believe 2025 should be the year we free AI from its confinement among a few competitors. By 2026, a large part of society should not just be using AI: it should be building AI.
The DeepSeek lesson
Smaller, open-source models are how this future will be built. DeepSeek's lesson is that the best engineering optimizes for two things: performance and cost. For a long time, AI was seen as a game of scale, where larger models meant better results. But true innovation is as much about efficiency as it is about size.
In our work at IBM, we have seen that fit-for-purpose models have led to reductions of up to 30 times in AI inference costs, making training more efficient and accessible.
I do not agree that artificial general intelligence (AGI) is at the door, nor that the future of AI depends on building Manhattan-sized or nuclear-powered data centers. These narratives create false choices. There is no law of physics that says AI must remain expensive.
The cost of training and inference is not fixed; it is an engineering challenge to be solved. Companies, both established and new, have the ingenuity to reduce these costs and make AI more practical and widespread.
We have seen this happen before. In the early days of computing, storage and processing power were prohibitively expensive. Yet through technological advances and economies of scale, those costs plummeted, unlocking new waves of innovation and adoption.
The same will be true for AI. This is promising for companies everywhere. Technology only becomes transformative when it becomes accessible. By adopting open, efficient AI models, companies can take advantage of cost-effective solutions tailored to their needs, unlocking the full potential of AI across industries.
The opinions expressed in Fortune.com’s comment pieces are exclusively the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.