The ability to solve complex problems depends on several factors that can either help or hinder it. Among the most critical are the availability of high-quality data, the scope of possible solutions to the problem, the clarity of the goal to be achieved, and the need to adapt to constantly changing systems.
When these elements are absent or poorly defined, they create challenges that require innovative approaches to overcome.
Here are ways to deal with the main challenges AI presents:
1. Where does the data that feeds AI come from?
Data is the most crucial input for any AI model, yet data quantity often receives far more attention than data quality. Although current trends with large language models suggest that ever-larger amounts of data are the key to better models and results, it remains an open research question whether this will hold true.
What has been shown is that high-quality data is equally, if not more, important than large amounts of data. In certain cases, if you start with a relatively small but very high-quality dataset, you can carefully grow the number of data points by generating synthetic data.
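The idea above can be sketched in a few lines. This is a hypothetical toy example (not from the article): each real, high-quality measurement is jittered with small Gaussian noise to produce plausible synthetic neighbors, on the assumption that the noise scale is small relative to the real variation in the data.

```python
import random

random.seed(0)

# A small but high-quality dataset of (x, y) measurements.
real_data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]

def synthesize(dataset, copies=3, noise=0.05):
    """Create synthetic points by jittering each real point slightly."""
    synthetic = []
    for x, y in dataset:
        for _ in range(copies):
            synthetic.append((x + random.gauss(0, noise),
                              y + random.gauss(0, noise)))
    return synthetic

augmented = real_data + synthesize(real_data)
print(len(augmented))  # 12 points: 3 real + 9 synthetic
```

Real augmentation pipelines use domain-aware transformations (paraphrasing for text, rotations for images), but the principle is the same: the quality of the seed data bounds the quality of everything generated from it.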
2. Evaluate the quality of the answers
When a problem has many possible solutions, solving it by "brute force", exhaustively testing all combinations, becomes impractical. Historically, such problems were tackled with heuristics: simple rules designed to produce solutions that are "good enough" in most scenarios, though rarely optimal.
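The brute-force-versus-heuristic trade-off can be illustrated with a classic example not taken from the article, the traveling-salesman problem. Brute force checks every route, which grows factorially with the number of cities; the nearest-neighbor heuristic runs fast and is usually good enough, though rarely optimal.

```python
import math
from itertools import permutations

cities = [(0, 0), (1, 5), (5, 2), (6, 6), (2, 3)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(order):
    """Total length of a round trip visiting cities in the given order."""
    return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def brute_force():
    # Checks all n! routes: guaranteed optimal, impractical beyond ~10 cities.
    return min(permutations(range(len(cities))), key=tour_length)

def nearest_neighbor(start=0):
    # Heuristic: always hop to the closest unvisited city. Fast, not optimal.
    unvisited = set(range(len(cities))) - {start}
    order = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist(cities[order[-1]], cities[c]))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

best = tour_length(brute_force())
approx = tour_length(nearest_neighbor())
print(best <= approx)  # True: the heuristic is never better than optimal
```

At five cities both finish instantly; at thirty, brute force would need more routes than there are atoms in the observable universe, while the heuristic still answers in microseconds.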
AI offers a promising alternative for dealing with the complexity of problems that have numerous potential solutions. However, as the number of possible solutions increases, so does the challenge of verifying their quality.
3. Ask the AI the right questions
A goal, or reward function, is the outcome the AI model is trying to achieve. In other words, it is about asking the system the right question. Formulating what you want the model to do is one of the hardest parts of any machine learning system. Games like chess have a clear, measurable goal, such as a score or a set of rules that determines the winner.
But in the real world, which is often complex and messy, there is no direct metric we can use to measure progress. Without a clear and measurable goal, it can be difficult to define what "good" means to the model. The more ambiguous the goal, the worse the model's performance will be.
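The contrast can be made concrete with an illustrative sketch (not from the article): a chess-like game admits a crisp reward function, while a task like "write a helpful summary" forces us to fall back on crude proxies, and optimizing a proxy can diverge from what a human actually wants.

```python
def chess_reward(outcome: str) -> float:
    """Clear, measurable goal: win, draw, or lose."""
    return {"win": 1.0, "draw": 0.0, "loss": -1.0}[outcome]

def summary_reward(summary: str) -> float:
    """Ambiguous goal: length is a crude proxy for 'helpfulness'.

    A model maximizing this would just pad its answers; the proxy
    says nothing about accuracy, relevance, or clarity.
    """
    return min(len(summary) / 100.0, 1.0)

print(chess_reward("win"))       # 1.0 — unambiguous
print(summary_reward("Short."))  # a weak proxy for "good"
```

The first function leaves no room for disagreement; the second invites the model to game the metric, which is precisely the failure mode of ambiguous goals.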
4. The importance of human collaboration
The problems that organizations face are rarely static. Combined with the difficulty of easily telling whether a proposed solution is good, AI risks offering solutions that progressively drift away from the optimal response.
One increasingly adopted technique for overcoming this challenge is reinforcement learning from human feedback, or RLHF.
This human-in-the-loop technique allows the model to learn from and incorporate human insights beyond the data. RLHF is particularly useful in situations where it is difficult to encode an algorithmic solution, but where humans can intuitively judge the quality of the model's output.
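A heavily simplified sketch of the human-in-the-loop idea follows. Everything here is hypothetical (a toy, not a production RLHF pipeline): a "policy" scores candidate answers with a small weight vector, a simulated human compares two candidates, and each comparison nudges the weights toward the preferred one via a Bradley-Terry-style logistic update.

```python
import math

def features(answer: str) -> list[float]:
    # Two toy features: length and politeness (purely illustrative).
    return [len(answer) / 50.0, 1.0 if "please" in answer else 0.0]

def score(weights, answer):
    return sum(w * f for w, f in zip(weights, features(answer)))

def human_prefers(a: str, b: str) -> bool:
    # Stand-in for a real human judgment: prefers polite, fuller answers.
    return ("please" in a) >= ("please" in b) and len(a) >= len(b)

def rlhf_step(weights, a, b, lr=0.1):
    preferred, other = (a, b) if human_prefers(a, b) else (b, a)
    # Probability the model already agrees with the human's choice.
    p = 1 / (1 + math.exp(score(weights, other) - score(weights, preferred)))
    fp, fo = features(preferred), features(other)
    # Gradient step on the log-likelihood of the human's preference.
    return [w + lr * (1 - p) * (x - y) for w, x, y in zip(weights, fp, fo)]

weights = [0.0, 0.0]
for _ in range(100):
    weights = rlhf_step(weights, "Could you please retry?", "No.")

# After feedback, the model ranks answers the way the human does.
print(score(weights, "Could you please retry?") > score(weights, "No."))  # True
```

The intuition matches the article's point: the human never writes down an algorithm for "good answer", they only compare outputs, yet those comparisons are enough for the model to internalize the preference.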
As AI continues to evolve, it is crucial that organizations approach its implementation carefully and thoughtfully.
Paolo Cervini is co-founder of Walk The Talk Lab. Chiara Farronato is an associate professor of business administration at Harvard Business School. Pushmeet Kohli is vice president of science and strategic initiatives at Google DeepMind. Marshall W. Van Alstyne is a professor of information systems at Boston University.
©2025 Harvard Business School Publishing Corp. Distributed by The New York Times Licensing Group.