
Artificial intelligence has become both an ally and a threat in the daily routine of companies. On one hand, it expands the ability to defend against digital attacks; on the other, it has put a new risk on the security radar: the uncontrolled use of AI tools by employees themselves, who can expose sensitive data without even realizing it. The consequences range from data leaks and fines to the shutdown of entire operations.
In the view of Cláudio Martinelli, executive director for the Americas at Kaspersky, a global digital security and privacy company, the problem begins when a company looks only at the “external enemy” and forgets what happens at home.
“While digital security directors worry about large-scale, AI-powered attacks from criminals, they pay too little attention to what their own employees are doing with these tools”, warns Martinelli.
The example is simple and current: an employee uploads the entire sales database from recent years, with prices, customers and commercial terms, to an AI report generator in order to speed up a presentation.
The tool delivers the presentation, but from that point on the information becomes part of the AI’s “data pie”, accessible to anyone who knows how to ask. “From this moment on, your data is already public inside this artificial intelligence, and anyone is automatically gaining access to corporate secrets”, says the director.
This type of behavior falls under what companies call shadow IT: employees using unapproved applications and services – often on their personal cell phones – to work with corporate data.
Martinelli notes that the movement recalls the beginning of the pandemic, when each team chose its own messaging or videoconferencing platform without standardization or a clear policy. Now the “fever” is generative AI platforms, which are not always evaluated by the security team before being adopted in day-to-day work.
Criminals gain scale and sophistication
At the same time, criminals have also gained scale and sophistication with AI. According to Roberto Rebouças, Kaspersky’s executive manager in Brazil, the volume of new threats has exploded.
“Back then, there were just over a thousand pieces of malware in the world. Today, Kaspersky detects around 450,000 new ones every day”, comments the expert.
In this scenario, it became impossible to rely solely on human analysts to review everything that circulates on company networks and systems.
That is why security solutions themselves have incorporated AI and machine learning to do the “heavy lifting” of mass analysis and detection of suspicious behavior, such as ransomware attacks that can take a company offline from one day to the next.
The difference, the executives emphasize, is in combining this automated engine with human supervision rather than completely replacing the specialist. “Artificial intelligence doesn’t have common sense. At some point it can do something really stupid”, summarizes Rebouças.
When the machine is unsure about a threat, the manager says, the case is forwarded to a specialized team for human analysis.
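To illustrate the escalation pattern the executives describe, here is a minimal, hypothetical sketch in Python: high-confidence detections are handled automatically, while the uncertain middle band is routed to a human analyst. The thresholds, the toy scoring field and the example events are assumptions for illustration only, not taken from any real product.

```python
# Minimal sketch of the "escalate when unsure" pattern described above.
# Thresholds and scores are illustrative assumptions, not a real detection engine.

from dataclasses import dataclass


@dataclass
class Event:
    source: str        # host or user that produced the telemetry
    description: str   # short summary of the observed behavior
    score: float       # model-assigned probability that the event is malicious (0..1)


BLOCK_THRESHOLD = 0.90   # confident enough to act automatically
ALLOW_THRESHOLD = 0.10   # confident enough to ignore


def triage(event: Event) -> str:
    """Route an event: auto-block, auto-allow, or escalate to a human analyst."""
    if event.score >= BLOCK_THRESHOLD:
        return "auto-block"
    if event.score <= ALLOW_THRESHOLD:
        return "auto-allow"
    # The uncertain middle band is exactly what gets forwarded to specialists.
    return "escalate-to-analyst"


if __name__ == "__main__":
    events = [
        Event("host-42", "mass file encryption by unknown process", 0.97),
        Event("host-07", "scheduled backup job", 0.03),
        Event("host-19", "PowerShell spawned from an office document", 0.55),
    ]
    for e in events:
        print(f"{e.source}: {e.description} -> {triage(e)}")
```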
This “checks and balances” logic, according to the experts, should become standard in any corporate AI project. Two sectors cited as models are aviation and medicine, whose products only reach the market after several layers of testing, certification and regulation.
The password behind the Louvre robbery
Another sensitive point is the basics that still fail: weak passwords, outdated systems and a lack of process. Cases like the Louvre robbery illustrate a recurring problem: in the episode, which occurred in October this year, it was later discovered that the museum’s system had not been updated for years and used simple credentials.
Even when a company requires strong passwords, Kaspersky experts explain, the risk appears when an employee reuses the same combination on third-party websites and applications – which can be hacked and serve as a gateway into the corporate environment.
Here, AI can also work in security’s favor, simulating brute-force or dictionary attacks to discover preventively which users have weak or overly predictable passwords. For large companies with thousands of employees, this type of automation is the only viable way to continually test and reinforce digital hygiene without relying on sporadic manual audits.
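As a rough illustration of what such a preventive check can look like, here is a hypothetical Python sketch that compares employees’ password hashes against a small wordlist of common passwords. The wordlist, the users and the use of unsalted hashes are simplifying assumptions for readability; a real audit would use salted hashes and far larger dictionaries, and must only run with explicit authorization.

```python
# Minimal sketch of a dictionary-style password audit, assuming the auditor has
# authorized access to (unsalted, for simplicity) SHA-256 password hashes.
# All names and data below are hypothetical.

import hashlib


COMMON_PASSWORDS = ["123456", "password", "louvre", "empresa2024", "qwerty"]


def sha256(text: str) -> str:
    return hashlib.sha256(text.encode("utf-8")).hexdigest()


# Hypothetical dump of user -> password hash, as an auditor might receive it.
USER_HASHES = {
    "alice": sha256("Xk#9vPq!m2Rt"),   # strong, not in the wordlist
    "bob": sha256("123456"),           # weak, will be flagged
    "carol": sha256("louvre"),         # weak, will be flagged
}


def audit(user_hashes: dict[str, str], wordlist: list[str]) -> list[str]:
    """Return the users whose password matches an entry in the wordlist."""
    cracked_hashes = {sha256(word): word for word in wordlist}
    return [user for user, digest in user_hashes.items() if digest in cracked_hashes]


if __name__ == "__main__":
    for user in audit(USER_HASHES, COMMON_PASSWORDS):
        print(f"{user}: password found in common wordlist, request a reset")
```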
But even with all these technological layers, the human factor remains a critical link. Rebouças points out that security training cannot consist of one-off events whose content is forgotten days later. “It’s like a fire drill: every now and then the alarm suddenly rings to see if everyone gets out. You can’t just wait for a fire to actually break out”, explains the manager.
Another example cited is that of a client who sent a fake email about a salary adjustment to test employees’ attention. The result? According to Kaspersky experts, 96% of the employees who received the test clicked on the email, and more than half opened the attached file.
With this type of test, a company can identify who needs reinforcement and personalize learning paths – many of which are now also supported by AI. In the end, the combination of people and machines is seen by the experts as the main way to cope with the speed and scale of today’s cybercrime.
Martinelli highlights that, when well implemented, this partnership also protects the quality of life of security professionals, who live under the constant pressure of preventing serious incidents.
“The combination of artificial intelligence and well-implemented digital security is a stress and burnout reducer. It’s a companion so you don’t have to leave your child’s first birthday party on a Saturday night because your company was attacked”, concludes the director.
