
Many of us fail to realize the privacy risks of using AI at work: "Interactions can be recorded."
Cybersecurity company NordVPN has released new data on the privacy risks facing Portuguese users as AI tools become an integral part of daily work routines.
These tools have entered the routines of millions of people, who use ChatGPT, Copilot and other generative tools to increase productivity and streamline processes.
But the National Privacy Test conducted over the past year shows that 92% of Portuguese people do not understand which privacy issues they should take into account when using AI at work.
Using AI is not like talking to the colleague at the next desk: "Interactions with AI tools can be recorded, analyzed and potentially used to train future models. When employees share customer data, internal strategies or personal information with AI assistants, they may be creating privacy vulnerabilities unintentionally," warns Marijus Briedis, chief technology officer at NordVPN.
“People are feeding sensitive information into AI tools without understanding where that data goes, how it is stored, or who might have access to it,” he continues.
The same technology that boosts productivity at work is also being used by cybercriminals to create scams that are more convincing than ever.
The same National Privacy Test indicates that 35% of Portuguese people cannot correctly identify common scams carried out with AI technology, above all deepfakes and voice cloning.
And the truth is that these fake videos and voice recordings look and sound increasingly real, making them ever harder to detect.
The result: more scams and more victims. AI has simplified cybercrime.
To reduce risks when using AI at work, follow these tips: never enter confidential company data, customer information or personal data; remember that conversations with AI tools may be recorded; and check your company's AI usage policies.
