People generally have difficulty recognizing content generated by artificial intelligence (AI). For speech, the success rate is around 60 percent, while text and images are the easiest to identify. Róbert Móro of the Kempelen Institute of Intelligent Technologies (Kinit) made this point at Tuesday’s press conference in connection with the growing number of frauds. He stressed, however, that training improves the ability to distinguish such content.
- Artificially generated content is becoming increasingly difficult for people to recognize.
- Training improves people’s ability to detect AI-assisted fraud.
- Fraudsters use AI for attacks personalized to the victim.
- AI-generated content can be spotted by noticing details and irregularities.
- Prevention includes public education, training, and technical measures by operators.
“In the future, personalized attacks may follow, in which fraudsters use data from public posts on social networks, or information obtained through phishing, to tailor the content. In this way they can create replies that imitate your style of expression and look even more believable,” Móro said.
Clinical psychologist and forensic expert Dušan Kešický recalled that whether a person falls for fraudsters does not depend on age or education. “Fraudsters try to put us into a state of emotional destabilization – negative, for example when they stir up fear for our loved ones, or positive, when they call with news of a prize or a profitable investment,” he noted. In either case, he said, they want us to act quickly and without thinking.
According to Móro, frauds using AI models that can hold autonomous conversations or swap a face in video in real time may appear more and more often. He added that while large language models and speech generators still work best in English, many of them can also handle Slovak. As for speech synthesis, the latest AI models need only a few seconds of recording to “clone” a given voice.
The expert pointed out that in text one should watch especially for grammatical and stylistic errors, unusual word forms, or peculiar formulations. An image generated by artificial intelligence can be betrayed by inconsistencies in details, unrealistic lighting and shadows, an illogical background, or objects that do not “fit” together. In video, the lip movements may not match the spoken words, characters may have unnatural eye movements, and illogical details can appear in the background. AI-generated speech can be recognized, for example, by the unnatural melody of the voice or by odd emphasis on individual words.
The psychologist added that there is no universal recipe for avoiding fraud entirely. “With training and experience the danger is much smaller, but the situations these attackers devise keep changing, so it is an endless process that cannot be fully prevented,” he noted. According to him, it is crucial that the topic remains present in people’s awareness.
In this context, the technology director of the mobile operator O2 reminded the audience that fraudsters are constantly changing their techniques. He noted that the operator blocks up to 70,000 fraudulent phone calls a day. Since operators do not know the content of the communication, it is very difficult to judge which calls are legitimate and which are made by fraudsters. “The principle is that we must not block any legitimate call made by our customers,” he added. As part of prevention, he also presented a new information campaign, “Question for a Million,” which shows people how to respond to a call in which the caller poses as a loved one.