A new AI model aims to revolutionize espionage and already extracts additional information from the surveillance data it collects (its creators even asked OpenAI for help, without success).
An investigation by +972 Magazine and Local Call reveals that Unit 8200 of the Israeli armed forces trained an AI model to understand Arabic, using large volumes of telephone conversations and text messages obtained through its extensive surveillance of the occupied territories.
This system has made it possible to create a kind of ChatGPT that answers the military's questions, providing, for example, information about the targets being spied on in Palestine.
The model was already under development by Israel's espionage agency, and work on it was accelerated after the start of the war in 2023.
It is an LLM (Large Language Model), a machine-learning system that generates human-like text. In fact, its capabilities were partially revealed to the public last year at a military artificial intelligence conference.
"We tried to create the largest possible data set and collect all the data the State of Israel has ever had in Arabic," said a former member of the unit, Chaked Roger Joseph Saydooff. The model, he said, required "psychotic quantities" of data.
An internal source familiar with the project told The Guardian: "It is not just about preventing shooting attacks. I can now follow human rights activists and monitor Palestinian construction in Area C (of the West Bank). I have more tools to know what every person in the West Bank is doing."
According to the investigation by the three outlets, given the team's difficulties in training the chatbot, the unit even made a request, unsuccessfully, for OpenAI to run ChatGPT on the armed forces' secure systems. OpenAI declined to comment.
The military eventually turned to the help of reservists who had worked at large US technology companies such as Google, Meta, and Microsoft.
"There are no phone calls or WhatsApp conversations on the internet. They do not exist in the quantity required to train such a model," said an internal source. The challenge, they added, was "to gather all the spoken Arabic text the unit has ever had and put it in a centralized place." The model's training data eventually amounted to about 100 billion words.
As ZAP has already reported, the IDF (Israel Defense Forces) had already used AI as a weapon in the conflict. For example, they created an AI system that decides, in 20 seconds, which Palestinian targets to kill.
There is also a system that identifies Palestinians based on Google Photos and creates "target lists." Other intelligence agencies and services, such as the American CIA, also use AI for espionage.
An expert from Human Rights Watch said that using surveillance material to train an AI model is “invasive and incompatible with human rights.”