Google violated its own policies, which in 2024 prohibited selling its artificial intelligence for military-purpose security systems, by assisting an Israeli contractor in analyzing video recorded by drones. This was revealed by a former employee of the technology giant in a confidential complaint to which The Washington Post has had access.
According to the complainant, the company provided assistance to the Israeli technology firm CloudEx, which sells weapons and surveillance systems to the Israel Defense Forces (IDF) that have since been deployed in Gaza. The contractor used Gemini, Google's AI, to analyze the captured images and identify objects such as drones, soldiers, and armored vehicles.
At the time, Google stated in its “AI principles” that it would not use its technology to feed systems that “violate internationally accepted standards.” According to the complainant, the company breached that commitment and, by contradicting its public statements, also misled both investors and regulators.

Israeli troops on the streets of the West Bank city of Jenin / Europa Press/Contact/Nidal Eshtayeh – Archive
“The process [of ethical review of the AI] is robust and, as employees, we are regularly reminded of the importance of the company’s AI principles. But when it came to Israel and Gaza, the opposite happened. (…) I filed the complaint with the SEC [the U.S. Securities and Exchange Commission] because I believed the company should be held accountable for this double standard,” the complainant told The Washington Post.
Google has denied the allegations, saying CloudEx spent less than $200 a month on its AI products, usage too small to be “significant.” “We responded to a general-purpose question, as we would with any customer, with standard help desk information, and did not provide any other technical help,” a company spokesperson said.