In late 2023, Israel was aiming to assassinate Ibrahim Biari, a top Hamas commander in the northern Gaza Strip who had helped plan the Oct. 7 massacres. But Israeli intelligence could not find Mr. Biari, who they believed was hidden in the network of tunnels underneath Gaza.
So Israeli officers turned to a new military technology infused with artificial intelligence, three Israeli and American officials briefed on the events said. The technology was developed a decade earlier but had not been used in battle. Finding Mr. Biari provided new incentive to improve the tool, so engineers in Israel’s Unit 8200, the country’s equivalent of the National Security Agency, soon integrated A.I. into it, the people said.
Shortly thereafter, Israel listened to Mr. Biari’s calls and tested the A.I. audio tool, which gave an approximate location for where he was making his calls. Using that information, Israel ordered airstrikes to target the area on Oct. 31, 2023, killing Mr. Biari. More than 125 civilians also died in the attack, according to Airwars, a London-based conflict monitor.
The audio tool was just one example of how Israel has used the war in Gaza to rapidly test and deploy A.I.-backed military technologies to a degree that had not been seen before, according to interviews with nine American and Israeli defense officials, who spoke on the condition of anonymity because the work is confidential.
In the past 18 months, Israel has also combined A.I. with facial recognition software to match partly obscured or injured faces to real identities, turned to A.I. to compile potential airstrike targets, and created an Arabic-language A.I. model to power a chatbot that could scan and analyze text messages, social media posts and other Arabic-language data, two people with knowledge of the programs said.
Many of these efforts were a partnership between enlisted soldiers in Unit 8200 and reserve soldiers who work at tech companies such as Google, Microsoft and Meta, three people with knowledge of the technologies said. Unit 8200 set up what became known as “The Studio,” an innovation hub and place to match experts with A.I. projects, the people said.
Yet even as Israel raced to develop the A.I. arsenal, deployment of the technologies sometimes led to mistaken identifications and arrests, as well as civilian deaths, the Israeli and American officials said. Some officials have struggled with the ethical implications of the A.I. tools, which could lead to increased surveillance and further civilian deaths.
No other nation has been as active as Israel in experimenting with A.I. tools in real-time battles, European and American defense officials said, giving a preview of how such technologies may be used in future wars — and how they might also go awry.
“The urgent need to cope with the crisis accelerated innovation, much of it A.I.-powered,” said Hadas Lorber, the head of the Institute for Applied Research in Responsible A.I. at Israel’s Holon Institute of Technology and a former senior director at the Israeli National Security Council. “It led to game-changing technologies on the battlefield and advantages that proved critical in combat.”
But the technologies “also raise serious ethical questions,” Ms. Lorber said. She warned that A.I. needs checks and balances, adding that humans should make the final decisions.
A spokeswoman for Israel’s military said she could not comment on specific technologies because of their “confidential nature.” Israel “is committed to the lawful and responsible use of data technology tools,” she said, adding that the military was investigating the strike on Mr. Biari and was “unable to provide any further information until the investigation is complete.”
Meta and Microsoft declined to comment. Google said it has “employees who do reserve duty in various countries around the world. The work those employees do as reservists is not connected to Google.”
Israel previously used conflicts in Gaza and Lebanon to experiment with and advance tech tools for its military, such as drones, phone hacking tools and the Iron Dome defense system, which can help intercept short-range rockets.
After Hamas launched cross-border attacks into Israel on Oct. 7, 2023, killing more than 1,200 people and taking 250 hostages, A.I. technologies were quickly cleared for deployment, four Israeli officials said. That led to the cooperation between Unit 8200 and reserve soldiers in “The Studio” to swiftly develop new A.I. capabilities, they said.
Avi Hasson, the chief executive of Startup Nation Central, an Israeli nonprofit that connects investors with companies, said reservists from Meta, Google and Microsoft had become crucial in driving innovation in drones and data integration.
“Reservists brought know-how and access to key technologies that weren’t available in the military,” he said.
Israel’s military soon used A.I. to enhance its drone fleet. Aviv Shapira, founder and chief executive of XTEND, a software and drone company that works with the Israeli military, said A.I.-powered algorithms were used to build drones to lock on and track targets from a distance.
“In the past, homing capabilities relied on zeroing in on an image of the target,” he said. “Now A.I. can recognize and track the object itself — may it be a moving car, or a person — with deadly precision.”
Mr. Shapira said his main clients, the Israeli military and the U.S. Department of Defense, were aware of A.I.’s ethical implications in warfare and discussed responsible use of the technology.
One tool developed by “The Studio” was an Arabic-language A.I. model known as a large language model, three Israeli officers familiar with the program said. (The large language model was earlier reported by Plus 972, an Israeli-Palestinian news site.)
Developers previously struggled to create such a model because of a dearth of Arabic-language data to train the technology. When such data was available, it was mostly in standard written Arabic, which is more formal than the dozens of dialects used in spoken Arabic.
The Israeli military did not have that problem, the three officers said. The country had decades of intercepted text messages, transcribed phone calls and posts scraped from social media in spoken Arabic dialects. So Israeli officers created the large language model in the first few months of the war and built a chatbot to run queries in Arabic. They merged the tool with multimedia databases, allowing analysts to run complex searches across images and videos, four Israeli officials said.
When Israel assassinated the Hezbollah leader Hassan Nasrallah in September, the chatbot analyzed the responses across the Arabic-speaking world, three Israeli officers said. The technology differentiated among different dialects in Lebanon to gauge public reaction, helping Israel assess whether there was public pressure for a counterstrike.
At times, the chatbot could not identify some modern slang terms and words that were transliterated from English to Arabic, two officers said. That required Israeli intelligence officers with expertise in different dialects to review and correct its work, one of the officers said.
The chatbot also sometimes provided wrong answers — for instance, returning photos of pipes instead of guns — two Israeli intelligence officers said. Even so, the A.I. tool significantly accelerated research and analysis, they said.
After the Oct. 7 attacks, Israel also began equipping cameras at temporary checkpoints set up between the northern and southern Gaza Strip with the ability to scan and send high-resolution images of Palestinians to an A.I.-backed facial recognition program.
This system, too, sometimes had trouble identifying people whose faces were obscured. That led to arrests and interrogations of Palestinians who were mistakenly flagged by the facial recognition system, two Israeli intelligence officers said.
Israel also used A.I. to sift through data amassed by intelligence officials on Hamas members. Before the war, Israel built a machine-learning algorithm — code-named “Lavender” — that could quickly sort data to hunt for low-level militants. It was trained on a database of confirmed Hamas members and meant to predict who else might be part of the group. Though the system’s predictions were imperfect, Israel used it at the start of the war in Gaza to help choose attack targets.
Few goals loomed larger than finding and eliminating Hamas’s senior leadership. Near the top of the list was Mr. Biari, the Hamas commander who Israeli officials believed played a central role in planning the Oct. 7 attacks.
Israel’s military intelligence quickly intercepted Mr. Biari’s calls with other Hamas members but could not pinpoint his location. So they turned to the A.I.-backed audio tool, which analyzed different sounds, such as sonic bombs and airstrikes.
After deducing an approximate location for where Mr. Biari was placing his calls, Israeli military officials were warned that the area, which included several apartment complexes, was densely populated, two intelligence officers said. An airstrike would need to target several buildings to ensure Mr. Biari was assassinated, they said. The operation was greenlit.
Since then, Israeli intelligence has also used the audio tool alongside maps and photos of Gaza’s underground tunnel maze to locate hostages. Over time, the tool was refined to more precisely find individuals, two Israeli officers said.