Critics warn of potential dangers as Israel's military claims success with its AI system, “the Gospel,” in identifying enemy combatants and equipment while reducing civilian casualties.
In the ongoing conflict between Israel and Hamas, the Israeli military has been using an artificial intelligence (AI) system called “the Gospel” to select targets in real time. The military claims the system has helped it rapidly identify enemy combatants and equipment while minimizing civilian casualties. Critics counter that the system is unproven and may be providing a technological justification for the killing of Palestinian civilians. As the use of AI in warfare evolves, experts warn that this is only the opening phase of a shift that raises serious ethical and practical questions.
The AI targeting system Israel uses can generate targets at a rapid rate:
The Israeli military's use of AI in targeting is not unique; militaries worldwide have been experimenting with AI for over a decade. The Gospel, developed by Israel's signals intelligence branch, Unit 8200, is one of several AI programs used by Israeli intelligence. It aggregates vast quantities of intelligence data from sources such as cell-phone messages, satellite imagery, drone footage, and seismic sensors, then makes targeting recommendations to human analysts, who decide whether to pass them on to soldiers in the field. The system has dramatically accelerated target identification: the Gospel and associated AI systems have suggested around 200 targets within 10-12 days, a rate at least 50 times faster than traditional methods.
Israel's latest military operation in Gaza has seen unprecedented use of AI:
Israel's recent military operation in Gaza, which began in response to an attack by Hamas-led militants, has seen an escalation in the use of AI-generated targets. The Gospel is being employed on a large scale to rapidly produce targets based on the latest intelligence. The system provides recommendations to human analysts, who then decide whether to pass them on to the air force, navy, or ground forces. This conflict may mark the first time AI-generated targets have been used extensively to shape a military operation.
Critics question the effectiveness and ethics of the AI system:
While the Israeli military claims the Gospel has helped reduce civilian casualties, critics argue that the system is unproven and flawed. AI systems are prone to error, and relying on them for precision targeting raises the risk of indiscriminate or biased strikes. Some accuse the military of using the system to manufacture targets, enabling the continuous bombardment of Gaza and punishing the general Palestinian population. The toll on Palestinian civilians has been devastating, with thousands killed and buildings destroyed on a wide scale. The sheer volume of targets also puts pressure on human analysts, who may be more inclined to accept the AI's recommendations under such conditions.
The accountability and legal implications of AI in warfare:
The use of AI in targeting also raises questions about accountability and legal culpability. While humans retain legal responsibility for strikes, assigning blame becomes difficult if the targeting system fails. Because AI decision-making often lacks explainability, it is hard to trace a flawed decision back to specific individuals or design choices, which further complicates efforts to pursue justice for those affected by the conflict.
Israel's use of AI in targeting represents a significant development in the evolution of warfare. While the Gospel and associated systems have drawn criticism over their reliability and potential for abuse, they offer clear military advantages in speed and efficiency. Nor is the practice limited to Israel: other nations, including the United States, are actively exploring similar systems. As AI continues to advance, it is crucial to address the ethical and practical concerns associated with its use in warfare. Future conflicts may involve autonomous systems that identify and engage targets without human intervention, raising deeper questions about the nature of war and its impact on civilian populations.
George Smith, with over a decade in tech journalism, excels in breaking down emerging tech trends. His work, spanning tech blogs and print, combines in-depth analysis with clarity, appealing to a wide readership. George's pieces often explore technology's societal impact, showcasing his foresight in industry trends.