
AI AS A WEAPON: ISRAEL IN GAZA AND THE ‘COLLATERAL DAMAGE’

Illustrative image. / Source: Freepik

Óscar Ruiz / Escudo Digital

It is a fact that artificial intelligence has revolutionized the way battles have been fought in recent years, especially through autonomous systems capable of identifying military targets. This arms race explains the international community's growing concern to find governance mechanisms capable of managing the use of this disruptive technology.

The use of artificial intelligence on the battlefield is not exactly new: air defense systems have relied on it for many years to select and engage targets autonomously. But the enormous advances of recent years, and above all the results seen in the wars currently being fought, have driven this technology to evolve at a pace unmatched in recent times.

Very current conflicts, such as the war in Ukraine, show us every day the enormous effectiveness of autonomous drones on both sides. Although the images of this war that usually reach Europe are those of unmanned devices with built-in explosives striking enemy troops or vehicles, this artificial intelligence technology is also being used for other military purposes, above all to improve the speed, accuracy and decision-making capacity of combat operations.

Deep learning algorithms can also process large volumes of data to improve the maintenance of weapons systems, anticipate adversary movements and refine tactics and strategies in military operations. Some of these autonomous systems likewise facilitate missions such as raids, bombings and state-sponsored assassinations that would otherwise be difficult to carry out because of the risk of losing pilots, the political or diplomatic fallout and the implications under international law. This paradigm shift raises numerous ethical and legal questions about responsibility for the use and control of autonomous weapons systems, and it is generating concern in the international community.

ALGORITHMS TARGETING PEOPLE IN GAZA

Israel’s ongoing war against Hamas in the Gaza Strip, which is costing the lives of thousands of civilians, including women and children, makes the need for mechanisms to regulate the military use of AI even more evident. Six Israeli intelligence officials claimed, in a report written by investigative journalist Yuval Abraham and published by the Israeli-Palestinian magazine +972, that artificial intelligence systems have played a key role in the identification, and possible misidentification, of tens of thousands of targets in Gaza.

The system in question is “Lavender”, an artificial intelligence system that, during the first days of the Israeli military operation in Gaza, used a database to identify up to 37,000 possible targets based on their alleged links to Hamas militants. The Israeli military may have approved following these Lavender-generated target lists to the letter, with no requirement for prior verification, giving the system a free pass to eliminate the people it designated as terrorists, although Israel has always officially denied using AI in the Gaza war.

This raises two important questions about the ethics of using weapons systems of this kind with artificial intelligence. The first is the “ease” and the “saving of human labor and time” that these systems provide: according to some Israeli officers, Lavender needed only about 20 seconds to designate each target and processed several dozen a day, with the human reviewer adding no real value and serving essentially as a seal of approval. The second worrying factor is the military's acceptance of high levels of civilian collateral damage when using these AI systems: high civilian casualties were accepted, quite consciously, in the pursuit of Hamas's middle- and lower-ranking commanders, without assessing the proportionality of each attack.

INTERNATIONAL REGULATION OF ARTIFICIAL INTELLIGENCE IN WARFARE

Currently, several countries have begun to establish official limits on the military use of artificial intelligence. One notable example is the “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy”, which has been endorsed by dozens of countries, including the United States and several members of the European Union. This declaration, while not legally binding, sets standards for the use of AI in the military field, emphasizing the need to comply with international legal obligations, conduct rigorous testing, and maintain human accountability in decision-making.

Specific limitations that have been discussed and proposed in various conferences and declarations include:

– Ensuring human control. It is essential that autonomous weapons systems maintain adequate levels of human judgment, especially in decisions involving the use of lethal force. This implies that critical decisions must be overseen and approved by humans.

– Rigorous evaluation and testing. Military AI systems must undergo rigorous testing to ensure that they operate within expected parameters and that risks of unintended behavior are minimized. This includes implementing emergency shutdown mechanisms to disable systems in the event of malfunction.

– Transparency and accountability. Countries should ensure transparency in the development and use of AI technologies in the military domain. This includes clear and auditable documentation of how the systems work, as well as adequate training of military personnel to understand their capabilities and limitations.

– Compliance with International Humanitarian Law. Any use of AI in military operations must strictly comply with international humanitarian law, which includes assessing the proportionality of attacks and minimizing collateral damage. This is critical to ensure that attacks do not cause a disproportionate number of civilian casualties.

– Prevention of bias and error. It is very important that AI systems are designed to avoid biases and errors that can lead to misidentification of targets, as has been observed in recent conflicts. This requires the development of ethical and regulatory standards to guide their use.

It seems obvious that there is international will, and these efforts reflect a growing global concern about the ethical and legal implications of using AI on the battlefield, underscoring the need for global governance that is as robust as possible to prevent abuses and protect civilian populations. The conflict between Ukraine and Russia, but especially Israel's war against Hamas in Gaza, is showing us that despite technological advances (or perhaps because of them), collateral damage can multiply if we do not legislate and keep AI within strict, monitored parameters. In the end, though, it is not AI that kills women and children, but the human being who pushes the button.

————

This article was originally published in Escudo Digital, with whose authorization we reproduce it here.
