By Roshan Philip

Disarming the Terminator: Ethics in AI-Powered Warfare

Updated: May 26


If a soldier shoots a civilian, we know to hold them accountable. What if, instead, the bullet was fired by an AI-powered drone? With the arrival of AI in the military, we are left uncertain about where the future of mankind is headed. At each stage in the evolution of weapons, mankind has feared the implications of what it was about to witness. We stand at a similar crossroads today with the assimilation of AI into warfare. Until recently, man was an essential component of the destruction and chaos of war. AI eliminates this need and automates the process.

 

With the Ukraine-Russia conflict being dubbed “the world’s first AI war”, we are forced to confront the ethics of integrating AI into warfare. From facial recognition and automated cyberattacks to AI-powered drones and artillery, the face of warfare has been altered altogether. Given such unprecedented applications of technology in security and defence, we must consider concerns around accountability, unintended harm, and human control. We must also account for the possibility of an escalating AI arms race, with adversaries competing for a military edge over the rest of the world. This could usher in the next wave of increasingly destructive, autonomous weapons systems powered by artificial intelligence.

 


[Image: AI in warfare. Designed by Freepik]


Who do we hold accountable?


One of the major concerns arising over the use of AI in defence is accountability. Under traditional systems of security and defence, there existed a clear hierarchy, a chain of accountability. With the onset of AI-powered weapons capable of “thinking”, these lines blur. Who do we hold accountable when such machinery causes unintended harm or civilian casualties? Fears remain over the potential for these systems to violate the humanitarian laws of warfare. We have seen this play out in real situations, such as the Israeli military’s use of an AI-powered database, Lavender, in its bombing campaigns in Gaza. This drew heavy scrutiny, with critics claiming that Israel allowed large numbers of innocent civilians to be killed in cold blood.

 

Testimonies of soldiers involved in the conflict reflect how technologically advanced militaries have come to place more trust in “statistical mechanisms than a grieving soldier”. In pursuit of efficiency and ease, we are potentially ceding accountability to technology incapable of seeing individuals as anything more than potential targets. Under traditional systems, a person was at the helm, deciding strategy and weighing factors a piece of technology could overlook, such as avoiding civilian casualties and minimising the destruction of life. Similarly, in the Ukraine-Russia conflict, Ukrainian AI-powered drones have been recorded attacking and destroying oil and gas plants in Russia, with profound implications for the Russian economy and for those living in and around the targeted plants. As the capabilities of AI increase, ensuring transparent accountability and maintaining human judgement are of paramount importance.


 

Unintended harm and unpredictability


A troubling prospect of AI-powered weaponry is the potential for technical failures and unintended harm. Harm occurs under human supervision too, but the scale of potential destruction and loss of life demands that we examine these systems far more carefully and account for their biases. An AI-powered weapon trained on biased data, or left unsupervised, could produce discriminatory targeting decisions with catastrophic effects on civilians and combatants alike. Such biases reflect the social realities in which the systems were built: if the data on which an AI is trained disproportionately labels people of a particular skin colour as dangerous, the AI internalises this bias and may deem people of that skin colour to be potential threats.
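To make the mechanism concrete, here is a minimal, purely illustrative sketch in Python. The data is entirely synthetic and the “threat” label is hypothetical; no real system or dataset is implied. It shows how bias in training labels carries straight through to a model’s predictions:

# Toy illustration: a model trained on biased labels reproduces that bias.
# All data here is synthetic; the "threat" label is hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# A sensitive attribute (group 0 vs. group 1) and one genuine behavioural
# signal that is identically distributed across both groups.
group = rng.integers(0, 2, size=n)
behaviour = rng.normal(size=n)

# Biased historical labels: genuinely anomalous behaviour is flagged, but
# members of group 1 are additionally flagged 30% of the time regardless
# of what they actually did.
flagged = (behaviour > 1.5) | ((group == 1) & (rng.random(n) < 0.3))

X = np.column_stack([group, behaviour])
model = LogisticRegression().fit(X, flagged.astype(int))

# Identical behaviour, different group: the model assigns a far higher
# "threat" probability to group 1, having internalised the labelling bias.
for g in (0, 1):
    p = model.predict_proba([[g, 0.0]])[0, 1]
    print(f"group {g}: predicted threat probability = {p:.2f}")

The behavioural signal is identical for both groups; only the historical labels differ, yet the model’s output is sharply skewed against one group.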

 

Concerns grow not only over biased or discriminatory data, but also over the arenas in which these AIs are deployed. Studies of facial recognition software have found markedly higher error rates for Black women; what is potentially more dangerous is where such technology ends up. If such AI were used to power weaponry, it could lead to unwarranted targeting and racial profiling of Black people. This could entrench pre-existing power imbalances, perpetuate systemic injustices, and escalate conflicts, threatening global stability.


 

Where are we headed?


Because integrating AI into security and defence could have catastrophic effects on humanity, we need stringent protective measures: ethical guidelines, transparency and accountability, human control and risk assessment, and, above all, international cooperation. A robust set of ethical guidelines, strictly followed by all parties employing AI in weaponry, would go a long way towards safe integration. This will not be possible without an international body that oversees the use of AI in security, one tasked with risk assessment and regular audits of AI-driven artillery and defence systems.

 

As we stand in an ever-evolving world of technology, where AI inevitably integrates itself into every field and industry, nowhere are its potential effects more destructive than in security and defence. Left unchecked, it could cause massive destruction to human life, and it is therefore essential to create regulatory bodies that keep this integration in check. As warfare begins to transcend the human, so too must our moral and ethical guardrails evolve, to ensure that the progress we see today does not become a Pandora’s box for the future.



