Activists from around the world have gathered in Costa Rica to take part in the fourth global meeting of the Stop Killer Robots Coalition, devoted to digital dehumanisation.

The international coalition, which brings together 160 civil society organisations from around the world, is pushing for a binding legal instrument to ban fully autonomous weapons systems.

The activists’ meeting opened with a series of panels on the current landscape of autonomous weaponry and on possible steps towards an effective process for negotiating an Autonomous Weapons Treaty.

After two days of debate and conclusions, the “Regional Conference on the Social and Humanitarian Impact of Autonomous Weapons”, sponsored by the Costa Rican government, will take place on 23 and 24 February.

The initiative, according to the Central American country’s foreign ministry, is the first of its kind in the region and will bring together government experts from Latin America and the Caribbean and from observer countries, as well as representatives of the International Committee of the Red Cross and the United Nations, academics, and civil society leaders.

This exchange will serve as a basis for regional dialogue on the humanitarian and social impacts of autonomous weapons, as well as the challenges that this type of weaponry poses to peace, security and humanitarian law.

Digital dehumanisation

Digital dehumanisation is the process by which humans are reduced to data, which is then used to make decisions and/or take actions that negatively affect our lives.

The digitisation of information about people, and the use of automated decision-making technology based on that information, are not always problematic. They do, however, carry an additional risk of becoming dehumanising and causing automated harm.

This process strips people of their dignity, degrades their humanity, and removes or replaces human involvement and responsibility through automated decision-making. Automated harm occurs when these decisions affect us negatively.
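To make this process concrete, here is a minimal, purely illustrative Python sketch of the pattern described above. It assumes a hypothetical scoring system: all names, features, and thresholds are invented, and no real product or weapon system is depicted. It shows how a person can be reduced to a handful of numbers and subjected to a consequential decision with no human in the loop.

```python
# Illustrative sketch only: a toy, hypothetical example of how a person can be
# "reduced to data" and subjected to a fully automated decision. All names,
# features, and thresholds are invented for illustration.

from dataclasses import dataclass


@dataclass
class Profile:
    """A human being as the automated system 'sees' them: a handful of numbers."""
    age: int
    postcode_risk_score: float    # derived from where they happen to live
    online_activity_score: float  # derived from scraped behavioural data


def automated_decision(profile: Profile) -> str:
    """Makes a consequential decision (e.g. denying a service) from data alone.

    No human reviews the outcome, and the person has no way to contest how
    their 'scores' were produced -- the dehumanising step described above.
    """
    risk = 0.6 * profile.postcode_risk_score + 0.4 * profile.online_activity_score
    return "DENY" if risk > 0.5 else "APPROVE"


# A person becomes a feature vector; the system acts on it automatically.
applicant = Profile(age=34, postcode_risk_score=0.7, online_activity_score=0.4)
print(automated_decision(applicant))  # -> "DENY", with no human accountable
```

The point of the sketch is not the arithmetic but the structure: the harm lies in delegating a decision that affects a life to a process that never engages with the person behind the data.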

This is precisely what happens when “artificial intelligence” processes are applied to weapons development.

Discussions at the United Nations

For more than nine years, autonomous weapons systems have been the subject of international debate in various fora, including the UN Human Rights Council, the UN General Assembly’s First Committee on Disarmament and International Security, and the Group of Governmental Experts on Emerging Technologies in the Field of Lethal Autonomous Weapons Systems (GGE on LAWS) of the Convention on Certain Conventional Weapons (CCW).

In these discussions, states, UN agencies, international organisations and non-governmental organisations have highlighted the various serious ethical, moral, humanitarian and legal implications of artificial intelligence (AI) and autonomous weapon systems. Despite a majority of states supporting the negotiation of a legal instrument, the Sixth CCW Review Conference in December 2021 failed to agree on a mandate to work towards any form of regulation.

The report on artificial intelligence and automated decisions

At the initiative of the Stop Killer Robots campaign, the Automated Decision Research team produced a report, released in September 2022, which summarises the current state of play and the common challenges to be addressed in both the civilian and military spheres in the face of the risks posed by these rapid recent developments.

In the report’s executive summary, the researchers summarise the key messages as follows:

Firstly, it notes that “the extent to which many states and international bodies recognise the serious risks and challenges posed by the use of AI and automated decision-making technologies in the civilian space should be seen as validation of parallel and related concerns in the military space”.

Secondly, it notes that “given the nature and magnitude of the harms at stake in automated processing in the context of military objectives, and the difficulties in applying civilian oversight mechanisms in military space, the challenges associated with autonomous weapon systems are particularly acute.”

It goes on to state that “The development of human rights-focused responses to AI and automated decision-making in civilian space should prompt states to pay attention to the rights of those affected as a fundamental starting point for generating the necessary norms in military space.”

Finally, it concludes that “The importance of accountability and responsibility for the use of autonomous weapon systems has implications for how certain rules should be developed to avoid an erosion of accountability and to ensure the protection and enforcement of international humanitarian law and fundamental human rights norms.”