“A robot may not injure a human being or, through inaction, allow a human being to come to harm.” Asimov’s First Law of Robotics

From autocorrect to social media algorithms, Artificial Intelligence (AI) is everywhere. The technology is advancing rapidly, and some researchers predict it could eventually surpass human levels of intelligence. In many ways, it can be a force for good. But what happens when the military designs AI to kill autonomously? The Campaign to Stop Killer Robots is taking a stand.

Weaponised artificial intelligence: as dangerous as nuclear weapons?

Elon Musk has claimed that artificial intelligence is far more dangerous to the future of humanity than nuclear weapons.

This claim may seem debatable when you consider the long-lasting destructive consequences of the Hiroshima and Nagasaki bombings, but the calculus changes when the artificial intelligence in question is weaponised.

Right now, the future of humanity hinges precariously upon world leaders’ willingness to “press the button”, an action that could result in nuclear war.

The decision to kill millions rests in human hands, and there have already been many near misses. The most famous came in 1983, when Soviet officer Stanislav Petrov judged a satellite warning of incoming American missiles to be a false alarm and declined to escalate.

In each of these near misses, human doubt, hesitation or plain common sense prevented that decision from being made.

The difference between a human and a robot is that a human can change their mind.

When faced with the task of killing another human being, artificial intelligence doesn’t possess that kind of humanity.

It’s not that these robots are evil – it’s simply that they don’t know what a human is. They don’t value human life, nor do they understand what it means to destroy a soul.

They are metal and wires, a binary on-off system that either acts or doesn’t. If artificial intelligence is programmed to kill, there is no grey area, no wiggle room for reconsideration.

The Campaign to Stop Killer Robots

It’s from this dystopian landscape that the Campaign to Stop Killer Robots emerges.

The campaign recently launched a new website, https://automatedresearch.org/, which provides reports and updates on the use of weaponised robot technology.

For now, the military claims that these robots are “here to help humans”.

Jody Williams, spokesperson for the Campaign to Stop Killer Robots, gives a chilling response: “And then they will be helping humans kill.”

For years, the military has psychologically conditioned soldiers to kill without remorse. Just read Gwynne Dyer’s The Shortest History of War.

With conditioning techniques ranging from humanoid-shaped targets for shooting practice to marching to the chant of “Kill. Kill. Kill.”, it would be naive to assume the military wouldn’t put killer robots to use.

To be fair, programming a robot to kill is arguably more ethical than brainwashing a person to do the same.

What do you think?