At 3:30 PM on November 27, 2020, a security convoy turned onto Tehran’s Imam Khomeini Boulevard. The VIP being protected was the Iranian scientist Mohsen Fakhrizadeh, widely regarded as the head of Iran’s secret nuclear weapons program. He was driving his wife to their country property, with bodyguards traveling in other vehicles. They were near the house when the killers struck.

Several shots struck Fakhrizadeh’s black Nissan, forcing it to stop. The gun fired again, hitting the scientist in the shoulder and forcing him out of the vehicle. As Fakhrizadeh lay in the open, the assassin fired the fatal shots, leaving his wife, sitting in the passenger seat, uninjured.

Then something strange happened. A pickup truck parked at the side of the road exploded for no apparent reason. Searching the wreckage, Iranian security forces found the remains of a robotic machine gun, fitted with multiple cameras and a computer-controlled mechanism to pull the trigger. Was Fakhrizadeh assassinated by a robot?

A subsequent investigation by the New York Times revealed that the robotic machine gun was not fully autonomous. Instead, an assassin some 1,000 km away was fed images from the truck and decided when to pull the trigger. But AI software compensated for the target’s movement during the 1.6 seconds it took for images from the truck to travel by satellite to the assassin and for the trigger signal to return.
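The report does not describe how that compensation worked. As a purely illustrative sketch of the general idea – predicting where a moving target will be once a known round-trip delay has elapsed – one might imagine something like the following; the constant-velocity model, function names and numbers are assumptions for illustration, not details of the actual system.

```python
# Purely illustrative: constant-velocity lead prediction under a fixed
# round-trip delay. The 1.6 s figure comes from the reporting; everything
# else (the model, names and numbers) is a hypothetical simplification.

ROUND_TRIP_DELAY_S = 1.6  # image out via satellite + trigger signal back

def predict_position(position, velocity, delay=ROUND_TRIP_DELAY_S):
    """Estimate where a target moving at constant velocity will be after `delay` seconds."""
    x, y = position
    vx, vy = velocity
    return (x + vx * delay, y + vy * delay)

# Example: a target 2 m to the right of the aim point, drifting at 0.5 m/s.
aim_point = predict_position(position=(2.0, 0.0), velocity=(0.0, 0.5))
print(aim_point)  # (2.0, 0.8): aim where the target will be, not where it was last seen
```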

It’s the stuff of nightmares, and footage from the war in Ukraine does nothing to ease the fear. Drones are now ubiquitous in warfare, from the Turkish-made Bayraktar TB2s used to attack Russian forces occupying Snake Island, to maritime drones that strike Russian ships in Sevastopol harbor, to modified quadcopters that drop grenades on unsuspecting infantry and other targets. And if the footage circulating on the internet is anything to go by, things could get worse.

The funeral ceremony for Iranian nuclear scientist Mohsen Fakhrizadeh, who was killed by a robotic machine gun operated by an assassin 1,000 km away. Photo: Anadolu Agency/Getty Images

In a video posted on Weibo, a Chinese defense contractor demonstrates a drone depositing a robot dog on the ground. The robot comes to life; it has a machine gun mounted on its back. In another video, a commercially available robot dog that has been modified by a Russian user fires a gun, the recoil rocking the robot back onto its hind legs.

In response to these horrific videos, Boston Dynamics and five other robotics companies issued an open letter in October: “We believe that adding weapons to robots that are remotely or autonomously operated, widely available to the public, and capable of navigating to previously inaccessible locations where people live and work raises new risks of harm and serious ethical issues. Weaponized applications of these newly capable robots will also harm public trust in the technology in ways that damage the tremendous benefits they will bring to society.”

In a statement to the Observer, the company further explained: “We have seen a tentative increase in attempts by individuals to weaponize commercially available robots, and this letter indicates that the broader advanced mobile robotics industry opposes weaponization and is committed to preventing it. We hope that the strength of our numbers will encourage policymakers to engage with this issue to help promote the safe use of mobile robots and prevent their misuse.”

However, Boston Dynamics is effectively owned by Hyundai Motor Group, which bought a controlling interest in the company in June 2021, and another part of that group, Hyundai Rotem, has no such qualms. In April this year, Hyundai Rotem announced a collaboration with another South Korean firm, Rainbow Robotics, to develop multi-legged defense robots. The promotional image shows a robot dog with a gun attached.

In addition, defense analyst and military historian Tim Ripley questions what the Boston Dynamics commitment means in practice. Even if you don’t arm these robots, they can still be tools of war, he says.

“If the robot is a surveillance drone and it finds a target and you fire artillery at it and it kills people, that drone is as much a part of a weapon system as a missile on a drone. It is still a part of the kill chain,” he says.

Drone surveillance plays an important role in the Ukraine war, being used by both sides to track enemy movements and locate targets for artillery bombardment.


Computerized military hardware consists of two parts: the hardware itself and the control software. Although robots are not yet a common feature on the battlefield beyond drones, increasingly intelligent software is already in widespread use.

“Our system already has a whole range of autonomy built into it. It is considered essential because it enables humans to make quick decisions,” says Mike Martin, senior war studies fellow at King’s College, London.

Dogs of War: A young girl and her mother interact with a robot dog made by Ghost Robotics in Seoul, South Korea. Photo: AFP/Getty Images

He gives the example of an Apache helicopter scanning the landscape for heat signatures. The onboard software quickly identifies them as potential targets, can recommend how those targets should be prioritized, and then presents that information to the pilot, who decides what to do next.

If defense conventions are anything to go by, the military has an appetite for more such systems, especially if they can be paired with robots. US firm Ghost Robotics makes robot dogs, or quadruped robots as the industry calls them. As well as being marketed as surveillance devices to aid in patrolling potentially hostile areas, they are also being touted as killing machines.

At the Association of the United States Army’s annual conference in October 2021, Ghost Robotics showed off a quadruped with a gun mounted on top. The gun is manufactured by the US company Sword Defense Systems and is called the Special Purpose Unmanned Rifle (Spur). On the Sword Defense Systems website, the Spur is described as “the future of unmanned weapon systems, and that future is now”.

In the UK, the Royal Navy is currently testing an autonomous submarine called Manta. The nine-meter-long vehicle is expected to carry sonar, cameras, and communications and jamming equipment. Meanwhile, UK troops are currently taking part in war games with their American counterparts in the Mojave Desert. The exercise, known as Project Convergence, focuses on the use of drones, other robotic vehicles and artificial intelligence to “help make British forces more lethal on the battlefield”.

Yet in today’s most sophisticated systems, humans remain involved in decision-making, at one of two levels. In an “in the loop” system, the computer selects potential targets and presents them to a human operator, who then decides what to do. In an “on the loop” system, the computer goes a step further, telling the operator which target it recommends engaging first; the human can still override the machine, but the machine plays a more active part in the decision. The Rubicon yet to be crossed is the fully autonomous system, which selects and prosecutes its own targets without human intervention.
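As a rough illustration of the distinction – not modeled on any specific fielded system, and with all names hypothetical – the two modes might be sketched like this, with the human either initiating every engagement or merely retaining a veto:

```python
# Toy sketch of "in the loop" vs "on the loop" control. Hypothetical and
# simplified; not based on any real weapon system.

def rank_targets(targets):
    """Hypothetical prioritization: highest assessed threat first."""
    return sorted(targets, key=lambda t: t["threat"], reverse=True)

def in_the_loop(targets, operator_chooses):
    """Computer proposes; nothing is engaged until the human picks a target."""
    ranked = rank_targets(targets)
    return operator_chooses(ranked)          # may return None: engage nothing

def on_the_loop(targets, operator_vetoes):
    """Computer recommends and proceeds unless the human overrides it."""
    ranked = rank_targets(targets)
    recommendation = ranked[0] if ranked else None
    return None if operator_vetoes(recommendation) else recommendation
```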

“Hopefully we never get to that point,” says Martin. “If you hand over decision-making authority to autonomous systems, you lose control, and who’s to say that the system won’t decide that the best thing to do to prosecute war is to remove their own leadership?” It is a terrifying scenario that conjures images of the film Terminator, in which artificially intelligent robots decide to go to war to destroy mankind.

Feras Batarseh is an associate professor at Virginia Tech and co-author of AI Assurance: Towards Trustworthy, Explainable, Safe and Ethical AI (Elsevier). While he believes fully autonomous systems are a long way off, he cautions that artificial intelligence has reached a dangerous stage of its development.

“Technology is at a point where it’s not intelligent enough to be fully trusted, yet not so stupid that a human automatically knows it should be in control,” he says.

In other words, a soldier who trusts a current AI system may be putting themselves at greater risk, because today’s AI fails in situations it has not been explicitly trained to handle. Researchers refer to such unexpected situations or events as outliers, and war multiplies them.

“In war, unexpected things always happen. Outliers are the name of the game and we know that current AIs don’t do well with outliers,” says Batarseh.
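One common way researchers try to catch such failures is to flag inputs that sit far outside anything the system was trained on and hand those cases back to a human rather than trusting the model’s output. A minimal sketch of that idea, using a simple statistical distance test on made-up data, might look like this:

```python
import numpy as np

# Illustrative outlier check: flag inputs that sit far outside the training
# data so a human, not the model, handles them. Data and threshold are made up.

def fit_stats(training_data):
    """Record the mean and spread of each input feature seen in training."""
    data = np.asarray(training_data, dtype=float)
    return data.mean(axis=0), data.std(axis=0) + 1e-9  # avoid divide-by-zero

def is_outlier(sample, mean, std, z_threshold=3.0):
    """True if any feature lies more than `z_threshold` deviations from the mean."""
    z_scores = np.abs((np.asarray(sample, dtype=float) - mean) / std)
    return bool(np.any(z_scores > z_threshold))

mean, std = fit_stats([[1.0, 2.0], [1.2, 1.9], [0.9, 2.1]])
print(is_outlier([1.1, 2.0], mean, std))   # False: looks like the training data
print(is_outlier([9.0, -5.0], mean, std))  # True: defer to the human
```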

Even if we solve this problem, there are still major ethical issues to be faced. For example, how do you decide whether the AI made the right choice when it decided to kill? This is similar to the so-called trolley problem that currently dogs the development of automated vehicles. It comes in many forms but essentially asks whether, in an impending accident in which several people may be killed, it is morally right to take an action that saves them but risks killing a smaller number of others. Such questions reach a whole new level when the system involved is deliberately programmed to kill.

Sorin Matei of Purdue University, Indiana, believes that one step toward a solution is to make each AI warrior aware of its own vulnerability. The robot would then value its continued existence and, by extension, that of humans. Matei also suggests that this could lead to a more humane prosecution of war.

A member of a Ukrainian volunteer battalion learns how to fly a drone. Drone surveillance plays an important role in war, being used by both sides to identify targets. Photo: Sergey Kozlov/EPA

“We can program them to be as sensitive as the Geneva Conventions want human actors to be,” he says. “In order to trust AIs, we need to give them something that will make them feel threatened.”

But even a morally programmed killer robot – or a civilian robot for that matter – is vulnerable to one thing: hacking. “The thing about weapons system development is that you develop a weapon system and at the same time someone tries to counter it,” says Ripley.

With this in mind, an army of robot warriors would be an obvious target for an enemy cyberattack, one that could turn them against their creators and erase any morality from their microchip memories. The consequences could be dire. Yet manufacturers and defense contractors appear to be pushing hard in this direction.

To gain meaningful control over such formidable weapons, Martin suggests, we must look to military history.

“If you look at other weapons systems that humans are really afraid of – say, nuclear, chemical, biological – the reason we’ve reached arms control treaties on them is not because we stopped their development early, but because in the arms race their development got so scary that everybody went, ‘OK, right, let’s talk about it,’” says Martin.

Until that day comes, it seems certain that there are worrisome times ahead, as drones and robots and other unmanned weapons increasingly find their way onto the world’s battlefields.