MORE than 100 of the world’s leading technologists have written an open letter to governments around the world warning of the risks of an artificial intelligence (AI) arms race in autonomous weapons – or killer robots.

CEOs from 116 tech companies – including Elon Musk of SpaceX and Tesla, and Mustafa Suleyman, founder and head of applied AI at DeepMind in the UK – urge governments to address their concerns over the development and use of fully autonomous weapons.

Under the banner of the Campaign to Stop Killer Robots, their letter says: “Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.

“These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.”

The letter adds to the growing number of warnings given by business leaders and AI experts on the consequences of misusing the technology. In January, 100 AI researchers and leaders in economics, law, ethics and philosophy met at a conference organised by the Future of Life Institute to address and formulate principles of beneficial AI.

They developed the Asilomar AI Principles as guidelines to govern the technology's future development. These include avoiding an arms race in lethal autonomous weapons, and the principle that superintelligence should be developed only in the service of widely shared ethical ideals, for the benefit of all humanity rather than of any one state or organisation.

Musk, physicist Stephen Hawking, Apple co-founder Steve Wozniak and futurologist Ray Kurzweil are among the 3500 signatories to the principles.

SpaceX chief Musk is an outspoken critic of the risks AI poses to humanity, and has said his worst nightmare scenario is deep intelligence in the network – robots and computers able to learn at an extremely fast rate, potentially without human oversight.

“What harm could deep intelligence in the network do?” he asked US government delegates at a conference last month. “Well, it could start a war by doing fake news and spoofing email accounts and just by manipulating information.”

The letter describes the ominous risks of robotic weaponry and says there is an urgent need for strong action.

It is aimed at a group of UN officials considering adding robotic weapons to the UN’s Convention on Certain Conventional Weapons. The 1981 convention and parallel treaties restrict chemical weapons, blinding laser weapons, mines, and other arms deemed to cause “unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately”.

The letter was released yesterday at an AI conference in Melbourne, Australia.

Robotic warriors could arguably reduce casualties among human soldiers from the world's wealthiest and most advanced nations, but the technologists' headline concern is the risk to civilians: the fear that such weapons could be turned against innocent populations or hacked.

Toby Walsh, the Australian professor who organised the conference, warned that such an “arms race” was already under way.

According to Human Rights Watch, the UK, US, China, Israel, South Korea and Russia are already building arsenals of autonomous weapons or their precursor technologies, with systems available or under development from firms including BAE Systems, Raytheon, Dassault and MiG.

Ryan Gariepy, the founder of Clearpath Robotics, who was the first person to sign the letter, said: “Unlike other potential manifestations of AI, which still remain in the realm of science fiction, autonomous weapons systems are on the cusp of development right now and have a very real potential to cause significant harm to innocent people along with global instability.”