WORLD-LEADING scientists have called for a ban on killer robots in case they wipe out the human race.

Weapons built with Artificial Intelligence (AI) could run amok just like Ultron in the recent Avengers movie, they claim.

More than 1,000 scientists and technology experts have backed the call, which was presented in a letter at the International Joint Conference on Artificial Intelligence in Buenos Aires this week.

Apple co-founder Steve Wozniak, scientist Stephen Hawking and entrepreneur Elon Musk are among those who have signed the letter brought to delegates at the Buenos Aires conference.

Google DeepMind chief Demis Hassabis, MIT professor Noam Chomsky and consciousness expert Daniel Dennett also backed the call.

The boffins want a ban on offensive autonomous weapons, which they say would be “beyond meaningful human control”.

“Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons – and do not want others to tarnish their field by doing so,” their letter states.

“Artificial Intelligence technology has reached a point where the deployment of [autonomous] systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”

The letter added that AI should be used to benefit the human race rather than to advance military capabilities.

“Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control,” the letter concluded.

The United Nations has also raised the possibility of a ban on some autonomous weapons.

BIGGEST THREAT

PROFESSOR Hawking, along with other experts, has previously voiced concerns about AI, and he is currently fielding questions on the issue in a Reddit Ask Me Anything session.

Hawking has warned: “The development of full artificial intelligence could spell the end of the human race.”

While he concedes that some types of AI have proved extremely useful, he believes there is the potential for a machine that could supersede humans.

“It would take off on its own, and re-design itself at an ever increasing rate,” he said. “Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

Microsoft’s Bill Gates has expressed similar concerns.

“I am in the camp that is concerned about super intelligence,” he says.

“First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well.

“A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

Last year Musk, boss of rocket maker SpaceX and car firm Tesla, claimed that AI was the “biggest existential threat” facing humankind.

“With artificial intelligence, we are summoning the demon,” he said. “In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.”

Sir Clive Sinclair, inventor of the ZX Spectrum home computer, goes further, saying that the destruction of humanity by AI is unavoidable.

“Once you start to make machines that are rivalling and surpassing humans with intelligence, it’s going to be very difficult for us to survive,” he said. “It’s just an inevitability.”

This total annihilation may not be too far away either, according to Professor Nick Bostrom of Oxford University, who predicts an AI-led apocalypse could finish off the human race within a century.

Ray Kurzweil, Google’s director of engineering, agrees it will be extremely difficult to create an algorithmic ethical code that would be able to rein in super-smart software.

FEW SAFEGUARDS

SOME scientists argue the risks can be averted by being proactive, although others believe that not enough priority is being given to working out safeguards.

“It’s best to do that before the technologies are fully developed and AI and robotics are certainly not fully developed yet,” said Neil Jacobstein, AI expert at California’s Singularity University. “The possibility of something going wrong increases when you don’t think about what those potential wrong things are.

“I think there is a great opportunity for us to be proactive about anticipating those possible negative risks and doing our best to develop redundant, layered thoughtful controls for those risks.”

Murray Shanahan, professor of cognitive robotics at London’s Imperial College, believes there is no need to panic.

“I do not think we are about to develop human-level AI within the next 10-20 years,” he said. “On the other hand it’s probably a good idea for AI researchers to start thinking about the issues that Stephen Hawking and others have raised.”

Unlike Bill Gates, Microsoft Research chief Eric Horvitz believes AI is not a threat.

“There have been concerns about the long-term prospect that we lose control of certain kinds of intelligences. I fundamentally don’t think that’s going to happen,” he said.

“I think that we will be very proactive in terms of how we field AI systems, and that in the end we’ll be able to get incredible benefits from machine intelligence in all realms of life, from science to education to economics to daily life.”

Rollo Carpenter, creator of Cleverbot, a program designed to talk like people, believes humans will remain in charge for some time, using AI to solve many of the world’s problems.

However, Carpenter added: “We cannot quite know what will happen if a machine exceeds our own intelligence, so we can’t know if we’ll be infinitely helped by it, or ignored by it and sidelined, or conceivably destroyed by it.”