You have twenty seconds to comply

Robocop looks a pushover compared with a robotic security guard that shoots at will

It’s been sixty years since writer Isaac Asimov dreamed up his laws governing robot behaviour. But the message still hasn’t sunk in. Researchers in Thailand have developed a robot security guard that comes armed with a gun, and has no qualms about whom it shoots.

Called “Roboguard”, the gun-toting sentinel is designed as a cheap alternative to a human guard. It can be ordered to fire at will, or told to check first with a human via a secure Internet connection.

As they appeared in Asimov’s science fiction writings in 1942, the three laws of robotics were meant to prevent robots from harming people (see Table). Roboguard appears to have the potential to flout all three.

The machine was built by Pitikhate Sooraksa of King Mongkut’s Institute of Technology in Ladkrabang, Bangkok. It consists of a handgun and a small video camera mounted on a motorised holder that can direct them automatically.

“It has two modes, manual and automatic,” says Sooraksa. In manual mode, he can control the gun from a computer anywhere in the world. A laser pointer mounted on top of the gun marks its current target.

For automatic operation, Roboguard is fitted with infrared sensors that allow it to track people as they move. Sooraksa has password-protected the “fire” command for when the robot is operated over the Internet. “We think the decision to fire should always be a human decision,” he says. “Otherwise it could kill people.”

Isaac Asimov’s laws of robotics
  • First Law: A robot may not injure a human being, or through inaction, allow a human being to come to harm.
  • Second Law: A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.
  • Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

This doesn’t reassure Kevin Warwick, a cyberneticist at Reading University who has long warned of the dangers of robots gaining too much power over human beings. “Things can always go wrong,” he says. “You can never allow for all eventualities. We need to think about introducing laws like Asimov’s, but even then robots will find ways to get round them.”

Other researchers were equally concerned about Roboguard. “I find this quite horrific,” says Chris Czarnecki of the Centre for Computational Intelligence at De Montfort University in Leicester. “What about time delays across the Internet when it’s busy? What you’ll be seeing and what the gun’s pointing at will be two different things. You could end up shooting anything.”

Czarnecki also suspects the robot’s tracking system might be error-prone. “If the tracking’s infrared, what happens when the Sun comes out? It’s a big source of infrared radiation.”

At the moment, Roboguard is tooled up with nothing more powerful than an air gun. To test its accuracy, Sooraksa pinned balloons to the walls and took potshots at them from a computer. “It’s very similar to a real gun,” he says. It could easily be upgraded to a more powerful weapon such as a machine gun, he adds.

Sooraksa says Roboguard might be of interest to private companies, but sees the armed forces as a more likely buyer. “We’d like to show it to the military,” he says. “It should be in good hands.”

The current, static version of Roboguard could be just the start. Sooraksa hopes to develop his prototype further. “You could make it mobile, it could be designed as a walking system,” he says. “We have the technology.”

Author: Ian Sample

News Service: New Scientist Magazine

