Imagine being able to have a bodyguard who is fearless and tireless. He doesn't need sleep, can survive without food and water, and never needs to take time off. He's strong, agile, and obeys your every command. He can take a bullet for you and brush off a group of well-built attackers as if they were toothpicks. Just the stuff movies are made of.
But don't reach for the popcorn just yet. Today's robots have come of age and can be programmed to help fight crime, up to a point. TechCrunch.com has reported that a Palo Alto startup called Knightscope has developed a fleet of crime-fighting machines it hopes may keep us safe. Sarah Buhr explains that Knightscope's K5 security bots resemble a cross between R2-D2 and a Dalek from Doctor Who, and that the system behind these bots is a bit Orwellian.
The K5s have broadcasting and sophisticated monitoring capabilities to keep public spaces in check as they rove through open areas, halls, and corridors looking for suspicious activity.
The units upload what they see to a back-end security network using 360-degree high-definition and low-light infrared cameras, and a built-in microphone can be used to communicate with passersby. An audio event detection system can also pick up on activities such as breaking glass and send an alert to the system.
Malls and office buildings are also starting to employ K5 units as security assistants. But, as would be expected, robots will not be able to replace mall cops or security guards anytime soon.
Criminals or mischievous folk who try to kick or push the robots over may be shocked to find that the robots can talk back to them, capture their behavior on video, and alert the authorities behind the scenes.
Other robots can be far more dangerous. Robots that can kill are close to being licensed for use. Last October, Ellie Fagharifard, writing on dailymail.co.uk, warned that killer robots could become a reality if the UN delays talks. The caution came from Christof Heyns, the UN special rapporteur.
The issue is that the UK and the US are currently attempting to water down a pre-emptive ban on autonomous weapons at the UN General Assembly in New York. This is delaying an agreement on banning the technology, which could allow nations to acquire killer robots before any laws come into force.
Semi-autonomous weapons are already in use by countries such as South Korea and the U.K.
Heyns pointed out that a great deal of investment has gone into the development of artificial intelligence (AI) weapons, which is causing negotiations to stall. China reportedly wants to address both 'existing and emerging technologies,' but the UK and the US want only 'existing' weaponry to be included.
Professor Stuart Russell of the University of California at Berkeley said that drones will be limited only by their physical abilities, not by the shortcomings of AI, and he called on experts to take a stance to prevent the development of such unstoppable killing machines. Russell further argued that deadly drones are the likely 'endpoint' of the current technological march toward lethal autonomous weapons systems (LAWS).
Then there are the ethical and moral questions, regardless of the form the robots may take. Will robots get away with war crimes? If a robot unlawfully kills someone in the heat of battle, who would be liable for the death? How would robots make choices or exercise discretion?
Russell stressed that it is 'difficult or impossible' for current AI systems to satisfy the subjective requirements of the 1949 Geneva Conventions on humane conduct in war, which emphasize military necessity, discrimination between combatants and non-combatants, and proper regard for potential collateral damage.
All the more reason why robots should be restricted to their current 'day jobs' as closely watched human assistants. Should artificial intelligence fall into the hands of criminals or other rogues, or simply malfunction, the consequences could be devastating indeed.