19 September 2010

Cheating? Not me!!


Would it ever be acceptable for a robot or a computer program to deliberately deceive a human being?
A couple of researchers, funded by the U.S. Office of Naval Research, are training robots in the art of deception:
Robots can perform an ever-increasing number of human-like actions, but until recently, lying wasn’t one of them. Now, thanks to researchers at the Georgia Institute of Technology, they can. More accurately, the Deep South robots have been taught “deceptive behavior.”
This might sound like the recipe for a Philip K. Dick-esque disaster, but it could have practical uses. Robots on the battlefield, for instance, could use deception to elude captors. In a search and rescue scenario, a robot might have to be deceptive to handle a panicking human. For now, however, the robots are using their new skill to play a mean game of hide-and-seek.

Regents professor Ronald Arkin and research engineer Alan Wagner utilized interdependence theory and game theory to create algorithms that tested the value of deception in a given situation. In order for deception to be deemed appropriate, the situation had to involve a conflict between the deceiving robot and another robot, and the deceiving robot had to benefit from the deception. It carried out its dastardly deeds by providing false communications regarding its actions, based on what it knew about the other robot…
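To make the two conditions in that quote concrete, here is a minimal, hypothetical sketch of the decision logic, not the authors' actual algorithm: deceive only when the robots' interests conflict and the deceiver stands to gain, then pick a false signal based on a simple model of the other robot's beliefs. All names, payoffs, and the hide-and-seek details are assumptions for illustration.

```python
# Illustrative sketch only (not the Georgia Tech code): a toy rule for when a
# robot should deceive, plus a made-up "false communication" step for a
# hide-and-seek game. Payoff numbers and belief values are hypothetical.

from dataclasses import dataclass


@dataclass
class Outcome:
    deceiver_payoff: float  # payoff to the would-be deceiver
    other_payoff: float     # payoff to the other robot


def deception_warranted(truthful: Outcome, deceptive: Outcome) -> bool:
    """Deceive only if (1) the robots' interests conflict and
    (2) the deceiving robot actually benefits from the deception."""
    conflict = (deceptive.deceiver_payoff > truthful.deceiver_payoff
                and deceptive.other_payoff < truthful.other_payoff)
    benefit = deceptive.deceiver_payoff > truthful.deceiver_payoff
    return conflict and benefit


def false_communication(true_hiding_spot: str, seeker_beliefs: dict) -> str:
    """Pick the decoy the seeker is most inclined to believe, excluding the
    true spot -- a stand-in for acting 'based on what it knew about the
    other robot'."""
    decoys = {spot: p for spot, p in seeker_beliefs.items()
              if spot != true_hiding_spot}
    return max(decoys, key=decoys.get)


if __name__ == "__main__":
    # Hide-and-seek: payoffs if the hider signals truthfully vs. deceptively.
    truthful = Outcome(deceiver_payoff=0.0, other_payoff=1.0)   # hider gets found
    deceptive = Outcome(deceiver_payoff=1.0, other_payoff=0.0)  # hider escapes

    if deception_warranted(truthful, deceptive):
        beliefs = {"corridor A": 0.5, "corridor B": 0.3, "corridor C": 0.2}
        print("Signal decoy:", false_communication("corridor C", beliefs))
```

In this toy version the "false communication" is just naming a decoy location; in the actual experiment the hiding robot reportedly left misleading physical traces along a path it did not take.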

The full results of the Georgia Tech experiment were recently published in the International Journal of Social Robotics.

“The experimental results weren’t perfect, but they demonstrated the learning and use of deception signals by real robots in a noisy environment,” said Wagner. “The results were also a preliminary indication that the techniques and algorithms described in the paper could be used to successfully produce deceptive behavior in a robot.”


