28 February 2013

Milgram as NAO avatar



Treating Machines Like Social Beings


Many people have studied machine-human relations, and at this point it's clear that without realizing it, we often treat the machines around us like social beings.

Consider the work of Stanford professor Clifford Nass. In 1996, he arranged a series of experiments testing whether people observe the rule of reciprocity with machines.
"Every culture has a rule of reciprocity, which roughly means, if I do something nice for you, you will do something nice for me," Nass says. "We wanted to see whether people would apply that to technology: Would they help a computer that helped them more than a computer that didn't help them?"

So they placed a series of people in a room with two computers. The people were told that the computer they were sitting at could answer any question they asked. In half of the sessions, the computer was incredibly helpful; in the other half, it did a terrible job.

After about 20 minutes of questioning, a screen appeared explaining that the computer was trying to improve its performance. The humans were then asked to do a very tedious task that involved matching colors for the computer. Now, sometimes the screen requesting help would appear on the computer the human had been using; sometimes the help request appeared on the screen of the computer across the aisle.

"Now, if these were people [and not computers]," Nass says, "we would expect that if I just helped you and then I asked you for help, you would feel obligated to help me a great deal. But if I just helped you and someone else asked you to help, you would feel less obligated to help them."

What the study demonstrated was that people do in fact obey the rule of reciprocity when it comes to computers. When the first computer had been helpful, people did far more of the boring task for it than for the other computer in the room. They reciprocated.

"But when the computer didn't help them, they actually did more color matching for the computer across the room than the computer they worked with, teaching the computer [they worked with] a lesson for not being helpful," says Nass.

Very likely, the humans involved had no idea they were treating these computers so differently. Their own behavior was invisible to them. Nass says that all day long, our interactions with the machines around us — our iPhones, our laptops — are subtly shaped by social rules we aren't necessarily aware we're applying to nonhumans.

"The relationship is profoundly social," he says. "The human brain is built so that when given the slightest hint that something is even vaguely social, or vaguely human — in this case, it was just answering questions; it didn't have a face on the screen, it didn't have a voice — but given the slightest hint of humanness, people will respond with an enormous array of social responses including, in this case, reciprocating and retaliating."

So what happens when a machine begs for its life — explicitly addressing us as if it were a social being? Are we able to hold in mind that, in actual fact, this machine cares as much about being turned off as your television or your toaster — that the machine doesn't care about losing its life at all?

Bartneck's Milgram Study With Robots
