How Rude Humanoid Robots Can Mess With Your Head

A pair of clever studies show how the development of advanced social robots is far outpacing our understanding of how they’re going to make us feel.

The little humanoid robot’s name is Meccanoid, and it is a scoundrel. The well-meaning human test subject asks the robot: If you were to make a friend, what would you want them to know?

“That I’m bored,” Meccanoid says.

Alright, let’s start over. A new participant asks Meccanoid the same question, but now the robot is programmed to be nice.

What does this robot want the friend to know? “I already like him a lot,” Meccanoid says. Much better.

Researchers in France have been exposing human subjects to nasty and pleasant humanoids for good reason: They’re conducting research into how a robot’s attitude affects a human’s ability to do a task. On Wednesday, they published their work in Science Robotics, in an issue that also includes a study of how robots can pressure children into making certain decisions. The pair of studies show how the development of advanced social robots is far outpacing our understanding of how they’re going to make us feel.

First, back to Meccanoid. The participants began with an exercise in which they had to name the color a word is printed in, as opposed to the word itself. Take, for instance, the word “blue” printed in green ink: the temptation is to blurt out “blue,” when you need to say “green.” This is known as a Stroop task.
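For readers who want to picture the mechanics, here is a minimal sketch of a Stroop-style trial in Python. The word list, console prompt, and timing are illustrative assumptions, not details from the study.

```python
import random
import time

# Color words used for both the printed word and the ink color (assumed set).
WORDS = ["blue", "green", "red", "yellow"]

def make_incongruent_trial():
    """Pick a word and an ink color that deliberately mismatch."""
    word = random.choice(WORDS)
    ink = random.choice([c for c in WORDS if c != word])
    return word, ink

def run_trials(n=10):
    correct = 0
    response_times = []
    for _ in range(n):
        word, ink = make_incongruent_trial()
        start = time.time()
        answer = input(f'The word "{word}" appears in {ink} ink. Name the ink color: ')
        response_times.append(time.time() - start)
        correct += answer.strip().lower() == ink
    mean_rt = sum(response_times) / len(response_times)
    print(f"Accuracy: {correct}/{n}, mean response time: {mean_rt:.2f}s")

if __name__ == "__main__":
    run_trials()
```

The point of the task is the conflict between reading and color naming; the response-time line in the sketch is simply one way a researcher might measure how alert a participant is.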

The participants initially did the test on their own, and then had a little conversation with Meccanoid—questions volleyed back and forth between the bot and the participant. But each participant only got to experience one of Meccanoid’s mercurial moods.

Then they returned to the Stroop testing while the robot watched. “What we've seen is that in the presence of the bad robot, the participants improved their performance significantly compared to the participants in the presence of the good robot,” says study lead author Nicolas Spatola, a psychologist at the Université Clermont Auvergne in France.

So what’s going on here? “When we were doing the experiment, we saw how a person could be emotionally impacted by the robot,” says Spatola. “The bad robot is seen as more threatening.” Despite the fact that this is a nonsentient robot, its human beholder seems to actually care what and how it thinks. Well, kinda. “Because the robot is bad, you will tend to monitor its behavior and its movement more deeply because he's more unpredictable,” says Spatola. That is, the participants who tangled with the bad robot were more alert, which may have made them better at the test.

In the second study published Wednesday, the robots were much less ornery. Three small humanoids, the Nao model from SoftBank Robotics, sat around a table (adorably, the machines sat on booster seats when interacting with adults, to bring them up to the same level as the big kids). They looked at a screen that showed a single vertical line on the left, and three vertical lines of various lengths on the right. Participants had to choose which of those three lines matched the length of the one on the left.

But first, their robot peers had to choose. The autonomous machines, which ran on custom software, all gave the wrong answer two-thirds of the time, but that didn’t faze the adult participants. Compared with a group who did the same experiment with human adults giving the wrong answers in place of the robots, these participants conformed to their fellow humans far more than they did to the machines.
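To make the setup concrete, here is a toy sketch of a single line-matching trial with a robot “peer” that errs on a fixed share of answers. The line lengths, the mismatch ratios, and the error-rate constant are assumptions for illustration, not parameters from the paper.

```python
import random

def make_line_trial():
    """One target line, three comparison lines; exactly one comparison matches."""
    target = random.uniform(5.0, 15.0)  # arbitrary length units
    match_index = random.randrange(3)
    options = [target if i == match_index else target * random.choice([0.7, 1.3])
               for i in range(3)]
    return target, options, match_index

def robot_answer(correct_index, error_rate=2/3):
    """A robot 'peer' that names the wrong line on a set share of trials."""
    if random.random() < error_rate:
        return random.choice([i for i in range(3) if i != correct_index])
    return correct_index

if __name__ == "__main__":
    target, options, correct = make_line_trial()
    shown = ", ".join(f"{length:.1f}" for length in options)
    print(f"Target line: {target:.1f}  |  Options: {shown}")
    print(f"Robot picks option {robot_answer(correct) + 1}; the correct match is option {correct + 1}")
```

The question the researchers cared about is not the robot’s answer but what the human sitting next to it says afterward.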

Children, on the other hand, followed the robots down the path of incorrectness. Fully three quarters of their answers matched the robots’ incorrect answers. In other words, the researchers say, the kids gave in to peer pressure. Children, after all, are prone to suspend disbelief, says Bielefeld University's Anna-Lisa Vollmer, lead author on the study. "We know something similar is going on with robots: rather than seeing a robot as a machine consisting of electronics and plastic, they see a social character," she says. "This might explain why they succumb to peer pressure by the robots."

Is this really peer pressure, though, if the kids’ peers are robots? This is where things get tricky. “I think that makes a big assumption about the children’s reactions, because it doesn't necessarily have to have that social aspect of peer pressure,” says Julie Carpenter, who studies human-robot interaction, but who wasn’t involved in these studies. “Children and adults can over-rely on technology.” Maybe the kids didn’t think of the humanoids as peers, but simply as useful technological tools.

Still, both these robots and the mean/nice Meccanoid are eliciting a reaction from the human subjects. Which is what’s so interesting and daunting about a near future in which we interface with machines, particularly humanoids, more and more. What these studies suggest is that humanoid robots can manipulate us in complex ways. And scientists are just barely beginning to understand those dynamics.

Consider a super smart robotic doll that a kid develops an intense bond with. Great, fine, kids have been loving dolls for millennia. But what if that robot doll starts to exploit that bond by, say, trying to convince the kid to spend $19.99 to upgrade its software to be even smarter and even more fun?

Machines don’t just do things out of the blue. Someone at some point has programmed them to behave a certain way, whether that’s picking the wrong line on a screen or just being mean or bilking unsuspecting kids. “What you have to ask yourself is, what are the robot's goals?” says Carpenter. “Are they aligned with my own?”

Something to keep in mind the next time a robot seems a little too rude.

