The little dinosaur robot blinks its big blue eyes and stretches its neck. When it tilts its head toward the human, the human responds by stroking it lightly. The dinosaur closes its eyes in contentment. But when it suddenly freezes, the human flips it over and checks the battery.
Why do we react to social robots the way we do, sometimes treating them as real and sometimes recognizing them as machines? That’s the question at the heart of a new study by Herbert Clark, the Albert Ray Lang Professor Emeritus of Psychology at Stanford, and his long-time collaborator Kerstin Fischer, professor of language and technology interaction in the Faculty of Humanities at the University of Southern Denmark.
“It’s puzzling how people respond socially to things that are actually machines,” Fischer said. “There’s a lot of emotion and sociability in interacting with robots. How can we treat these machines as if they were living beings?”
Clark and Fischer argue that people interpret social robots, which are designed to interact with humans, as depictions of characters, much like puppets, stage actors, and ventriloquist dummies.
Their view is not without debate. Clark and Fischer’s paper was recently published in the journal Behavioral and Brain Sciences alongside open peer commentary, in which dozens of researchers from multiple disciplines around the world responded to their conclusions.
This discussion matters in a world where humans are encountering robots more often and robots are becoming more capable. Understanding how and why people interact socially with robots can inform how future robots are designed and how we interpret people’s reactions to them.
The basics of the depiction model
Anyone who has seen Michelangelo’s David knows that it is a carved block of marble. But the viewer also understands it as a depiction of a Biblical character preparing for battle with Goliath.
Similarly, Clark and Fischer said, people recognize that social robots are constructions of wires and sensors that depict characters such as small dinosaurs, pet dogs, or human caretakers and tutors. Yet when people interact with these robots, most will willingly treat them as the characters they portray.
“We understand what an image is, we understand what a painting is, we understand what a movie is, and therefore we understand what a robot is, because you construct a robot character in exactly the same way you construct a character in a painting or a movie,” Fischer said.
Clark said people are also aware that these characters are specifically designed to interact with humans.
“People understand that these robots are ultimately the responsibility of the people who are designing and operating them,” he says.
This knowledge matters when something goes wrong, such as when a robot shares incorrect information or hurts someone. People don’t hold the robot itself responsible; they blame its owners and operators, which underscores our understanding of robots as authored objects rather than persons.
A differing view from a Stanford colleague
One commentary that takes issue with the depiction model comes from Byron Reeves, the Paul C. Edwards Professor in the Department of Communication in the School of Humanities and Sciences, a fellow Stanford researcher who studies how humans psychologically process media characters and avatars, including robots.
People know exactly how to combine descriptions, gestures, glances, and mutual attention when communicating where things are. It would be really, really hard to get a robot to be as skilled at even something as simple as that.
—Herbert Clark
Professor Emeritus of Psychology
Reeves argues that while people may treat robots as depictions on reflection, their immediate responses are fast and automatic, with that reflective understanding arriving only afterward. It’s like jumping in fear when a dinosaur appears on a movie screen, then reminding yourself that it isn’t real.
“It’s really fast thinking. I mean, it’s milliseconds fast,” Reeves said. “To be fair, [Clark] thinks his depiction model applies to those quick reactions as well. I just don’t think it fits well with their main concept. Depiction emphasizes words like ‘appreciation,’ ‘interpretation,’ and ‘imagination,’ which seem slower and more deliberate. It’s a kind of literary response: ‘I’m going to actively pretend this is real because it’s fun.’”
In their response to the commentaries, Clark and Fischer counter that, for example, people’s immersion in the story world of a novel is “continuous and ongoing.” You don’t have to re-immerse yourself with each new sentence or paragraph. The same goes for social robots: people don’t need extra time and effort to reflect on each new step of an interaction with a robot.
They argue that the understanding of depictions is immediate and rapid, and is grasped even by very young children.
“I have a granddaughter who is 6 years old now, and by the time she was 18 months or 2 years old, she was already able to pick up dolls and treat them as characters,” Clark said.
Reeves said the depiction model is likely to predict how social robot technology will advance in the future.
“Dinosaurs in movies are getting better and better, more vivid and scarier,” he said. “I think robots will go there, too.”
Lessons for designers and users
Clark and Fischer said that while humans may treat social robots like real people or animals, the technology falls far short of replicating real human interactions.
“Even something as simple as spatial description requires real skill for people to communicate effectively,” Clark said. “People know exactly how to tell where things are through a combination of descriptions, gestures, glances, and mutual attention. It would be really, really hard to get a robot to be as skilled at even something that simple.”
Even highly social robots have significant limitations. But when people interpret them as characters, they tend to overestimate their abilities.
“Even if you have a robot math tutor, you can’t leave your child alone with it. Why? Because if your child starts choking, climbs onto a balcony, or gets into some other danger, the robot won’t notice,” Fischer said.
This kind of overestimation also causes problems with other common but limited technologies, such as voice assistants and AI chatbots. Clark said those who design robots and similar technologies need to make their constraints clearer to users.
Clark and Fischer said their model not only recognizes the level of craft that goes into designing social robots, but also encourages a positive view of the people who interact with them. Under the depiction model, a person who treats a small dinosaur robot like a pet is behaving entirely normally.
“Our model shows respect for people who interact with robots in a social way,” Fischer said. “There’s no need to think that they’re lonely or irrational or confused or in some way deficient.”