
Uncanny Valley Of The Mind



Roboticists have found that making robots almost, but not quite, humanlike tends to disturb people -- a phenomenon named the "uncanny valley" by Japanese roboticist Masahiro Mori. According to an article from AAAS SCIENCE Online ("Beware Emotional Robots" by Matthew Hutson, 13 March 2017), new research hints that a robot that behaves in too humanlike a way is equally disturbing.

 

Jan-Philipp Stein and Peter Ohler, psychologists at the Chemnitz University of Technology in Germany, conducted a study in which they handed out virtual-reality headsets to 92 participants and asked them to observe a short conversation between a virtual man and woman in a public plaza. The characters discussed their weariness from the hot weather; the woman complained about her lack of free time, and the man sympathized with her annoyance at having to wait for a friend. All 92 subjects watched the same scene, but they were given four different descriptions of what was going on:

  • Half were told they were watching avatars controlled by humans.

  • Half were told the characters were controlled by computers.

  • Within each group, half were told the conversation was scripted, and half were told it was spontaneous.

Those who thought the characters were computer-controlled and not working from a script found the scene the most unsettling. Although the virtual humans looked the same in all four cases, people had a lot more trouble with computers pretending to be humans than they did when they thought the scene was either generated by, or scripted from, actual interactions with humans. Stein and Ohler call the phenomenon the "uncanny valley of the mind."

 

There has been considerable work on developing computers and robots that can interact more naturally with humans, but this study suggests there's a limit: nobody wants to interact with a computer that pretends to communicate on a personal level, or that simulates emotions it obviously doesn't feel. With social skills, there may not be an uncanny valley so much as an "uncanny cliff".

 

The two researchers suspect that people were disturbed by watching what they thought were two computers carrying on a spontaneous conversation, because they didn't like the idea of computers deciding on their own what to do. In future work, Stein plans to see whether people feel more comfortable with humanlike virtual agents when they believe they have control over the agents' behavior.

 

Jonathan Gratch -- a computer scientist at the University of Southern California in Los Angeles, who was not involved with the work -- sees the experiment as significant for future work on human-machine interface: "There's going to be a lot more human-machine interactions, human-machine teams, machines being your boss, machines writing newspaper articles. And so this is a very topical question and problem."

 

MRG: In 1950 Alan Turing, the British mathematician who was one of the founders of modern computing, suggested what would become known as the "Turing test" -- saying that if we could communicate with a computer as if we were talking to a person, then we could say it was a thinking machine.

 

The Turing test is much misunderstood, with critics saying it proves nothing. Actually, Turing merely presented the scenario as a definition: if we could have a natural conversation with a computer, we would believe it thinks. Turing understood that nobody would be able to come up with a persuasive argument to destroy that belief, or come up with a better test.

 

The Turing test, as phrased, is too strict, since only a computer that could not be told apart from a human would be able to pass it. However, it is not unusual to find people who can't open their mouths and make sense, and so they would flunk the Turing test. Similarly, perfectly sensible people who don't speak English as a first language may not be able to understand or reproduce the nuances in a conversation between two English speakers, even if their English is very good. Conversing with such non-native speakers can be done, but it requires a more careful and literal approach to communication.

 

In other words, the notion of a machine that is a duplicate of sorts of a human -- like Data the android, played by Brent Spiner on STAR TREK, one of the best realizations of the concept -- is something of a non-issue. We just want machines that we can comfortably communicate with, not machines that can perfectly simulate humans. It would be expensive or impractical to build such imitations, and since machines would be obtained to do a particular job or set of jobs, we would have little reason to do so. We would still assume, whether we realized it or not, that we were dealing with thinking beings. After all, nobody seriously doubts that animals think to a degree; we would judge a conversational machine as much more mindful than animals are -- or, for that matter, than incoherent humans.

 

Incidentally, fans of comics, animation, and other lightweight fiction -- such as myself -- recognize the "uncanny valley" in watching dramas and other supposedly heavyweight fiction: as somebody once said, we laugh in all the wrong places. When watching an animation or the like, it's obvious it's not to be taken seriously; it's just for fun. A drama, in contrast, is supposed to be taken seriously, which sets the bar higher; if it's not honestly convincing, it just seems silly.

 

There are suggestions that the "uncanny valley" with machines has something to do with a human fear of machines becoming too much like us. Possibly so, but it may have more to do with the "willing suspension of disbelief" that writers of fantasy and science fiction try to achieve with readers, persuading them to accept scenarios that are unrealistic, even preposterous. The irony is that readers find it easier to accept something that is knowingly preposterous than something that pretends to represent reality but fails to do so convincingly.

 

Along similar lines, modern animated features are often purely digital; they may feature human characters that are 3D-rendered with full shading, or implemented to resemble traditionally drawn characters, with no great concern for shading. Even when the traditional-style characters are rendered as realistically as possible, they seem more convincing than the 3D-rendered characters, which have the awkward and unconvincing appearance of puppets -- very sophisticated puppets, but still obviously puppets.

 

-------------------------------------------------------------------------------------------------------------------------

 

MRG: This is taken from my private notes. The comments from AAAS SCIENCE are a condensation, not a copy. If there are any complaints about this posting, let me know.
