As the use of humanoid robots proliferates, an increasing number of people may find themselves face-to-“face” with a robot in everyday life.
Although the field of human-human social interaction offers a wealth of findings on facial social cues and how we interpret them, we cannot assume that these findings transfer flawlessly to human-robot interaction.
Therefore, more research on facial cues in human-robot interaction is required. This study investigated deception in a human-robot interaction context, focusing on the effect that eye contact with a robot has on honesty toward that robot.
In an iterative task, participants could assist a humanoid robot by providing it with correct information, or potentially secure a reward for themselves by providing it with incorrect information.
Results show that participants become more honest after the robot establishes eye contact with them, but only if the eye contact occurs in response to deceptive behaviour. When participants are already behaving honestly, the establishment of eye contact does not influence their behaviour.
These findings support the notion that humanoid robots can be perceived as, and treated like, social agents, since the effect described here mirrors one present in human-human social interaction.