Robots’ eye contact elicits human-like responses in infants

Researchers recently published a study in the journal Biological Psychology investigating how infants respond to eye contact from both humans and humanoid robots. The study found that infants as young as 6 to 8 months old recognize the significance of eye contact not only from human faces but also from robots with human-like features.

The ability to interpret eye contact is a critical part of social development. From a very young age, infants are already attuned to the eyes of those around them. Eye contact can signal social intentions, such as a desire to communicate or form an emotional connection. Past research has shown that even newborns prefer faces with direct gazes over faces with averted gazes, suggesting that eye contact plays a role in shaping early social interactions.

However, much of the research on infants’ responses to eye contact has focused on interactions with humans. As humanoid robots become more prevalent in settings like caregiving and education, an important question arises: Do infants view eye contact from robots as socially meaningful in the same way they do with humans?

“Humanoid robots are becoming increasingly common in social environments, and people are suddenly expected to engage in social interactions with these artificial agents. We are interested in how the human brain understands the ‘sociality’ of artificial humanoid robots,” said study author Samuli Linnunsalo, a doctoral researcher at Tampere University and a member of the Human Information Processing Laboratory.

“We believe that, to fully explore people’s instinctive interpretations of humanoid robots’ sociality, it is necessary to use physiological measures to investigate their responses to robots’ nonverbal social cues, such as eye contact. After finding initial evidence that adult humans’ psychophysiological responses to eye contact with a humanoid robot were similar to their responses to eye contact with a human, we sought to investigate whether young infants react similarly to a humanoid robot’s and a human’s eye gaze. This was particularly interesting to us, because infants do not have knowledge of the humanoid robots’ purpose as social interaction partners, nor do they understand that people are expected to treat humanoid robots as social agents.”

The study involved 114 infants, aged between 6 and 8 months. The researchers invited the infants to a laboratory where they were exposed to three different types of stimuli: a human, a humanoid robot called Nao, and a non-human object, in this case, a vase. The researchers used live stimuli rather than videos or images to make the experience more realistic for the infants.

The human and the robot were each presented to the infant either looking directly at the infant (direct gaze) or looking away (averted gaze). To keep the infants engaged, the researchers used a carefully controlled environment with an interactive introduction for both the human and the robot. The robot would introduce itself, mimicking natural social gestures like nodding and hand movements, while the vase served as a non-interactive control object.

The infants’ reactions were recorded using several different measures. Researchers tracked how long the infants looked at the different stimuli, measured their heart rate to assess attention, recorded skin conductance to gauge emotional arousal, and used electrodes on the face to capture changes in muscle activity linked to emotional expression. These muscle measures focused on two areas of the face: the cheek muscles associated with smiling and the eyebrow muscles often linked to frowning or concentration.

The researchers found that the infants looked longer at both the human and the robot than they did at the vase, which suggests that they found the human and robot more socially engaging than the inanimate object. However, their looking times did not differ between direct and averted gazes from either the human or the robot. This suggests that while the infants were interested in both, gaze direction alone did not hold their attention for longer in this context.

In terms of heart rate, the infants’ hearts slowed more when they saw averted gazes compared to direct gazes. Because heart rate deceleration is a well-established marker of heightened attention in infants, this suggests that the infants were paying closer attention to the averted gaze, possibly because it signals an opportunity for joint attention—learning where the other person (or robot) is looking. This might represent an early developmental step toward joint attention skills, which are key in social development.

“We were surprised to find that infants at this age (6–8 months) attended more intensively to the averted gaze of a humanoid robot or a human, compared to direct gaze (eye contact),” Linnunsalo told PsyPost. “Previous research has shown that newborns, children, and adults orient their attention more strongly toward direct gaze than averted gaze. We interpreted this finding as reflecting the development of joint attention skills in 6–8-month-old infants (i.e., looking where the other person/robot is looking), which may require heightened interest in averted gaze.”

Facial muscle activity provided another layer of insight. The infants’ cheek muscles—associated with smiling—became more active in response to direct eye contact from both the human and the robot. Meanwhile, their eyebrow muscles, linked to frowning or concentration, showed more activity when the gaze was averted. This pattern suggests that eye contact, whether from a human or a robot, prompted more positive or affiliative facial expressions in the infants.

“Infants responded to a humanoid robot’s eye gaze similarly to how they responded to the eye gaze of another human,” Linnunsalo said. “Specifically, eye contact with either a humanoid robot or a human led to greater activity in the smiling muscles compared to viewing averted eye gaze. On the other hand, a humanoid robot’s or a human’s averted eye gaze captured infants’ attention more intensely than eye contact. These results suggest that, even in infancy, the human brain may interpret humanoid robots’ eye gaze signals as if the robots were human.”

Interestingly, skin conductance, an index of emotional arousal, did not show significant differences between direct and averted gazes for either the human or the robot. This result suggests that while infants recognized the social significance of eye contact, it may not yet produce strong emotional arousal at this early age. Emotional responses to eye contact, as seen in adults, might develop later in childhood as children gain more experience with social interactions.

While the study offers valuable insights, it has a few limitations. One limitation is the use of a vase as a control object. “Since the control stimulus did not have eyes, we cannot rule out the possibility that infants’ responses to the humanoid robot’s eye gaze were driven primarily by its eyes, independent of its humanlike, social behavior,” Linnunsalo noted. “In other words, we do not know how much simpler the humanoid robot could have been in appearance or behavior for its eye gaze to still elicit these responses.”

Future research could examine how infants’ responses to humanoid robots evolve as they grow older. For instance, do children continue to interpret robots as social agents, or do they begin to differentiate between robots and humans as they develop more complex social understanding? Additionally, researchers could explore robots capable of more dynamic social behaviors, such as following the infant’s gaze or responding to the infant’s expressions.

“Our long-term goal for this line of research is to understand the human social brain and the extent to which it adapts to social interactions with artificial agents,” Linnunsalo said. “We would like to express our gratitude to the parents who brought their infants to the laboratory and managed to keep them relatively still during the experiment.”

The study, “Infants’ psychophysiological responses to eye contact with a human and with a humanoid robot,” was authored by Samuli Linnunsalo, Santeri Yrttiaho, Chiara Turati, Ermanno Quadrelli, Mikko J. Peltola, and Jari K. Hietanen.