Connecting with Other People
How can we create better social interactions and conversations, where people feel seen, heard, and appreciated?
This question drives my research on perspective taking, prosocial behavior, conversation, and inclusion. From these projects, I have learned that words and actions can often have (sometimes surprisingly) large impacts!
Surprisingly Happy to Have Helped: Underestimating Prosociality Creates a Misplaced Barrier to Asking for Help
Xuan Zhao and Nicholas Epley
Psychological Science (2022). [paper] [data and materials]
Undersociality: Miscalibrated social cognition can inhibit social connection
Nicholas Epley, Michael Kardas, Xuan Zhao, Stav Atir, and Juliana Schroeder
Trends in Cognitive Sciences (2022). [paper]
Insufficiently Complimentary?: Underestimating the Positive Impact of Compliments Creates a Barrier to Expressing Them
Xuan Zhao and Nicholas Epley
Journal of Personality and Social Psychology (2021). [paper] [data and materials]
Kind Words Do Not Become Tired Words: Undervaluing the Positive Impact of Frequent Compliments
Xuan Zhao and Nicholas Epley
Self & Identity (2021). [paper] [data and materials]
Belonging is a basic need satisfied by signals of warmth and appreciation. Compliments can satisfy others’ need to belong, but recent research suggests that people may underestimate their positive impact on recipients, creating a barrier to giving them more often. Here we assess how people expect compliment recipients to react to receiving multiple compliments over time, compared to the actual experience of recipients. Although people generally expect recipients to adapt to multiple compliments, with each compliment feeling a little less positive and sincere (Experiment 1), an experiment (Experiment 2) in which one person from an acquainted pair received one new compliment for five consecutive days found no evidence of adaptation. Expressers in this experiment also underestimated how positive their recipients would feel overall. An additional experiment (Experiment 3) examining only people’s expectations found that people expected less adaptation among recipients when they saw the actual compliments shared in Experiment 2, suggesting that mistaken beliefs about adaptation may stem from an abstract sense that multiple compliments are more similar to each other than they actually are. Belonging is a need that can be satisfied by repeated signs of warmth and appreciation. Underestimating their power may lead people to refrain from expressing these signs more often in daily life. [paper]
Leaving a Choice for Others: Children’s Evaluations of Considerate, Socially-Mindful Actions
Xin Zhao, Xuan Zhao, Hyowon Gweon, and Tamar Kushnir
Child Development. (2021). [paper] [data and materials]
Easing into Another Mind: Goal Inference Facilitates Perspective Taking
Xuan Zhao, Corey Cusimano, and Bertram F. Malle
Working paper. [abstract]
* A portion of this manuscript was published at CogSci '15. [paper]
Is It a Nine, or a Six? Prosocial and Selective Perspective Taking in Four-Year-Olds
Xuan Zhao, Bertram F. Malle, and Hyowon Gweon
CogSci '16. [paper]
“Thank You, Because…”: Discussing Disagreement While Finding Common Ground
Xuan Zhao, Heather Caruso, and Jane Risen
Manuscript in preparation. [abstract]
Large-Scale Inclusion Training for Online Community Moderators
Xuan Zhao, MarYam Hamedani, Cinoo Lee, Hazel Markus, and Jennifer Eberhardt
Manuscript in preparation. [abstract]
Predicting People’s Perceptions of Organizational Statements Following George Floyd’s Death
Xuan Zhao, Rachel Song, Amrita Maitreyi, Clarissa Gutierrez, MarYam Hamedani, Hazel Markus, and Jennifer Eberhardt
Manuscript in preparation. [abstract]
Connecting with and through Humanlike Machines
Machines are entering homes, schools, and offices at a rapid pace, and many seem intriguingly humanlike. How do we perceive and interact with machines that look, act, or think like humans? What can our interactions with machines teach us about being human? This line of research explores the potential and perils of making machines humanlike.
Spontaneous Perspective Taking Toward Robots: The Unique Impact of Humanlike Appearance
Xuan Zhao and Bertram F. Malle
Cognition (2022). [paper] [data and materials]
* A portion of this manuscript was published at HRI '16. However, we have revised our stance since then.
As robots rapidly enter society, how does human social cognition respond to their novel presence? Focusing on one foundational social-cognitive capacity—visual perspective taking—six studies reveal that people spontaneously adopt a robot’s unique perspective and do so with patterns of variation that mirror perspective taking toward humans. As with human agents, visual perspective taking of robots is enhanced when they display goal-directed actions (gaze and reaching vs. mere presence) and when the actions dynamically unfold over time (video vs. photograph). Importantly, perspective taking increases when the robot looks strikingly humanlike (an android) but is absent when the robot looks machine-like. This appearance-driven perspective taking is not due to inferences about the agent’s mind, because it persists when the agent obviously lacks a mind (e.g., a mannequin). Thus, the sight of robots’ superficial human resemblance may trigger and modulate social-cognitive responses in human observers originally evolved for human interaction. [paper] [data and materials]
Do People Spontaneously Take a Robot’s Visual Perspective?
Xuan Zhao, Corey Cusimano, and Bertram F. Malle
Proceedings of the Eleventh Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI '16), 335-342. [paper]
* The most recent manuscript, "Spontaneous Perspective Taking Toward Robots," reflects our most current findings and views about human perspective taking toward robots. Please read and cite that paper instead of the 2016 conference proceedings paper.
A Primer for Conducting Experiments in Human–Robot Interaction
Guy Hoffman and Xuan Zhao
ACM Transactions on Human-Robot Interaction (2020). [paper]
* Featured as a lead article.
We provide guidelines for planning, executing, analyzing, and reporting hypothesis-driven experiments in Human–Robot Interaction (HRI). The intended audience are researchers in the field of HRI who are not trained in empirical research but who are interested in conducting rigorous human-participant studies to support their research. Following the chronological order of research activities and grounded in updated research practices in psychological and behavioral sciences, this primer covers recommended methods and common pitfalls for defining research questions, identifying constructs and hypotheses, choosing appropriate study designs, operationalizing constructs as variables, planning and executing studies, sampling, choosing statistical tools for data analysis, and reporting results. [paper]
What is Human-Like?: Decomposing Robot Human-Like Appearance Using the Anthropomorphic roBOT (ABOT) Database
Elizabeth Phillips, Xuan Zhao, Daniel Ullman, and Bertram F. Malle
Proceedings of the Thirteenth Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI '18)
* Nominated for Best Paper Award in Theory and Methods in HRI [paper]
Anthropomorphic robots, or robots with human-like appearance features such as eyes, hands, or faces, have drawn considerable attention in recent years. To date, what makes a robot appear human-like has been driven by designers’ and researchers’ intuitions, because a systematic understanding of the range, variety, and relationships among constituent features of anthropomorphic robots is lacking. To fill this gap, we introduce the ABOT (Anthropomorphic roBOT) Database—a collection of 200 images of real-world robots with one or more human-like appearance features (http://www.abotdatabase.info). Harnessing this database, Study 1 uncovered four distinct appearance dimensions (i.e., bundles of features) that characterize a wide spectrum of anthropomorphic robots and Study 2 identified the dimensions and specific features that were most predictive of robots’ perceived human-likeness. With data from both studies, we then created an online estimation tool to help researchers predict how human-like a new robot will be perceived given the presence of various appearance features. The present research sheds new light on what makes a robot look human, and makes publicly accessible a powerful new tool for future research on robots’ human-likeness. [paper]
From Trolley to Autonomous Vehicle: Perception of Responsibility and Moral Norms in Traffic Accidents with Autonomous Cars
Jamy Li, Xuan Zhao, Mu-Jun Cho, Wendy Ju, and Bertram F. Malle
SAE Technical Paper (2016). [paper]
Autonomous vehicles represent a new class of transportation that may be qualitatively different from existing cars. Two online experiments assessed lay perceptions of moral norms and responsibility for traffic accidents involving autonomous vehicles. In Experiment 1, 120 US adults read a narrative describing a traffic incident between a pedestrian and a motorist. In different experimental conditions, the pedestrian, the motorist, or both parties were at fault. Participants assigned less responsibility to a self-driving car that was at fault than to a human driver who was at fault. Participants confronted with a self-driving car at fault allocated greater responsibility to the manufacturer and the government than participants who were confronted with a human driver at fault did. In Experiment 2, 120 US adults read a narrative describing a moral dilemma in which a human driver or a self-driving car must decide between either allowing five pedestrians to die or taking action to hit a single pedestrian in order to save the five. The “utilitarian” decision to hit the single pedestrian was considered the moral norm for both a self-driving and a human-driven car. Moreover, participants assigned the obligation of setting moral norms for self-driving cars to ethics researchers and to car manufacturers. This research reveals patterns of public perception of autonomous cars and may aid lawmakers and car manufacturers in designing such cars. [paper]
How People Infer a Humanlike Mind from a Robot Body
Xuan Zhao, Elizabeth Phillips, and Bertram F. Malle
Manuscript under review. [working paper] [ABOT Database]
Robots are entering a wide range of society’s private and public settings, often with a strikingly humanlike appearance and emulating a humanlike mind. But what constitutes humanlikeness—in both body and mind—has been conceptually and methodologically unclear. In three studies based on a collection of 251 real-world robots, we report the first programmatic, bottom-up investigation of what constitutes a robot’s humanlike body, how people reason about robot minds, and critically, how specific dimensions of physical appearance are intricately yet systematically related to specific dimensions of inferred mind. Our results challenge three widespread assumptions about robot humanlikeness. First, we show that humanlike appearance is not a unitary construct; instead, three separate appearance dimensions—Body-Manipulators, Face, and Surface—each consist of a unique constellation of human appearance features and jointly constitute a robot’s humanlike body. Second, we find that the widely adopted two-dimensional structure of mind perception (i.e., agency and experience) does not capture systematic variations of inferences about robot minds; rather, a three-dimensional structure, encompassing mental capacities related to Affect, Social-Moral Cognition, and Reality Interaction, emerges from people’s inferences across a wide range of robots. Third, humanlike appearance does not uniformly lead to a global impression of a humanlike mind; instead, people specifically infer robots’ affective and moral capacities from facial and surface appearance, and their reality interaction capacities from body-manipulator appearance. Our findings reveal how physical appearance gives rise to the impression of a mind, even for a robot. [preprint] [ABOT Database]
“Hello! How May I Helo You?”: How (Corrected) Errors Humanize a Communicator
Shirly Bluvstein*, Xuan Zhao*, Alixandra Barasch, and Juliana Schroeder [*equal authorship]
Manuscript under review. [working paper] [data and materials]
Today more than ever before, online text-based interactions (e.g., text messages, emails, social media) have become a primary means of communication. Because written communication lacks human nonverbal cues such as appearance, voice, and identity, consumers may struggle to distinguish whether they are interacting online with a human or a chatbot. The current research investigates how typographical errors (“typos”), a common yet overlooked feature in text communication, can humanize a communicator. Across five experiments (N = 2,515) that used ambiguous conversational counterparts (i.e., customer service agents that might be bots), agents (either chatbots or real humans) who made and subsequently corrected a typo were perceived to be more humanlike than ones who made no typo or did not correct the typo. Participants consequently perceived those agents as warmer and more capable of understanding and helping with their issues, were more likely to endorse a reward for the agent, and even perceived the company they represented more favorably. These findings provide novel insights into how conversational features may influence customers’ perception of online agents and the brands that use them. The authors discuss theoretical implications for anthropomorphism and social perception and practical implications for companies wishing to humanize their customer service agents. [preprint] [data and materials]
Tugging at the Heartstrings: Feeling Human Heartbeat Promotes Prosocial and Cooperative Behaviors
Xuan Zhao, Malte Jung, Desmond C. Ong, Nina Diepenbrock, Jean Costa, Oriel FeldmanHall, and Bertram F. Malle
Manuscript in preparation.
* Received First Prize in the live grant competition at the Annual Meeting of the Society of Personality and Social Psychology (SPSP'17).
Watch my 2-minute pitch for the SPSP annual grant competition on how feeling another person's heartbeat increases prosocial behavior (which led to a Shark-Tank-style live "interrogation" on the main stage at the 2017 SPSP annual convention and, eventually, the first prize).
“Every intellectual has a very special responsibility. He has the privilege and the opportunity of studying. In return, he owes it to his fellow men (or ‘to society’) to represent the results of his study as simply, clearly and modestly as he can.” —Karl Popper
Do people spontaneously take a robot's perspective? Find out in my 6-minute talk at the "Research Matters!" event at Brown University.
In this video interview with Anita Nowak, Ph.D., an empathy expert who runs Purposeful Empathy on YouTube, I discuss some of my recent research findings and their implications for everyday life.
I enjoy collaborating with and learning from people from diverse backgrounds. Below are people with whom I have written papers (or to whom I currently owe a paper).
Bertram Malle (Brown University, Department of Cognitive, Linguistic & Psychological Sciences)
Nicholas Epley (University of Chicago, Booth School of Business)
Jane Risen (University of Chicago, Booth School of Business)
Hyowon Gweon (Stanford University, Department of Psychology)
Hazel Markus (Stanford University, Department of Psychology)
Jennifer Eberhardt (Stanford University, Department of Psychology)
MarYam Hamedani (Stanford University, SPARQ, Department of Psychology)
Malte Jung (Cornell University, Department of Information Science)
Juliana Schroeder (University of California Berkeley, Haas School of Business)
Alixandra Barasch (New York University, Stern School of Business)
Guy Hoffman (Cornell University, Sibley School of Mechanical and Aerospace Engineering)
Heather Caruso (University of California Los Angeles, Anderson School of Management)
Elizabeth Phillips (U.S. Air Force Academy, Department of Behavioral Sciences and Leadership)
Oriel FeldmanHall (Brown University, Department of Cognitive, Linguistic & Psychological Sciences)
Desmond Ong (National University of Singapore, Department of Information Systems and Analytics, School of Computing)
Tamar Kushnir (Cornell University, Department of Human Development)
Alice (Xin) Zhao (East China Normal University, Department of Educational Psychology)
Jamy Li (University of Twente, Department of Human-Media Interaction)
Wendy Ju (Cornell University, Jacobs Technion-Cornell Institute)
Roseanna Sommers (University of Chicago, Law School)
Corey Cusimano (Princeton University, Department of Psychology)
Jean Costa (Cornell University, Department of Information Science)
Shirly Bluvstein (New York University, Stern School of Business)