Connecting with Other People
How do we connect and misconnect with other people? This line of research examines when we take other people's perspectives, identifies psychological barriers that hinder meaningful and positive social engagement, and suggests practical ways to create better communication.
Insufficiently Complimentary?: Underestimating the Positive Impact of Compliments Creates a Barrier to Expressing Them
Xuan Zhao and Nicholas Epley
Manuscript under review. [abstract]
Compliments increase the well-being of both expressers and recipients, yet people report giving fewer compliments than they should give, or would like to give. Seven experiments suggest that a reluctance to express genuine compliments may stem from underestimating the positive impact that compliments will have on recipients. Participants in three experiments wrote genuine compliments and then predicted how happy and awkward those compliments would make recipients feel. Participants consistently underestimated how positive the recipients would feel while overestimating how awkward recipients would feel (Experiments 1, 2, and 6). These miscalibrated expectations are driven partly by an egocentric bias in which expressers primarily focus on how competent—compared to how warm—their compliments will be perceived by recipients (Experiments 1 and 2), creating an empathy gap between those who imagine how a compliment will be received and those who actually receive one (Experiments 3a and 3b). Because people’s interest in expressing a compliment is at least partly driven by their expectations of the recipient’s reaction, undervaluing compliments creates a barrier to expressing them (Experiments 4 and 5). As a result, informing participants about a compliment’s surprisingly positive impact encouraged them to express more compliments (Experiment 6). We believe our findings reflect a more general tendency for people to underestimate the positive impact of their prosocial actions on others, leading people to be less prosocial than would be optimal for both their own and others’ well-being. [debriefing]
Kind Words Do Not Become Tired Words: Unwarranted Concerns about Giving Too Many Compliments
Xuan Zhao and Nicholas Epley
Manuscript under review. [abstract]
Belonging is a basic need satisfied by signals of warmth and appreciation. Compliments can satisfy others’ need for belonging, but recent research suggests that people may underestimate their positive impact on recipients, creating a barrier to giving them more often (Zhao & Epley, 2019a). Here we assess how people expect compliment recipients to react to receiving multiple compliments over time, comparing expectations against actual experience. Although a pilot survey suggested that people generally expect recipients to adapt to multiple compliments, with each compliment feeling a little less positive and sincere, an experiment (Exp. 1) with friends who received one new compliment for five consecutive days found no evidence of adaptation, with recipients feeling more positive overall than expressers expected. An additional experiment (Exp. 2) again found that people expected adaptation to repeated compliments, but providing the compliments that recipients actually received reduced expected adaptation. Mistaken beliefs about adaptation stem partly from assuming that multiple compliments are more similar to each other than they actually are. Belonging is a need that can be routinely satisfied by signs of warmth and appreciation. Underestimating the power of these signs may lead people to refrain from expressing them more often in daily life.
Easing into Another Mind: Goal Inference Facilitates Perspective Taking
Xuan Zhao, Corey Cusimano, and Bertram F. Malle
Working paper. [abstract]
* A portion of this manuscript was published in Proceedings of the 37th Annual Meeting of the Cognitive Science Society, 2811-2816. [author note] [paper]
Mental state inference is a ubiquitous but challenging component of social interaction. In this paper, we propose a facilitating relationship among mental state inferences: Engaging in an initial, easier mental inference makes people more likely to engage in a more difficult one. Drawing on previous evidence, we tested the possibility of a facilitating relationship between two mental state inferences that are known to vary in difficulty: inferring another person’s goals and inferring that person’s unique visual experiences (i.e., “Level-2 perspective taking”). Five studies provided evidence for the hypothesized facilitating relationship: Goal inference increased people’s likelihood of adopting the actors’ perspectives regardless of task complexity, time pressure, and presentation modality. This facilitating relationship suggests new avenues for investigating the causal relationship among mental state inferences.
Zhao, X., Cusimano, C., & Malle, B.F. (2015). In search of triggering conditions for spontaneous visual perspective taking. In Proceedings of the 37th Annual Meeting of the Cognitive Science Society, 2811-2816. [paper]
* The "Easing into Another Mind" manuscript will provide our updated view on Spontaneous Level-2 perspective taking. But feel free to cite the CogSci 2015 paper for our new paradigm!
Is It a Nine, or a Six? Prosocial and Selective Perspective Taking in Four-Year-Olds
Xuan Zhao, Bertram F. Malle, and Hyowon Gweon
Proceedings of the 38th Annual Conference of the Cognitive Science Society, 924-929. [abstract] [paper]
To successfully navigate the complex social world, people often need to solve the problem of perspective selection: Between two conflicting viewpoints of the self and the other, whose perspective should one take? In two experiments, we show that four-year-olds use others’ knowledge and goals to decide when to engage in visual perspective taking. Children were more likely to take a social partner’s perspective to describe an ambiguous symbol when she did not know numbers and wanted to learn than when she knew numbers and wanted to teach. These results were shown in children’s own responses (Experiment 1) and in their evaluations of others’ responses (Experiment 2). By the preschool years, children understand when perspective taking is appropriate and necessary and selectively take others’ perspectives in social interactions. These results provide novel insights into the nature and the development of perspective taking.
Leaving a Choice for Others: Children’s Social Evaluations of Considerate Actions
Xin Zhao, Xuan Zhao, Hyowon Gweon, and Tamar Kushnir
Manuscript under review. [abstract]
Humans live in an interdependent world where even actions that are primarily self-serving (i.e., intended to fulfill one’s own needs) can have direct or indirect consequences for others. Thus, it seems critical that one be able to read these nuanced social signals and evaluate actions that are primarily self-serving based on the consequences those actions have for others. Over three studies (N = 566 children between ages 4 and 6 and N = 222 adults, from the U.S. and China), we investigated the mentalistic nature, developmental origins, and cultural dependency of such evaluations. We found that, by age 6 but not younger, both U.S. and Chinese children positively evaluate someone who takes something for themselves (a self-serving action) in a way that leaves a choice for another agent over someone who leaves no choice. We also found that these evaluations reflect a genuine understanding of the agent’s considerate intention, rather than a mere preference for item diversity. Furthermore, in light of the similar developmental patterns across cultures, we conclude that evaluations of others’ considerateness in self-serving actions may rely on the critical development of social-cognitive capacities between ages 4 and 6, independent of cultural influences.
“Thank You, Because…”: Discussing Disagreement While Finding Common Ground
Xuan Zhao, Heather Caruso, and Jane Risen
Manuscript in preparation. [abstract]
For individuals in diverse communities, engaging one another in open conversation can sometimes be quite difficult. Intending to promote harmony, many are simply taught to avoid initiating or pursuing discussion of differing viewpoints altogether. When such discussions arise, people tend to negate one another’s viewpoints in advocating for their own, creating a combative atmosphere where people feel misunderstood and undervalued. Seeking a conversational technique that would allow a more inclusive dialogue about differences to arise, we developed a novel procedure called “Thank You, Because” (TYB). Inspired by the collaborative spirit in improvisational theater, TYB encourages people who have different perspectives to engage gratefully—by identifying and acknowledging the value of the dialogue. We tested the impact of TYB in lab and field settings, where pairs of strangers engaged in face-to-face conversations about various interpersonal differences (e.g., in personal preferences, or in support for public policies). Compared to a “No, Because” technique, which encouraged the common conversational instinct of poking holes in one another’s arguments, participants using the “Thank You, Because” technique engaged in more inclusive conversations, felt more heard and valued, and perceived more common ground (Studies 1 & 2). Furthermore, compared to an “I Hear That…” technique (Study 2), where participants aimed to show their partners that they understood their viewpoints accurately, the “Thank You, Because” technique showed unique advantages in eliciting the perception of common ground. [debriefing]
Connecting with and through Humanlike Machines
Machines are entering homes, schools, and offices at a rapid pace, and many seem intriguingly humanlike. How do we perceive and interact with machines that look/act/think like humans? What can our interactions with machines teach us about being human? This line of research explores the potentials and perils of making machines humanlike.
How People Infer a Humanlike Mind from a Robot Body
Xuan Zhao, Elizabeth Phillips, and Bertram Malle
Working paper. [abstract] [preprint] [ABOT Database]
Robots are entering a wide range of society’s private and public settings, often with a strikingly humanlike appearance and emulating a humanlike mind. But what constitutes humanlikeness—in both body and mind—has been conceptually and methodologically unclear. In three studies based on a collection of 251 real-world robots, we report the first programmatic, bottom-up investigation of what constitutes a robot’s humanlike body, how people reason about robot minds, and critically, how specific dimensions of physical appearance are intricately yet systematically related to specific dimensions of inferred mind. Our results challenge three widespread assumptions about robot humanlikeness. First, we show that humanlike appearance is not a unitary construct; instead, three separate appearance dimensions—Body-Manipulators, Face, and Surface—each consist of a unique constellation of human appearance features and jointly constitute a robot’s humanlike body. Second, we find that the widely adopted two-dimensional structure of mind perception (i.e., agency and experience) does not capture systematic variations of inferences about robot minds; rather, a three-dimensional structure, encompassing mental capacities related to Affect, Social-Moral Cognition, and Reality Interaction, emerges from people’s inferences across a wide range of robots. Third, humanlike appearance does not uniformly lead to a global impression of a humanlike mind; instead, people specifically infer robots’ affective and moral capacities from facial and surface appearance, and their reality interaction capacities from body-manipulator appearance. Our findings reveal how physical appearance gives rise to the impression of a mind, even for a robot.
What is Human-Like?: Decomposing Robot Human-Like Appearance Using the Anthropomorphic roBOT (ABOT) Database
Elizabeth Phillips, Xuan Zhao, Daniel Ullman, and Bertram F. Malle
Proceedings of the Thirteenth Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI'18)
* Nominated for Best Paper Award in Theory and Methods in HRI [abstract] [paper]
Anthropomorphic robots, or robots with human-like appearance features such as eyes, hands, or faces, have drawn considerable attention in recent years. To date, what makes a robot appear human-like has been driven by designers’ and researchers’ intuitions, because a systematic understanding of the range, variety, and relationships among constituent features of anthropomorphic robots is lacking. To fill this gap, we introduce the ABOT (Anthropomorphic roBOT) Database—a collection of 200 images of real-world robots with one or more human-like appearance features (http://www.abotdatabase.info). Harnessing this database, Study 1 uncovered four distinct appearance dimensions (i.e., bundles of features) that characterize a wide spectrum of anthropomorphic robots, and Study 2 identified the dimensions and specific features that were most predictive of robots’ perceived human-likeness. With data from both studies, we then created an online estimation tool to help researchers predict how human-like a new robot will be perceived given the presence of various appearance features. The present research sheds new light on what makes a robot look human, and makes publicly accessible a powerful new tool for future research on robots’ human-likeness.
Seeing Through a Robot’s Eyes: Spontaneous Perspective Taking Toward Humanlike Machines
Xuan Zhao and Bertram F. Malle
Manuscript under review. [abstract] [preprint] [data and materials]
* A portion of this manuscript was published in Proceedings of the Eleventh Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI '16), 335-342. [author note] [paper]
As robots rapidly enter society, how does human social cognition respond to their novel presence? Focusing on one foundational social-cognitive capacity—visual perspective taking—six studies reveal that people spontaneously adopt a robot’s unique perspective and do so with patterns of variation that mirror perspective taking toward humans. As with human agents, visual perspective taking of robots is enhanced when they display goal-directed actions (gaze and reaching vs. mere presence) and when the actions dynamically unfold over time (video vs. photograph). Importantly, perspective taking increases when the robot looks strikingly humanlike (an android) but is absent when the robot looks machine-like. This appearance-driven perspective taking is not due to inferences about the agent’s mind, because it persists when the agent obviously lacks a mind (e.g., a mannequin). Thus, the sight of robots’ superficial human resemblance may trigger and modulate social-cognitive responses in human observers originally evolved for human interaction.
Zhao, X., Cusimano, C., & Malle, B. F. (2016). Do people spontaneously take a robot’s visual perspective? In Proceedings of the Eleventh Annual ACM/IEEE International Conference on Human-Robot Interaction, 335-342. [paper]
* The "Seeing Through a Robot's Eyes" manuscript contains all studies reported in this peer-reviewed HRI conference proceeding paper and four additional studies. That manuscript reflects our most updated findings and views about human perspective taking toward robots. Which one should you cite? Well, your call.
“Hello! How May I Helo You?”: How (Corrected) Errors Humanize a Communicator
Shirly Bluvstein*, Xuan Zhao*, Alixandra Barasch, and Juliana Schroeder [*equal authorship]
Manuscript under review. [abstract] [preprint] [data and materials]
Today more than ever before, online writing (e.g., emails, texts, and social media posts) has become a primary means of communication. Because written communication lacks human nonverbal cues (e.g., voice), people frequently struggle to distinguish whether they are interacting with a human or chatbot online. The current research suggests a novel way to humanize writers: typographical errors (“typos”). Across four experiments (N = 1,253) that used ambiguous conversational counterparts (e.g., customer service agents that might be bots), communicators who made and subsequently corrected a typo, rather than making no typo or not correcting a typo, appeared more humanlike. Respondents consequently believed that the communicator was warmer and were more likely to disclose personal information to the communicator. These findings provide insight into when people are willing to share their personal data online. We discuss theoretical implications for humanization and practical implications for Internet privacy and building trust in organizations.
Tugging at the Heartstrings: Feeling Human Heartbeat Promotes Prosocial and Cooperative Behaviors
Xuan Zhao, Malte Jung, Desmond C. Ong, Nina Diepenbrock, Jean Costa, Oriel FeldmanHall, Bertram F. Malle
Manuscript in preparation.
* Received First Prize in the live grant competition at the Annual Meeting of the Society of Personality and Social Psychology (SPSP'17).
Watch my 2-minute pitch for the SPSP annual grant competition on how feeling another person's heartbeat increases prosocial behavior (which led to a shark-tank-style live "interrogation" on the main stage at the 2017 SPSP annual convention and, eventually, the first prize).
From Trolley to Autonomous Vehicle: Perception of Responsibility and Moral Norms in Traffic Accidents with Autonomous Cars
Jamy Li, Xuan Zhao, Mu-Jun Cho, Wendy Ju, Bertram F. Malle
SAE Technical Paper, 2016-01-0164 [abstract] [preprint]
Autonomous vehicles represent a new class of transportation that may be qualitatively different from existing cars. Two online experiments assessed lay perceptions of moral norms and responsibility for traffic accidents involving autonomous vehicles. In Experiment 1, 120 US adults read a narrative describing a traffic incident between a pedestrian and a motorist. In different experimental conditions, the pedestrian, the motorist, or both parties were at fault. Participants assigned less responsibility to a self-driving car that was at fault than to a human driver who was at fault. Participants confronted with a self-driving car at fault allocated greater responsibility to the manufacturer and the government than participants who were confronted with a human driver at fault did. In Experiment 2, 120 US adults read a narrative describing a moral dilemma in which a human driver or a self-driving car must decide between either allowing five pedestrians to die or taking action to hit a single pedestrian in order to save the five. The “utilitarian” decision to hit the single pedestrian was considered the moral norm for both a self-driving and a human-driven car. Moreover, participants assigned the obligation of setting moral norms for self-driving cars to ethics researchers and to car manufacturers. This research reveals patterns of public perception of autonomous cars and may aid lawmakers and car manufacturers in designing such cars. [paper]
“Every intellectual has a very special responsibility. He has the privilege and the opportunity of studying. In return, he owes it to his fellow men (or ‘to society’) to represent the results of his study as simply, clearly and modestly as he can.” —Karl Popper
Do people spontaneously take a robot's perspective? Find out in my 6-minute talk at the "Research Matters!" event at Brown University.
I enjoy collaborating with and learning from people from diverse backgrounds. Below are people that I have written papers with (or that I currently owe a paper to).
Bertram Malle (Brown University, Department of Cognitive, Linguistic & Psychological Sciences)
Nicholas Epley (University of Chicago, Booth School of Business)
Jane Risen (University of Chicago, Booth School of Business)
Hyowon Gweon (Stanford University, Department of Psychology)
Malte Jung (Cornell University, Department of Information Science)
Juliana Schroeder (University of California Berkeley, Haas School of Business)
Alixandra Barasch (New York University, Stern School of Business)
Guy Hoffman (Cornell University, Sibley School of Mechanical and Aerospace Engineering)
Heather Caruso (University of California Los Angeles, Anderson School of Management)
Elizabeth Phillips (U.S. Air Force Academy, Department of Behavioral Sciences and Leadership)
Oriel FeldmanHall (Brown University, Department of Cognitive, Linguistic & Psychological Sciences)
Desmond Ong (National University of Singapore, Department of Information Systems and Analytics, School of Computing)
Tamar Kushnir (Cornell University, Department of Human Development)
Alice (Xin) Zhao (East China Normal University, Department of Educational Psychology)
Jamy Li (University of Twente, Department of Human-Media Interaction)
Wendy Ju (Cornell University, Jacobs Technion-Cornell Institute)
Roseanna Sommers (University of Chicago, Law School)
Corey Cusimano (Princeton University, Department of Psychology)
Jean Costa (Cornell University, Department of Information Science)
Shirly Bluvstein (New York University, Stern School of Business)