RESEARCH HIGHLIGHTS
Connecting with Other People
How can we create better social interactions and conversations, where people feel seen, heard, and appreciated? This question drives my research on perspective taking, prosocial behavior, and conversation. From these projects, I have learned that seemingly small words and actions can often have (surprisingly) large impacts!

Insufficiently Complimentary?: Underestimating the Positive Impact of Compliments Creates a Barrier to Expressing Them
Xuan Zhao and Nicholas Epley Manuscript under revision. [abstract]
Kind Words Do Not Become Tired Words: Undervaluing the Positive Impact of Frequent Compliments
Xuan Zhao and Nicholas Epley Self & Identity. (2020). [paper] [abstract] [data and materials]
Easing into Another Mind: Goal Inference Facilitates Perspective Taking
Xuan Zhao, Corey Cusimano, and Bertram F. Malle Working paper. [abstract] * A portion of this manuscript was published in Proceedings of the 37th Annual Meeting of the Cognitive Science Society, 2811-2816. [author note] [paper]
Is It a Nine, or a Six? Prosocial and Selective Perspective Taking in Four-Year-Olds
Xuan Zhao, Bertram F. Malle, and Hyowon Gweon Proceedings of the 38th Annual Conference of the Cognitive Science Society, 924-929. [abstract] [paper]
Leaving a Choice for Others: Children’s Evaluations of Considerate, Socially-Mindful Actions
Xin Zhao, Xuan Zhao, Hyowon Gweon, and Tamar Kushnir Child Development. (2021). [abstract] [paper] [data and materials]
“Thank You, Because…”: Discussing Disagreement While Finding Common Ground
Xuan Zhao, Heather Caruso, and Jane Risen Manuscript in preparation. [abstract]
Happy to Have Helped: How Underestimating Prosociality Creates a Misplaced Barrier to Help-Seeking
Xuan Zhao and Nicholas Epley Manuscript in preparation. [abstract] At some point, even the best of us need help. Yet people often struggle to ask for it, partly out of concern that others would be unwilling, and unhappy, to help. Such expectations guide people's decisions about seeking help (Study 1). Six experiments systematically contrasted the perspective of help-seekers with that of helpers and showed that this concern is misplaced: whether imagining hypothetical scenarios (Studies 2a & 2b), recalling recent life events (Study 3), or engaging in live interactions in the field (Studies 4 & 5) or in the laboratory (Study 6), those in need of help consistently underestimated how willing strangers—and even friends—would be to help them and how positive helpers would feel after helping, while overestimating how inconvenienced helpers would feel. This miscalibration arose at least partly from underestimating how prosocially motivated others are and from overattributing others' helping to social compliance. As a result, underestimating others' prosociality creates a misplaced barrier to seeking help that would improve the well-being of both parties.

Connecting with and through Humanlike Machines

Machines are entering homes, schools, and offices at a rapid pace, and many seem intriguingly humanlike. How do we perceive and interact with machines that look, act, or think like humans? What can our interactions with machines teach us about being human? This line of research explores the potentials and perils of making machines humanlike.
How People Infer a Humanlike Mind from a Robot Body
Xuan Zhao, Elizabeth Phillips, and Bertram F. Malle Manuscript under review. [abstract] [preprint] [ABOT Database] Robots are entering a wide range of society's private and public settings, often with a strikingly humanlike appearance and emulating a humanlike mind. But what constitutes humanlikeness—in both body and mind—has been conceptually and methodologically unclear. In three studies based on a collection of 251 real-world robots, we report the first programmatic, bottom-up investigation of what constitutes a robot's humanlike body, how people reason about robot minds, and, critically, how specific dimensions of physical appearance are intricately yet systematically related to specific dimensions of inferred mind. Our results challenge three widespread assumptions about robot humanlikeness. First, we show that humanlike appearance is not a unitary construct; instead, three separate appearance dimensions—Body-Manipulators, Face, and Surface—each consist of a unique constellation of human appearance features and jointly constitute a robot's humanlike body. Second, we find that the widely adopted two-dimensional structure of mind perception (i.e., agency and experience) does not capture systematic variation in inferences about robot minds; rather, a three-dimensional structure, encompassing mental capacities related to Affect, Social-Moral Cognition, and Reality Interaction, emerges from people's inferences across a wide range of robots. Third, humanlike appearance does not uniformly lead to a global impression of a humanlike mind; instead, people specifically infer robots' affective and moral capacities from facial and surface appearance, and their reality-interaction capacities from body-manipulator appearance. Our findings reveal how physical appearance gives rise to the impression of a mind, even for a robot.
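For readers curious about the bottom-up approach described above, here is a toy sketch of the dimension-finding step: exploratory factor analysis applied to participants' mental-capacity ratings of many robots. This is purely illustrative, not our analysis code; the ratings below are random stand-ins, and the number of capacities is invented.

```python
# Toy sketch of the dimension-finding step: exploratory factor analysis on
# mental-capacity ratings of robots. All data here are fabricated stand-ins;
# the paper's actual analyses are based on real participant ratings.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_robots, n_capacities = 251, 20   # e.g., "feels emotion", "tells right from wrong", ...
ratings = rng.random((n_robots, n_capacities))  # stand-in for mean ratings per robot

fa = FactorAnalysis(n_components=3)  # a three-dimensional structure, as in the paper
scores = fa.fit_transform(ratings)   # each robot's position on the three dimensions
print(fa.components_.shape)  # (3, 20): loadings of each capacity on each dimension
print(scores.shape)          # (251, 3): dimension scores per robot
```

In practice, the loadings on each factor are what let one label the dimensions (e.g., Affect vs. Reality Interaction); with random data as above, no interpretable structure should emerge.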
What is Human-Like?: Decomposing Robot Human-Like Appearance Using the Anthropomorphic roBOT (ABOT) Database
Elizabeth Phillips, Xuan Zhao, Daniel Ullman, and Bertram F. Malle Proceedings of the Thirteenth Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI '18) * Nominated for Best Paper Award in Theory and Methods in HRI [abstract] [paper] Anthropomorphic robots, or robots with human-like appearance features such as eyes, hands, or faces, have drawn considerable attention in recent years. To date, what makes a robot appear human-like has been driven by designers' and researchers' intuitions, because a systematic understanding of the range, variety, and relationships among constituent features of anthropomorphic robots is lacking. To fill this gap, we introduce the ABOT (Anthropomorphic roBOT) Database—a collection of 200 images of real-world robots with one or more human-like appearance features (http://www.abotdatabase.info). Harnessing this database, Study 1 uncovered four distinct appearance dimensions (i.e., bundles of features) that characterize a wide spectrum of anthropomorphic robots, and Study 2 identified the dimensions and specific features that were most predictive of robots' perceived human-likeness. With data from both studies, we then created an online estimation tool to help researchers predict how human-like a new robot will be perceived given the presence of various appearance features. The present research sheds new light on what makes a robot look human, and makes publicly accessible a powerful new tool for future research on robots' human-likeness.
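The estimation tool's core logic—predicting perceived human-likeness from which appearance features a robot has—can be sketched with a simple linear regression. Everything below (the feature set, the example robots, the ratings) is fabricated for illustration; the real tool, built from the database's actual ratings, is at http://www.abotdatabase.info.

```python
# Minimal sketch of the idea behind the ABOT estimation tool: regress perceived
# human-likeness (0-100) on binary appearance features. All numbers here are
# made up for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical feature columns: [eyes, face, hands, skin-like surface]
X = np.array([
    [1, 1, 1, 1],   # android-like robot
    [1, 1, 0, 0],   # head-only social robot
    [0, 0, 1, 0],   # industrial arm with a gripper
    [0, 0, 0, 0],   # appliance-like robot
    [1, 1, 1, 0],   # humanoid without skin
])
y = np.array([92.0, 55.0, 18.0, 4.0, 63.0])  # invented human-likeness ratings

model = LinearRegression().fit(X, y)
print(dict(zip(["eyes", "face", "hands", "surface"], model.coef_.round(1))))

# Estimate human-likeness for a new robot with eyes and a face but no hands or skin
print(model.predict(np.array([[1, 1, 0, 0]])))
```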
Seeing Through a Robot's Eyes: Spontaneous Perspective Taking Toward Humanlike Machines
Xuan Zhao and Bertram F. Malle Manuscript under review. [abstract] [preprint] [data and materials] * A portion of this manuscript was published in Proceedings of the Eleventh Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI '16), 335-342. [author note] [paper] As robots rapidly enter society, how does human social cognition respond to their novel presence? Focusing on one foundational social-cognitive capacity—visual perspective taking—six studies reveal that people spontaneously adopt a robot's unique perspective and do so with patterns of variation that mirror perspective taking toward humans. As with human agents, visual perspective taking of robots is enhanced when they display goal-directed actions (gaze and reaching vs. mere presence) and when those actions dynamically unfold over time (video vs. photograph). Importantly, perspective taking increases when the robot looks strikingly humanlike (an android) but is absent when the robot looks machine-like. This appearance-driven perspective taking is not due to inferences about the agent's mind, because it persists when the agent obviously lacks a mind (e.g., a mannequin). Thus, the sight of robots' superficial human resemblance may trigger and modulate social-cognitive responses in human observers that originally evolved for human interaction. Related publication: Zhao, X., Cusimano, C., & Malle, B. F. (2016). Do people spontaneously take a robot's visual perspective? In Proceedings of the Eleventh Annual ACM/IEEE International Conference on Human-Robot Interaction, 335-342. [paper] * The "Seeing Through a Robot's Eyes" manuscript contains all studies reported in this peer-reviewed HRI conference proceedings paper plus four additional studies, and reflects our most up-to-date findings and views about human perspective taking toward robots. Which one should you cite? Well, your call.
“Hello! How May I Helo You?”: How (Corrected) Errors Humanize a Communicator
Shirly Bluvstein*, Xuan Zhao*, Alixandra Barasch, and Juliana Schroeder [*equal authorship] Manuscript under review. [abstract] [preprint] [data and materials] Today more than ever before, online writing (e.g., emails, texts, and social media posts) has become a primary means of communication. Because written communication lacks human nonverbal cues (e.g., voice), people frequently struggle to distinguish whether they are interacting with a human or a chatbot online. The current research suggests a novel way to humanize writers: typographical errors (“typos”). Across four experiments (N = 1,253) using ambiguous conversational counterparts (e.g., customer service agents who might be bots), communicators who made and subsequently corrected a typo, rather than making no typo or leaving a typo uncorrected, appeared more humanlike. Respondents consequently perceived the communicator as warmer and were more likely to disclose personal information. These findings provide insight into when people are willing to share their personal data online. We discuss theoretical implications for humanization and practical implications for Internet privacy and building trust in organizations.
Tugging at the Heartstrings: Feeling Human Heartbeat Promotes Prosocial and Cooperative Behaviors
Xuan Zhao, Malte Jung, Desmond C. Ong, Nina Diepenbrock, Jean Costa, Oriel FeldmanHall, and Bertram F. Malle Manuscript in preparation. * Received First Prize in the live grant competition at the Annual Meeting of the Society for Personality and Social Psychology (SPSP '17). Watch my 2-minute pitch for the SPSP annual grant competition on how feeling another person's heartbeat increases prosocial behavior (which led to a shark-tank-style live "interrogation" on the main stage at the 2017 SPSP annual convention and, eventually, the first prize).
From Trolley to Autonomous Vehicle: Perception of Responsibility and Moral Norms in Traffic Accidents with Autonomous Cars
Jamy Li, Xuan Zhao, Mu-Jung Cho, Wendy Ju, and Bertram F. Malle SAE Technical Paper. (2016). [abstract] [preprint] Autonomous vehicles represent a new class of transportation that may be qualitatively different from existing cars. Two online experiments assessed lay perceptions of moral norms and responsibility for traffic accidents involving autonomous vehicles. In Experiment 1, 120 US adults read a narrative describing a traffic incident between a pedestrian and a motorist. In different experimental conditions, the pedestrian, the motorist, or both parties were at fault. Participants assigned less responsibility to a self-driving car that was at fault than to a human driver who was at fault. Participants confronted with a self-driving car at fault allocated greater responsibility to the manufacturer and the government than did participants confronted with a human driver at fault. In Experiment 2, 120 US adults read a narrative describing a moral dilemma in which a human driver or a self-driving car must decide between allowing five pedestrians to die or taking action to hit a single pedestrian in order to save the five. The “utilitarian” decision to hit the single pedestrian was considered the moral norm for both a self-driving and a human-driven car. Moreover, participants assigned the obligation of setting moral norms for self-driving cars to ethics researchers and to car manufacturers. This research reveals patterns of public perception of autonomous cars and may aid lawmakers and car manufacturers in designing such cars. [paper]
A Primer for Conducting Experiments in Human–Robot Interaction
Guy Hoffman and Xuan Zhao ACM Transactions on Human-Robot Interaction. (2020). [abstract] [manuscript] [media interview] We provide guidelines for planning, executing, analyzing, and reporting hypothesis-driven experiments in Human–Robot Interaction (HRI). The intended audience is researchers in the field of HRI who are not trained in empirical research but are interested in conducting rigorous human-participant studies to support their research. Following the chronological order of research activities and grounded in updated research practices in the psychological and behavioral sciences, this primer covers recommended methods and common pitfalls for defining research questions, identifying constructs and hypotheses, choosing appropriate study designs, operationalizing constructs as variables, planning and executing studies, sampling, choosing statistical tools for data analysis, and reporting results. [paper] Related media interview: forthcoming.
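To make one of the primer's planning steps concrete—deciding sample size before data collection—here is a minimal a priori power analysis for a hypothetical two-condition between-subjects experiment. The effect size, alpha, and power values below are placeholder assumptions of mine, not recommendations taken from the paper.

```python
# Illustrative sketch of a priori power analysis for a two-condition
# between-subjects study; all threshold values are placeholder assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_condition = analysis.solve_power(
    effect_size=0.5,          # assumed medium effect (Cohen's d)
    alpha=0.05,               # conventional significance threshold
    power=0.80,               # desired probability of detecting the effect
    alternative="two-sided",
)
print(f"Participants needed per condition: {n_per_condition:.0f}")  # ~64
```

Running the analysis before recruiting, rather than after, is the point: it forces the researcher to commit to an expected effect size and exposes underpowered designs early.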
MEDIA
“Every intellectual has a very special responsibility. He has the privilege and the opportunity of studying. In return, he owes it to his fellow men (or ‘to society’) to represent the results of his study as simply, clearly and modestly as he can.” —Karl Popper
Do people spontaneously take a robot's perspective? Find out in my 6-minute talk at the "Research Matters!" event at Brown University.
COLLABORATORS
I enjoy collaborating with and learning from people from diverse backgrounds. Below are people that I have written papers with (or that I currently owe a paper to).
Bertram Malle (Brown University, Department of Cognitive, Linguistic & Psychological Sciences)
Nicholas Epley (University of Chicago, Booth School of Business)
Jane Risen (University of Chicago, Booth School of Business)
Hyowon Gweon (Stanford University, Department of Psychology)
Malte Jung (Cornell University, Department of Information Science)
Juliana Schroeder (University of California, Berkeley, Haas School of Business)
Alixandra Barasch (New York University, Stern School of Business)
Guy Hoffman (Cornell University, Sibley School of Mechanical and Aerospace Engineering)
Heather Caruso (University of California, Los Angeles, Anderson School of Management)
Elizabeth Phillips (U.S. Air Force Academy, Department of Behavioral Sciences and Leadership)
Oriel FeldmanHall (Brown University, Department of Cognitive, Linguistic & Psychological Sciences)
Desmond Ong (National University of Singapore, Department of Information Systems and Analytics, School of Computing)
Tamar Kushnir (Cornell University, Department of Human Development)
Alice (Xin) Zhao (East China Normal University, Department of Educational Psychology)
Jamy Li (University of Twente, Department of Human-Media Interaction)
Wendy Ju (Cornell University, Jacobs Technion-Cornell Institute)
Roseanna Sommers (University of Chicago, Law School)
Corey Cusimano (Princeton University, Department of Psychology)
Jean Costa (Cornell University, Department of Information Science)
Shirly Bluvstein (New York University, Stern School of Business)