Embodied Simulations in Virtual Reality as Tools for Facilitating Empathy
Lynda Joy Gerry
1 INTRODUCING EMBODIED SIMULATIONS
This chapter describes embodied simulations as one method for facilitating empathy in virtual
reality. Embodied simulations (Bertrand, Gonzalez-Franco, Pointeau, & Cherene, 2014), also sometimes
called “embodied experiences” (Ahn, Le, & Bailenson, 2015) or “virtual embodiment” (Peck, Seinfeld,
Aglioti, & Slater, 2013), are virtual environments in which users experience being in a different body
from a first-person perspective. For the purposes of this thesis, I divide embodied simulations into two
types: embodying an avatar (avatar embodiment) and embodying a real person (virtual alterity). This
chapter focuses on avatar embodiment. The structure of this chapter is to first describe the mechanisms
behind embodied simulations, review findings from avatar embodiment studies, and analyze the model of
empathy that avatar embodiment studies utilize. Then I discuss limitations of avatar embodiment studies,
specifically that the studies involve self-focused versus other-focused perspective taking and empathy.
Ultimately, I suggest that virtual alterity systems offer an important contribution to VR research
pertaining to empathy by operating under a different model for empathy that is other-focused and takes
the psychological experience of a specific real person as the target for empathy. As stated in the previous
chapter, other-focused empathy has important consequences for altruistic motivations and helping
behaviors.
Avatar embodiment studies have become one of the most-cited examples of the efficacy of VR in
facilitating empathy. The theory of empathy in avatar embodiment studies is that barriers to empathy can
be overcome by altering perceived similarity, self-other overlap, and identification with an out-group
other. These barriers to empathy are based on intergroup biases, a phenomenon deeply embedded in our
brains and in society (Amodio, 2014; Molenberghs, 2013). Research indicates that intergroup bias can
block certain automatic empathic processes such as spontaneous mimicry and neurological responses in
vicarious pain (Avenanti, 2010). Intergroup biases are also observed in racism, often related to a lack of
empathy (Cosmides et al., 2013). Avatar embodiment studies measure empathic outcomes through
attitude changes, specifically decreases in negative implicit out-group biases and increases in positive
implicit attitudes towards outgroups. Importantly, avatar embodiment studies foster empathy by way of
identification with the other without highlighting the ways in which another person’s experiences,
thoughts, and emotions may differ from one’s own (self-other distinctions).
This chapter ultimately proposes an alternative design for facilitating empathic processes and
outcomes in VR using embodied simulations with real people under a project entitled “virtual alterity” to
highlight the self-other distinction as a primary focus. Virtual alterity systems implement other-focused
perspective taking and empathy in VR. Alterity is understood in the anthropological literature on empathy
to indicate the importance of maintaining an awareness of the other as distinct from oneself, specifically
when attempting to understand others who have very different cultural heritage and practices, value
systems, lifestyles, mental or physical abilities, etc. (Gadamer, 1960; Rothfuss, 2009). I argue that
recognizing and appreciating another’s alterity is important for empathy, and propose virtual alterity
systems as a tool for this interpersonal exchange. Rather than just seeing the other as like oneself, virtual
alterity systems facilitate empathy through heightened awareness of another as bearing a distinct
subjective experience. As stated in the previous chapter, cognitive neuroscience research identifies the
matured capacity for empathy as being able to recognize other people as distinct from, yet similar to
oneself (Decety, 2005). Virtual alterity systems are designed with this in mind. Because avatar studies
involve seeing and imagining oneself as the other (self-focused perspective taking) rather than seeing and
imagining the other’s experience (other-focused perspective taking), avatar studies are not other-focused
and do not highlight ways that another’s experience may differ from one’s own. Regardless, avatar
embodiment studies provide the groundwork for design utilized in virtual alterity systems and offer
compelling findings as a starting point for facilitating empathy in VR.
2 EMBODIED SIMULATION SETUPS
2.1 AVATAR EMBODIMENT SETUPS
Avatar embodiment studies incorporate motion tracking to induce visual-motor synchrony while participants take on a first-person perspective of an avatar body. Participants wear motion-capture bodysuits and VR head-mounted displays (HMDs) within a 2 x 2 m physical walk-around space. Users embody an avatar, a computer-generated representation of a person, such that when they look down they see the torso, legs, and feet of the avatar body. The motion-capture bodysuit and motion tracking systems track and render users' physical movements onto the virtual body so that users can move the avatar body and walk around the virtual world. Users also see the avatar body reflected within a virtual mirror (Figure 1). Avatar embodiment has become one of the major paradigms in VR research labs, specifically Mel Slater's Experimental Virtual Environments Lab for Neuroscience and Technology (EVENT) Lab at the University of Barcelona, University College London's VR Labs, and Jeremy Bailenson's Virtual Human Interaction Lab (VHIL) at Stanford University. It is a paradigm also used by Nonny de la Pena in her Immersive Journalism VR simulations of real news events constructed from eye-witness testimonies and on-site recordings (see Figure 2). Figure 1 shows the setup used in avatar embodiment studies. This image is of Peck, Seinfeld, Aglioti, and Slater's (2013) study, "Putting Yourself into the Skin of a Black Avatar Reduces Implicit Racial Biases," conducted at the EVENT Lab, which showed that experiencing a sense of ownership over a dark-skinned avatar body reduced implicit racial biases in white participants.

Figure 1. The top image shows the motion-capture body suit and VR headset worn by subjects in avatar embodiment studies. The top left and bottom images show what the user sees in the virtual environment, looking down and seeing the hands of the avatar body and the avatar body reflected in a virtual mirror. Image from Peck et al. (2013).
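To make the motion-tracking pipeline concrete, below is a minimal sketch, not taken from any of the cited labs' software, of how tracked joint rotations from a bodysuit might be retargeted onto an avatar skeleton each frame; the joint names and the read_mocap_frame stub are hypothetical placeholders.

```python
from dataclasses import dataclass

# Hypothetical joint set; real mocap suits report many more joints and
# typically use quaternions rather than Euler angles.
JOINTS = ["hips", "spine", "head", "l_shoulder", "l_elbow", "r_shoulder",
          "r_elbow", "l_hip", "l_knee", "r_hip", "r_knee"]

@dataclass
class AvatarPose:
    # Per-joint rotations in degrees (pitch, yaw, roll); a stand-in for a
    # full rig with bone offsets and scaling.
    rotations: dict

def read_mocap_frame():
    """Placeholder for one frame of suit data; returns a neutral pose here."""
    return {joint: (0.0, 0.0, 0.0) for joint in JOINTS}

def retarget(mocap_frame, joint_map):
    """Copy each tracked joint's rotation onto the corresponding avatar joint.

    Real retargeting also corrects for differing bone lengths and rest poses;
    visuomotor synchrony mainly requires low latency and a correct mapping.
    """
    return AvatarPose(rotations={avatar_joint: mocap_frame[suit_joint]
                                 for suit_joint, avatar_joint in joint_map.items()})

if __name__ == "__main__":
    identity_map = {j: j for j in JOINTS}      # suit joint -> avatar joint
    pose = retarget(read_mocap_frame(), identity_map)
    print(pose.rotations["head"])              # avatar head follows the user's head
```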
2.2 VIRTUAL ALTERITY SETUPS
Virtual alterity systems use video-generated virtual environments to allow a user to embody another real person (Figure 2). Like avatar embodiment, virtual alterity systems use synchronized touch and movement, and sometimes also a mirror, so that the user can see themselves reflected as another person. One such setup, The Machine to Be Another (MBA), is an art performance installation inspired by neuroscience protocols for bodily and perceptual illusions designed to trick the brain's perception of one's own body (BeAnother Labs, 2014; Bertrand et al., 2014). MBA is based on research that combines art, cognitive science, and accessible technology. Studies reveal significantly higher levels of presence in MBA than in other VR environments (Collaço et al., 2016). Presentations of MBA have fascinated participants in places like Mexico, Spain, Slovenia, and Israel, with participants between 6 and 20 years old being especially enthusiastic. The MBA has been applied to issues such as mutual respect, generational conflicts, gender identity, immigration, and physical disability bias.

Figure 2. This image shows what a user would see within a virtual alterity setup. Like avatar embodiment, in virtual alterity the user looks down and sees the torso, feet, and hands of another person within a video-based virtual environment. Image is from The Machine to Be Another.
The Machine to Be Another Classic Setup
The MBA classic setup is designed with a "performer" wearing a front-facing (either head- or chest-mounted) camera recording video that is live streamed into the virtual reality headset of a "user" (Figure 3). This allows two people to share a first-person perspective simultaneously. The MBA classic setup is an embodied simulation that puts the user in synch with another person by sharing bodily comportments and movements, creating the illusion of embodying another (real) person. In the MBA classic setup, the "performer" imitates and follows the movements of the "user", creating a visual-motor synchrony effect (discussed below).

Figure 3. In the Machine to Be Another classic setup, a performer (left) wears chest-mounted cameras (just below eye-level) with video live streamed into the display of the Oculus Rift DK2 VR headset worn by a user (right). The performer follows the movements of the user.
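To illustrate the underlying data flow (camera on the performer's side, display on the user's side), here is a minimal single-machine sketch using OpenCV; the real MBA setup streams stereo video over a network into an Oculus display rather than into a desktop window, so this is only a stand-in for the capture-and-relay loop.

```python
import cv2

# Grab frames from a head- or chest-mounted camera (device index 0 is an
# assumption) and show them immediately, approximating the performer-to-user
# video path. Network streaming, stereo rendering, and lens distortion
# correction for the HMD are omitted.
def relay_camera_to_display(device_index: int = 0) -> None:
    capture = cv2.VideoCapture(device_index)
    if not capture.isOpened():
        raise RuntimeError("No camera found at index %d" % device_index)
    try:
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            cv2.imshow("user view (stand-in for HMD)", frame)
            # Low latency matters: long delays between the user's movement and
            # the performer's imitation weaken the visuomotor synchrony effect.
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    finally:
        capture.release()
        cv2.destroyAllWindows()

if __name__ == "__main__":
    relay_camera_to_display()
```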
Paint With Me
The MBA classic setup inspired the setup for Paint With Me (PWM), a virtual environment where users see a video from a painter's embodied point of view with a tracked rendering of their own hand (using Leap Motion) while they listen to the painter describe her creative process and follow the painter's movements on their own physical canvas (Figure 4). The setup uses stereoscopic video (filmed with two GoPro Hero 4 cameras and stitched in Autopano) displayed in an Oculus DK2 VR headset, Roald binaural microphones worn by the performer and played into headphones worn by the user to capture perspectival audio from the painter's point of view, and Leap Motion to capture user hand motion. With the Leap Motion, users see a tracked rendering of their own hand on top of the video of the painter, such that the user can move their hand in tandem with the painter and follow her movements while painting along with her.

Figure 4. The Paint With Me setup. Whereas in the MBA classic setup the performer follows the user's movements, in PWM the user follows the movements of the performer (the painter). Figure from Gerry (2017).
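To illustrate the compositing step, the sketch below draws a marker for the user's tracked hand position on top of a video frame, standing in for the Leap Motion overlay on the painter's point-of-view video; the get_tracked_hand_position stub and its normalized coordinates are hypothetical placeholders.

```python
import numpy as np
import cv2

def get_tracked_hand_position() -> tuple:
    """Placeholder for a hand tracker; returns a normalized (x, y) position."""
    return (0.6, 0.7)

def overlay_hand(frame: np.ndarray, norm_xy: tuple) -> np.ndarray:
    """Draw a simple marker where the user's tracked hand is, on top of the
    painter's point-of-view video frame, so the user can align their hand
    with the painter's movements."""
    h, w = frame.shape[:2]
    x, y = int(norm_xy[0] * w), int(norm_xy[1] * h)
    out = frame.copy()
    cv2.circle(out, (x, y), 12, (0, 255, 0), 2)
    return out

if __name__ == "__main__":
    # Stand-in for one frame of the stereoscopic painter video.
    frame = np.zeros((480, 640, 3), dtype=np.uint8)
    composited = overlay_hand(frame, get_tracked_hand_position())
    print(composited.shape)  # same frame, now with the hand marker drawn in
```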
The Machine to Be Another Body Swap
The Machine to Be Another Body Swap is a setup in which two individuals wear an Oculus Rift Developer Kit 2 VR head-mounted display (HMD) with a camera mounted on the front to see from one another's embodied point of view in a live video stream (Figure 5). In Phase 1, which lasts about 5 minutes, a curtain separates the two users. Two facilitators perform coordinated movements so that the users receive synchronous visual and touch stimulation on the hands, arms, and feet to induce a Body Ownership Illusion (BOI) in the body of another. Users are instructed in an imitation procedure (see Dumas et al., 2010) to move very slowly and to try to move with their partner, resulting in intervals of turn-taking and spontaneous synchrony. In Phase 2, the curtain is removed and the facilitators guide the users to stand up, walk towards one another, and shake hands (Figure 6). Because the experience is a body swap, this involves seeing oneself from the point of view of the other user. When the two users meet, the facilitators guide the users to shake hands and provide synchronous visuotactile (VT) stimulation to the arm, wrist, and hand of each user to induce a subtle BOI while each user sees themselves from the other's point of view. After a couple of minutes, the users are told that the experience is ending, and the facilitators fade the screen and help the users remove their headsets.

Figure 5. In Phase 1 of the Body Swap, users are separated by a curtain and engage in a movement imitation task. Each user wears a VR headset with a front-mounted camera.

Figure 6. In Phase 2 of the MBA Body Swap, the curtain separating the users is removed; users see themselves from the other person's point of view, shake hands, and then move freely. Photo depicts children in the MBA body swap. Photo used with permission of BeAnother Labs.
3 HOW EMBODIED SIMULATIONS FACILITATE EMPATHY
Laboratory experiments have demonstrated that body perception can be altered through
perceptual illusions inducing multisensory conflict (Aspell, Lenggenhager, & Blanke, 2009; IJsselsteijn, de Kort, & Haans, 2006). These experiments indicate that representations of the body, including the relative size, location, and appearance of the body and its parts, may be malleable. Through perceptual illusions, external objects such as rubber hands, mannequin bodies, and virtual body parts or whole bodies may be incorporated into one's body image, that is, one's mental model of one's own body and its features, which is normally experienced as distinct from other objects and bodies. One such perceptual illusion is
the Body Ownership Illusion, or the sense of owning a body different from one’s own real body. When
the body ownership illusion is induced with a virtual or fake body or body part representing an out-group,
this produces more positive appraisals of and attitudes towards out-groups. These positive attitude
changes are important for intergroup harmonization and conflict resolution.
Virtual Reality can induce embodiment by creating multi-sensory stimuli combining first-person
perspective with synchronous visual and tactile stimuli, as well as visual and motor synchrony (Maselli et
al., 2013). In avatar embodiment studies, the body ownership illusion is hypothesized to be a mechanism
for increasing self-other overlap of body representations (shared representations). The claim is that
virtually embodying an out-group avatar body causes subjects to incorporate features of the body of an
out-group member into their own body image, and this incorporation of the other into an aspect of oneself
(one’s body image) impacts high-level conceptual self-other overlap (Maister et al., 2013). Maister et al.
(2013) write, “We argue that these [social attitude] changes occur via a process of self-association, first in
the physical, bodily domain as an increase in perceived physical similarity between self and outgroup
member, and then in the conceptual domain, leading to a generalization of positive self-like associations
to the outgroup.” (p. 9) The key motivation behind avatar embodiment studies is to experimentally
manipulate body representations by inducing body ownership illusions over an avatar body representing
an out-group member. This is a technique to increase inclusion of out-group members. Avatar
embodiment studies have demonstrated embodied simulations as a successful tool for reducing negative
implicit biases and promoting positive attitudes for stigmatized out-groups. These studies induce self-
other merging (SOM) by way of identification with an out-group.
First, I describe the mechanisms currently deployed in avatar embodiment studies through a
history of findings regarding the malleability of bodily self-consciousness in Out-of-Body Illusions (OBEs) and
Body Ownership Illusions. This malleability is the cornerstone of experimentally altering shared
representations, and it informs design possibilities within virtual environments.
4 HISTORY OF LABORATORY BODY ILLUSIONS
Virtual environments (VEs) cause a user’s perception of their body to be re-oriented around a
new, virtual perspective within a virtual world. These effects are optimized based on research on bodily
illusions that impact bodily self-consciousness. Bodily self-consciousness involves non-conceptual and
pre-reflective representations of the body (Gallagher 2000; Haggard et al. 2003). The components of
bodily self-consciousness that have been empirically studied and utilized in VEs include: self-location,
body ownership, and perspective (Serino et al., 2013; Aspell, Lenggenhager, & Blanke, 2009). Jeannerod (2007) includes the sense of agency (SoA) as an aspect of bodily self-consciousness, and agency illusions in VEs may impact a user's sense of agency. Normally one's experience of these aspects of bodily self-consciousness is unified, but research indicates that this unity can be disrupted or altered through specific types of
multisensory stimulation and perceptual illusions (Mandrigin & Thompson, 2015). Bodily self-
consciousness involves the awareness of oneself as the sole subject of one’s primary, bodily experiences.
Thus bodily self-consciousness has a special privacy and primacy, but VEs can allow two people to share
aspects of one another’s bodily self-consciousness.
The sense of being located at a specific point in the environment is called self-location, and
pertains to the question, “Where am I in space?” (Ionta et al. 2011). Self-location refers to the experience
of conscious awareness centered within the body and of having a body that takes up space within the
external physical environment. Another aspect of bodily self-consciousness is self-identification, which
involves the recognition of one’s body or body parts as one’s own, as separate from other objects and
other bodies, and as belonging to oneself (body ownership). Tsakiris (2010) defines body ownership as
the feeling that the body a person inhabits is one’s own, an integral part of ‘me’, in ways that other objects
and other people are not. In laboratory experiments, multisensory stimulations and perceptual illusions
can alter a subjects’ perceived self-location (creating an out-of-body illusion) and their self-identification,
leading to a sense of ownership over an illusory body, or a body ownership illusion (BOI; Guterstam &
Ehrsson, 2012). These studies allow scientists to investigate the nature of the bodily self, and how the
experience of the body as mine is developed, maintained, or disturbed (Tsakiris, 2010). Serino et al.
(2013) stipulate perspective as the first-person, body-centered locus of access and orientation within an
environment. The first-person perspective involves the question, “From where do I perceive the world?”
(Ionta et al., 2011). Perspectives can be uniquely manipulated within virtual environments (VEs) by
pairing self-location and self-recognition to a new perspective. Sense of Agency (SoA) is the feeling of
authorship for self-generated movements and the external events they cause (de Vignemont & Fourneret,
2004). Motion tracking in VEs allows users to have a sense of agency over actions and events within the
virtual world, creating agency illusions that reinforce and strengthen BOIs.
Experimentally manipulating body ownership has important consequences for social perception
and inclusivity. Empathy research in VEs uses a first-person perspective (1PP) on a virtual body to induce
a full body ownership illusion (FBOI) such that users experience an avatar body as their own. Users see
the avatar body spatially aligned with their own visuospatial first-person perspective and matching their movements. First, I review the history of bodily illusions in order to explain how these laboratory
findings developed into the paradigms currently used in VEs designed to promote empathy.
4.1 THE RUBBER HAND ILLUSION
Experimental research on OBEs and
BOIs is inspired by the protocol used in the
Rubber Hand Illusion (RHI; Botvinick & Cohen,
1998). The RHI is an experiment that causes
subjects to experience an external object (a rubber
hand) as part of their own body (body ownership).
Subjects sit at a table, place their real hand on the
table such that it is hidden from view behind a
dividing panel, and see a rubber hand placed on
the visible side of the divider (see Figure 7). The illusion occurs as the experimenter either taps or brushes the subject's real hand and the rubber hand synchronously with a rod, and after about 15-20 seconds most people start to experience the rubber hand as though it were their own hand. This effect is measured qualitatively, after the illusion (post-induction), through a self-report Likert Scale Ownership Illusion Questionnaire, with items such as "I felt as if the rubber hand were my hand" (Botvinick & Cohen, 1998). The RHI is also measured quantitatively through a strong physiological and psychological response when the experimenter stabs the rubber hand with a knife (the knife threat paradigm), eliciting skin conductance responses as though one's real hand were threatened. Another quantitative measure is proprioceptive drift: after the illusion, the participant points (without seeing their hand) to its felt position, which shifts towards the fake hand (Aspell et al., 2009). The RHI works by pairing visual information with synchronous tactile information to create multisensory integration between the seen rubber hand and the felt real hand. This causes the brain to confuse the cross-modal sensory signals, misidentifying the rubber hand as the source of the felt haptic sensation. This effect is described as the "spatial remapping of touch" (Botvinick, 2004; Ehrsson et al., 2004; Ehrsson et al., 2005). Importantly, the illusion is significantly weakened by, or ineffective with, asynchronous tactile stimulation. The RHI has been replicated within VR; subjects experience a virtual arm as part of their body when tactile stimulation is applied synchronously to their unseen real arm and the seen virtual arm (Slater, Perez-Marcos, Ehrsson, & Sanchez-Vives, 2008).

Figure 7. In the Rubber Hand Illusion, the participant cannot see their own real hand (box on the right), which is brushed by an experimenter synchronously with a rubber hand (box on the left) that the subject can see. Image from Begley (2016).
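To make the two standard RHI measures concrete, here is a minimal sketch (with invented numbers, not data from any cited study) of how a post-induction ownership score and proprioceptive drift might be computed.

```python
from statistics import mean

# Hypothetical 7-point Likert responses (1 = strongly disagree, 7 = strongly
# agree) to ownership items such as "I felt as if the rubber hand were my hand".
ownership_items = [6, 5, 6]          # illusion-related statements
control_items = [2, 1, 2]            # control statements that should stay low

ownership_score = mean(ownership_items)
control_score = mean(control_items)

# Proprioceptive drift: change in where the participant points to the felt
# position of their hidden hand, measured before and after stimulation.
# Positive values mean the felt position moved towards the rubber hand.
pointing_before_cm = 0.0             # distance from the real hand, pre-induction
pointing_after_cm = 2.5              # post-induction (example value)
proprioceptive_drift_cm = pointing_after_cm - pointing_before_cm

print(f"ownership = {ownership_score:.1f}, control = {control_score:.1f}, "
      f"drift = {proprioceptive_drift_cm:.1f} cm")
```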
The spatial re-mapping of touch observed in the RHI could facilitate a sense of inclusion of the
rubber hand into the wearer’s body image. The RHI has been used as a tool for the inclusion of out-group
members under an “inclusion of the other in the self” model (Aron, Aron, & Smollan, 1992) by eliciting
an experience of an out-group skin color rubber hand as one’s own. Longo et al. (2009) examined the
relationship between RHI experiences and perceived similarity between the fake rubber hand and the
participants’ real hand. This perceived similarity was hypothesized to impact the strength of the illusion.
Skin luminance and hand shape of the rubber hand did not affect the RHI, but post-RHI similarity ratings
indicated that those who perceived the fake hand as more similar to their own hand experienced a stronger
illusion of ownership.
4.1.1 Constraints to the RHI
There are certain constraints to the RHI. As noted, it requires synchronous stimulation. The RHI
only works with hand-shaped objects; the effect will not work with just a wooden block, for instance
(Tsakiris, Carpenter, James, & Fotopoulou, 2010). Morphological similarity and corporeality of the
rubber hand influence the illusion of body ownership (Haans, Ijsselsteijn, & de Kort, 2008). Greater
discrepancies between the posture and spatial position of the rubber hand relative to the subject’s real
hand diminish the BOI effect (Austen, Soto-Faraco, Enns, & Kingstone, 2004; Lloyd, 2007). Thus, the
rubber hand must be spatially congruent with one’s real hand from a first-person perspective (Pavani,
Spence, & Driver, 2000). Thus, the rubber hand has to look enough like a real hand for the illusion to
work. Interestingly, the premotor cortex becomes activated as subjects begin to identify the rubber hand
as their own hand (Ehrsson, Spence, & Passingham, 2004). During the illusion, physiological reactions
such as decreased temperature of the subject’s real hand (Moseley et al., 2008; Hohwy & Paton, 2010)
and changes in temperature sensitivity thresholds (Llobera, Sanchez-Vives, & Slater, 2013) indicate a
body transfer effect. The strength of the BOI can be tested through a Knife Threat Paradigm (Armel & Ramachandran, 2003), whereby the illusory body is harmed with a knife, or by other means, evoking physiological and neural responses that mimic the anxiety response to a threat to one's own real hand (Ehrsson, Wiech, Weiskopf, Dolan, & Passingham, 2007). Various physiological reactions to the body transfer effect in the RHI have been measured, specifically in response to threat, such as skin conductance
(Armel & Ramachandran, 2003), histamine reactivity (Barnsley et al., 2011), and electrocardiogram heart
rate deceleration (Maselli & Slater, 2013). Moreover, physiological findings in response to threat to a fake
hand or fake body indicate that subjects have some degree of transfer away from their veridical body/hand
towards the fake hand, an effect that Guterstam and Ehrsson (2012) refer to as “disownership.”
4.2 VIDEO-BASED BODILY ILLUSIONS BASED ON THE RHI
Laboratory-evoked OBEs and BOIs inspired by the RHI include experimentally induced out-of-body illusions (Ehrsson, 2007), full-body illusions (Lenggenhager et al., 2007), and body swap illusions (Petkova & Ehrsson, 2008). These illusions all involve synchronous stimulation presented through two sensory streams simultaneously, causing a perceptual illusion whereby a subject can experience him or herself at a different point in space (illusory self-location) and out of his or her own real body (illusory body ownership), such as in a mannequin, an avatar body, or another person's body. Subjects in these
bodily illusions react to peri-personal space around the new bodily axis point of multisensory integration
(Ehrsson, 2007; Lenggenhager, Tadi, Metzinger & Blanke, 2007). These illusions create the foundation
for effects used in embodied simulations in VR.
4.2.1 The Out-of-Body Illusion
The ‘out-of-body illusion’ (Ehrsson, 2007) involves using real-time video from two cameras located 2 meters behind the subject’s physical body, screened to the participant in a head-mounted display (HMD) while the subject sees an experimenter touch the participant’s chest with a small rod (Figure 8). The participant pairs the felt sensations with the observed movements because the seen movement of the rod and the felt touches to the subject’s real body are synchronous and spatially congruent from a first-person perspective. Interestingly, the rod is simply approaching the cameras that are 2 meters away, and it is only the approach of the rod observed in the cameras, synchronous with one’s felt sensation on one’s real chest, that elicits this out-of-body illusion and makes subjects feel that the rod approaching the cameras is directly causing the felt touch. This illusion involves the feeling of having an unseen body that is being touched in the location below the cameras, which is called the illusory body, as well as the experience of being located in that position (2 meters behind the real body), which is called illusory self-location. Interestingly, asynchronous stimulation modulates this effect, weakening it quite significantly, and thus the stimuli must be synchronous for the effect to occur. Specifically, to assess the change in perceived self-location, subjects rated from 0 to 100 how strongly they experienced themselves to be located at their veridical location and at the illusory location, from “I did not experience being located here at all” to “I had a very strong experience of being located here.” During an out-of-body illusion, Guterstam and Ehrsson (2013) report that subjects’ experience of being located at the veridical location of their real body was dramatically reduced by synchronous stimulation, while the experience of being located in the illusory location significantly increased.

Figure 8. Ehrsson’s (2007) Out-of-Body Illusion setup.
The Temporo-Parietal Junction (TPJ) becomes activated during illusions that alter perceived self-
location (Blanke, 2012). The TPJ is involved in integrating multisensory streams of information to form a unified, coherent sense of body and self (Tsakiris, date). Inhibiting TPJ activation with transcranial
magnetic stimulation (TMS) impairs subjects’ ability to mentally transform their body during OBE
visualization tasks (Blanke et al., 2005). Blanke et al. (2005) postulate that mental transformation of one’s
own body has a crucial role in perspective taking. Mental transformation of one’s own body is related to imaginative self-transposal in cognitive empathy, which is the process of imagining oneself in another person’s situation by swapping from one’s own orientation (physical, emotional, psychological)
to that of another person. The TPJ has also been targeted as an important structure for awareness of self-
other distinctions that aid emotion understanding and empathy (Decety, 2010). Mai et al. (2016) found
that transcranial direct current stimulation (tDCS) used to inhibit activation of the right TPJ (cathodal)
decreased accuracy on mental state attribution (Theory of Mind) and cognitive empathy. Thus, there may
be important links between OBEs, imaginative self-transposal in cognitive perspective taking, and
awareness of self-other distinctions since these processes all involve a common neural region, the TPJ.
An Out-of-Body Illusion in which participants saw themselves from above was shown to reduce fear of death in a VR simulation designed to replicate autoscopic phenomena (seeing one’s own body from above) reported in near-death experiences (Bourdin, Barberia, Oliva, & Slater, 2017). Film production company Makropol frequently uses Out-of-Body Illusions in film-based VR to capture sensations of fainting, nausea, and anxiety. The San Francisco-based OuterBody Experience Lab, created in 2012 by Jason Wilson (see http://outerbody.org), is a gamified creative technology installation using aerial cameras with live-streamed video into head-mounted video display goggles. Because the cameras are placed in corners above the user, the user sees their own body in space from a bird’s eye view. Again, this aerial viewpoint is consistent with descriptions of out-of-body and autoscopic experiences in neural pathologies and near-death experiences, where individuals describe seeing their own body from above. The user’s body appears like a video game character moving around. The user can observe their own body, third-personally and from above, as they move around a room and other people. In the game, users are tasked with retrieving various objects and moving them to specific locations, while cooperating with other users and racing against a clock. Successful navigation in the game requires understanding the movements of one’s body in relation to other people, floor marks, and objects in the room. Interestingly, users can adapt and engage in motoric team-based coordination throughout a room space, such as playing dodgeball. While there is no empirical data on the project, it is likely that users may experience an out-of-body illusion. The game involves third-person matching of the seen body in the video-display goggles to muscular and proprioceptive feedback from the real body, creating a new perspective on self-movement.

Figure 9. Two players navigating a maze in Jason Wilson's OuterBody Experience Lab. On the screen projection we can see what the players see in their video display goggles.
4.3 THE FULL-BODY ILLUSION
In the full-body illusion (Lenggenhager et al., 2007), participants see their own back projected 2 meters in front of them in an HMD live video feed from cameras that are actually 2 meters behind them, and again synchronous tactile stimulation with a rod to their observed back (in the HMD live video stream) and the felt sensation induces an out-of-body experience (Figure 10). Lenggenhager and colleagues (2007) then take off the HMD, blindfold the subject, walk the subject around in the room, and then ask the subject to walk back to the place where their body had initially been. The researchers observed that subjects would walk closer to the illusory location where their body had been projected to be (2 meters in front) in the HMD (proprioceptive drift). In a similar paradigm, researchers show participants a simple architectural aerial line drawing of the room space, indicating the location of the cameras and equipment, and ask the participant to indicate where their body had been located within the space during the experiment, with similar effects of proprioceptive drift towards the illusory body.

Figure 10. Lenggenhager et al.'s (2007) Full Body Illusion setup.
Temporally and spatially congruent visual and somatic signals in egocentric reference frames cause a
change in perceived self-location from the veridical location in the room to the location where the
cameras were placed.
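As an illustration of how drift could be quantified in this walking/drawing paradigm, the sketch below computes the displacement of the indicated self-location towards the illusory body, assuming the veridical position, the indicated position, and the illusory body's position are known as 2D room coordinates; the coordinates are invented for the example.

```python
import math

def drift_towards_illusory(veridical, indicated, illusory):
    """Return how far the indicated self-location has moved from the veridical
    position in the direction of the illusory body, in the same units as the
    input coordinates (e.g., meters)."""
    vx, vy = veridical
    ix, iy = illusory
    px, py = indicated
    # Unit vector from the veridical position towards the illusory body.
    dx, dy = ix - vx, iy - vy
    length = math.hypot(dx, dy)
    ux, uy = dx / length, dy / length
    # Project the displacement of the indicated position onto that direction.
    return (px - vx) * ux + (py - vy) * uy

if __name__ == "__main__":
    veridical = (3.0, 2.0)    # where the body actually stood
    illusory = (3.0, 4.0)     # 2 m in front, where the body was seen in the HMD
    indicated = (3.0, 2.7)    # where the blindfolded subject walked back to
    print(f"drift = {drift_towards_illusory(veridical, indicated, illusory):.2f} m")
```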
4.4 BODY SWAP ILLUSIONS
In the ‘body swap illusion’ (Petkova & Ehrsson, 2008), participants experience a mannequin body as their own such that when the subject looks down they see a mannequin body being touched. This is accomplished by positioning two downward-facing cameras on the head of a mannequin body, with the video streamed into the user’s head-worn video display goggles, such that the user looks down to see the mannequin torso. Petkova and Ehrsson (2008) then induced the illusion by applying synchronous touch to the mannequin’s torso and the subject’s real torso. In a second experiment, a confederate wore head-mounted cameras with live video streamed into head-mounted video-display goggles worn by a subject to induce a similar effect. This now qualifies as virtual alterity since it involves a body swap into the body of another real person. An experimenter applied synchronous VT stimulation to the confederate’s arm and the subject’s arm, inducing a body ownership illusion. The researchers then asked subjects to shake hands with the confederate, which in the body swap appears as though they are shaking their own hand. An important component of this second handshake experiment is that the user can see themselves third-personally from the point of view of the swapped-into body of the confederate, and yet the ownership illusion still persists. This study inspired designs for The Machine to Be Another classic setup and Body Swap, which involve egocentric perspective sharing that could open new possibilities for being with another, impacting communication and empathy. Importantly, Petkova and Ehrsson’s experiment only involves swapping into the body of another person, and is thus closer to the MBA classic setup than to the mutual swap that the MBA Body Swap uses. Thus, I would align Petkova and Ehrsson’s experiment more with a body transfer illusion into the body of a real person than with a full body swap.

Figure 11. The Body Swap Illusion by Petkova & Ehrsson, using VT stimulation to transfer into a mannequin body.

Figure 11. The handshake task used in Petkova & Ehrsson’s (2008) second body swap illusion experiment: swapping into the body of another real human subject and seeing oneself from that view.
Simo Ekholm (2014) conducted a similar
body swap experiment for his Master’s thesis,
Meeting Myself from Another’s Point of View, in
which two real people experience a body swap
illusion and see from one another’s point of view
(Figure 12). Ekholm used a stereoscopic camera setup
to simulate stereoscopic vision, as well as binaural
microphones to simulate an auditory perspectival
swap rather than just a visual swap, which inspired
protocols for the Paint With Me virtual alterity setup.
Both subjects wore Oculus Rift DK2 VR headsets with two small cameras mounted in front, and the head-mounted cameras from one subject were live-streamed into a video seen in the VR headset worn by their partner (thus it is a full swap versus a one-way body transfer). Binaural microphone input was also swapped into the binaural earphones worn by their partner. Thus, both subjects see and hear from one another’s embodied, first-person point of view (such that when they look down they see the torso of the other person), and, as the title suggests, also see themselves from the other person’s point of view. Extending Petkova and Ehrsson’s (2008) findings, in Ekholm’s setup subjects still experience illusory self-location in the body of the other person even though they can see their own real, physical body in full view. Specifically, in a handshake task, subjects can see their own arm, hand, torso, and face (partially obstructed by the HMD), and yet still have a body transfer effect of ‘feeling from’ the arm, hand, and body of the other person. Thus, even seeing one’s own real body does not seem to interfere with the strong effects of cross-modal synchronous stimulation in inducing a sense of being outside one’s own body. Interestingly, Ekholm (2014) found that the strength of the body ownership illusion was decreased in subjects who had a regular body-awareness practice, such as dancers, athletes, and meditators.

Figure 12. Ekholm’s (2014) body swap setup, where two people see and hear from one another's point of view.
Maša Jazbec and colleagues
(2016) conducted a body-swap
experiment that involved swapping
bodies with an anthropomorphically and
aesthetically realistic humanoid robot,
Geminoid HI-2 (see Figure 13), at Hiroshi Ishiguro Laboratories. Jazbec et al.’s (2016) study investigates the relationship between agency and the sense of ownership towards a different body. The team used an OvrVision stereoscopic camera live streamed into an Oculus DK2 VR headset, with head motion tracking rendered directly into the movements of the Geminoid HI-2 robot. Jazbec’s team replicated the Phase 1 and Phase 2 protocols from The Machine to Be Another Body Swap. Initially, a black curtain separates the subject’s body from the robot body. In the first phase of the experiment, subjects are instructed to look around and look at their newly adopted robot body. Two experimenters choreograph synchronous and spatially congruent touch stimuli on the Geminoid HI-2 robot body and the subject’s body. After one minute of synchronous VT stimuli, phase two of the experiment begins. The experimenters remove the curtain and position the subject closer to the robot. As in Ekholm's (2014) and Petkova and Ehrsson’s (2008) setups, in Jazbec's setup the user can see themselves from the point of view of the robot. Subjects are asked to touch their own real body through the Geminoid HI-2 robot. Experimenters controlled the Geminoid HI-2 through a motor synchronization task such that the robot would mirror and imitate the movements of the human subject (VM synchrony). This movement imitation procedure is based on The Machine to Be Another’s classic setup and Body Swap.

Figure 13. Jazbec et al.'s (2016) body swap experiment with a Geminoid HI-2 humanoid robot.
In Jazbec et al.’s (2016) study, participants reported that they felt like they were inhabiting two
bodies simultaneously. Participants also confused their sense of whether they were the one being touched
or the one doing the touching within the illusion (an agency illusion, discussed below). Jazbec et al.
(2016) conclude that, as opposed to compromising subjects’ body awareness (of their real body), which Guterstam and Ehrsson describe as a ‘disownership’ effect of body ownership illusions, the body swap illusion temporarily disorients subjects’ self-recognition. Jazbec also employed a pointing paradigm to measure perceived self-location by asking subjects to point to where they were. In a compelling finding, Jazbec found that subjects would point towards the robot, towards the point of view they had in the IVE setup, rather than to their seen real body, indicating either a habitual or a primary tendency to point inwards to indicate self-location.
5 KEY MECHANISMS FOR BODY OWNERSHIP ILLUSIONS
The previous section presented examples of bodily illusions that have been evoked using fake body parts and video feeds to induce a sense of being somewhere else (illusory self-location) and of being in a different body. This section reviews the key mechanisms of bodily illusions that have been employed in VR setups and experiments. These key mechanisms involve visuomotor (VM) synchrony, first-person perspective, and agency illusions. Figure 14, from Kilteni, Maselli, Kording, & Slater (2015), shows an overview of the RHI triggered by VT stimulation (A), proprioceptive drift (B), and a Body Ownership Illusion (C), and how a similar effect is created in VR with 1PP (D), visuomotor synchrony (E), and an agency illusion (F). Moreover, this section explores the factors that are most important for creating convincing bodily illusions in embodied simulations.

Figure 14. Schematic representing key features of the RHI that have been incorporated into body ownership and agency illusions in VR. Figure from Kilteni, Maselli, Kording, & Slater (2015).
5.1 VISUOMOTOR SYNCHRONY
While synchronous visuotactile (VT) stimulation can induce a body ownership illusion (BOI),
other factors that affect body awareness have been shown to induce the illusion of ownership towards a
surrogate body part. Specifically, in addition to touch, the sense of agency during active, voluntary movement also constitutes a source of body awareness (Tsakiris & Haggard, 2005). Combinations of sensory input from vision, touch, proprioception, and motor control impact body perceptions (Kalckert & Ehrsson, 2012). Researchers have found that synchronous visuo-proprioceptive correlations through passive and active movements can also induce the illusion of ownership towards a surrogate body part (Dummer, Picot-Annand, Neal & Moore, 2009; Tsakiris, Prabhu, & Haggard, 2006). For example, Sanchez-Vives,
Spanlang, Frisoli, Bergamasco, and Slater (2010) evoked the illusion of ownership of a virtual hand
through synchrony between movements of the real hand and the virtual hand, termed visuomotor (VM)
synchrony.
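For illustration, here is a minimal sketch of one way visuomotor synchrony (or its absence) could be quantified from tracking data: estimating the lag between the real hand's movement trace and the rendered virtual hand's trace by cross-correlation. The signals and the 90 Hz tracking rate are invented for the example; none of the cited studies necessarily computes synchrony this way.

```python
import numpy as np

def estimate_lag(real_trace: np.ndarray, virtual_trace: np.ndarray, fs_hz: float) -> float:
    """Return the lag (in seconds) at which the virtual movement best matches
    the real movement; near-zero lag corresponds to synchronous VM stimulation."""
    real = real_trace - real_trace.mean()
    virtual = virtual_trace - virtual_trace.mean()
    corr = np.correlate(virtual, real, mode="full")
    lag_samples = np.argmax(corr) - (len(real) - 1)
    return lag_samples / fs_hz

if __name__ == "__main__":
    fs = 90.0                                  # tracking rate in Hz (assumed)
    t = np.arange(0, 5, 1 / fs)
    real = np.sin(2 * np.pi * 0.5 * t)         # real hand position over time
    synchronous = np.roll(real, 2)             # rendered ~22 ms later
    asynchronous = np.roll(real, 45)           # rendered ~0.5 s later
    print(f"synchronous lag  : {estimate_lag(real, synchronous, fs)*1000:.0f} ms")
    print(f"asynchronous lag : {estimate_lag(real, asynchronous, fs)*1000:.0f} ms")
```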
5.1.1 Relative Importance of VT and VM Synchrony in Producing a BOI
Kokkinara and Slater (2014) conducted an experimental study to test the relative importance of
VT and VM synchrony on evoking a BOI. They used VR to integrate visual, motor, and tactile feedback
in a paradigm where subjects wore a VR Head-Mounted Display (HMD) showing a virtual avatar body
from a first-person perspective (1PP) such that the subject’s visual field of their own body was replaced
by the avatar body. The experimenter used a Wand Intersense device with a foam ball attached to deliver
tactile stimulation by tapping the subject’s real legs. The movements of the Wand Intersense were tracked
and rendered in the virtual environment, represented simply as a red ball. The participant’s feet were
tracked with Optitrack infrared cameras so that the virtual legs would move congruently with the
participant’s real legs. Participants sat in a chair and either moved their leg (VM stimulation) or had it
tapped or stroked by an experimenter (VT stimulation). For the VM stimulation, participants were
instructed to trace a line of different shapes that would appear on the left or right side of the virtual table
(replicating the real table) with their heel. The experimenters used synchronous and asynchronous
conditions for VM stimulation with motion tracking in the synchronous condition and a pre-recorded
virtual leg animation in the asynchronous condition. In order to test for moments of breaking from the
illusion during stimulation, the experimenters employed an intermittent measure whereby participants
verbally reported “Now” when the body ownership illusion was lost, broken, or interrupted. This allowed
experimenters to gain a deeper sense of the temporal scale and subjective experience of the illusion during
stimulation, rather than just after stimulation. As such, the researchers also attempted to study factors that
contribute to and disrupt BOIs.
In a 2 by 2 factorial design with
factors VM (asynchronous, synchronous)
and VT stimulation (asynchronous,
synchronous), there were four experimental
conditions: VM and VT both synchronous,
VM and VT both asynchronous, VM
synchronous with asynchronous VT, and VT
synchronous with asynchronous VM. The
researchers found that VM stimulation has a greater determining role than VT stimulation in generating the body ownership illusion. Specifically, the subjects in the VM synchronous and VT asynchronous condition still experienced a BOI, but when VM was not synchronous the illusion rapidly broke down. Thus, the researchers conclude that “...asynchronous VT may be discounted when synchronous VM cues are provided” (p. 56). These findings are consistent with an earlier study by Kilteni, Normand, Sanchez-Vives, and Slater (2012), where subjects’ movements were congruent with a virtual hand, and the BOI was not compromised when the subject’s real hand grasped an object not rendered in the virtual world/hand. Thus, incongruent and asynchronous tactile stimulation appears not to have as strong a constituting role in BOIs when there is synchronous VM stimulation. The researchers also suggest that VM stimulation has a greater effect on agency and self-localization than touch alone. Kokkinara and Slater’s (2014) study on the relative importance of VT and VM synchrony for BOIs indicates that sensorimotor correspondences may have a more important role in contributing to the effects in embodied simulation studies than sensory correspondences.

Figure 15. Design setup for Kokkinara & Slater’s (2014) study. On the top right (a), we see the user’s legs with motion-capture points. The top left (b) shows the user tracing shapes with his/her foot in the VM synchrony condition. The bottom right (c) shows the experimenter using a Wand Intersense, tracked as a red ball in VR, in the synchronous VT condition.
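To make the design concrete, here is a schematic sketch of the 2 x 2 condition structure and how per-condition ownership ratings might be summarized. The ratings are invented and are not Kokkinara and Slater's data; the pattern simply mirrors the reported result that the illusion survives asynchronous VT but breaks down with asynchronous VM.

```python
from itertools import product
from statistics import mean

# Factors: visuomotor (VM) and visuotactile (VT) stimulation, each either
# synchronous or asynchronous, giving four experimental conditions.
levels = ("sync", "async")
conditions = list(product(levels, levels))          # (VM, VT) pairs

# Hypothetical body-ownership ratings (0-10) for a few subjects per condition.
ratings = {
    ("sync", "sync"):   [8, 7, 9],
    ("sync", "async"):  [7, 8, 6],   # BOI largely survives asynchronous touch
    ("async", "sync"):  [3, 2, 4],   # illusion breaks when VM is asynchronous
    ("async", "async"): [2, 1, 2],
}

for vm, vt in conditions:
    print(f"VM={vm:5s}  VT={vt:5s}  mean ownership = {mean(ratings[(vm, vt)]):.1f}")
```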
5.2 IMPORTANCE OF FIRST-PERSON PERSPECTIVE
Empirical findings indicate that the illusion of owning an entire artificial body is much stronger
when the body is viewed from the first-person perspective, compared to when the same body is viewed
from the third-person perspective (Petkova & Ehrsson, 2008; Petkova et al., 2011; Slater, Spanlang,
Sanchez-Vives, & Blanke, 2010). Similarly, episodic memory strength is linked to an egocentric (first-person) viewpoint, versus an allocentric (third-person) viewpoint, such that subjects have qualitatively and quantitatively stronger information retention for experiences presented from an egocentric viewpoint, as opposed to an allocentric viewpoint. Combined, these findings point to the strength of the egocentric perspective for both body ownership illusions and learning retention. These findings allow VR developers and researchers to use the technologies to manipulate the spatial sense of self and enhance the feeling of
having a body that is located within a simulated world (Sanchez-Vives & Slater, 2005).
5.3 AGENCY ILLUSIONS
Bailey, Bailenson, and Casasanto (2016) write that in motion-tracked VEs, users are able to map
sensorimotor contingencies and body perceptions to an avatar through two means: 1) afferent or sensory
signal correspondences (as we have seen with VT pairing in the RHI), or 2) sensorimotor
correspondences between the physical body and the virtual body. Sensorimotor correspondences occur
when participants see virtual or artificial body movements that are synchronous with the user's own real
physical body (VM synchrony). Sensorimotor contingency matching to a virtual or artificial body creates
a new effect, called an Agency Illusion, that researchers study in addition to body ownership illusions.
The agency illusion is measured through questions asking subjects to rate the extent to which they felt
like the movements of the fake body or body part were their own, and the extent to which they felt they
could control the movements of the fake body or body part (for example, see Casper et al., 2015). Agency
illusions occur through active, voluntary movements that allow participants to control an artificial or
avatar body. Sense of Agency (SoA) is the feeling of authorship for self-generated movements and the
external events they cause (de Vignemont & Fourneret, 2004). Importantly, Kalckert and Ehrsson (2012)
found that participants who had passive control over artificial limb movement during a RHI experienced
ownership but not agency, but when they had active control during the RHI they felt both ownership and
agency. This finding is important because The Machine to Be Another and Paint With Me projects find an
agency illusion in the absence of a BOI, which is a unique finding in the literature.
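To make the ownership/agency distinction concrete, the sketch below scores hypothetical ownership and agency questionnaire items separately; the items and numbers are illustrative placeholders, not the instruments used in the cited studies.

```python
from statistics import mean

# Hypothetical 7-point Likert responses for one participant.
responses = {
    # Ownership items: "the body I saw felt like my own body".
    "ownership": [6, 6, 5],
    # Agency items: "I felt I was causing and controlling the movements I saw".
    "agency": [6, 7, 6],
}

ownership = mean(responses["ownership"])
agency = mean(responses["agency"])

# Passive movement tends to yield ownership without agency, whereas active
# control yields both (Kalckert & Ehrsson, 2012); comparing the two subscale
# means is the simplest way to see which illusion a setup is producing.
print(f"ownership = {ownership:.1f}, agency = {agency:.1f}")
```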
6 BOIS AND EMPATHY
BOI experimental manipulations are used in empathy research because these illusions can alter
perceived similarity to others, which has been shown to increase social influence and empathy-related
responses such as decreasing out-group biases, specifically racial biases and stereotypes. VR interventions
using BOIs (specifically, the full body ownership illusion, FBOI, discussed below) have shown efficacy
in promoting empathy-related responses such as altruism (Rosenberg, Baughman, & Bailenson, 2013), self-compassion (Falconer et al., 2016), and reduction of implicit biases (Peck et al., 2013). BOIs may
increase self-other merging, a conceptual re-framing whereby the empathizer’s self-image incorporates
the other such that the empathizer and the person in need (target) are seen as psychologically “one” (Aron
& Aron, 1986; Batson, 2011). Specifically, this could be because we see aspects of ourselves in the other
(Cialdini, Brown, Lewis, Luce, & Neuberg, 1997; Maner et al., 2002). Evidence suggests that perceiving
another as like oneself or as having a shared group identity increases the likelihood of emotionally
empathizing with another and willingness to help (Krebs, 1975; Stotland, 1969). Self-other merging could
also occur because we see ourselves and the person who is in need as interchangeable exemplars in a
common group identity (Dawes, van de Kragt, & Orbell, 1988; Turner, 1987). This section explores the
link between BOIs and these empathy-related responses to analyze the mechanisms that promote these
effects.
Using a within-subjects design, Farmer, Tajadura-Jiménez, & Tsakiris (2012) induced a RHI
with white participants on a fake hand that appeared to belong to a different racial group (a black rubber
hand), in addition to a light-skinned rubber hand. They found that participants indeed do experience the
RHI over a fake hand that appears to belong to a different racial group. Moreover, existing racial biases
impacted the self-reported strength of the illusion of ownership over a black rubber hand. The researchers
also investigated whether the RHI would be enough to alter higher-level social perceptions and change
implicit attitudes. They found that the strength of the RHI was directly correlated with decreases in
implicit racial biases, as measured by the Implicit Association Test (IAT; Greenwald, McGhee, &
Schwartz, 1998). The IAT is a word-association task designed to elicit unconscious stereotypes and
prejudices. Stereotypes are a tool that allow us to rapidly categorize others, and they may modulate our
cognitive and affective reactivity to others. Racial differences are one of the most ubiquitous stereotypes.
Farmer et al. (2012) suggest that BOIs may have a role in "overriding" ingroup/outgroup distinctions
based on skin color, and point to a role of sensory processing in social cognition (bodily resonance).
Thus, multisensory integration causes participants to experience a different skin-colored body part as
one’s own, and this increases a sense of similarity between oneself and the racial out-group that decreases
implicit biases. However, based on the within-subjects design of the study, it was not possible to isolate
whether or not ownership over the black rubber hand was the variable determining the post-illusion
attitude changes.
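As a rough illustration of how the IAT quantifies implicit bias from response latencies, the sketch below computes a simplified D-like score; the latencies are invented, and the standard scoring algorithm additionally includes error penalties and trial exclusions.

```python
from statistics import mean, stdev

# Hypothetical response latencies in milliseconds from two critical blocks of
# a race IAT: "compatible" pairs in-group + positive / out-group + negative,
# "incompatible" pairs in-group + negative / out-group + positive.
compatible_ms = [620, 650, 640, 610, 660, 630]
incompatible_ms = [780, 750, 800, 770, 760, 790]

# A D-like effect size: latency difference divided by the pooled standard
# deviation of all critical trials. Positive values indicate that the
# "compatible" pairing was easier, i.e., an implicit preference for the
# in-group over the out-group.
pooled_sd = stdev(compatible_ms + incompatible_ms)
d_score = (mean(incompatible_ms) - mean(compatible_ms)) / pooled_sd

print(f"IAT-style D score = {d_score:.2f}")
```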
To address this concern, Maister, Sebanz, Knoblich, & Tsakiris (2013) conducted a similar study
using a between-subject design so that white participants either had the RHI induced with a racial in-
group (white) rubber hand or a racial out-group (black) rubber hand. After the induction of a RHI on a
dark-skinned rubber hand, the experimenters observed significant correlations between the strength of
perceived ownership over the dark-skinned rubber hand with more positive and less negative racial
attitudes on a subsequent IAT. Researchers observed changes from baseline IAT administered pre- and
post- induction of the RHI with a black rubber hand that were not observed in the RHI condition with a
white rubber hand. Their findings indicated that positive attitude changes towards racial out-groups was
correlated with the reported strength of the illusion, consistent with Farmer et al.’s (2012) findings, but
only for subjects in the racial out-group rubber hand condition. Individual differences in racial attitudes
and empathy (as measured by the Interpersonal Reactivity Index, IRI) did not interfere with the effects of
the RHI on changes in IAT scores, indicating that the effect occurs despite trait differences in empathic
personality and cognitive biases.
Racial biases have been shown to interfere with aspects of embodied social cognition, such as
sensorimotor activation, shared bodily representations, and mirror neuron activation in areas underpinning
simulation of and affective responses to another’s pain (Xu et al., 2009; Avenanti et al., 2010). Xu, Zuo,
Wang, & Han (2009) used fMRI to study neural responses while the subjects observed racial in-group and
out-group members in pain, and found heightened anterior cingulate cortex (part of the ‘pain matrix’)
activation with in-group pain stimuli but not with out-group pain. Similarly, Avenanti, Sirigu, & Aglioti
(2010) observed significantly lower neural activation of affective and motor responses when participants
observed a member of a racial outgroup experiencing pain, as compared to observing a member of the
participant’s racial in-group experiencing pain. Moreover, subjects responded less to racial out-group pain
than to pain on an unfamiliar purple hand (with highest response to racial in-group). This decrease in
neural activation is important because it involves aspects of the Mirror Neuron (MN) System, which indicates self-other overlap in bodily representations: mirror neurons, first found in the monkey brain, show activation patterns while observing another (person or monkey) perform an action that mirror the activation patterns of the individual performing that same action (Rizzolatti, Fadiga, Gallese, & Fogassi, 1996). Shared
representations involve overlapping areas of neural activation (neural resonance) between one’s own
neural emotional and body representations with the neural activations that occur when perceiving or
imagining the bodily and emotional states of another person. Thomas Fuchs (2016) summarizes shared
representations and simulation accounts of empathy succinctly: “The brain simulates the expressions and
actions that occur in the other’s body through the virtual activation of our own bodily states; it can then,
in turn, project these quasi-experiences onto the other as if we were placed in his shoes.” (p. 155)
Avenanti et al. (2010) found that decreased activation of shared neural representations was correlated
with subjects’ implicit racial biases; that is, the more biased a subject is to a racial out-group, the greater
the decrease in neural resonance and shared representations with that racial out-group. This means that
social categorization based on racial group membership modulates the capacities of our social brain,
specifically our ability to share someone else’s emotions and pain.
The key reason that these BOI experiments have a role in empathy research is as follows.
Activation of shared representations is modulated by the perceiver’s identification with the observed
person(s), which is based on implicit appraisals of similarity, familiarity, and social closeness
(de Vignemont, 2006; Hein & Singer, 2008). While many subjects report low out-group and racial biases
on explicit self-report measures, implicit attitude measures like the IAT still indicate negative biases and
stereotypes. Empathy research has shown that there is a lack of shared representations observed with out-
group others, specifically those with racial biases, which has been described as the ‘empathy gap’ (Gutsell
& Inzlicht, 2011). The empathy gap has been characterized as an inability to bridge self-representations to
other-representations for neural activations and body representations. The claim is that BOIs help close
this empathy gap by causing the subject to partially overlap body representations for oneself with those of
an out-group member, which extends into greater affiliation with and positive social attitudes towards
out-groups. Specifically, BOIs are thought to compensate for the lack of shared representations with out-group members, substituting for the shared-representation mechanisms normally activated in response to in-group members with whom one identifies. In an opinion piece entitled “Changing Bodies, Changing Minds” summarizing
findings from inducing full-body ownership illusions in VR, Maister, Slater, Sanchez-Vives, and Tsakiris
(2015) write, “These shared body representations are thought to form the fundamental basis of empathy
and our understanding of others’ emotions and actions.” (p. 6) BOIs are thus seen as a tool to
experimentally increase the sharing of body representations and thereby impact in-group/out-group categorizations. This
theory of empathy is the motivation for the design in avatar-based embodied simulation studies, which I
describe in the next section.
Virtual alterity projects attempt to bridge the empathy gap through relational, interactive
interfaces that develop an understanding of another through participatory engagement with aspects of
another person’s embodied, first-person perspective and subjective experiences. This chapter proposes
virtual alterity systems as an alternative to avatar embodiment studies. First, I discuss studies using avatar
embodiment to impact negative biases and increase positive attitudes towards others.
7 AVATAR EMBODIMENT AND EMPATHY
Bodily illusions indicate that under specific multisensory conditions, humans can experience
artificial body parts, fake bodies, avatar bodies, or virtual bodies as their own body parts or bodies. The
BOI is one of the major paradigms used in VR research. Immersive VR has been used to further
investigate aspects of the illusion of ownership. Immersive Virtual Environments (IVEs) produce body
and perceptual illusions by presenting a first-person visual perspective in VR head-mounted displays
(HMDs) that alter the user’s perspective of their own body and self-location. Rather than looking down
and seeing their own real body, in a VR full-body ownership illusion (FBOI) the participant looks down and sees the virtual body (torso,
arms, legs, feet). Evidence indicates that virtual reality technologies produce a strong effect of ownership
over a virtual body (Lenggenhager, Tadi, Metzinger, & Blanke, 2007; Petkova & Ehrsson, 2008). The key
to the FBOI is the substitution of a virtual body seen through a first-person
perspective, such that the virtual body is aligned with the subject’s visuospatial first-person perspective of
their real body (Petkova, Khoshnevis, & Ehrsson, 2011). Thus, VR can be used to induce embodiment, or an FBOI, by creating multi-sensory stimuli that combine first-person perspective (1PP) with visuomotor (VM) and visuotactile (VT) synchrony (Maselli et al., 2013).
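To make this recipe concrete, the following minimal sketch (in Python, using only hypothetical stand-in names rather than the software of any cited study) illustrates the per-frame logic such setups rely on: tracked joint rotations from a motion-capture system are copied onto the corresponding avatar joints each frame (visuomotor synchrony), and the virtual camera is locked to the tracked head so that looking down, or into a virtual mirror, reveals the virtual body from a first-person perspective.

```python
import math
from dataclasses import dataclass, field
from typing import Dict, Tuple

# Illustrative stand-in names only -- not the software of any cited study.

@dataclass
class Avatar:
    joint_rotations: Dict[str, float] = field(default_factory=dict)  # degrees
    camera_position: Tuple[float, float, float] = (0.0, 0.0, 0.0)

def read_tracked_pose(t: float) -> Dict[str, float]:
    """Stand-in for one motion-capture sample: joint angles (degrees) at time t."""
    return {
        "head": 5.0 * math.sin(t),
        "left_elbow": 30.0 + 10.0 * math.sin(2.0 * t),
        "right_knee": 15.0 * abs(math.sin(t)),
    }

def embodiment_frame(avatar: Avatar, t: float, head_height: float = 1.7) -> None:
    """One frame of the loop: copy tracked joints onto the avatar (visuomotor
    synchrony) and lock the virtual camera to the avatar's head so that the
    user sees the virtual body from a first-person perspective when looking
    down or into a virtual mirror."""
    pose = read_tracked_pose(t)
    avatar.joint_rotations.update(pose)               # VM synchrony
    avatar.camera_position = (0.0, head_height, 0.0)  # 1PP camera lock

avatar = Avatar()
for frame in range(3):                       # simulate three frames at 60 Hz
    embodiment_frame(avatar, t=frame / 60.0)
    print(frame, round(avatar.joint_rotations["left_elbow"], 2), avatar.camera_position)
```

Visuotactile synchrony would additionally require delivering a physical touch at the moment and body location where the virtual body is seen being touched.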
An avatar is a three-dimensional representation of a person within a digital or virtual environment that configures the way the user sees themselves and the way others see them within that
environment (Blascovich and Bailenson, 2011). An avatar might not correspond at all to a user’s visible,
audible, and behavioral characteristics, but research indicates that users identify more strongly and
experience a greater sense of presence when their avatar looks more like them (LaVelle, 2016). Avatar
embodiment studies are conducted in labs that have motion-tracking body suits to track users’ movements
in real time and render those movements onto the avatar body, giving the user a sense of agency over the avatar body. Mel Slater, a leading researcher in neuroscience and
virtual environments at the University of Barcelona and head of the EVENT Lab, claims that avatar-based
embodied simulations in VR can impact the automatic associations that people make about one another
based on their bodies, and thus reduce tensions between different groups of people (Bennington-Castro,
2013). This instantiation of empathy is based on perceived similarity between self and other, whereas
virtual alterity projects emphasize perceived self-other distinctions within an experience of merging.
Virtual embodiment can create different social meanings for users by temporarily altering a user’s
identity and perception (Biocca, 1997). The Proteus effect, named after the shape-changing sea god in
Greek mythology, refers to a phenomenon whereby the appearance of a user’s avatar influences self-
perceptions and changes how users interact within the virtual world, an effect that can extend to behavior in the real world (Yee & Bailenson, 2007; Yee, Bailenson, & Ducheneaut, 2008).
Findings indicate that individuals adjust their attitudes and behaviors to align with their expectations
associated with the type of clothing, adornment, body type, or physical features of their avatar. For
example, users who inhabited a tall avatar behaved more assertively in negotiation tasks in the real world,
as compared to users who inhabited a shorter avatar (Yee, Bailenson, & Ducheneaut, 2009). In another
study, operating an attractive avatar caused users to stand closer to a confederate within a virtual world
and increased physical closeness to others in the real world following virtual exposure (Yee & Bailenson,
2007). The appearance of an avatar body also affects how much physical effort a player exerts within
an online game. Pena and Kim (2014) found that female participants given obese avatars had decreased
physical activity within a game, whereas those who had normal-weight avatars had increases in physical
activity in the game. While these studies do not directly pertain to empathy, the Proteus Effect indicates
that avatar appearance temporarily influences users' body image, and changes in body image can increase
perceived similarity to others. Thus, the Proteus Effect is one useful mechanism that can evoke empathy-
related outcomes in VR. In one study, users who embodied a superhero and flew around a city to help
find medicine for a boy with diabetes were more likely to help a confederate assistant who faked
“accidentally” knocking over a jar of pens while cleaning up after an experiment, indicating greater
helping behavior as an effect of seeing oneself as a superhero in VR (Rosenberg, Baughman, &
Bailenson, 2013). Yee and Bailenson (2007) hypothesize that when people create or inhabit avatars and
use them in a social context, they form new self-representations that change how they interact with others.
This could have a constructive role in perspective taking and interpersonal understanding. The user may
use an avatar to explore the world from a different perspective, a process that may facilitate empathy, or users
might indulge in a fantasy that distances them from their real self and real others, a process that may
decrease empathy. The evidence that avatar embodiment increases empathic outcomes is inconclusive,
and the literature suggests many confounding variables. Here I review virtual embodiment studies that
indicate both positive and negative effects on empathy.
Peck et al. (2013) used an embodiment simulation involving three features which have become
standards in avatar embodiment studies: 1) 1PP, meaning that the virtual body is aligned with and substitutes for the user's real body, 2) a virtual mirror that geometrically matches the user's real body and shows the
avatar body in full view, and 3) a motion-capture bodysuit that tracks and renders the user’s movements
synchronously onto the virtual avatar in VR. They extended findings from the RHI with black rubber hands into VR, using this avatar embodiment framework to measure the BOI over a dark-skinned avatar body, and
measured implicit racial bias changes after avatar embodiment. Participants completed an initial Implicit
Association Test (IAT; Greenwald, McGhee, & Schwartz, 1998) three days before avatar embodiment.
The IAT involves rapidly pairing faces of different racial groups with positive or negative words; racial bias is indicated by longer reaction times when positive words are paired with racial out-group faces than with in-group faces, and by fewer positive-word associations with racial out-groups (Greenwald, Nosek, & Banaji, 2003).
After the initial IAT measure, subjects returned to the lab and were divided into four different conditions
of avatar embodiment: light-skin, dark-skin, non-embodied dark-skin (seeing a dark-skin avatar moving asynchronously in the mirror with no 1PP virtual body), and alien-skin (a purple skin-tone avatar body). In the
embodiment phase, participants were instructed to look around the environment, look in the mirror, and
explore and move their virtual body. This was followed by an approach phase, in which 12 virtual human
female characters (6 light and 6 dark-skinned) walked past the user from both directions. The embodiment
procedure induced a BOI across all virtual embodiment conditions, and users who experienced a BOI
over the dark-skin avatar had decreased racial biases. Virtual embodiment in the light-skin and alien-skin avatar bodies did not produce significant changes in racial bias, isolating the effect to dark-skin virtual
embodiment. The non-embodied, asynchronous-movement dark-skin condition produced a much weaker BOI, suggesting that the ownership illusion itself drives the reduction in racial bias. The approach
phase was used to measure nervousness when the other character approached the user in each
embodiment condition, and IAT change was correlated with nervousness scores, such that subjects who felt more nervous had a greater decrease in IAT scores.
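As a rough illustration of how such outcomes are quantified, the sketch below computes a simplified IAT D score (the difference in mean response latencies between the incompatible and compatible pairing blocks divided by the pooled standard deviation, after Greenwald et al., 2003, omitting the full algorithm's error penalties and trial exclusions) and correlates the pre-to-post change with nervousness ratings. All numbers are hypothetical and are not data from Peck et al. (2013).

```python
import numpy as np

def iat_d_score(compatible_rt_ms, incompatible_rt_ms):
    """Simplified IAT D score: difference in mean response latencies between
    the incompatible and compatible pairing blocks, divided by the pooled
    standard deviation of all latencies (after Greenwald et al., 2003). This
    sketch omits the full algorithm's error penalties and trial exclusions."""
    compatible = np.asarray(compatible_rt_ms, dtype=float)
    incompatible = np.asarray(incompatible_rt_ms, dtype=float)
    pooled_sd = np.std(np.concatenate([compatible, incompatible]), ddof=1)
    return (incompatible.mean() - compatible.mean()) / pooled_sd

# Hypothetical per-subject values (not data from Peck et al., 2013): D scores
# before and after embodiment, plus nervousness ratings from the approach phase.
d_pre = np.array([0.62, 0.48, 0.71, 0.55, 0.40])
d_post = np.array([0.35, 0.42, 0.30, 0.50, 0.38])
nervousness = np.array([4.0, 2.0, 5.0, 1.0, 3.0])

iat_change = d_pre - d_post                     # positive = bias reduction
r = np.corrcoef(iat_change, nervousness)[0, 1]  # Pearson correlation
print(f"bias reduction vs. nervousness: r = {r:.2f}")
```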
However, in a similar study, Groom, Bailenson, and Nass (2009) assigned subjects to embody either a black or a white avatar, and found that those who embodied a black avatar had
greater racial ingroup preferences in implicit attitudes after avatar embodiment. Thus, manipulations of
self-identity through different forms of an avatar body and correlated body representation changes do not
necessarily reduce negative racial biases. It is possible that avatar embodiment can also activate
stereotypes that aggravate negative attitudes for individuals who have low commitment to the racial group
of the avatar (Ellemers, Spears, & Doosje, 2002). Perceived threat may be a factor that impacts intergroup
bias (the tendency to prefer one's in-group to an out-group). In Groom and colleagues' (2009) setup,
subjects were placed in the high-stress condition of a job interview. Thus, embodying an avatar of a systematically disadvantaged out-group may have increased participants' anxiety and signaled a threat to their own success in the interview.
Walking around in the avatar of an elderly person versus the avatar of a younger person
contributed to greater stereotype reductions towards the elderly (Yee & Bailenson, 2006). Oh, Bailenson,
Weisz, & Zaki (2016) compared reductions in prejudice towards the elderly (ageism) between mental simulation (traditional perspective taking) using a first-person, text-based description and
virtually embodying an elderly person in an IVE. The authors found that the intervention of embodying
an avatar of an elderly person in an IVE increased participants' self-other merging and desire to
communicate with the elderly, and that these positive increases were greater in the IVE condition than the
mental simulation condition. Prior to the intervention, subjects were split into two groups to read and
write a summary of short articles that indicated either a high or low intergroup threat: “Elderly Pose
Immediate Threat to Young Americans” (high threat) or “America Prepared for Changing Demographics”
(low threat). Interestingly, subjects in the high-threat condition had greater self-other merging and
intention to communicate than subjects in the low-threat condition. The authors suggest that embodying
an elderly person after expressing negative attitudes may have caused participants in the high-threat
condition to feel guiltier about any previous negative attitudes, and these feelings of guilt may cause
people to show more empathy.
Figure 16. The way subjects saw themselves in a virtual mirror while virtually embodied as an older person in
Oh et al.'s (2016) experiment.
In this study, subjects in the IVE condition were given the following instructions:
For the next minute, look closely at your reflection in the mirror. This is
what you look like to others in the virtual world. Imagine a day in the life
of this individual, looking at the world through her/his eyes and walking
through the world in her/his shoes. (Oh et al., 2016, p. 401-2)
These instructions are interesting because the setup combines seeing oneself in the other's body (avatar embodiment) with thinking about what a day in the life of an elderly person might be like. However, no information is provided about the specific elderly person the subjects imagine or virtually embody, which Gelbach and colleagues (2015) found to be an important variable for facilitating perspective taking. Gelbach et al. (2015) write, “One could argue
information about who this person is, there is no actual perspective to be taken.” (p. 21) This is one of my
biggest concerns with avatar embodiment studies, and one of the main reasons I argue for combining virtual
embodiment with perspective-taking VR for optimal efficacy in facilitating empathic outcomes.
Reiterating ideas from the previous chapter on empathy, Batson, Early, and Salvarani (1997) found that
imagining how you would feel in another person’s situation (self-focused perspective taking) has
different effects than imagining how the target person feels (other-focused perspective taking). This is
important because in Oh et al.’s (2016) study, subjects are given context about the group (elderly persons)
in the reading and writing task, but not about the specific individual who they virtually embody or
mentally imagine. This could have important consequences for empathy, which I discuss more at length
in the following chapter.
8 LIMITATIONS OF AVATAR EMBODIMENT FOR EMPATHY RESEARCH
While avatar embodiment studies offer a valuable starting point for empathic VR, they have some
limitations. First, the studies conceive of empathy by way of identification, perceived similarity, and
affiliation (bringing the other into one’s coalition), but this is a very narrow way of understanding
empathy that does not involve sharing or recognizing another as distinct from oneself (the self-other
distinction). Second, the setups do not allow the user to engage in affective or cognitive perspective
taking to understand the ways that another person would experience a situation, but instead only allow
users to reflect on their own experience, and as such are self-oriented rather than other-oriented. Lastly,
the setups only explore attitude changes, and because these attitudes are implicit it is unclear whether such changes translate into motivational or behavioral change, specifically in a real-world context.
Slater, Spanlang, Sanchez-Vives, & Blanke (2010) claim that simply perceiving a virtual body
from a first-person perspective with VM synchrony is sufficient to create BOIs, and that BOIs foster a
sense of identification through greater self-other overlap with the out-group that the avatar represents.
However, I question whether avatar embodiment setups encourage an awareness of the other as other.
This is the motivation for the design and research in virtual alterity projects. Rather than experiencing
oneself as another, virtual alterity paradigms facilitate a richer experience of another by augmenting our
access to aspects of the minded being of another person. While I agree with Slater et al. (2010) that the
first-person perspective is valuable for convincing and effective illusions in VEs, I do not consider BOIs
necessary to change self-other overlap. For example, Ahn et al. (2015) successfully promoted helping
behavior in a perspective-taking VR that did not involve a BOI or an avatar body. In this study, users
engaged in perspective taking either by mentally imagining, guided by a written script, how a task space would appear to a colorblind person, or by experiencing this perceptual transformation in a first-person IVE that displayed the task space as it would be seen by a colorblind person. Thus, rather
than just embodying an avatar, perspective-taking VEs allow users to see the world as the other would see
it. Afterwards, subjects in the IVE condition were more willing to stay after and help the colorblind
confederate and spent more time helping than subjects in the text-only mentalizing condition. Moreover,
Ahn et al. (2015) used the Self-Other Merging scale and observed greater self-other overlap in the IVE
condition, indicating that BOIs are not essential for increasing self-other overlap. Moreover, I argue that
identification is not the most effective model for empathy. While avatar embodiment studies may foster a
sense of identification with others towards greater inclusivity, perspective taking VEs can help subjects
understand the meaningful differences in another person’s experience of a situation so that the subjects
can know how to help the other, and increase empathic concern by shedding light on why it is valuable to
help the other. Thus, beyond fostering a sense of identification with the other, perspective-taking VEs
foster a recognition of self-other distinctions. Virtual alterity systems combine perspective taking (PT) and 1PP virtual
embodiment to maximize the effects of seeing as another while also moving with another in a coordinated
and synchronized manner.
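Returning to Ahn et al.'s (2015) colorblindness IVE mentioned above: as a crude, purely illustrative stand-in for that kind of perceptual filter (not the rendering method Ahn et al. actually used), the sketch below collapses the red and green channels of an image so that colors differing only along the red-green axis become indistinguishable; a faithful dichromacy simulation would instead apply published cone-space transformation matrices.

```python
import numpy as np

def remove_red_green_contrast(rgb_image):
    """Crude red-green colorblindness filter: collapse the red and green
    channels of each pixel onto their mean, so that colors differing only
    along the red-green axis become indistinguishable. Expects an array of
    shape (H, W, 3) with values in [0, 1]."""
    out = np.asarray(rgb_image, dtype=float).copy()
    rg_mean = out[..., 0:2].mean(axis=-1, keepdims=True)
    out[..., 0:2] = rg_mean       # red and green collapse to the same value
    return out

# A red patch and a green patch become the same muddy olive tone,
# while a blue patch is left untouched.
red   = np.full((1, 1, 3), [1.0, 0.0, 0.0])
green = np.full((1, 1, 3), [0.0, 1.0, 0.0])
blue  = np.full((1, 1, 3), [0.0, 0.0, 1.0])
print(remove_red_green_contrast(red))    # [[[0.5 0.5 0. ]]]
print(remove_red_green_contrast(green))  # [[[0.5 0.5 0. ]]]
print(remove_red_green_contrast(blue))   # [[[0.  0.  1. ]]]
```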
Maister et al. (2015) write that shared body representations are the basis of empathy. While
shared representations may be an enabling condition for certain empathic processes, they are not
sufficient to produce mature empathy. Shared representations are based in perceived similarity and
promote inclusivity, but shared representations are not sufficient to produce other-oriented empathic
concern. This is one of the limitations of altering body representations as an intervention for increasing
empathy. To demonstrate that shared representations are not sufficient to produce other-oriented empathic
concern, I cite Beckes, Coan, and Hasselmo's (2013) study on physiological linkage and emotional attunement, which demonstrates self-other blurring without a corresponding increase in empathic concern. Beckes et al. (2013) conducted a functional
magnetic resonance imaging (fMRI) study with married couples investigating neural regions underlying
anticipatory anxiety to a shock threat as mediated by hand-holding with a stranger (experimenter), versus
a close other (friend or spouse), or a control with no hand holding. The experiment involved shocks
delivered randomly to oneself (the subject), a stranger, or a close other. They found that neural responses to threat to oneself and threat to a friend were very similar, indicating that close identification
with a friend causes people to treat the friend as if they were the self (inclusion of the other into the self-
model). This identification effect does not occur for strangers.
Beckes et al. (2013) propose that the brain may encode close others as part of the self, and that there
is a blurring of self and other. Moreover, the effect of neural resonance was modulated by perceived
similarity to the stranger. While subjects do show a neural response to a stranger's pain, this response does not resemble the response to self-threat unless subjects perceive the stranger as highly similar to themselves. This effect seems to indicate a
benefit to manipulating perceived self-other similarity to allow out-group others to be perceived more like
kin. However, assuming that another feels the same pain as oneself in response to the same stimuli may
interfere with an appropriate empathic response. This framework assumes, “I felt this way in response
to this situation, and because you are similar to me, you must feel the same.” That is, rather than, “I feel
your pain”, this setup elicits, “I assume your pain feels the same as mine.” This involves a collapse of the
other’s emotions into one’s own. One important feature of mature empathy is recognizing that another
person may feel differently than you do in response to the same or similar situation. Thus, shared
representations are not sufficient to produce other-oriented empathy and prosocial responses.
BOI research focuses on the malleability of factors that normally block empathy, thereby starting with an empathic deficit (negative empathy) and bringing empathy back to baseline. Further research
could explore increasing empathy beyond this baseline to foster greater care and concern for others rather
than just reducing biases. In avatar and BOI studies, implicit attitudes towards out-group others are
modified to match positive self-appraisal and in-group attitudes. The problem with this design is that if
these implicit attitudes are transformed, this may not impact real-world motivations and behaviors
towards out-groups. Subjects may not be conscious of their implicit attitude changes, and without elaborate debriefing they cannot knowingly incorporate this attitude transformation into their social interactions and the way they encounter out-group others.
Research on the formation of stereotypes has shown that people adjust their perception of in-
groups and out-groups based on their personal experiences with individual members of those groups
(Weber and Crocker, 1983; Johnston and Hewstone, 1992). Avatar embodiment involves an experience of
self-identification with the out-group other, experiencing the other as transposed onto the self, rather than
an experience of being with the out-group other. While avatar embodiment experiences may be personally
meaningful and involve strong emotions, the self-other distinction is not established. As such, avatar
embodiment setups involve a transformed experience of the self but not necessarily a transformed
experience of the out-group other. Specifically, there is no real other to understand or with whom to
empathize. The avatar is not a real other, but merely a computer graphic representation.
In her Masters thesis entitled Bodily Incorporations and the “Merging” of Self and Outgroup:
It’s not so black and white, Adriana M. Corona (2013) conducted studies using a black rubber hand that
diverge from these findings. She induced an RHI over either a black or a white fake hand and explored its
impact on subsequent implicit mimicry of in-group and out-group facial expressions of happiness and
sadness. In her studies, empathic responses towards other-race individuals were not impacted by the
incorporation of out-group bodily features into the subject’s own body image. After the RHI, she
conducted a task instructing subjects to first passively attend to stimulus materials (emotional faces), and
then after a certain number of trials subjects were asked to imagine what the target was feeling.
Expressions of happiness were imitated during both passive attention and active imagination instructions,
but sad expressions were only imitated with active imagination instructions. Skin conductance measures
indicated a racial in-group bias during the passive condition (with greater conductance when witnessing
emotional pain of in-group faces) that disappeared when the instructions changed to trigger reflective
processing and perspective-taking. These findings indicate that research designed to combat racial biases
could benefit from stimulating active perspective taking and reflective processing of what another might be feeling. Such active perspective taking is not explicitly a component of avatar embodiment studies, whereas virtual alterity projects aim to evoke this active reflection on another's affective state during virtual embodiment.
Lastly, it is useful to note that there is a very practical limitation to avatar embodiment research. As Leyer, Linkenauger, Bulthoff, & Moheler (2015) explain, this technology “requires expensive motion
capture equipment, highly refined software, and sophisticated 3D models of the human body with a
rigged skeleton, all of which require a great deal of money, time, and specialized expertise.” (p. 12) When
I mentioned this to Domna Bonakou, a post-doctoral researcher at the EVENT Lab in Barcelona, she explained that these setups are preferred over potentially simpler video-based setups with real people because they offer high experimental controllability for isolating variables. While virtual alterity
systems do not entirely avoid these high costs and specialized equipment, it is generally simpler to work with a video-based environment in a mixed-reality setup using real physical objects and real human subjects than to operate advanced motion-tracking systems and fully simulated (computer-generated) virtual worlds.
Avatar embodiment in VR may be an important first step to combat deep-rooted biases and rapid
judgments about others that are at the root of stereotype and prejudice formation. But because these
projects do not involve another real person to learn from and interact with, the design limits the possible
outcomes for other-directed, prosocial, altruistic empathy. Users may over-identify with the other,
assuming that they know more about the other’s experience than they actually do, specifically by
assuming that others’ experiences are the same as their own affective or cognitive responses to stimuli
within the avatar embodiment. Moreover, these experiences could trigger empathic over-arousal, a state in
which a person becomes so personally affected by another’s distress that their ability to care for the other
is overridden by their need to alleviate their own distress. Personal distress and identification are orthogonal to empathic concern and have been identified as factors that can interfere with empathic concern and
compassion. That is, if I assume that your experience is the same as mine, I may not react appropriately to
your distress. This raises the question: How can researchers use VR to increase inclusivity of out-group
others while maintaining the self-other distinction? This is a question that virtual alterity projects attempt
to resolve.
9 WHAT MAKES VIRTUAL ALTERITY PROJECTS DIFFERENT
Virtual alterity designates a series of VR research and design projects that involve human-human virtual interfaces simulating the sensory, bodily, perceptual, and cognitive modes of experience as lived by another person while engaging in a task together with that person. Virtual alterity allows two
people to see, hear, and feel from the same physical, embodied perspective simultaneously while moving
synchronously within this shared perspectival space. Questions important for virtual alterity projects are
how virtual environments can be optimally designed to simulate the structures of first-person experiences,
how VEs can augment our access to one another’s minded and embodied being, and how VEs can open
new interactive and communicative tools that go beyond face-to-face communication. Virtual alterity
combines perspective-taking VR with simulation-based VR so that users virtually embody another real
person (simulation) through the filter of another’s perceptions and roles (perspective-taking).
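As an extremely reduced, single-machine stand-in for this kind of perspective swap (assuming two webcams in place of two participants' head-mounted cameras; an actual virtual alterity system such as The Machine to Be Another streams each wearer's camera feed to the other participant's HMD over a low-latency network with stereo rendering), the sketch below simply routes each camera's frames to the window labeled for the other participant.

```python
import cv2

# Minimal single-machine stand-in for a first-person view swap. Assumes two
# webcams at indices 0 and 1, one standing in for each participant's
# head-mounted camera. Each participant's display window shows the *other*
# person's camera feed, approximating the video swap used in systems like
# The Machine to Be Another.
cam_a = cv2.VideoCapture(0)
cam_b = cv2.VideoCapture(1)

try:
    while True:
        ok_a, frame_a = cam_a.read()
        ok_b, frame_b = cam_b.read()
        if not (ok_a and ok_b):
            break
        cv2.imshow("participant A sees", frame_b)   # A receives B's 1PP view
        cv2.imshow("participant B sees", frame_a)   # B receives A's 1PP view
        if cv2.waitKey(1) & 0xFF == ord("q"):       # press 'q' to quit
            break
finally:
    cam_a.release()
    cam_b.release()
    cv2.destroyAllWindows()
```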
9.1 OTHER-ORIENTED PERSPECTIVE TAKING AND EMPATHY
Throughout this thesis, I distinguish between identification with another and the recognition of another as a separate but similar subjective being. This is central to my positing of virtual alterity systems
as uniquely situated to facilitate empathy conceived as other-directed perspective taking and compassion.
I argue that identification may confound recognizing the other as other, thus limiting the scale and scope of
empathic responses, specifically in the way of compassion. Compassion involves care for the wellbeing of
the other, versus egoistic motivations or empathic distress (See Chapter 1). Avatar embodiment studies
evoke identification and perceived similarity to foster greater inclusivity. By contrast, virtual alterity
systems evoke recognition of the alterity of another and sharing of bodily and agentive experiences to
foster positive regard for the other and compassion.
In avatar studies, much of the emphasis for the user is on my experience in this new body where I
experience myself differently. Therefore, avatar-based interfaces miss the mark when it comes to
understanding empathy as a quintessentially other-directed emotion or intentional stance (as it is
understood in phenomenology). This is inconsistent with findings on the importance of the self-other
distinction in empathy and the metacognitive processes involved in recognizing that another's experiences
might be different from your own, and even from your inferences about their experiences. In the virtual
alterity interfaces I have researched and designed, the focus is on how people can use technology for
deeper acquaintance with another's experience. Avatar studies cultivate a relationship to an avatar that represents a real person abstractly in a computer-simulated world, and the user is not directed towards
the avatar as another, but instead controls the avatar-other like a puppeteer exploring his or her own
actions and embodiment through the avatar's embodied structure and virtual surroundings. Rather than cultivating self-other merging through identification and perceived similarity with the other, virtual
alterity systems foster self-other merging through shared agency that spontaneously emerges in interfaces
designed to focus attention on another person’s experiences.
9.2 UNDERSTANDING ANOTHER’S EXPERIENCE AS DIFFERENT FROM ONE’S OWN
The goal of virtual alterity systems is to drive users to be curious about how another’s emotions,
thoughts, reactions, and expressive acts may be different from one’s own, rather than merely perceiving
the other as similar to oneself. Specifically, whereas participants in avatar embodiment studies are
situated to feel like they are the avatar body, participants in virtual alterity studies are situated to feel like
they are gaining access to a part of someone else’s experience that is importantly not their own. These
systems allow users to interact with one another from within the other’s first-person perspective, opening
a new structure for social interaction and interpersonal engagement (see Bailenson, Yee, Blascovich, & Guadagno's (2008) chapter on Transformed Social Interaction). Virtual alterity systems utilize augmented
interfaces to create inter-identity technologies, which Lindgren and Pia (2012) define as hybrid spaces
that combine one person’s experience with the experience of another person.
9.3 MERGING BASED ON INTERCONNECTION VERSUS IDENTIFICATION
The Inclusion of the Other in the Self Model (Aron, Aron, & Smollan, 1992) stipulates the self
as a template for understanding others. This model is used to measure a construct called self-other
merging (SOM), which is a sense of feeling psychologically “at one with” another person. However, I
think the Inclusion of the Other in the Self Model (IOS) is wrongly named, and that much of the
research relying on it is based on identification rather than merging. Identification and “including the
other into the self” involves perceiving the other as part of one’s affiliated in-group or like oneself,
implicating an egocentricity bias. While identification may be one feature that can contribute to SOM,
identification is limited, as it posits that we only come to experience SOM with others with whom we
identify and perceive as like ourselves. I propose a different model for self-other merging based on
interconnectedness versus identification. Interconnectedness involves recognition of another person as
part of a shared human experience that is broader than both self and other, whereas identification merely
posits the other as like oneself. In contrast to the IOS model, I argue that SOM involves an expansion of one's egocentric self to include the other, rather than a collapse of the other into the self-model. More
specifically, I argue that in mature empathy self-other merging emerges in parallel to self-other
differentiation such that one can experience a sense of increased interconnection amidst an increased
awareness of the other as a subject of his or her own life experiences with unique history and context.
This is what virtual alterity projects aim to accomplish. Thus, I claim that IOS and avatar embodiment
illusions over-emphasize the role of perceived similarity and identification in empathy, while missing
important features of SOM in the sense of interconnectedness, ego suppression, and the expansion of the
sense of self.
9.4 RELATIONAL EMPATHY: BEING-WITH VERSUS BEING-AS ANOTHER
Rather than self-other overlap through modifying shared body representations to induce merging,
I propose self-other merging through relational, other-directed empathy. Virtual alterity combines
embodied simulations with perspective-taking VEs that focus on how someone perceives something
rather than just embodying the other. Perspective-taking in virtual alterity involves cognitive, affective,
and embodied components. Virtual alterity systems involve a transmission of sensory features of
experience within a specific task through multi-modal interactions, embodied perspectives, and narratives.
As such, virtual alterity facilitates an emotionally meaningful experience between two or more people.
Virtual alterity setups allow a unique form of egocentric perspective sharing, which could open new possibilities for a sense of being-with another, impacting communication and empathy.
9.5 SHARING VERSUS OWNING
Virtual alterity systems foster self-other merging through shared agency illusions, rather than
BOIs. Shared agency illusions elicit reflection on pre-reflective, automatic, egocentric processes towards
a recognition of the interconnectivity between self and other. In virtual alterity studies, users report
feeling brought into deeper aspects of another person's first-person perspective, aspects that are normally private and available only to the individual for-themselves, rather than for-others or with-others, while interacting with another from within that perspective. This allows users to share an
embodied, first-person perspectival space rather than owning a new body representing another person.
Subjects do not experience a very strong ownership illusion, but instead report a strong agency illusion.
The BeAnother Lab (2017) team categorize this agency illusion as a shared agency illusion, which
involves a sense of fluidity and moving with another versus moving as another. That is, the subject in
virtual alterity studies does not experience a sense of authorship over actions in the way that avatar
embodiment studies measure agency. Instead, users report that they lose the sense of whether they are
initiating or following movements during an imitation procedure while virtually embodied as the other.
Thus, the self-other relationship is transformed in a way that focuses on a novel shared experience with
the other.
10 CONCLUDING REMARKS
Embodied simulations allow users to inhabit an avatar body or the body of another real person.
Embodying an avatar can produce behavioral transformations corresponding to stereotypical expectations
based on the appearance of the avatar body (Proteus Effect) or attitudinal changes based on experiencing
an avatar body of an out-group member as though it were one’s own body (Body Ownership Illusion).
Body ownership illusions have been used as a paradigm to disrupt the unity of bodily self-consciousness,
and to demonstrate the malleability of shared representations with out-group others, with whom such representations may otherwise be diminished. Inhabiting an avatar body from a 1PP with motion tracking has been an
effective model for inducing a body ownership illusion and decreasing implicit biases. Agency illusions
are also a feature of virtual embodiment, and avatar embodiment studies have not yet fully explicated the
role of agency illusions in transforming representations of and attitudes towards self and other. While
virtual alterity projects The Machine to Be Another and Paint With Me both use the 1PP and VM
synchrony with some VT synchrony, subjects do not report experiencing a BOI. Instead, they report
experiencing a unique agency illusion. Rather than just embodying an avatar representing another person,
perspective-taking VEs dissect various modes of first-person experience to facilitate a multi-sensory
simulation of another person’s experience. This may have a more significant role in helping behavior, as
suggested by Ahn et al.’s (2015) study. In the next chapter, I evaluate perspective-taking VEs as another
alternative for facilitating empathic processes and outcomes. The role of agency illusions in SOM and
empathy has gained little attention, but as a feature of VEs, agency illusions may offer a new
model for how VEs can facilitate empathy. Virtual alterity studies disrupt aspects of egocentric
experience through agency illusions while subjects also report a heightened awareness of another person,
and this may have a role in empathy, which I discuss in Chapter 4.
References
Ahn, S. J., Le, A. M. T., & Bailenson, J. (2013). The effect of embodied experiences on self-other
merging, attitude, and helping behavior. Media Psychology, 16(1), 7-38.
Alsmith, A. J., and Longo, M. R. (2014). Where exactly am I? Self-location judgements distribute
between head and torso. Conscious. Cogn. 24, 70–74. doi: 10.1016/j.concog.2013.12.005
Armel, K. C., and Ramachandran, V. S. (2003). Projecting sensations to external objects: evidence from
skin conductance response. Proc. Biol. Sci. 270, 1499–1506. doi: 10.1098/rspb.2003.2364
Austen, E. L., Soto-Faraco, S., Enns, J. T., & Kingstone, A. (2004). Mislocalizations of touch to a fake
hand. Cognitive, Affective, & Behavioral Neuroscience, 4(2), 170-181.
Bailenson, J.N., Yee, N., Blascovich, J., & Guadagno, R.E. (2008). Transformed Social Interaction in
Mediated Interpersonal Communication. In ME. Konijn, M. Tanis, S. Utz, & A. Linden (Eds.), Mediated
Interpersonal Communication (pp 77-99). Mahwah, NJ: Lawrence Erlbaum.
Bertrand, P., Gonzalez-Franco, D., Pointeau, A., & Cherene, C. (2014). The Machine to be Another-
Embodied Telepresence using human performers. Prix Ars Electronica, 96.
Blanke, O. (2012). Multisensory brain mechanisms of bodily self-consciousness. Nature Reviews Neuroscience, 13(8), 556.
Blanke, O., Mohr, C., Michel, C. M., Pascual-Leone, A., Brugger, P., Seeck, M., ... & Thut, G. (2005).
Linking out-of-body experience and self processing to mental own-body imagery at the temporoparietal
junction. Journal of Neuroscience, 25(3), 550-557.
Blascovich, J., & Bailenson, J. (2011). Infinite reality: Avatars, eternal life, new worlds, and the dawn of
the virtual revolution. William Morrow & Co.
Bourdin, P., Barberia, I., Oliva, R., & Slater, M. (2017). A virtual out-of-body experience reduces fear of
death. PloS one, 12(1), e0169343.
Botvinick, M., & Cohen, J. (1998). Rubber hands 'feel' touch that eyes see. Nature, 391(6669), 756.
Costantini, M., & Haggard, P. (2007). The rubber hand illusion: sensitivity and reference frame for body
ownership. Consciousness and Cognition, 16(2), 229-240.
Decety, J. (2010). The neurodevelopment of empathy in humans. Developmental neuroscience, 32(4),
257-267.
De Vignemont, F., & Fourneret, P. (2004). The sense of agency: A philosophical and empirical review of
the “Who” system. Consciousness and Cognition, 13(1), 1-19.
Ehrsson, H. H., Spence, C., & Passingham, R. E. (2004). That's my hand! Activity in premotor cortex
reflects feeling of ownership of a limb. Science, 305(5685), 875-877.
Ehrsson, H. H., Holmes, N. P., & Passingham, R. E. (2005). Touching a rubber hand: feeling of body
ownership is associated with activity in multisensory brain areas. Journal of Neuroscience, 25(45),
10564-10573.
Farmer, H., Maister, L., & Tsakiris, M. (2013). Change my body, change my mind: the effects of illusory
ownership of an outgroup hand on implicit attitudes toward that outgroup. Frontiers in Psychology, 4.
Gadamer, H. G. (1989). Truth and method (J. Weinsheimer & DG Marshall, trans.). New York:
Continuum.
Gallagher, S. (2000). Philosophical conceptions of the self: implications for cognitive science. Trends in
cognitive sciences, 4(1), 14-21.
Guterstam, A., & Ehrsson, H. H. (2012). Disowning one’s seen real body during an out-of-body illusion.
Consciousness and Cognition, 21(2), 1037-1042.
Gutsell, J. N., & Inzlicht, M. (2011). Intergroup differences in the sharing of emotive states: neural
evidence of an empathy gap. Social cognitive and affective neuroscience, 7(5), 596-603.
Haans, A., IJsselsteijn, W. A., & de Kort, Y. A. (2008). The effect of similarities in skin texture and hand
shape on perceived ownership of a fake limb. Body Image, 5(4), 389-394.
Haggard, P., Taylor-Clarke, M., & Kennett, S. (2003). Tactile perception, cortical representation and the
bodily self. Current Biology, 13(5), R170-R173.
Ionta, S., Heydrich, L., Lenggenhager, B., Mouthon, M., Fornari, E., Chapuis, D., ... & Blanke, O. (2011).
Multisensory mechanisms in temporo-parietal cortex support self-location and first-person perspective.
Neuron, 70(2), 363-374.
Kilteni, K., Maselli, A., Kording, K. P., & Slater, M. (2015). Over my fake body: body ownership
illusions for studying the multisensory basis of own-body perception. Frontiers in human neuroscience,
9.
Krebs, D.L. (1975). Empathy and altruism. Journal of Personality and Social Psychology, 32, 1134-1146.
Lloyd, D. M. (2007). Spatial limits on referred touch to an alien limb may reflect boundaries of visuo-
tactile peripersonal space surrounding the hand. Brain & Cognition, 64, 104–109.
Mai, X., Zhang, W., Hu, X., Zhen, Z., Xu, Z., Zhang, J., & Liu, C. (2016). Using tDCS to Explore the
Role of the Right Temporo-Parietal Junction in Theory of Mind and Cognitive Empathy. Frontiers in
Psychology, 7, 380. http://doi.org/10.3389/fpsyg.2016.00380
Mandrigin, A., & Thompson, E. (2015). Own-body perception. In M. Matthen, (Ed.), The Oxford
Handbook of the Philosophy of Perception. Oxford: Oxford University Press. Retrieved from
https://evanthompsondotme.files.wordpress.com/2012/11/mandrigin-and-thompson-own-body-
perception.pdf
Merleau-Ponty, M. (1962). Phenomenology of perception (C. Smith, Trans.). London: Routledge.
(Original work published 1945)
Pavani, F., Spence, C., & Driver, J. (2000). Visual capture of touch: Out-of-the-body experiences with
rubber gloves. Psychological Science, 11(5), 353-359.
Rosenberg, R. S., Baughman, S. L., & Bailenson, J. N. (2013). Virtual superheroes: Using superpowers in
virtual reality to encourage prosocial behavior. PloS one, 8(1), e55003.
Rothfuss, E. (2009). Intersubjectivity, Intercultural Hermeneutics and the Recognition of the Other—
Theoretical Reflections on the Understanding of Alienness in Human Geography Research. Erdkunde,
173-188.
Serino, A., Alsmith, A., Costantini, M., Mandrigin, A., Tajadura-Jimenez, A., & Lopez, C. (2013). Bodily
ownership and self-location: components of bodily self-consciousness. Consciousness and Cognition,
22(4), 1239-1252.
Slater, M., Perez-Marcos, D., Ehrsson, H. H., & Sanchez-Vives, M. V. (2008). Towards a digital body:
the virtual arm illusion. Frontiers in Human Neuroscience, 2.
Stotland, E. (1969). Exploratory investigations of empathy. In L. Berkowitz (Ed.), Advances in
experimental social psychology (Vol. 4, pp. 271-313). New York: Academic Press.
Tsakiris, M., & Haggard, P. (2005). The rubber hand illusion revisited: visuotactile integration and self-
attribution. Journal of Experimental Psychology: Human Perception and Performance, 31(1), 80.
Tsakiris, M., Carpenter, L., James, D., & Fotopoulou, A. (2010). Hands only illusion: multisensory
integration elicits sense of ownership for body parts but not for non-corporeal objects. Experimental Brain
Research, 204(3), 343-352.
Tsakiris, M., Longo, M. R., & Haggard, P. (2010). Having a body versus moving your body: neural
signatures of agency and body-ownership. Neuropsychologia, 48(9), 2740-2749.
