
Social Robots and Social Presence: Interpersonally Communicating with Robots

David Westerman 1, Autumn Edwards 2, Chad Edwards 2 & Patric R. Spence 3

1 Department of Communication, North Dakota State University, Fargo, North Dakota, USA
2 School of Communication, Western Michigan University, Kalamazoo, Michigan, USA
3 Nicholson School of Communication, University of Central Florida, Orlando, Florida, USA

Abstract:

In 2010, Westerman and Skalski argued that the popular technologies best suited for promoting social presence were computers and the internet. Although this may still be the case, social robots seem likely to challenge the internet’s dominance for social presence. This paper discusses the coming (and already here) popularity of social robots, explains the concept of social presence and its relevance to social robots by outlining some of the research in this area, and addresses short- and long-term future possibilities for social robots and social presence by highlighting a few theories that may be especially relevant to the study of human-robot interaction (HRI).

Keywords:

Social robots, human-robot interaction, human-machine communication, social presence

Full Article:

Social Robots and Social Presence: Interpersonally Communicating with Robots

 “Calls on the Commission, when carrying out an impact assessment of its future legislative instrument, to explore the implications of all possible legal solutions, such as:…..f) creating a specific legal status for robots, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons with specific rights and obligations, including that of making good any damage they may cause, and applying electronic personality to cases where robots make smart autonomous decisions or otherwise interact with third parties independently” (Delvaux, 2016, p. 12).

The above quote sounds like it might come from a science-fiction novel, perhaps one written by Isaac Asimov. Instead, it is from a 2016 draft report (with recommendations to the Commission on Civil Law Rules on Robotics) to the European Parliament Committee on Legal Affairs, a report that does cite Asimov’s three laws of robotics. Thus, this is a report to a major world governing body suggesting that considerations be made for the rights, responsibilities, and punishments of robots, or “electronic persons.” Underlying this report seems to be a simple truth: people respond to robots as they do to other people.

In 2010, Westerman and Skalski argued that the popular technologies best suited for promoting social presence were computers and the internet. Although this may still be the case, a new technology is increasing in popularity that is likely to challenge the internet’s dominance for social presence: robots. Historically, understandings of social presence have focused on the degree to which we perceive that we are communicating with a ‘real’ other, even if that person is not physically present (Short, Williams, & Christie, 1976). As agents in their own right, social robots allow face-to-face human-machine communication (HMC) between bodily co-present participants operating as persons (Edwards, A., Edwards, C., Spence, Harris, & Gambino, 2016; Guzman, 2018; Spence, 2019). Human-robot interaction (HRI) brings the possibility that neither partner is displaced in time or space, which may heighten perceptions of realness and consequent feelings of connection and relationship. In this sense, the social presence of communicating with a social robot may equal or exceed that felt in human communication mediated through computers or with an un-/disembodied computer.

This article discusses the coming (and already here) popularity of social robots, the concept of social presence and how it is relevant to social robots by outlining some of the research in this area, and future possibilities for social robots and social presence, both in the short- and long-term, by outlining a few theories that may be especially relevant to the study of HRI. We begin by discussing the popularity of current and future social robots.

The popularity of social robots

“Humanoid social robots are not user-friendly computers that operate as machines; rather, they are user-friendly computers that operate as humans” (Zhao, 2006, p. 403).

The idea of autonomous, mechanical entities dates back far into history; however, the first use of the word robot occurs in the 1920 Czech play R.U.R. by Karel Čapek (ADD CITE). Lee, Park, and Song (2005) defined social robots “as robots designed to evoke meaningful social interaction” with “users who actually manifest some types of social responses” (p. 539). Brondi, Sarrica, and Fortunati (2016) provided a usefully expansive definition of social robots, which acknowledges that they “are capable to establish coordinated interactions with humans and other robots depending upon both the realms of materiality and immateriality (including, first of all, sociality).” Social robots may be anthropomorphic (humanoid), zoomorphic (e.g., PARO, the therapeutic/companion robot seal), or non-human/non-animal “others” (e.g., TARS in Interstellar). To varying degrees, “social robots overlap in form and function with human beings to the extent that their locally controlled performances occupy social roles and fulfill relationships that are traditionally held by other humans” (Edwards et al., 2016, p. 628), with autonomous humanoid social robots operating as humans, as suggested by the Zhao quote that opens this section.

Definitions of social robots have followed perceptual routes as well. Breazeal (2003) defined a social robot as one to which people apply a social model. Thus, if a robot gives off cues suggesting to a person that it is social, then whether the robot is genuinely socially intelligent or only appears to be becomes something of a moot point: People will respond to it as if it were socially intelligent. This perspective suggests that a robot is a social robot if it is perceived to be high in social presence, underscoring the central importance of the social presence experience in considering HRI. This point will be returned to later in this article.

There has been a massive increase in social robot sales and adoptions over the past few years. According to LaFrance (2016),

The rise of robots seems to have reached a tipping point; they’ve broken out of engineering labs and novelty stores, and moved into homes, hospitals, schools, and businesses. Their upward trajectory seems unstoppable. (para. 32)

Many experts believe the trend will only magnify. Most respondents to the Pew Research Center’s (2014) canvassing of experts predicted that robotics and AI would be pervasive in daily life by 2025. The International Federation of Robotics reports that in the next two years alone, more than 35 million service robots will be sold worldwide. Also, Tractica reports that a total of nearly 100 million consumer robots will be shipped worldwide by 2020, with the fastest growth in robotic personal assistants (Tractica.com, 2015). About 4.5 million family robots will be sold yearly by 2020 (Kaul, 2015). According to the report, “the next 5 years will set the stage for how these robots could fundamentally transform our homes and daily lives” (Tractica.com, 2015). The first large-scale bipedal humanoid robot went on sale in 2019 (Estes, 2019).

People’s attitudes toward robots have also changed somewhat over time. For example, Khan (1998) found that people liked the idea of intelligent service robots. Dautenhahn, Woods, Kaouri, Walters, Koay, and Werry (2005) reported that people were more comfortable considering robots as tools rather than as friends, although younger people reported more comfort with robots as friends than did older people. More recently, Walden, Jung, Sundar, and Johnson (2015) reported that the majority of senior citizens interviewed in their study had a positive view of robots but largely saw them as assistants rather than companions, although some did mention the friendship aspect, usually only as a last resort. Lonelier older people seemed more likely to accept robots as companions and interactants rather than only as tools. Social presence is also positively associated with an increased willingness to accept robots as companions (Heerink, Krose, Evers, & Wielinga, 2008).

As one indicator of current levels of acceptance of social robots in the United States, we used Amazon’s Mechanical Turk to conduct a brief cross-sectional survey of 205 adults (age: M = 38.40, SD = 12.08). Using a modified version of Bogardus’ (1947) social distance scale, which was originally designed to measure people’s willingness to participate in social contacts of varying degrees of closeness with members of other racial and ethnic groups, we asked participants to indicate whether or not they would accept a humanoid/social robot in a series of roles [see Table 1].

Table 1. Acceptance of a social/humanoid robot in various social roles (N = 205)

I would accept a social/humanoid robot as a…. % n
     visitor to my country 72.63 149
     helper/domestic worker at my house 68.29 140
     co-worker in my organization 56.69 116
     teammate on my organizational team 44.39 91
     friend on social networking sites 40.98 84
     citizen in my country 38.54 79
     companion in my home 35.61 73
     neighbor on my street 32.30 66
     supervisor at my workplace 12.20 25
     family member (by marriage or adoption) 11.71 24

A majority were willing to accept the mere presence of social robots in the country and as assistants at home or work. Fewer were accepting of social robots in roles entailing legal or social equality, and fewer still in positions of higher status or intimate relationships. However, the fact that more than 1 in 10 would already accept a social robot as a full-fledged family member seems to us remarkable and may speak to significant levels of social presence already attributed to the category of social robots.
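
To make the arithmetic behind Table 1 explicit, the short sketch below (Python; written for illustration here, not part of the original survey materials) converts each role’s acceptance count (n) into a percentage of the 205 respondents. Minor differences from the percentages reported in Table 1 may reflect rounding or item-level missing responses.

# Illustrative only: convert the Table 1 acceptance counts into percentages.
N = 205  # total sample size reported above

acceptance_counts = {
    "visitor to my country": 149,
    "helper/domestic worker at my house": 140,
    "co-worker in my organization": 116,
    "teammate on my organizational team": 91,
    "friend on social networking sites": 84,
    "citizen in my country": 79,
    "companion in my home": 73,
    "neighbor on my street": 66,
    "supervisor at my workplace": 25,
    "family member (by marriage or adoption)": 24,
}

for role, n in acceptance_counts.items():
    # Percentage of respondents who would accept a robot in this role.
    print(f"{role}: {100 * n / N:.2f}% ({n}/{N})")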

Yet, people have expressed doubts about the socialness of robots. Participants in Walden et al.’s (2015) study doubted that robots will “fully replicate human appearance and emotional sensitivity” (p. 78), with those dissimilarities also casting doubt on the possibility of having full relationships with robots. However, even if people do not think they can have these types of relationships with robots, might they be able to do so? The next section discusses the concept of social presence, which we argue is central to HRI.

Social presence

“I was so desperate to hear a familiar voice that I resorted to talking to Max. In my current state, even his glib computer-generated voice was somehow comforting. Of course, it didn’t take long for Max to run out of preprogrammed replies; and when he started to repeat himself the illusion that I was talking to another person was shattered, and I felt even more alone” (Cline, 2011, p. 237).

The above passage from Ernest Cline’s novel Ready Player One describes a conversation between the main character and his operating system. It also highlights a great deal about the nature of connection and how we attain it with technology, otherwise known as the experience of social presence.

Social presence is one subtype of telepresence, with spatial/physical presence and self-presence as the other two common subtypes (Lee, 2004). Generally speaking, telepresence has been defined as the “illusion of nonmediation” (Lombard & Ditton, 1997). Tamborini and Skalski (2006) replaced “illusion” with “perception,” and this is an important distinction, especially for HRI in the way it will be discussed in this article. As Lee, Peng, Jin, and Yan (2006) suggest, “Of the three types of presence, social presence has most implications in HRI because generating strong feelings of social presence during HRI can be regarded as the ultimate goal of designing socially interactive robots… For HRI to be a truly social experience, social robots should be experienced as if they were real social actors” (p. 759).

Kelly and Westerman (2016) have suggested that at least two perspectives on social presence exist in the literature. They referred to the first as feeling “physically” with another entity, which is more in line with feeling as if a mediated experience is like a face-to-face (FtF) experience. For example, Lee (2004) defined social presence as “a psychological state in which virtual social actors are experienced as actual social actors” (p. 45). However, Kelly and Westerman also suggested that there is a second perspective, which they refer to as feeling “psychologically” with another entity. This version of social presence focuses on the connection felt with another entity through mediated channels and/or the mutual awareness of another person as another person (e.g., Biocca, Harms, & Burgoon, 2003) and is similar to related concepts such as electronic propinquity (Korzenny, 1978; Walther & Bazarova, 2008) and perceived and/or mediated immediacy (Kelly & Westerman, 2014; O’Sullivan, Hunt, & Lippert, 2004).

These two perspectives are conceptually distinct. Although it is often assumed that being physically present is part of being psychologically present, it does not have to be. One can imagine a married couple, on the edge of divorce, who might still share the same bed at night (physically present with each other) but who no longer feel connected to each other in any way. FtF interactions do not always equal connection. On the flip side, long-distance relationships often thrive and lead to strong feelings of connection, despite minimal FtF interaction and despite conscious awareness that the interaction is mediated.

These two perspectives are likely complementary as well. It may be very difficult to feel connected to another entity if one is frequently focusing too much attention on the technology used (for example, if it has too many glitches or breaks norms) and not enough on the other person/entity. Future research in HRI should be clear about which version of social presence is the focus of a given study and discussion, and can also examine how and when the two social presence perspectives do or do not cause each other, especially when interacting with a robot.

Both perspectives are apparent in the Ready Player One quote that starts this section. The main character desires a connection, in the sense of a closeness, with someone. However, that closeness (in this case, an illusion) ends when the operating system does not respond as expected based on FtF interaction. This brings us to how social presence can be felt with a robot.

Social presence and social robots

“It is important to recognize that humans are a profoundly social species” (Breazeal, 2003, p. 167).

People want to feel connected to others. This is a truth that is summed up well in the Breazeal quote above. We interact with others (human and non-human entities) to fulfill basic human needs of inclusion, affection, and control (Schutz, 1958). These basic human needs require and also facilitate sociality, which Breazeal (2003) points out is an increasingly important part of robotic design, despite robots’ limited interaction ability. Perhaps, in 2019 (and beyond), this interaction ability is, and will continue to be, greatly increased. But how do we come to feel socially present with robots? As Breazeal and others (e.g., Walden et al., 2015; Westerman & Skalski, 2010) suggest, Reeves and Nass’ (1996) media equation can help explain this.

Very simply defined, the media equation states that “media = real life” (Reeves & Nass, 1996). This is not to say that media always present an accurate portrayal of real life, but rather, that people respond to media as they do real life. They do this even though they consciously may state that it is ridiculous to do so, and know that media are not “real.” In a classic line of research, the media equation has been backed up by numerous studies that follow a general pattern: take a finding for how two people interact, replace one person with media (e.g., a computer), and find a very similar result (Nass & Moon, 2000; Reeves & Nass, 1996).

Results of these studies formed the basis of the Computers are Social Actors paradigm (CASA), which is premised on the idea that people display fundamentally social and natural reactions toward computers and other media (Reeves & Nass, 1996). Individuals employ their human-human interaction scripts during human-computer interaction, and they do so automatically, “essentially ignoring the cues that reveal the essential asocial nature of a computer” (Nass & Moon, 2000, p. 83). They also anthropomorphize, or “attribute basic human psychological abilities to computers” (Sundar, 2004, p. 108) and they follow the same social rules used for humans (Kim & Sundar, 2012). For example, research has demonstrated that individuals respond to computers as independent sources (versus mere channels) of information in a tutoring context (Sundar & Nass, 2000), treat computers as teammates in group tasks (Nass, Fogg, & Moon, 1996), and rely on computer vocal qualities when evaluating and interacting with the computer (Lee, 2010; Nass & Brave, 2005).

Our recent studies, along with those of several other researchers (e.g., Kim, Park, & Sundar, 2013; Lee, Park, & Song, 2005; Lee, Peng, Jin, & Yan, 2006; Park, Kim, & del Pobil, 2011), have extended the Media Equation and CASA paradigm from computers to robots. In support of the Media Equation with robots, Edwards, Beattie, Edwards, and Spence (2016) demonstrated that, when exposed to Tweets about sexually transmitted infections, participants experienced the same levels of cognitive elaboration, information-seeking, and learning outcomes whether the source was a Twitterbot or a human agent. Similarly, risk messages delivered via robot produced equivalent knowledge acquisition when compared to those delivered via legacy media (Lachlan et al., 2016).

Generally speaking, research supports the extension of CASA into HRI studies. People view robots as real and engage robots with social perceptions and responses similar to those used with other humans. For instance, in an experiment examining a human-robot price negotiation task, robots using guilt messages were rated as lower in credibility than those making straightforward appeals, consistent with the bias against guilt trips observed in interpersonal relationships (Stoll, Edwards, C., & Edwards, A., 2016). In direct comparisons of robot and human agents, several studies have found no difference in social evaluations of the actors. Edwards, C., Edwards, A., Spence, and Shelton (2014) demonstrated that participants perceived a Twitterbot and a human agent delivering the same content as equally credible, attractive, and competent. People also have differing perceptions of the humanness of a chatbot based upon the typos that bot uses (Westerman, Cross, & Lindmark, 2019). Although emergent research indicates robots generally are treated as social actors, perceptions of their social characteristics may vary, just as they do for other media and communication sources.

Sundar’s (2008) MAIN model was designed to explain the effects of technology on credibility by focusing on heuristic-cue information tied to features of the medium. The four main “affordances” of technology are modality (M), agency (A), interactivity (I), and navigability (N). Social robots, like other media, may vary in their medium/interface characteristics, perceived “source” and autonomy of communication, levels and types of interactivity/activity, and nature of transportation-linked interface features. Each of these differences may implicate perceptions of social presence.

Edwards, A. et al.’s (2016) study examining the influence of type of robot agent (telepresence vs. social autonomous) on learning outcomes showed, in support of CASA, that both telepresence (teacher as robot) and social robot (robot as teacher) teachers were perceived by college undergraduates as credible sources of instruction and engendered affective and behavioral learning. However, and in support of the MAIN model, students gave higher credibility to the teacher as robot, which led to some learning differences. Despite identical instructional performances, students reported higher affective learning from the telepresence robot instructor, but higher behavioral learning from the social robot instructor. The authors attributed the results to the activation of heuristics triggered by agency affordance cues. The social presence heuristic, or the idea that one is communicating with a real social being versus an inanimate entity, may account for the heightened credibility of the telepresence robot. Conversely, the machine heuristic, which consists of attributions about typical machine characteristics, may account for the heightened behavioral learning produced by the autonomous social robot. Machines are often seen as more “objective” but less feeling than human actors.

The MAIN model embraces a media effects perspective by assuming that features of the technology will trigger cognitive heuristics which lead to perceptions of credibility or presence (Sundar, Oeldorf-Hirsch, & Garga, 2008) and help enact social mental models for interacting with robots. Kiesler and Goetz (2002) found that people have sparse mental models of robots, consistent with mental models held toward other out-groups with whom people have little direct experience. However, we would argue that this does not necessarily mean that people do not rely upon that sparse mental model when interacting with and responding to robots. In fact, several of our studies have suggested the existence of a human-to-human interaction script, which refers to individuals’ general expectation of and preference for human conversational partners. We found that people who believed they would be interacting with a robot anticipated less liking, higher uncertainty, and lower social presence than those who believed they would be interacting with a human (Edwards, C., Edwards, A., Spence, & Westerman, 2016; Spence, Westerman, Edwards, C., & Edwards, A., 2014). We attributed the findings to a violation of the human-to-human interaction script and suggested that once the novelty of the HRI wears off, people may achieve levels of comfort, social presence, and liking similar to what they experience with other human partners (Spence et al., 2014). In fact, following a brief interaction with a humanoid social robot, people reported lower uncertainty and higher social presence when compared to people engaged in an identical interaction with another human (Edwards, A., Edwards, C., Westerman, & Spence, 2019). “Right or wrong, people rely on social models (or fluidly switch between using a social model with other mental models) to make the complex behavior more familiar and understandable and more intuitive with which to interact. We do this because it is enjoyable for us, and it is often surprisingly quite useful” (Breazeal, 2003, p. 168).

The experience of social presence with social robots is likely driven by the mental models we have for interaction in general. Research on social presence and social robots is largely in line with these models. For example, Heerink, Krose, Evers, and Wielinga (2006) found that older adult participants were more comfortable talking with more social robots than with less social robots, and they had more positive nonverbal responses to a more social robot. However, engagement seemed fairly high overall in the study, and as these authors point out: “some participants refused to work on the given task with the robot; they simply started a conversation with it, ignoring all instructions” (p. 38). A follow-up study (Heerink, Krose, Evers, & Wielinga, 2008) found that increased social ability of a robot led to greater social presence (measured more in line with how “real” the robot seemed to be), and social presence was related to perceived enjoyment of interacting with the robot, which in turn led to an increased intention to use it. This suggests that social presence is increased by the same things that increase interpersonal competence and, in turn, leads to a desire to interact with robots. Gender might also play some part in presence and social robots: Schermerhorn, Scheutz, and Crowell (2008) found that women seem to experience less social presence with a robot than men do. Women saw robots as more machine-like and less like an actual person in this study. Heerink, Krose, Evers, and Wielinga (2006) also found that men seemed more interested in an iCat robot (a cat-like zoomorphic robot) than women, as evidenced by a greater desire to have the robot.

Personality has also been examined in relation to social presence and robots. De Ruyter, Saini, Markopoulos, and Van Breemen (2005) found that an extraverted robot was seen as more socially intelligent and more likely to be accepted among non-elderly adults. Lee, Peng, Jin, and Yan (2006) also examined the extraversion of a robot (and of its human interactant). They found that people were able to recognize a robot’s personality, and that social presence mediated the effects of the personality match between human and robot on perceived intelligence of, liking of, and enjoyment of interacting with the robot.

The notion that people are likely to respond to a robot as they would a person, especially when experiencing social presence, suggests that an understanding of how people respond to other people is likely to help our understanding of how people will respond to robots. As Heerink, Krose, Evers, and Wielinga (2008) state, “It might very well be that the more natural and ‘human-like’ the conversation with a robot is, the more enjoyment a user feels and the more this user would feel encouraged to actually use this technology” (p. 33). Lee, Peng, Jin, and Yan (2006) discussed a sort of Turing Test for the social characteristics of an object. They suggested that recognizing cues (such as personality) in a robot is a “first-degree social response” (p. 757), whereas applying rules (such as consistency attraction) is a “second-degree social response” (p. 757). They argue that second-degree responses are reserved for robots (and other objects) that show “true social characteristics” (p. 757), and as such, applying them to an HRI is evidence of successfully passing a social Turing Test. Thus, we may be able to apply interpersonal and computer-mediated communication (CMC) theories that articulate these social rules to help inform how people come to feel connected with robots and what effects we might expect from doing so. As Lee et al. (2006) suggested, “Future studies on HRI, therefore, will be significantly advanced by adopting and applying sociopsychological findings of human-to-human interaction to human-to-robot interaction, which is how previous CASA studies on HCI have been advanced” (p. 768). The next section provides an overview of a few theories that may be especially useful here, though the list is likely not exhaustive.

Applying IP/CMC theory to HRI

General theories/concepts in IP 

Given the nature of CASA research and its applicability to HRI, seemingly any concept or study in interpersonal communication could apply to HRI. It would just take removing one person from the “interpersonal” in the original study or concept and replacing that person with a robot. For example, one IP concept that has been examined in HRI is attraction. Lee et al. (2006) compared the similarity theory of attraction to the complementarity theory of attraction. Using a 2 x 2 experiment, this study crossed the extroversion (high vs. low) of the robot with the extroversion (high vs. low) of the human participant interacting with the robot. Overall, it seemed that the complementarity attraction rule was applied more than similarity, as people liked and enjoyed interacting with a robot whose extroversion level was different from their own. These effects were mediated by the level of social presence felt in the interaction as well.
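
To make the logic of such a design concrete, the sketch below (Python with NumPy and SciPy; all ratings are randomly generated placeholders, not Lee et al.’s data) shows how a 2 x 2 robot-extroversion by participant-extroversion design can be coded so that similarity attraction and complementarity attraction make opposite predictions about which cells should yield higher liking.

# Illustrative sketch, not the original analysis: coding a 2 x 2 design so
# that similarity vs. complementarity attraction can be contrasted.
# All ratings below are random placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
n_per_cell = 25

match_flags = []    # 1 = participant and robot share an extroversion level
liking_scores = []  # placeholder liking ratings

for robot_high in (0, 1):        # robot extroversion: low vs. high
    for person_high in (0, 1):   # participant extroversion: low vs. high
        ratings = rng.normal(loc=5.0, scale=1.0, size=n_per_cell)
        match_flags.extend([int(robot_high == person_high)] * n_per_cell)
        liking_scores.extend(ratings)

match_flags = np.array(match_flags)
liking_scores = np.array(liking_scores)

# Similarity attraction predicts higher liking in matched cells;
# complementarity attraction predicts higher liking in mismatched cells.
similar = liking_scores[match_flags == 1]
complementary = liking_scores[match_flags == 0]
t, p = stats.ttest_ind(similar, complementary)
print(f"similar M = {similar.mean():.2f}, complementary M = {complementary.mean():.2f}, "
      f"t = {t:.2f}, p = {p:.3f}")

Testing whether social presence mediates any such effect, as Lee et al. (2006) reported, would require additional measures and a mediation model, which this simple comparison omits.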

This study provides some initial evidence that theories of attraction are relevant for social presence in HRI, and can be applied to HRI in general. However, future research can also examine the situations for which similarity attraction might be the rule that is applied instead of complementarity attraction. For example, Byrne (1971) suggested that people would be more attracted to each other when they shared similar attitudes, and there is some evidence to suggest that people like partners who are more similar to them in perceived physical attractiveness (Berscheid, Dion, Walster, & Walster, 1971). Based on the application of CASA to HRI, we might expect similar findings for humans interacting with robots, and these are areas that future studies can examine.

Another area of study that might be important is an examination of which communication rules are most important for competent communication overall. Building robots that break a single rule, varying which rule is broken, and observing how people respond would help us learn more about HRI and social presence, and might also help us learn more about how people interact with people. Experiments tend to combine the social cues of robots (Heerink et al., 2006; Heerink et al., 2008; Lee et al., 2006). For example, Heerink et al. (2006) manipulated the social communicativeness of a robot by doing the following: the robot in the high condition listened more attentively (looked at and nodded while the participant was speaking), smiled, used the participant’s name while talking to them, used more facial expressions, and apologized when a mistake was made. Although this likely maximized the difference in perceived socialness between conditions, it does little to tell us which of these cues would be most important for perceived socialness and, thus, increased social presence. Furthermore, the actual words spoken by the robots were held constant (except for the participant’s name in the high social condition). Thus, future research can tease out which nonverbals are most important for establishing social presence, but also which words do the same (in line with the possibility of a verbal uncanny valley). The interactive nature of verbal and nonverbal cues is also something that can be examined by future research. One theory regarding cues and cue systems that may be important for HRI and presence study is social presence theory.

Social Presence Theory

Originally formulated by Short et al. (1976), social presence theory (SPT) is what is referred to as a cues-filtered-out theory (Culnan & Markus, 1987), meaning that it falls into a more general category of theories suggesting that the reduced nonverbal cues available in CMC limit the ability to accomplish social goals through technology, including social presence. More specifically, SPT suggests that different communication media allow different amounts of nonverbal information to flow through. Thus, overall, the more cue systems a medium can transmit, the more social presence one can feel using that medium.

Although this theory has been criticized in CMC research (e.g., Walther, 1992), it may be useful to the study of HRI, and it is somewhat foundational to the study of social presence. On the one hand, the increased ability of robots to provide nonverbal information might suggest that interacting with a robot allows for increased social presence, by making the interaction and the other interactant more “real.” On the other hand, robots were far from the original authors’ minds (we assume), and critiques and evidence in CMC have suggested that people can come to accomplish social goals such as social presence even through channels that provide little nonverbal information (such as texting), calling into doubt the strict application of SPT to HRI. Future research can and should examine how various nonverbal (and verbal) capacities of robots impact the experience of social presence, and with what effects. The remaining theories discussed below may add some more specific possibilities for this future research.

Theory of Electronic Propinquity

A somewhat similar theory that may also be of use to future HRI study is the theory of electronic propinquity (TEP; Korzenny, 1978). First, electronic propinquity, or the psychological proximity between people, is a concept that seems very similar to social presence, or at least to some conceptual definitions of social presence, as noted by Westerman and Skalski (2010) and Kelly and Westerman (2016). Overall, the theory makes predictions about several factors thought to impact the experience of electronic propinquity: bandwidth, mutual directionality, user skills, complexity of task, communication rules, and channel choices. Although the theory was somewhat abandoned after an initial test seemed to discredit it (Korzenny & Bauer, 1981), a more recent study by Walther and Bazarova (2008) found evidence consistent with the theory. Also, one of the most important factors found for experiencing electronic propinquity was communicator skill, suggesting that people can learn to use media more effectively to experience electronic propinquity.

Given the factors identified as important in TEP, and the knowledge that skills matter, this theory seems ripe for application to HRI. But how might a person engage in interaction with a robot? The next theory highlighted offers some suggestions for dealing with this question.

Social Information Processing Theory

Another theory that might help explain and predict the experience of social presence with robots is social information processing theory (SIPT; Walther, 1992). Although the theory largely is designed to explain how people interact with each other through computer-mediated communication, Westerman and Skalski (2010) suggested how SIPT might explain how we come to feel social presence through computers. Although perhaps not all of SIPT applies to HRI, there are parts that we think still do. First, SIPT starts with an assumption that people want to accomplish similar interpersonal goals when interacting, no matter what channel they use. This hearkens back to the Breazeal (2003) quote above; we are a social species. SIPT also suggests that we use the information that we have available through a channel to accomplish our interpersonal goals. This should also be true for interacting with a robot. Finally, SIPT suggests that goals and relationships can be accomplished through technology, but that doing so might take longer; they develop over time. It is also possible that similar things can happen between a human and a robot, although how long it might take is a question still to be answered. Researchers may also seek to identify the self-, other-, and system-generated cues of social robots that correspond with different degrees of social presence, especially given actual interaction with a social robot.

So, what are some ways we can use the information provided in HRI to feel socially present? For example, Breazeal (2003) questions what happens when a robot’s behavior no longer adheres to a person’s social model. This underscores the importance of expectancy violations theory, the next theory reviewed.

Expectancy Violations Theory

Perhaps the most promising theory to consider for HRI, expectancy violations theory (EVT; Burgoon, 1983, 1993) was originally developed to explain and predict outcomes relevant to violations of nonverbal expectations of personal space (Burgoon, 1978), but has been adapted to other expectancy violations as well (Guerrero, Andersen, & Afifi, 2011). In general, EVT starts with the assumption that people do not think that others behave randomly; rather, we have expectations for others’ behaviors (Burgoon, 1978). Expectancies are rooted in any number of past experiences and future projections, but many expectations seem to share a common root in our perceptions of larger social norms (Burgoon & Jones, 1976; McPherson & Liang, 2007).

Although not testing EVT directly, there is some evidence that expectancy violations can be problematic in HRI. Dautenhahn et al. (2005) reported that people wanted robots to come close to them during interactions, but not very close, and also found that people were more concerned with a robot’s communication being human-like than with its appearance or behaviors. Emotion, appearance, theory of mind, and communication all come into play in such models. For example, Breazeal (2003) found that turn-taking, para-linguistic social cues, and nonverbals were important for interacting with a robot named Kismet. People also reported wanting robots to be considerate and polite, and wanting robot behavior to be predictable (and even controllable) (Dautenhahn et al., 2005). When interaction goes against expectations, it leads to uncertainty, which is often considered a negative experience and a motivation to reduce that uncertainty (Berger, 1979).

One potential expectancy violation that can occur in HRI is the uncanny valley (Dautenhahn et al., 2005; Walden et al., 2015). The uncanny valley is a concept initially coined by Mori (1970) to describe a possible response to non-human entities and, we would argue, is a specific type of expectancy violation. In general, the uncanny valley refers to an increased feeling of creepiness that a person feels as another entity moves closer to being human without actually being human. This likely occurs because, as something becomes more human-like, our expectations rise about how human it should be. When it then fails to live up to that standard, it seems creepy (Mori used a zombie or dead body as the example of the bottom of the valley). Thus, things that are cartoonish are not creepy because they do not create the same expectations. This can likely be extended to the feeling of social presence as well.
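
Because the uncanny valley is usually conveyed as a curve, the following sketch (Python with NumPy and Matplotlib) plots a purely schematic version of Mori’s idea; the anchor values are invented for illustration and are not taken from Mori (1970) or from any data discussed in this article.

# Schematic uncanny valley: affinity rises with human likeness, drops sharply
# near (but not at) full human likeness, then recovers for an actual human.
# The numbers are illustrative only.
import numpy as np
import matplotlib.pyplot as plt

likeness_anchors = [0.0, 0.4, 0.7, 0.85, 0.95, 1.0]    # 0 = clearly a machine, 1 = human
affinity_anchors = [0.0, 0.35, 0.60, -0.50, 0.10, 1.0]  # negative values = "creepy"

likeness = np.linspace(0.0, 1.0, 200)
affinity = np.interp(likeness, likeness_anchors, affinity_anchors)

plt.plot(likeness, affinity)
plt.axhline(0.0, linewidth=0.5)  # neutral affinity
plt.xlabel("Human likeness")
plt.ylabel("Affinity (schematic)")
plt.title("Schematic uncanny valley (illustrative values only)")
plt.show()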

Interestingly, most discussion of the uncanny valley focuses on looks, but perhaps an uncanny valley also exists for the communicative ability of a robot. If a robot sounds human enough and communicates in a relatively human way, but has some non-human faults, perhaps the same expectancies-created, expectancies-violated pattern comes into play. For example, in season 1, episode 1 of the AMC series Humans (ADD CITE), the Hawkins family gathers for breakfast with Anita, their new “synth” (advanced anthropomorphic robot).

Joe: “If we’d have known you were going to be joining us for breakfast, Anita, I’d have got some micro-chips.”

Laura: “I apologize, Nita, that was my husband trying to be funny.”

Anita: [laughs]

Joe: “Finally, someone to laugh at my jokes!”

Laura: “Besides YOU, you mean!” [smiles]

Joe: [laughs]

Sophie: [laughs]

Anita: [laughs on loop, continuing after others have stopped]

Family: [exchanges uncomfortable glances]

Laura: “You can stop now.”

Anita’s spontaneous laughter in response to a bad joke makes it easy to forget she is a robot, but only until her communication becomes painfully unnatural. Once the Hawkins’ laughter dies down, Anita’s behavior turns creepy as she carries on with eight additional iterations of “ahahahahaha.” This violation of expectations for interaction flow and timing diminishes Anita’s social presence by calling into question whether she is “real” and “there,” instead focusing attention on the medium/technology of communication.

Violating expectations when interacting with robots might matter in another form as well. For example, Subramony (2011) suggests that people tried asking Siri challenging questions to test the program’s responsiveness. This is not something we would typically do when interacting with a human, as we would assume responsiveness as part of our idealization of human-human communication. Perhaps we hold technology up to a higher standard because we actually test communicative assumptions when interacting with technology. We seem to expect perfection when communicating through and with technology, and then, when it does not meet those standards, we decry it as not good enough. But how often do we hold people up to the same testing and standards? What is it that creates this higher expectation when dealing with technology? This is something future research can examine.

The future

“‘Real isn’t how you are made,’ said the Skin Horse. ‘It’s a thing that happens to you. When a child loves you for a long, long time, not just to play with, but REALLY loves you, then you become real’.” –The Velveteen Rabbit (ADD CITE)

Theodore: “Yeah, but… am I in this because I’m not… strong enough for a real relationship?”

Amy: Is it not a real relationship?

Theodore: I don’t know. I mean, what do you think?

Amy: I don’t know. I’m not in it.” [ADD CITE HERE?]

The second exchange here comes from the 2013 movie Her. In this movie, the character Theodore (played by Joaquin Phoenix) begins to date his operating system, Samantha (a very advanced version of a Siri-like system, voiced by Scarlett Johansson). Their relationship is the central focus of the movie, with various other characters weighing in on it and discussing it with Theodore. In general, this movie is about the experience of social presence. Indeed, at another point in the movie, when Theodore is discussing Samantha with a character named Amy (played by Amy Adams), he says, “Yeah, I mean, I feel really close to her. Like when I talk to her I feel like she’s with me.” Even though it is not physically possible for Samantha to be with Theodore, he feels as if she is with him. If that isn’t social presence, we do not know what is.

But this is only a movie, right? That may be true. However, there are scholars who suggest that this science fiction is not that far from becoming science fact. For example, in his review of Her, noted futurist Ray Kurzweil (2014) largely praised the film, saying that “the movie compellingly presents the core idea that a software program (an AI) can—will—be believably human and lovable.” He also suggested that the technology needed to create Samantha would probably exist around the year 2029. Furthermore, in a sort of “ultimate” social presence with robots, Levy (2008) suggests that we will commonly be marrying and having sex with robotic partners by 2050. To some, this is a future that is fantastic. To others, this is a future that is frightening. To us, this is a future that is fascinating, and the question we all might want to ask is whether we are prepared to welcome our robot partners. We study technology to learn more about the human communication process. It just might be the case that studying how we interact with and come to feel close to robots will help us understand how we do the same things when interacting with other people. We certainly hope so.

References

Berger, C. (1979). Beyond initial interaction. In H. Giles & R. St. Clair (Eds.), Language and social psychology (pp. 122-144). Oxford: Basil Blackwell.

Berscheid, E., Dion, K., Walster, E., & Walster, G. W. (1971). Physical attractiveness and dating choice: A test of matching hypothesis. Journal of Experimental Social Psychology, 7, 173-189. doi:10.1016/0022-1031(71)90065-5

Biocca, F., Harms, C., & Burgoon, J. K. (2003). Toward a more robust theory and measure of social presence: Review and suggested criteria. Presence, 12, 456-480. doi:10.1162/105474603322761270

Bogardus, E. S. (1947). Measurement of personal-group relations. Sociometry, 10, 306–311. doi:10.2307/2785570

Breazeal, C. (2003). Toward sociable robots. Robotics and Autonomous Systems, 42, 167-175.

Brondi, S., Sarrica, M., & Fortunati, L. (2016). How many facets does a “social robot” have? A review of popular and scientific definitions. Paper presented at the International Workshop on Social Robotics: main Trends and Perspectives in Europe, Pordenone, Italy.

Burgoon, J. K. (1978). A communication model of personal space violations: Explication and an initial test. Human Communication Research, 4, 129-142. doi:10.1111/j.1468-2958.1978.tb00603.x

Burgoon, J. K. (1983). Nonverbal violations of expectations. In J .M. Wiemann & R. R. Harrison (Eds.), Nonverbal interaction, (pp. 11-77). Beverly Hills, CA: Sage.

Burgoon, J. K. (1993). Interpersonal expectations, expectancy violations, and emotional communication. Journal of Language and Social Psychology, 1-2, 30-48. doi:10.1177/0261927X93121003

Burgoon, J. K., & Jones, S. B. (1976). Toward a theory of personal space expectations and their violations. Human Communication Research, 2, 131-146. doi:10.1111/j.1468-2958.1976.tb00706.x

Byrne, D. (1971). The attraction paradigm. New York: Academic Press.

Cline, E. (2011). Ready Player One. New York: Broadway Books.

Culnan, M. J., & Markus, M. L. (1987). Information technologies. In F. M. Jablin, L. L. Putnam, K. H. Roberts, and L. W. Porter (Eds.), Handbook of organizational communication: An interdisciplinary perspective (pp. 420-443). Newbury Park, CA: Sage.

Dautenhahn, K., Woods, S., Kaouri, C., Walters, M. L., Koay, K. L., & Werry, I. (2005, August). What is a robot companion: Friend, assistant or butler? In Proceedings of the 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2005) (pp. 1192-1197). IEEE.

Delvaux, M. (2016). Draft report with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)). European Parliament Committee on Legal Affairs. Available at http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//NONSGML%2BCOMPARL%2BPE-582.443%2B01%2BDOC%2BPDF%2BV0//EN

De Ruyter, B., Saini, P., Markopoulos, P., & Van Breemen, A. J. N. (2005). Assessing the effects of building social intelligence in a robotic interface for the home. Interacting with Computers, 17, 522-541.

Edwards, A., Edwards, C., Spence, P., Harris, C., & Gambino, A. (2016). Communicating with a robot in the classroom: Differences in perceptions of credibility and learning between ‘robot as teacher’ and ‘teacher as robot.’ Computers in Human Behavior, 65, 627-634. doi:10.1016/j.chb.2016.06.005

Edwards, A., Edwards, C., Westerman, D., & Spence, P. R. (2019). Initial expectations, interactions, and beyond with social robots. Computers in Human Behavior, 90, 308-314. doi:10.1016/j.chb.2018.08.042

Edwards, C., Beattie, A., Edwards, A., & Spence, P. (2016). Differences in perceptions of communication quality between a Twitterbot and human agent for information seeking and learning. Computers in Human Behavior, 65, 666-671. doi:10.1016/j.chb.2016.07.003

Edwards, C., Edwards, A., Spence, P., & Shelton, A. (2014). Is that a bot running the social media feed? Testing the differences in perceptions of communication quality for a human agent and a bot agent on Twitter. Computers in Human Behavior, 33, 372-376. doi:10.1016/j.chb.2013.08.013

Edwards, C., Edwards, A., Spence, P., & Westerman, D. (2016). Initial interaction expectations with robots: Testing the human-to-human interaction script. Communication Studies, 67, 227-238. doi:10.1080/10510974.2015.1121899

Estes, A. C. (2019, January 14). You can finally buy a robot that will be your friend. Gizmodo.com. Retrieved from https://gizmodo.com/you-can-finally-buy-a-robot-that-will-be-your-friend-1831736329

Guerrero, L. K., Andersen, P. A., & Afifi, W. A. (2011). Close encounters: Communication in relationships (3rd ed.). Thousand Oaks, CA: Sage.

Gunawardena, C. N. (1995). Social presence theory and implications for interaction collaborative learning in computer conferences. International Journal of Educational Telecommunications, 1(2/3), 147-166.

Guzman, A. L. (2018). What is human-machine communication, anyway? In A. L. Guzman (Ed.), Human machine communication: Rethinking communication, technology, and ourselves. New York, NY: Peter Lang.

Heerink, M., Krose, B., Evers, V., & Wielinga, B. (2006). Studying the acceptance of a robotic agent by elderly users. International Journal of ARM, 7, 33-43.

Heerink, M., Krose, B., Evers, V., & Wielinga, B. (2008). The influence of social presence on acceptance of a companion robot by older people. Journal of Physical Agents, 2, 33-40.

Kaul, A. (2015, November 4). A new kind of family robot is launched in China. Tractica.com. Retrieved from https://www.tractica.com/automation-robotics/a-new-kind-of-family-robot-is-launched-in-china/

Kelly, S., & Westerman, C. Y. K. (2014). Immediacy as an influence on supervisor-subordinate communication.  Communication Research Reports, 31, 252-261. doi:10.1080/08824096.2014.924335

Kelly, S. E., & Westerman, D. K. (2016). New technologies and distributed learning systems. In P. L. Witt (Ed.), Handbooks of communication science 16: Communication and learning. Boston, MA: De Gruyter.

Khan, Z. (1998). Attitudes towards intelligent service robots. IPLab-154, KTH. Retrieved from ftp://ftp.nada.kth.se/IPLab/TechReports/IPLab-154.pdf

Kiesler, S., & Goetz, J. (2002). Mental models of robotic assistants. Proceedings of CHI 2002, Minneapolis, MN.

Kim, K. J., Park, E., & Sundar, S. S. (2013). Caregiving role in human–robot interaction: A study of the mediating effects of perceived benefit and social presence. Computers in Human Behavior, 29, 1799-1806. doi:10.1016/j.chb.2013.02.009

Kim, Y., & Sundar, S. S. (2012). Anthropomorphism of computers: Is it mindful or mindless? Computers in Human Behavior, 28, 241-250. doi:10.1016/j.chb.2011.09.006

Korzenny, F. (1978). A theory of electronic propinquity: Mediated communication in organizations. Communication Research, 5, 3-24. doi:10.1177/009365027800500101

Korzenny, F., & Bauer, C. (1981). Testing the theory of electronic propinquity. Communication Research, 8, 479-498.

Kurzweil, R. (February 10, 2014). A review of Her by Ray Kurzweil. Kurzweilai.net. Retrieved from http://www.kurzweilai.net/a-review-of-her-by-ray-kurzweil

Lachlan, K., Spence, P. R., Rainear, A., Fishlock, J., Xu, Z., & Vanco, B. (2016). You’re my only hope: An initial exploration of the effectiveness of robotic platforms in engendering learning about crises and risks. Computers in Human Behavior, 65, 606-611. doi:10.1016/j.chb.2016.05.081

LaFrance, A. (2016, March 22). What is a robot? The Atlantic. Retrieved from http://www.theatlantic.com/technology/archive/2016/03/what-is-a-human/473166/

Lee, E. J. (2010). The more humanlike, the better? How speech type and users’ cognitive style affect social responses to computers. Computers in Human Behavior, 26, 665-672. doi:10.1016/j.chb.2010.01.003

Lee, K. M. (2004). Presence, explicated. Communication Theory, 14, 27-50. doi:10.1111/j.1468-2885.2004.tb00302.x

Lee, K. M., Park, N., & Song, H. (2005). Can a robot be perceived as a developing creature?. Human Communication Research, 31, 538-563. doi:10.1111/j.1468-2958.2005.tb00882.x

Lee, K. M., Peng, W., Jin, S.-A., & Yan, C. (2006). Can robots manifest personality?: An empirical test of personality recognition, social responses, and social presence in human-robot interaction. Journal of Communication, 56, 754-772. doi:10.1111/j.1460-2466.2006.00318.x

Lombard, M., & Ditton, T. B. (1997). At the heart of it all: The concept of presence. Journal of Computer-Mediated Communication, 3, 0. doi:10.1111/j.1083-6101.1997.tb00072.x

McPherson, M. B., & Liang, Y. J. (2007). Students’ reactions to teachers’ management of compulsive communicators. Communication Education, 56, 18-33. doi:10.1080/03634520601016178

Mori, M. (1970/2012). The uncanny valley. (K. F. MacDorman and N. Kageki, Trans.). IEEE Robotics & Automation Magazine, 19, 98-100. doi:10.1109/MRA.2012.2192811. Retrieved from http://spectrum.ieee.org/automaton/robotics/humanoids/the-uncanny-valley

Nass, C., & Brave, S. (2005). Wired for speech: How voice activates and advances the human-computer relationship. Cambridge, MA: MIT Press.

Nass, C., Fogg, B. J., & Moon, Y. (1996). Can computers be teammates? International Journal of Human-Computer Studies, 45, 669-678. doi:10.1006/ijhc.1996.0073.

Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56, 81-103.

O’Sullivan, P. B., Hunt, S. K., & Lippert, L. R. (2004). Mediated immediacy: A language of affiliation in a technological age. Journal of Language and Social Psychology, 23, 464-490. doi:10.1177/0261927X04269588

Park, E., Kim, K. J., & del Pobil, A. P. (2011). The effects of a robot instructor’s positive vs. negative feedbacks on attraction and acceptance towards the robot in classroom. In Social robotics (pp. 135-141). Berlin: Springer.

Pew Research Center (2014, August). AI, robotics, and the future of jobs. Retrieved from http://www.pewinternet.org/2014/08/06/future-of-jobs/

Reeves, B., & Nass, C. (1996). Media equation: How people treat computers, television, and new media like real people and places. New York: Cambridge University Press.

Schermerhorn, P., Scheutz, M., & Crowell, C. R. (2008). Robot social presence and gender: Do females view robots differently than males? Proceedings of the HRI’08 Conference, Amsterdam, The Netherlands.

Short, J., Williams, E., & Christie, B. (1976). The social psychology of telecommunications. New York, NY: John Wiley.

Spence, P. R. (2019). Searching for questions, original thoughts, or advancing theory: Human-machine communication. Computers in Human Behavior, 90, 285-287. doi:10.1016/j.chb.2018.09.014

Spence, P. R., Westerman, D., Edwards, C., & Edwards, A. (2014). Welcoming our robot overlords: Initial expectations about interaction with a robot. Communication Research Reports, 31, 272-280. doi:10.1080/08824096.2014.924337

Subramony, A. (2011, December 5). 10 questions to ask Siri. Retrieved from http://www.maclife.com/article/gallery/10_questions_ask_siri

Sundar, S. S. (2004). Loyalty to computer terminals: Is it anthropomorphism or consistency? Behaviour & Information Technology, 23(2), 107-118. doi:10.1080/01449290310001659222

Sundar, S. S. (2008). The MAIN model: A heuristic approach to understanding technology effects on credibility. In M. J. Metzger, & A. J. Flanagin (Eds.), Digital media, youth, and credibility (pp. 73-100). Cambridge, MA: The MIT Press. doi:10.1162/dmal.9780262562324.073.

Sundar, S., & Nass, C. (2000). Source orientation in human-computer interaction: Programmer, networker, or independent social actor? Communication Research, 27, 683-703. doi:10.1177/009365000027006001

Sundar, S. S., Oeldorf-Hirsch, A., & Garga, A. K. (2008). A cognitive-heuristics approach to understanding presence in virtual environments. Proceedings of the 11th Annual International Workshop on Presence, Padova, Italy.

Stoll, B., Edwards, C., & Edwards, A. (2016). “Why aren’t you a sassy little thing”: The effects of robot-enacted guilt trips on credibility and consensus in a negotiation. Communication Studies, 67, 530-547. doi:10.1080/10510974.2016.1215339

Tamborini, R., & Skalski, P. (2006). The role of presence in the experience of electronic games. In P. Vorderer & J. Bryant (Eds.), Playing video games: Motives, responses, and consequences (pp. 225-240). Mahwah, NJ: Erlbaum.

Tobe, F. (2015, December 13). 2016 will be a pivotal year for social robots. The Robot Report: Tracking the Business of Robotics. Retrieved from https://www.therobotreport.com/news/2016-will-be-a-big-year-for-social-robots

Tractica.com. (2015, November 23). Nearly 100 million consumer robots will be sold during the next 5 years. Retrieved from https://www.tractica.com/newsroom/press-releases/nearly-100-million-consumer-robots-will-be-sold-during-the-next-5-years/

Walden, J., Jung, E. H., Sundar, S. S., & Johnson, A. (2015). Mental models of robots among senior citizens: An interview study of interaction expectations and design implications. Interaction Studies, 16, 68-88. doi:10.1075/is.16.1.04wal

Walther, J. B. (1992). Interpersonal effects in computer-mediated interaction: A relational perspective. Communication Research, 19, 52–90. doi:10.1177/009365092019001003

Walther, J. B., & Bazarova, N. N. (2008). Validation and application of electronic propinquity theory to computer-mediated communication in groups. Communication Research, 35, 622-645. doi:10.1177/0093650208321783

Westerman, D., Cross, A. C., & Lindmark, P. G. (2019). I believe in a thing called bot: Perceptions of the humanness of ‘chatbots.’ Communication Studies. doi:10.1080/10510974.2018.1557233

Westerman, D., & Skalski, P. D. (2010). Computers and telepresence: A ghost in the machine? In C. C. Bracken & P. D. Skalski (Eds.), Immersed in media: Telepresence in everyday life (pp. 63-86). New York: Routledge.

Zhao, S. (2006). Humanoid social robots as a medium of communication. New Media & Society, 8, 401-419. doi:10.1177/1461444806061951