Jesse Fox

Communication, singular


Human or computer? It matters if you’re trying to influence users

There are currently two general schools of thought on how people interact with computers. The first is the media equation, conceived by Byron Reeves and Cliff Nass (1996), which later evolved into the computers as social actors (CASA) framework (see Nass & Moon, 2000). The media equation and CASA both suggest that we mindlessly process media and computer-based interactants. Evolution didn’t design us to cope with this thing that is COMPUTER or TELEVISION, so the argument is that we process those stimuli the same way we would if they weren’t mediated. For example, we interpret a person on a television screen or a partner in a text chat the same way we would interpret any person. The media equation explains a vast array of phenomena, including why horror movies are scary and why we cuss out our laptops when they malfunction: we respond to media in a very human-like way.

Another perspective, although conceived as constrained to a certain context and domain, is Blascovich’s (2002) model of social influence in virtual environments. Blascovich suggests that when a computer-mediated entity is believed to be controlled by a human, it is more influential than when it is believed to be controlled by a computer. The perception of human control, or agency, is thus key to persuasion in virtual environments. Blascovich offers a second route, however: behavioral realism is an important factor that interacts with our perception of agency. If we think we’re interacting with a human, the representation doesn’t necessarily have to be realistic. If we think we’re interacting with a computer, however, it needs to behave realistically and human-like for us to be affected the same way. Blascovich’s model doesn’t really tackle mindless versus mindful processing, but it does provide a contrasting expectation to CASA in terms of how we may respond to computer-mediated representations.

Really, these questions come down to a type of Turing test. As more interactants (email spammers, robo-dialers, Twitter bots, NPCs, etc.) are controlled by algorithms, it becomes important to study the conditions under which people understand and respond to these entities as humans, when they conceive of them as computers, and what impact that has on communicative outcomes.

As virtual environment researchers, we wanted to test these contrasting predictions to see whether there were differences in how people respond to visual avatars (i.e., virtual representations controlled by humans) versus agents (i.e., virtual representations controlled by computers). (Although you’ll note the conspicuous absence of CASA in the paper, as a reviewer insisted that it didn’t fit there…never mind that we discussed the project and our contrasting hypotheses with Cliff…*sigh*) So, we gathered every paper we could find on visual virtual representations that manipulated whether people thought they were interacting with a person or a computer. These included studies that examined differences in physiological responses, experiences such as presence, or persuasive outcomes (e.g., agreeing with a persuasive message delivered by the representation). One study, for example, measured whether people were more physiologically aroused when they believed they were playing a video game against a computer or a human. Another study measured whether people performed a difficult task better if they thought a human or a computer was watching them.

What we found is that, on the whole, avatars elicit more influence than agents. These effects are more pronounced when using desktop environments and objective measures such as heart rate. We anticipate that immersive environments may wash out some effects of agency because of higher levels of realism. Another finding was that agency made more of a difference if people were co-participating in a task with the representation, whether cooperating or competing. Perhaps having an outcome contingent on the other’s performance made it more meaningful to have a person in that role than a computer. A final finding is that when both conditions were actually controlled by a human, as opposed to both being actually controlled by a computer, agency effects were greater. So, there is something to be said for a sort of subconscious Turing test, wherein people can somehow tell when they are interacting with computers even though they don’t explicitly think about it.
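For readers who want a feel for the mechanics behind pooling results like these, here is a minimal sketch of random-effects meta-analytic pooling using the standard DerSimonian-Laird estimator. To be clear, this is an illustration of the general technique, not our paper’s actual analysis pipeline, and the effect sizes below are made-up placeholders, not our data.

```python
# A minimal sketch of random-effects meta-analytic pooling (DerSimonian-Laird).
# Placeholder effect sizes, NOT data from the paper; by convention here,
# positive g = avatars more influential than agents.
import math

def pool_random_effects(effects, variances):
    """Pool study-level effect sizes under a random-effects model."""
    w = [1.0 / v for v in variances]                # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q and the DL estimate of between-study variance (tau^2)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    w_re = [1.0 / (v + tau2) for v in variances]    # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, se

g = [0.45, 0.20, 0.65, 0.10, 0.38]  # hypothetical per-study Hedges' g
v = [0.04, 0.09, 0.05, 0.12, 0.06]  # hypothetical sampling variances
est, se = pool_random_effects(g, v)
print(f"pooled g = {est:.2f}, 95% CI [{est - 1.96 * se:.2f}, {est + 1.96 * se:.2f}]")
```

Moderator analyses (desktop vs. immersive, co-participation, actual human vs. computer control) amount to running this kind of pooling within subgroups and comparing the estimates.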

What This Means for Research Design

Our findings have a lot of relevance to how we interact with computers and humans, but you can read more about that in the paper. What I want to draw attention to is the importance of these findings for research, as they may extend to any number of technological domains. Tech scholars often run experiments in which participants text chat, interact in a VE with someone, or play a video game, and the researchers test some influence effect of this interaction. Our findings indicate it is imperative that you clarify whom participants are interacting with, even if it seems obvious. Second, it is important that they believe it. If you don’t clarify this, or if your participants aren’t buying your manipulation, you are probably going to be stuck with weird variance in your data that you can’t explain.

The problem is that directly asking people what they thought isn’t the best approach. As Reeves and Nass note in the media equation, if you ask someone outright whether they are treating a computer like a human, they’ll look at you like you’re nuts; but that doesn’t mean they won’t treat the computer like a human. Further, if you ask someone whether they thought they were interacting with a person or a computer, it may never have occurred to them that they weren’t interacting with a person; but now that you’ve introduced the idea, they’d feel dumb admitting they didn’t know, so they’re going to say “computer.” Or you’ll get them reflecting on the task, and they’ll suddenly recognize that the mechanistic responses did seem an awful lot like a computer, so they’ll report “computer” even though they didn’t recognize this at the time of the task. Thus, direct questions aren’t the greatest way to parse this out.

My advice is to use a funneling technique, preferably in a verbal debriefing. You might start by asking what they thought the study was about, and then, based on the design, ask relevant questions (e.g., ask about their feelings toward their text chat partner, or ask what they thought about the other player’s style of play). One thing to note is the use of pronouns (“she was…,” “he was…”), which indicates at least some acceptance of the interactant as human. Then, keep probing: “Do you think your partner/opponent/etc. was acting strange at any point, or did they do anything you wouldn’t normally expect?” This question is broad enough that it shouldn’t immediately point to the partner being a computer, but it might get them thinking in that direction. If they don’t say anything about it being a computer, I’d say you can be pretty confident they bought the manipulation and believed they were interacting with a person. You can wrap up with more direct questions: “At any time, did you think you were interacting with a computer rather than a human?” The feedback you get will also be helpful in designing future studies or scripts to eliminate this variance.
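If it helps to see the funnel laid out, here is a toy sketch of how you might script and log it. The question wording and the crude “computer” keyword flag are my illustrative assumptions, not a validated protocol; in a live verbal debriefing you would probe and follow up rather than rely on keyword matching.

```python
# A toy sketch of logging a funneled debriefing, moving from broad to direct
# questions. Wording and the "computer" keyword flag are illustrative
# assumptions, not a validated instrument.
FUNNEL = [
    "What did you think the study was about?",
    "What did you think about your partner's style of play?",
    "Did your partner act strange at any point, or do anything you wouldn't normally expect?",
    "At any time, did you think you were interacting with a computer rather than a human?",
]

def run_debriefing(participant_id):
    """Ask each question in order and flag the first mention of 'computer'."""
    record = {"id": participant_id, "answers": [], "first_suspicion": None}
    for i, question in enumerate(FUNNEL):
        answer = input(f"{question}\n> ")
        record["answers"].append(answer)
        if record["first_suspicion"] is None and "computer" in answer.lower():
            record["first_suspicion"] = i  # earlier suspicion = stronger case for exclusion
    return record

if __name__ == "__main__":
    print(run_debriefing("P001"))
```

Logging where in the funnel suspicion first surfaced gives you a principled basis for excluding (or at least flagging) participants who didn’t buy the manipulation.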

You can check out the paper through the link below:

Fox, J., Ahn, S. J., Janssen, J. H., Yeykelis, L., Segovia, K. Y., & Bailenson, J. N. (in press). A meta-analysis quantifying the effects of avatars and agents on social influence. Human-Computer Interaction. doi: 10.1080/07370024.2014.921494

Testing gamification: Effects of virtual leaderboards on women’s math performance

Gamification is one of those tech trends that exploded onto the scene. TED talks, mass market books, love letters in esteemed publications, and all that jazz. But scientific evidence? Not so much. The problem with gamification is the huge gaping hole where empirical support should be. See, there’s not a lot of actual scientific research on the effects of gamification, and what’s there isn’t exactly a ringing endorsement. (Check out the meta-analysis by Hamari, Koivisto, & Sarsa, 2014.)

I’ve worked with a couple of grad students on gamification-related projects, focusing specifically on the more popular applications of leaderboards and badges. Kate Christy and I were curious whether leaderboards might trigger stereotype threat or social comparison. We placed women in a virtual classroom, where they saw a leaderboard dominated by women’s names, a leaderboard dominated by men’s names, or no leaderboard; after viewing the board, they took a math quiz. Stereotype threat would suggest women would perform worse after viewing the male-dominated board; social comparison approaches would suggest they would perform worse after viewing the female-dominated board.

Our study revealed that women performed worse on the math quiz after seeing the female-dominated board than after seeing the male-dominated board. Yet the female-dominated board promoted higher levels of academic identification than the male-dominated board or no board at all. In terms of application, these findings are a little frustrating: they suggest female-dominated leaderboards are bad for performance yet good for academic identification, and the opposite is true for male-dominated leaderboards. Clearly more research is needed to determine the effectiveness of leaderboards among different populations and in different contexts, but this provides a cautionary note.

You can check out the full article for free here until September 30.

This isn’t to say gamification doesn’t hold promise; it is to suggest that dropping proven educational methods in the name of fun, without any evidence of the replacement’s effectiveness, is a bad idea. It’s also to suggest that gamification is a really broad term, and some forms may be better than others. And just like any treatment or method, especially with educational outcomes hanging in the balance, scientific evidence should precede practice.

Reality check: Media misrepresentation of the sexualized avatar study

This morning started off with another media-induced facepalm. Thanks to Owen Good at Kotaku for drawing this to our attention.

One of the downsides of being a scientist is having your work misrepresented in the media. It’s bad enough when you can tell a report or story was written based on another story that’s based on another story. Worse is when they bring in “experts” to sensationalize it (especially when they don’t bother contacting the actual researchers for an accurate statement). Fox News aired a segment on my recently published study on the negative effects of sexualized avatars. Here is what is wrong with it.

1. They say we studied “young girls,” and the discussion centers on children. We studied adult women (none younger than 18), which is clearly noted in the article.

2. At another point, they say we studied “gamers.” Video game play varied among this sample; participants reported playing between 0 and 25 hours of games a week (M = 1.29, SD = 3.70). Also, they were not asked whether they self-described as gamers.

3. Participants did not choose avatars for this study; they were randomly assigned to conditions to ensure the validity of the manipulation. (Read this if you don’t understand the importance and implications of random assignment for experiments.) I am researching choice in other studies because there could be a self-selection bias at play when it comes to gaming (e.g., women who self-objectify may be more or less likely to choose sexualized avatars).

4. The simulation was not a video game. It was an interaction with objects in a fully immersive virtual environment and a social interaction with a male confederate represented by an avatar. Further research with gaming variables is necessary because those sorts of elements change outcomes in unexpected ways, as my grad student Mao Vang and I recently found when we had White participants play a game alongside Black avatars.

5. What is most wrong is that their “expert” is not a scientist or even a gaming researcher, but a self-described life coach. I looked her up, and there is nothing on her website about her scientific education or experience. Rather, it is a business site populated with slogans like “Let’s Get it, #Go and MAKE THAT DOUGH!” (Random hashtags, arbitrary capitalization, and outdated slang all preserved from the original.) There’s no indication that she knows anything about science, virtual environments or video games, or girls’/women’s development or psychology. It’s clear from her interview that she did not read the article; I love that Owen Good at Kotaku points out that she swipes lines from another journalist’s writeup at Time.

Perhaps the most egregious transgression is when she puts words in our mouths with this gem: “But they say that this is, like, even worse than watching Miley Cyrus twerk.” NO, WE DIDN’T SAY THAT. NOR WOULD WE. EVER. That statement was made by the Time reporter and was rightfully not attributed to us.

Such are the frustrations of the scientist.

The consequences of wearing sexualized avatars

Given the recent media attention, I figured it would be wise to give a rundown of our experiment, recently published in Computers in Human Behavior, on the negative consequences of wearing sexualized avatars in a fully immersive virtual environment. Although several media sources are framing this as a study involving video games, there was no gaming element to it. (I expect the findings to apply to video games, but there are different variables to consider in those settings.)

A fully immersive environment. The user is in a head-mounted display (HMD) and can only see the virtual world, which changes naturally as she moves.

This study was conducted while I was still at Stanford working in the Virtual Human Interaction Lab. I conceived the study as a follow-up to a previous study whose results had puzzled me. In that study (published in Sex Roles), men and women were exposed to stereotypical or nonstereotypical female virtual agents, and we found that stereotypical agents promoted more sexism and rape myth acceptance than nonstereotypical agents. I was surprised because we found no difference between men’s and women’s attitudes. Thus, I wanted to investigate further to see how virtual representations of women affected women in the real world.

In the current experiment, we placed women in either sexualized or nonsexualized avatar bodies. We used participants’ photographs (taken several weeks earlier for a presumably unrelated study) to build photorealistic representations of them in the virtual environment. Thus, when a woman entered the VE, she saw either her own face or another person’s face attached to a sexualized or nonsexualized avatar body.
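As noted in the post above, assignment to these four cells was random. For anyone unfamiliar with why that matters, here is a minimal sketch of balanced random assignment; the condition labels are mine, chosen for illustration, not the study’s internal names.

```python
# A minimal sketch of balanced random assignment to the 2 (body: sexualized
# vs. nonsexualized) x 2 (face: self vs. other) design. Condition labels are
# illustrative placeholders.
import random

CONDITIONS = ["sexualized_self", "sexualized_other",
              "nonsexualized_self", "nonsexualized_other"]

def assign(participant_ids, seed=2013):
    """Shuffle, then deal participants evenly across cells so that
    self-selection (e.g., who would *choose* a sexualized avatar)
    cannot drive condition differences."""
    ids = list(participant_ids)
    random.Random(seed).shuffle(ids)
    return {pid: CONDITIONS[i % len(CONDITIONS)] for i, pid in enumerate(ids)}

print(assign([f"P{n:03d}" for n in range(1, 9)]))
```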

Atomic II avatars, courtesy of the Complete Characters database.

In the virtual environment, women performed movements in front of a mirror so that they could observe the virtual body and experience embodiment (i.e., feel like they were really inside the avatar’s body). Afterwards, they had a brief interaction with a male avatar, and then we measured their state self-objectification through the Twenty Statements Test, which is simply 20 blanks starting with “I am ______.” Then we told them they would be participating in a second, unrelated study. They were allowed to pick a number that “randomly” redirected them to an online study; all participants were redirected to “Group C,” which was framed as a survey on social attitudes. The rape myth acceptance items were masked in a long survey among other items.
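As a rough illustration of how Twenty Statements Test responses get quantified, here is a toy scorer that counts body- and appearance-focused completions. Studies like this rely on trained human coders, and the keyword list below is my own stand-in for illustration, not the study’s coding scheme.

```python
# A toy scorer for the Twenty Statements Test: the proportion of "I am ..."
# completions that focus on the body or appearance. Real studies use trained
# human coders; this keyword list is an illustrative stand-in.
BODY_TERMS = {"body", "thin", "fat", "sexy", "attractive", "pretty",
              "beautiful", "curvy", "skinny"}

def self_objectification_score(completions):
    """Fraction of completions mentioning a body/appearance term."""
    hits = sum(any(t in c.lower() for t in BODY_TERMS) for c in completions)
    return hits / len(completions)

print(self_objectification_score(["a student", "attractive", "tired", "thin"]))
```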

We found that women who embodied a sexualized avatar showed significantly greater self-objectification than women who embodied a nonsexualized avatar. Furthermore, and counter to predictions, women who embodied an avatar that was sexualized and looked like them showed greater rape myth acceptance.

This second finding was puzzling; I thought seeing oneself in a sexualized avatar would make participants more sympathetic. Our best explanation was that seeing oneself sexualized perhaps triggered a sense of guilt and self-blame that then promoted greater acceptance of rape myths.

Since this study I have conducted two other studies (both using nonimmersive environments to make sure that it is not just the high-end technology yielding these results) that support these findings. Feel free to contact me if you’re interested in those papers.

You can find media coverage of this study at the links below:

* Original story by Cynthia McKelvey for Stanford News Service (also posted at PhysOrg)

* Video Games’ Sexual Double Standard May Have Real-World Impact by Yannick LeJacq at NBC News

* How Using Sexualized Avatars in Video Games Changes Women by Eliana Dockterman at Time

* Using a Sexy Video Game Avatar Makes Women Objectify Themselves by Shaunacy Ferro at Popular Science

* The Scientific Connection Between Sexist Video Games and Rape Culture by Joseph Bernstein at Buzzfeed

So there’s that.

Interestingly, the media have just picked up on a study published earlier this year that I ran while I was still at Stanford. Co-authored with my advisor, Dr. Jeremy Bailenson, and undergraduate research assistant Liz Tricase, the study found negative effects for women who embodied sexualized avatars in a fully immersive environment. Jeremy saw the article online at phys.org and captured this screenshot. *facepalm* (Note the ad to the right, telling you to “Create Your Hero Now.”)

The article itself is an excellent writeup by Cynthia McKelvey. Feel free to check out the study itself here.
