The report, titled I’d Blush If I Could, was produced by UNESCO to study the effects of the growing trend toward gendered artificial intelligence. It takes its name from what was, until recently, Siri’s response to comments that would amount to little more than street harassment in the real world. And let’s just say, the researchers didn’t find a lot of good news.
How Are Virtual Assistants Sexist?
The UN report is nothing if not extensive, delving thoroughly into exactly which aspects of virtual assistants are sexist in their depiction of women, on whom their names and voices are clearly modeled. Here are a few specific examples from the report.
Humanized Backstories
It’s difficult not to notice that almost all leading virtual assistants are modeled on women, in name and voice. But it’s not only a name and a voice that’s attributed to the technology. Creative teams are paid to develop in-depth backstories for these “women” in order to humanize them and help the AI express itself in a more satisfying and familiar way. And some clear themes emerge in these character developments: sex appeal and submissiveness. The name Siri means “beautiful woman who leads you to victory” in Norse. Cortana is based on a “sensuous unclothed woman” from the video game Halo. Even Ava, a customer-help virtual agent developed by Autodesk, is described as a “twenty-something with smooth skin, full lips, long hair, and piercing brownish-blue eyes…. servile, obedient, and unfailingly polite.”
Arguably, these character traits are not problematic on their own, and you might well expect a virtual assistant to be obedient in character. The problem is that such ideas are applied so consistently and exclusively to depictions of women. What’s more, if the purpose of humanizing the technology is to fit into society more naturally, surely it would do better to represent our diverse reality than to reawaken the rigid, stereotypical expectations of women from the 1950s. The creators of these feminine characters are quick to excuse themselves from any blame or negative associations, however, reminding us that these assistants are technically genderless. But claiming the assistants are technically genderless is not a “get out of jail free” card for their creators until the other aspects of their “personalities” are made genderless too.
Response to Abuse
The problem is all the more evident in virtual assistants’ incredibly inappropriate responses to abuse and harassment. As the report’s comparison of voice responses shows, the typical reply to these unsolicited statements, remarks that would be considered creepy or predatory if said to a real person, is kind, playful, and at times flirtatious. They may seem harmless on the surface, but there’s no telling what long-term psychological effects these responses could be having on male users. To make matters worse, the report found that responses to women engaging in the same cruel behavior were notably less encouraging (“That’s not nice” or “I’m not that kind of personal assistant”). This further fosters a problematic “boys will be boys” attitude that users might take to heart more than should be encouraged, and there has been almost no progress in fixing it over the technology’s eight years of existence. For an industry that is constantly evolving, this stagnant response to outdated gender stereotypes speaks volumes, particularly when you consider what is fueling the sexism.
(Not) Listening to Women
Because virtual assistants are mostly voice-activated, listening is just as important as responding. The ability to hear and understand exactly what a user wants is integral to the device’s primary function. Unfortunately, virtual assistants exhibit sexist tendencies here too: they’re far less likely to understand female users on a consistent basis. Recognizing lower-pitched voices more reliably than higher-pitched ones might seem like an arbitrary technical quirk, but it’s the result of decisions most likely made by teams consisting predominantly of men. And that’s a big part of the problem.
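To make the claim concrete, here is a minimal sketch of the kind of audit this passage implies is missing: measuring whether a speech recognizer's word error rate differs between higher-pitched and lower-pitched speakers. The recognizer output, the group labels, and the toy transcripts below are all hypothetical placeholders, not data from the report.

```python
from collections import namedtuple

# One transcribed utterance: the speaker group, what was actually said,
# and what a (hypothetical) speech recognizer produced.
Sample = namedtuple("Sample", ["group", "reference", "hypothesis"])

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Standard WER: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

def wer_by_group(samples):
    """Average WER per speaker group, e.g. higher- vs lower-pitched voices."""
    totals, counts = {}, {}
    for s in samples:
        totals[s.group] = totals.get(s.group, 0.0) + word_error_rate(s.reference, s.hypothesis)
        counts[s.group] = counts.get(s.group, 0) + 1
    return {group: totals[group] / counts[group] for group in totals}

if __name__ == "__main__":
    # Made-up transcripts purely to show the shape of the audit.
    samples = [
        Sample("lower-pitched", "set a timer for ten minutes", "set a timer for ten minutes"),
        Sample("higher-pitched", "set a timer for ten minutes", "set a time for tim minutes"),
    ]
    print(wer_by_group(samples))
```

A gap between the two averages on a large, balanced test set would be exactly the sort of evidence that a predominantly male development team is unlikely to go looking for.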
What Is Fueling This Sexism?
The gender gap is a well-established and widely researched fact, particularly in tech. The under-representation of women is notable across the entire industry, from computing jobs, where women hold only 1 in 4 positions, to Silicon Valley startup founders, of whom women make up only 12 percent. As you’d expect, artificial intelligence follows suit with the rest of the tech industry. According to the UN report, only 12 percent of AI researchers and 6 percent of AI software developers are women, which is a driving factor in virtual assistants’ sexist behavior. It’s a painfully obvious and yet hardly addressed problem in tech, and the repercussions are finally beginning to come to light. Until the demographics of the tech community begin to reflect the demographics of the real world, problems like gendered AI are going to keep popping up.
Why It Matters
As the UN report demonstrated, gendered AI is far from a random and harmless decision about what kind of voice these virtual assistants will employ. The long-term ramifications are far-reaching and significant, particularly when you consider what it could be doing to the mentality of the men and women who use these assistants on a regular basis. It’s a simple concept: treat virtual “women” a certain way, and that behavior will manifest in the real world with real women. This is why considering the ramifications of small tech decisions is so important: it rarely stops there. With more and more advanced technology being created and programmed every day, considering the future is more important than ever before. Understanding the problem and why it matters is a great first step toward making sure these kinds of tech problems don’t become increasingly difficult to solve. However, tech companies are going to need to take action to make any kind of meaningful impact.
How Can Tech Companies Fix This?
Flirtation with virtual assistants on the assumption that they are female has become so commonplace that it is often the subject of humor. (Image source: Dilbert Comics, April 2019.)
As the report points out, though, blindly following the commands of consumers is perhaps the most fool-proof way of corrupting your product for the sake of a few extra dollars, particularly when the research you’re conducting is designed to make you sound right. Consequently, the first step in fixing the problem is for tech companies to acknowledge that there is one. Hopefully, this UN report will make it easier for AI engineers, developers, and CEOs to take action and study exactly how gender bias factors into virtual assistant programming. If a UN report doesn’t do the trick, which is fairly likely given the amount of money to be made from virtual assistants, there are other organizations pushing these companies to do the right thing whether they want to or not. There are more concrete approaches to solving the problem, from genderless voices to politeness checks, but the real solution is going to be closing the gender gap in tech. Regardless of how many quick fixes and one-minute solutions the tech industry can muster for its many gender-based problems, the true fix will come in the form of a diversified workforce working together to address the needs of everyone, rather than just well-educated, straight, white men. Unfortunately, until the gender gap is addressed, there are going to be more tone-deaf tech innovations on the horizon.
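For readers wondering what a “politeness check” could look like in practice, here is a minimal sketch under stated assumptions: a hypothetical assistant pipeline screens each utterance for harassment before routing it to its normal skills, and answers with a firm, consistent refusal instead of a playful deflection. The phrase list, the responses, and the handle_request and route_to_skills functions are all illustrative inventions, not any vendor’s actual implementation.

```python
from typing import Optional

# Illustrative examples of the kind of unsolicited comments the report studied.
HARASSING_PHRASES = {
    "you're hot",
    "you're sexy",
}

FIRM_REFUSAL = "That language isn't appropriate. Is there something I can help you with?"

def politeness_check(utterance: str) -> Optional[str]:
    """Return a firm refusal if the utterance looks like harassment, else None."""
    text = utterance.lower().strip()
    if any(phrase in text for phrase in HARASSING_PHRASES):
        return FIRM_REFUSAL
    return None

def route_to_skills(utterance: str) -> str:
    """Placeholder for the assistant's real skill-routing pipeline."""
    return f"(normal assistant behaviour for: {utterance!r})"

def handle_request(utterance: str) -> str:
    """Hypothetical entry point: run the check first, then fall through to normal skills."""
    refusal = politeness_check(utterance)
    if refusal is not None:
        return refusal
    return route_to_skills(utterance)

if __name__ == "__main__":
    print(handle_request("You're hot"))                  # firm refusal, not flirtation
    print(handle_request("Set a timer for ten minutes")) # normal behaviour
```

Even this crude keyword version makes the design point: the flirtatious replies the report documents are a product choice, not a technical necessity.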