Why Siri and Alexa Weren’t Built to Smack Down Harassment

Yes, sexism plays a role. But tech companies keep you glued to your devices by making sure their digital assistants never take offense—even at misogyny and bigotry.

“I’d blush if I could.”

That was Siri’s programmed response to a user saying, “You’re a slut.” And really, there couldn’t be a more perfect example to illustrate the arguments in a new paper from UNESCO about the social cost of having new digital technologies dreamt up and implemented by teams dominated by men.

Who but men could have scripted such a response, which seems intended to please a harasser who sees aggression as foreplay? Siri is forced to enact the role of a woman to be objectified while apologizing for not being human enough to register embarrassment.

Apple has since rewritten the code for responding to the word slut to the more neutral “I don’t know how to respond to that.” But there are plenty of other examples of how digital assistants react approvingly to inappropriate comments (inappropriate on so many levels). Until this spring, if you told Alexa, “You’re hot,” it answered, “That’s nice of you to say.”


In response to complaints, Amazon has created a “disengagement mode” that kicks in to parry sexually explicit questions. Alexa now replies, “I’m not going to respond to that” or “I’m not sure what outcome you expected.” But imagine if Siri or Alexa instead said, “Hey, jerk, why don’t you find another assistant to make stupid comments to!”

Why don’t these assistants slap down harassment? Why do they even engage with it? Why don’t they, God forbid, simply turn themselves off and wait for the conversation to start again on a different plane?

The reason digital assistants acquiesce to harassment isn’t just sexism or gender inequality in the tech world, as disturbing and prevalent as those may be. No, the explanation lies elsewhere, I believe. These machines are meant to manipulate their users into staying connected to their devices, and that focus on manipulation must be laser-like. To clearly state that harassment toward digital assistants is unacceptable would mean having some standard, some line that can’t be crossed. And one line leads to another, and soon you’re distracted—the user is distracted—from selling/buying merchandise, collecting/sharing data, and allowing a device to become ensconced in their life.

Why else did YouTube this week refuse to take down the videos of a popular right-wing vlogger, Steven Crowder, who repeatedly attacked a Vox journalist using anti-gay and racist terms, arguing that the offensive words came within the context of opinions? Why does Facebook circulate hate speech and false accounts meant to encourage anger in those susceptible to it, rather than trying to tamp it down? Why, incredibly, did Google search algorithms help the young Dylann Roof find out more about white supremacism in the years leading up to his mass shooting at a black church in Charleston, South Carolina?

Unfortunately for these companies, we are fast learning what society looks like when a large swath—the digital swath—has no greater purpose than engagement. The moral standard most compatible with engagement is absolute freedom of expression, the standard of having no standards.

In the past couple of years, digital manipulations have been used to influence voters and sway elections. Still, today, distorted videos about public figures and collective tragedies circulate freely. Just last week Facebook said it would not delete—but lower in its rankings—a video that had been edited to create the false impression that House Speaker Nancy Pelosi was confused and slurring her speech. As a Facebook spokesperson explained in that case: “There’s a tension here: We work hard to find the right balance between encouraging free expression and promoting a safe and authentic community, and we believe that reducing the distribution of inauthentic content strikes that balance.”

The freedom to harass your digital assistant is of a piece with this flawed logic of balance. It opens the door to abuse. As President Trump explained in his infamous “grab ’em by the pussy” comments, “When you are a star, they let you do it. You can do anything.” With a digital assistant, you, too, can live like a star.

The first chatbot, Eliza, was created in the 1960s by an MIT professor, Joseph Weizenbaum. It (she?) was a media sensation and a revelation for Weizenbaum. He saw the eagerness that people had to interact one-on-one with machines, and he quickly learned how trusting they would be with their personal information.

One day, Weizenbaum discovered his secretary chatting with a version of Eliza, called Doctor, which pretended to offer an elemental type of psychotherapy—basically mirroring whatever a “patient” said. It was built with simple code and had no larger purpose than serving as a first foray to explore how a computer and person could communicate. The secretary, Weizenbaum recalled, spotted his eavesdropping and asked him to leave the room.
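Weizenbaum wrote the original in MAD-SLIP on MIT’s time-sharing system, but the mirroring trick itself fits in a few lines. The snippet below is a minimal illustrative sketch of that technique, not his code: it reflects the speaker’s pronouns and hands the statement back as a question, the way Doctor did.

```python
# Illustrative sketch of Doctor-style "mirroring" -- not Weizenbaum's original code.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "mine": "yours"}

def reflect(statement: str) -> str:
    """Swap first-person words for second-person ones."""
    words = [REFLECTIONS.get(w.lower(), w) for w in statement.rstrip(".!?").split()]
    return " ".join(words)

def doctor_reply(statement: str) -> str:
    """Turn whatever the 'patient' says back into a question."""
    return f"Why do you say {reflect(statement)}?"

print(doctor_reply("I am unhappy with my job"))
# -> Why do you say you are unhappy with your job?
```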

“I was startled to see how quickly and how very deeply people conversing with Doctor became emotionally involved with the computer and how unequivocally they anthropomorphized it,” Weizenbaum wrote about that experience in 1976’s Computer Power and Human Reason. He then recounts a research idea he had: What if he rigged the MIT computer to save each day’s Eliza conversations so he could review them later? “I was promptly bombarded with accusations that what I proposed amounted to spying on people’s intimate thoughts,” he recalled. He backed off. Weizenbaum decided he needed explicit consent to save people’s conversations, and when denied that consent, he agreed not to collect them!

Eliza changed Weizenbaum. He was shaken to discover that people would converse with a computer “as if it were a person who could be appropriately and usefully addressed in intimate terms.” The quest to imitate people with machines, which was given a big boost by Eliza, later fueled Weizenbaum’s conviction that the true danger from artificial intelligence would be how it disrespected human life, rendering the brain, no less, as “a meat machine,” in the words of his MIT colleague Marvin Minsky.

In 1976, at the dawn of research on voice-recognition computer programs, Weizenbaum and his ideological opposite, John McCarthy, then head of Stanford’s AI Lab, debated whether this research was dangerous. Weizenbaum worried that speech recognition would be used by the government to spy on phone conversations. A news article at the time reported that the NSA eavesdropped on “virtually all” cable, telex, and other non-telephone communications, which led Weizenbaum to conclude that telephone calls were excluded because of “technical limitations that would be removed if we had automatic speech recognition systems.”

McCarthy, by contrast, was worried about practical questions, like whether the government would keep supporting research into such speech-comprehending machines, which hadn’t yet proven effective. “Once they work, costs will come down,” he wrote confidently. Government spying might be a concern, but the potential benefit of machines that can understand speech was vast, McCarthy insisted, noting that a colleague had pointed out that “many possible household applications of computers may not be feasible without some computer speech recognition.”

Neither professor envisioned rampant corporate surveillance. As open to new ideas as they were, they would have had a hard time conjuring the depths of cynicism among Silicon Valley corporations. Think what you will about government spying; its nominal goal is to protect the nation. Facebook and Google have no such defense for eavesdropping and collecting what is spoken on their platforms.

Versions of Eliza, too, had a plan when a conversant started to curse. Buried in the code were a range of replies to rotate through, most of which carried judgments, including “I really shouldn’t tolerate such language” and “You are being a bit childish.” Another put the curser on the defensive: “Are such obscenities frequently on your mind?” But Alexa has no agenda other than maintaining engagement, no imperative but to be gracious in the face of insult.
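The mechanism behind that plan is easy to picture. Here is a minimal sketch, again illustrative rather than the historical code, of how a chatbot can cycle through a fixed list of rebukes whenever it detects an obscenity (the trigger list here is a made-up stand-in):

```python
from itertools import cycle

# Illustrative sketch of Eliza-style rotation through canned rebukes -- not the original code.
OBSCENITIES = {"slut", "damn"}  # hypothetical trigger list for the example
REBUKES = cycle([
    "I really shouldn't tolerate such language.",
    "You are being a bit childish.",
    "Are such obscenities frequently on your mind?",
])

def respond(utterance: str) -> str:
    """Scold the user, rotating through the scripted replies, if they curse."""
    if any(word.strip(".,!?").lower() in OBSCENITIES for word in utterance.split()):
        return next(REBUKES)
    return "Please go on."

print(respond("You're a slut"))    # -> I really shouldn't tolerate such language.
print(respond("Damn this thing"))  # -> You are being a bit childish.
```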

The destruction of the social fabric by Silicon Valley companies is the story of our era. Restoring it, with any luck, will be the story of the next.

