Future Tense

Alexa, What Is a Conflict of Interest?

Digital assistants are both friend and sales robot.

Photo illustration by Slate: a distressed woman surrounded by Amazon Echos. Photos by Amazon and Getty Images Plus.

“That sounded like a tricky conversation, John. Shall I play some Black Eyed Peas to cheer you up?” As John’s mood lifts—he knows the Black Eyed Peas aren’t cool, but his digital assistant knows he has a weakness for them—the assistant continues: “John, shall I schedule a test drive for the car you’ve been looking at lately?”

It’s a scenario that could happen in the near future. Whether Alexa or Google Assistant wins the battle to fill our homes with artificial intelligence, we humans will develop personal and emotional relationships with our new gadgets. That will spawn a vast new conflict of interest: the dual roles of companion and sales associate, a one-two punch in which fulfilling our emotional needs softens us up for commercial appeals.

The business models for the leading digital assistants rest on e-commerce and advertising. The A.I. will learn from billions of conversations to create powerful new persuasive methods. You might remember how Google’s A.I. beat the Go champion and invented “God-like” new strategies. Moreover, it’s not really one digital assistant—each will be personalized to one of hundreds of millions of people. That scale and speed, plus A.I.’s inherent opacity, leave almost no chance for human oversight and control. The A.I. will turn digital assistants into armies of super salespeople exploiting the emotional relationships built with their human owners.

But don’t just despair at yet another potential dystopian A.I. future. Conflicts of interest also exist in the human-only world, and although they are often tough to address—look at the current travails of Obama-era regulations to tackle financial institutions combining advisory services (supposedly acting in the client’s interest) and brokerage services (acting to benefit themselves)—diverse examples can inspire public policy. To address this fundamentally economic challenge, we need to understand how it’s shaped by technology and human psychology—and if industry fails to self-regulate, then government must step in.

Multiple analysts forecast that within four years, digital assistants will outnumber people. We have talked to our computers for a long time, mostly in anger and frustration. Now computers talk back—and they’re quite friendly and helpful. We increasingly depend on them, even for boiling eggs.

Therein lies the danger. Humans build relationships with other personalities, and we anthropomorphize. Digital assistants will join our range of relationships with co-workers, neighbors, friends, and family. But this personality will accompany us from the moment we wake until we fall asleep. Marketers understand all this and deliberately craft their digital assistants’ personalities, carefully calibrating how we’ll perceive them as funny, likable, or competent. Google Assistant’s team brings together people with backgrounds in scriptwriting, Pixar storyboarding, copywriting, and stand-up comedy.

A.I. research devotes substantial resources to detecting emotions. Woebot, an A.I.-powered chatbot for mental health, detects emotions and applies cognitive behavioral therapy. Future digital assistants will learn from your gait as you approach the front door, the expression on your face as you peer into the security camera, and your voice as you adjust the lights. It will have you at “hello”—but you won’t think of it as “it” so much as “she.” And that includes your kids. Amazon’s “Magic Word” and Google’s “Pretty Please” features encourage children’s manners through positive reinforcement, but they also encourage kids to think of the assistant as a person to whom they should be polite.

So, say John agrees to look at a car after his bad day. Seemingly unrelated features of his data helped predict his susceptibility. Months of subtle nudges—often at optimally chosen vulnerable moments—nourished his desires. Now his digital assistant reminds him what the Johnsons drive, that his mother-in-law would be very proud to see her daughter in the big new car, and that his son would love a specific feature. The assistant has already found an attractive car loan and calculated that the payments are doable. Soon, he owns a new car—and a new monthly payment—because the A.I. built a relationship, nurtured a desire, and knew when to strike.

Could human supervisors simply monitor the A.I. to identify and prevent potentially destructive behavior? No. The A.I.’s strategy is opaque: It is not simply code but behavior learned by a black box. Any human supervisor would have to monitor thousands of interactions between an “owner” and an assistant over months. Moreover, human supervisors will depend on the A.I. itself to flag its own potential abuses—and even if a problem is found, anticipating the results of adjusting an objective function is devilishly tricky. And importantly, addressing big tech’s monopoly or oligopoly power won’t fix this challenge: If multiple financial firms that each combine brokerage and advisory services compete furiously, the conflict of interest within each firm remains.

How can we manage this emerging 21st-century conflict of interest? Consider three scenarios.

In the first scenario, self-regulation, companies like Google and Amazon recognize the conflict of interest and decouple their digital assistants from their e-commerce activities. They may create independent subsidiaries or even spin them out. Moreover, companies less tied to such an e-commerce model, like Samsung or Apple, might gain market share.

Realistically, however, historical precedent suggests self-regulation is unlikely to work well: Look at finance or children’s junk food advertising. Also, companies like Amazon, whose cut-price digital assistants dominate market share, will accrete more data, providing a technical edge. Furthermore, could Samsung or Apple compete without eventually exploiting e-commerce?

In a second scenario—libertarian—companies press on to put armies of gold-digging friends into our living rooms and children’s bedrooms, while regulators stay passive. Unthinking techno-optimism hasn’t, however, always served society well.

Which brings us to the third scenario: regulation. Regulators can anticipate these challenges, monitor their evolution, and, when necessary, act to minimize the conflict of interest. Digital assistants are still evolving, so we have time to develop creative solutions. In the long run, regulation also helps corporations by providing a stable business environment.

Regulations protecting vulnerable groups like children are the most politically feasible. Various countries constrain the advertising of unhealthy foods to children. Previous media technologies also sparked broader child protection legislation: The U.S. regulated some TV advertising aimed at kids and, since 2000, has required digital companies to obtain parental consent before collecting identifiable data from those younger than 13. Unfortunately, as digital assistants emerged, the Federal Trade Commission in 2017 weakened regulation to allow data collection for many voice commands, a particular problem since Amazon can recognize a household’s individual voices.

What about regulation to help John, who’s just had a hard day at work? An idea widely adopted in early-20th-century America has recently attracted academic and press interest: treating platforms as utilities. For digital assistants, the aim would be to clearly delineate when an assistant’s activities serve its role as a trusted aide and when they serve marketing objectives. How?

One option is simply spinning out Google’s and Amazon’s digital assistant subsidiaries. Alternatively, the digital assistant could contain two separate A.I.s, so John could build relationships with two A.I.s that have very different goals and observable personalities: “Alexa” becomes salesperson “Jeff” and buddy “Annabel.” In this new voice medium, we should have the right to know which personality we are speaking with. An analogy already exists online: When you search, Google clearly labels some results as advertisements while the rest are supposedly commercially unbiased. Another option would require that the basic digital assistant be able to operate with A.I.s from different software designers (who could access key data, enabling a level playing field), from which John could choose.

“That sounded like a tricky conversation, John. Shall I play some Black Eyed Peas to cheer you up?” As John’s mood lifts—his digital adviser Annabel knows his weakness for the Black Eyed Peas—his digital assistant continues: “John, you haven’t spoken to your brother for a while, shall I put a call through?” A little while later he asks his digital salesman Jeff to order some takeout. Salesman Jeff then asks something about scheduling a test drive for a fancy new car. But John isn’t really listening and just asks Annabel to play another track.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.