
Neon CEO explains the tech behind his overhyped ‘artificial humans’

Neon Genesis Evangelicalism

The most buzzed-about company at CES 2020 doesn’t make a gadget you can see or touch. It doesn’t even have a product yet. But for reasons I’m still not entirely sure I grasp, the lead-up to this week’s show in Las Vegas was dominated by discussion of a project called Neon, which has emerged from a previously unknown Samsung subsidiary called STAR Labs.

What Neon has been promising is so ambitious that it’s easy to swing your expectations around full circle and assume the mundane. The project’s Twitter bio simply reads “Artificial Human,” which could mean anything from an AI chatbot to a full-on android. Promotional videos posted in the run-up to CES, however, suggested that Neon would very much be closer to the former.

Yesterday, we were finally able to see the technology for ourselves. And Neons are, indeed, just digital avatars, albeit impressively realistic ones. We weren’t able to interact with one ourselves, and the demonstration we did see was extremely rough. But the concept and the technology are ambitious enough that we’re still pretty intrigued. (To get a clear idea of the tech’s limitations, check out this interaction between a CNET journalist and a Neon avatar.)

After a low-key event on the CES show floor, we caught up with Neon CEO Pranav Mistry to chat about the project.

“No-one at Samsung other than me knows about it.”

Even at a youthful-looking 38, Mistry is a tech industry veteran who’s worked on products like Xbox hardware at Microsoft and the original Galaxy Gear at Samsung. “It was completely my baby, from design to technology,” he recalls of the early smartwatch. As VP of research at Samsung, he later moved on to projects like Gear VR, but with Neon he’s now spearheading an initiative without direct oversight from the parent company.

“Right now you can say that [STAR Labs is] owned by Samsung,” Mistry tells me. “But that won’t necessarily always be the case. There’s no technology relation or product relation between what STAR Labs does and Samsung. There’s no Samsung logos anywhere, there’s nothing to do with Bixby or any other product that’s part of Samsung. Even what we’re planning to show at CES — no-one at Samsung other than me knows about it or can tell me not to do it.” 

Mistry speaks at a thousand miles an hour, and one day I would very much like to sit down with him for a longer chat conducted at a less breakneck pace. At various points he invoked Einstein, Sagan, and da Vinci in an attempt to convey his lofty goals for Neon. It was never less than entertaining. My focus, however, was on figuring out how Neon works and what it actually is.

Neon CEO Pranav Mistry on stage at CES 2020.

The Neon project is — or as the company would say, “Neons are” — realistic human avatars that are computationally generated and can interact with you in real time. At this point, each Neon is created from footage of an actual person that is fed into a machine-learning model, although Mistry says the system could ultimately generate a Neon’s appearance from scratch.

I asked how much video would be required to capture the likeness of a person, and Mistry said “nothing much.” The main limitation right now is the requirement for a large amount of local processing power to render each avatar live — the demo I saw at CES was running on an ultra-beefy PC with two 128-core CPUs. Mistry notes that commercial applications would likely run in the cloud, however, and doesn’t see latency as a major hurdle because there wouldn’t need to be a huge amount of data streamed at once.

The CES demo featured a Neon employee interacting with a virtual avatar of a woman with close-cropped hair, dressed all in black. I’d seen video of this woman, among other people, playing on screens around the Neon booth ahead of Mistry’s presentation — at least, I thought it was video. Mistry, however, swears it was entirely computer-generated footage, albeit pre-rendered rather than captured in real time.

Well, okay. That’s not necessarily impressive — we’ve all seen what deepfakes can do with much less effort. What’s different about Neon is the promised real-time aspect, and the focus on intangible human-like behavior. Several times, the employee conducting the demonstration told the avatar to smile on command. But, according to Mistry, she’d no more produce an identical smile each time than you would. Each expression, action, or phrase is calculated on the fly, based on the AI model that’s been built up for each Neon.
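
To make that distinction concrete, here’s a minimal toy sketch in Python. It isn’t based on Neon’s code, and every parameter name in it is invented; it only illustrates the difference between replaying one stored clip and sampling a slightly different expression from a per-avatar model on each request.

    import random

    # Toy illustration only; not Neon's actual pipeline. It contrasts a canned
    # avatar, which replays one stored "smile" clip, with a generative one that
    # samples slightly different expression parameters every time it's asked.

    CANNED_SMILE = {"mouth_curve": 0.80, "eye_crease": 0.50, "duration_s": 1.2}

    def canned_smile():
        # A scripted avatar: the identical expression, every single time.
        return dict(CANNED_SMILE)

    def generated_smile(rng):
        # A generative avatar: parameters sampled around a learned "style",
        # so no two smiles come out exactly alike.
        return {
            "mouth_curve": rng.gauss(0.80, 0.05),
            "eye_crease": rng.gauss(0.50, 0.05),
            "duration_s": rng.gauss(1.2, 0.1),
        }

    rng = random.Random()
    print(canned_smile() == canned_smile())              # True: same clip each time
    print(generated_smile(rng) == generated_smile(rng))  # False, almost surely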

This is all by design, and Mistry even says Neon is willing to focus on humanity at the expense of functionality. For example, these avatars aren’t intended to be assistants at their owners’ beck and call — they’ll sometimes “get tired” and need time to themselves. According to Mistry, this cuts to the core of why Neon is using language like “artificial human” in the first place. 

“I feel that if you call something a digital avatar or AI assistant or something like that, it means you’re calling them a machine already,” Mistry says. “You are not thinking in the terms of a friend. It can only happen when we start feeling the same kind of respect. I’ve been working on this for a long time. In order to design this thing I need to think in those terms. If they are human, what are the limits they will have? Can they work 24 hours and answer all your questions? A Neon can get tired. Programmatically, computationally, that will make you feel ‘Okay, let me only engage in certain discussions. This is my friend.’”

The obvious question, then, is what’s the use case for an artificial human AI with artificial flaws? On stage, Mistry mentioned possible implementations from personal assistants to foreign language tutors. Then again, he literally said “there is no business model” a few minutes later, so I had to follow up on that point. 

“What I want to give back to the world is something that’s remembered after I go.”

“There are a lot of people in the world that people remember,” Mistry says. “I was an architect and a designer before, and there are a few people that are remembered like that like Einstein, or Picasso, or some musicians in India, and we know their names not because they were rich but because of what they contributed to the world. And that is what I want to end up being, because I have everything else. Do I have enough money to live with? Yeah, more than enough. What I want to give back to the world is something that’s remembered after I go. Because you don’t know how rich Michelangelo was — no-one cares!” 

“But you’re going to be selling this technology to people, right?” I say, somewhat bewildered.

“Of course. What I’m pointing out is that we believe Neons will bring more human aspects and maybe we will license that technology, or not technology as a license but Neons [themselves] as a license. Just to make a point, of course we are not saying we’re a philanthropic company. But the goal is not to build around data and money and so on. Because I want to get a good night’s sleep after 20 years.” 

The concept of ultra-realistic, entirely artificial humans with minds of their own raises obvious questions about nefarious use cases, particularly at a time of heightened fears about political misinformation and very real examples of AI being used to create non-consensual pornography. I asked Mistry whether he’d considered the potential for negative side effects. “Of course,” he said, comparing Neon to how nuclear technology generates electricity while also being used for weapons of mass destruction. “Every technology has pros and cons — it’s up to us as humans how we look at that.”

Neon still has a long way to go

Will Neon limit who it sells the tech to, then? Mistry says the company will “more than limit” the tech by encoding restrictions “in hardware.” But he wasn’t clear on what those restrictions would be or how they’d work.

Neon still has a long way to go. Even allowing for the unfavorable network environment of a CES show floor, the demonstration’s responses were delayed and linguistically stilted. As someone with an interest in AI and natural language processing, I could see that there’s something to hype here. But I could also see that the average layperson would remain underwhelmed. It’s also worth reiterating that Neon isn’t allowing private demos at CES beyond its staged presentations, reinforcing the idea that the technology is far from ready.

Still, even if the “artificial human” pitch is a little over-egged, Neon is actually more ambitious than I’d assumed. And, despite the pre-CES hype, Mistry is entirely open about the fact that there’s basically no product to show. The message right now, in fact, is to come back in a year and see where Neon is then. If real progress has been made by CES 2021, then maybe we’ll get excited.