
I tried the wristband that lets you control computers with your brain

Photography by Amelia Holowaty Krales

Thomas Reardon, the CEO and co-founder of neuroscience startup CTRL-Labs, does not want to hear about brain implants.

“There’s nothing you can do with a chip in your brain that we can’t do better,” Reardon tells me during a visit to CTRL-Labs’ cramped Manhattan office. Between us on a conference table, there’s a device that could make keyboards, mice, and touchscreens nearly obsolete — or at least, that’s what CTRL-Labs is claiming. It’s the prototype of a product called CTRL-kit, and it looks like a cyberpunk bandolier bracelet. A few weeks later, when I visit a second time, I’ll be using it to slice fruit with my mind.

Founded in 2015, CTRL-Labs makes brain-machine interfaces, which are devices that translate mental activity into digital action. Lots of companies are interested in this technology; Facebook, for instance, revealed an experimental thought-typing system last year. For many of them, the ultimate goal is a direct line to the human brain, which is still technically difficult and potentially dangerous. CTRL-Labs is trying to perform the same functions with a simple electrode-studded wristband.

CTRL-Labs uses electrode arrays like this to read neural signals from the arm.

CTRL-Labs, originally named “Cognescent,” was founded by Reardon, Patrick Kaifosh, and Tim Machado. (Machado has since departed but remains an adviser.) They conceived the project at Columbia University’s neuroscience program, and while they’re building on decades of existing research, they’re hoping to translate it into the first mass-market brain-computer interface system. This goal has won over investors like Amazon’s Alexa Fund and the Peter Thiel-backed Founders Fund, which participated in a $28 million fundraising round last week. CTRL-Labs announced its CTRL-kit development kit in late April, with plans to ship later this year. For now, the company is showing off a handful of applications with a bulky prototype armband — and when it works, it feels like magic.

“There’s nothing you can do with a chip in your brain that we can’t do better.”

My first CTRL-Labs demo looks deceptively basic. I fit the band snugly around my forearm, and a computer brings up a virtual hand that mimics my real one, curling and spreading its fingers against a flat black background. I’ve seen this done countless times before with systems like the Leap Motion tracking camera, which has been on the market for years. But after waving my hand around for a few minutes, I try something new: I make a fist, press it against the palm of my other hand, and try to open it. My real hand stays closed. My virtual one stretches out, unimpeded.

I take my palm away, so there’s nothing stopping my fingers from moving. Making a fist again, I imagine opening it. My knuckles tense, but my hand stays closed. For a moment, nothing happens. Then, hesitantly, the fingers on the screen pop out again. The armband isn’t reading the motions I’m making but the motions I want to make.

CTRL-Labs’ work is built on a technology known as differential electromyography, or EMG. The band’s inside is lined with electrodes, and while they’re touching my skin, they measure electrical pulses along the neurons in my arm. These superlong cells are transmitting orders from my brain to my muscles, so they’re signaling my intentions before I’ve moved or even when I don’t move at all.
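
To make that concrete, here’s a rough sketch, in Python, of the kind of pipeline an EMG band implies: filter the raw electrode voltages down to the band where muscle-related activity lives, smooth them into per-channel activation levels, then map those levels to what each finger is trying to do. CTRL-Labs hasn’t published its algorithms, so the 16-channel layout, 2 kHz sample rate, and simple linear model below are illustrative assumptions, not the company’s actual approach.

```python
# Rough sketch of an EMG decoding pipeline (illustrative, not CTRL-Labs' code).
# Assumes a 16-channel band sampled at 2 kHz; all numbers are made up.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 2000          # assumed sample rate, in Hz
N_CHANNELS = 16    # assumed number of electrode channels around the forearm

def bandpass(emg, low=20.0, high=450.0):
    """Keep the band where surface EMG carries most of its energy."""
    sos = butter(4, [low, high], btype="bandpass", fs=FS, output="sos")
    return sosfiltfilt(sos, emg, axis=-1)

def envelope(emg, window_ms=50):
    """Rectify and smooth each channel into an activation level."""
    win = int(FS * window_ms / 1000)
    kernel = np.ones(win) / win
    return np.apply_along_axis(
        lambda ch: np.convolve(ch, kernel, mode="same"), -1, np.abs(emg))

def decode_fingers(activations, weights):
    """Map channel activations to five per-finger intent values with a
    placeholder linear model (a real system would learn something richer)."""
    return activations @ weights   # weights shape: (N_CHANNELS, 5)

# Usage with stand-in data: one second of raw samples -> finger estimates.
raw = np.random.randn(N_CHANNELS, FS)            # stand-in for electrode data
weights = np.random.randn(N_CHANNELS, 5) * 0.1   # stand-in for a trained model
activation = envelope(bandpass(raw))
finger_intent = decode_fingers(activation[:, -1], weights)
```

The hard part, of course, is that last step: a matrix of random weights stands in here for whatever model CTRL-Labs actually trains on real recordings.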

Using CTRL-Labs’ wristband prototype required intense concentration, but not actual muscle movement. To my left is CEO Thomas Reardon, who led development of the Internet Explorer browser before co-founding CTRL-Labs.

EMG is widely used to measure muscle performance, and it’s a promising option for prosthetic limb control. CTRL-Labs isn’t the first company to imagine an EMG-based interface, either. Canadian startup Thalmic Labs sells an EMG gesture-reading armband called the Myo, which detects muscle movements and can handle anything from controlling a computer to translating sign language. (CTRL-Labs used Myo armbands in early prototyping, before designing its own hardware.)

This technology provides some clear benefits over the mass-market electroencephalography (EEG) headsets that are often sold as “mind-reading” devices. Those headsets pick up very broad brain activity patterns — often just a general state of concentration — and set them to trigger rudimentary computer commands. An EMG armband bypasses users’ noisy, complicated brains and draws from much clearer lower motor neuron signals before their relatively slow muscles react to those signals.

Tapping into the “source” of your thoughts isn’t fast or efficient

EMG isn’t foolproof. I tried Thalmic’s armband a few years ago, and although it could be eerily good at deducing my finger motion, it didn’t consistently recognize gestures, especially after I shifted the band or changed my arm position. “There are still challenges with EMG,” says neurotechnologist Chad Bouton, director of the Feinstein Institute for Medical Research’s bioelectronic medicine program.

One issue is interference from what Bouton refers to as motion artifacts. The bands have to process extraneous data from accidental hand movements, external vibrations, and the electrodes shifting around on the skin. “All those things can cause extra signal you don’t want,” he says. An electrode headset, he notes, would face similar problems — but they’re serious issues for either system.
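
One common, if crude, way to handle such artifacts is to check how much of a signal window’s energy sits below the frequencies where genuine surface EMG lives (roughly 20 Hz and up) and discard windows where that share spikes. The cutoff and threshold in this sketch are assumptions for illustration, not anything CTRL-Labs or Bouton described:

```python
# Illustrative artifact check (assumed numbers, not anyone's production code):
# motion artifacts mostly live below ~20 Hz, so flag windows where that band
# holds an outsized share of the total energy.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 2000  # assumed sample rate, in Hz

def low_band_power(window, cutoff=20.0):
    """Mean power below the cutoff, where artifacts dominate real EMG."""
    sos = butter(4, cutoff, btype="lowpass", fs=FS, output="sos")
    low = sosfiltfilt(sos, window, axis=-1)
    return float(np.mean(low ** 2))

def looks_like_artifact(window, max_low_fraction=0.2):
    """True if more than ~20% of the window's energy sits below 20 Hz."""
    total_power = float(np.mean(window ** 2)) + 1e-9
    return low_band_power(window) / total_power > max_low_fraction

# Usage: check a 250 ms window of samples from one channel.
window = np.random.randn(int(FS * 0.25))
print(looks_like_artifact(window))
```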

Reardon says CTRL-Labs’ band can pick out far more precise neural activity than the Myo, which Thalmic bills as a muscle-reading system rather than a brain-computer interface. And the band is supposed to work consistently anywhere on the wrist or lower arm, as long as it’s fitted snugly. (The prototype felt like wearing a thick, metallic elastic bracelet.) But Bouton, who uses EMG to find and activate muscles of people with paralysis, says users would get the best results from hitting exactly the same spot every time — which the average person might find difficult. “Even just moving a few millimeters can make a difference,” he says.

These problems would be moot in a technology that’s fascinated futurists for decades: brain implants. Bouton and other researchers are already developing brain implants for people with limited mobility, letting recipients control computer cursors or robotic arms by thought alone. In recent years, companies like Elon Musk’s Neuralink have promised to take that a step further, developing mass-market implants that replace typing, clicking, or even talking with a form of digital telepathy.

Reardon thinks this idea is fundamentally misguided. The issue isn’t just that brain implants require invasive surgery and start degrading within a few years (problems that could be fixed in the future). It’s that counterintuitively, going straight to the “source” of your thoughts isn’t fast or efficient. An implanted electrode array has to sort through masses of brain activity to find commands, while our arms’ neurons are serving up a filtered stream of signals, traveling from our brains at lightning-fast speeds.

Inspirational material for how CTRL-Labs’ band might someday look and operate, based partly on existing watch designs.
Photo by Amelia Holowaty Krales / The Verge

Brain implants are vital for people who simply don’t have that electrical activity in their limbs, including people with total paralysis or amyotrophic lateral sclerosis (ALS), a disease that attacks neurons. For everyone else, Reardon believes they’re a bad control system, capturing data that could be better found elsewhere. “We’ve evolved for millions of years to have an interface to the world. That interface is your spinal cord, your motor nervous system. That’s the place to go grab intention,” he says.

If CTRL-kit doesn’t directly connect to your brain, is it still a brain-computer interface? Reardon argues that the armband is giving people better versions of the same functions they’d get with a headset or implant, using the same kind of neural signals you’d find in the brain. CTRL-Labs obviously reaps some publicity benefits by using a hot technological buzzword, but by claiming the term, it’s also implicitly questioning whether “true” mass-market brain interfaces even make sense.

Instead of thinking at a computer, I was controlling an imaginary interface

Brain chips can theoretically offer lots of functions CTRL-kit can’t, which could make them more useful in the long term. For one thing, the band isn’t trying to pipe signals into your brain, the way Neuralink imagines doing to simulate the taste of chocolate or treat mood disorders. It also couldn’t capture abstract thoughts or mental images, which Facebook has suggested doing by tapping into people’s speech centers to transcribe their thoughts. (Bouton believes this could be a viable input option in the future, though another neuroscientist told Vox the idea was “crazy.”)

Most present-day brain implants aren’t even close to having these capabilities, though, and many are based on the same principles as CTRL-kit. When Facebook announced its thought-typing system, for instance, it referenced a Stanford University study where participants learned to type with brain implants. But rather than reading inner monologues, that experiment asked users to imagine pushing a cursor around a screen with their arms, then captured the resulting motor neuron activity. Bouton says that if CTRL-kit works as advertised, it could actually offer some people with incomplete paralysis an alternative to implants.
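
In outline, the decoding in studies like that one is less mystical than “typing with your brain” sounds: record neural features while someone imagines moving, fit a model from those features to intended cursor velocity, then run the model live. The sketch below is an assumed, bare-bones linear version of that loop, not the Stanford team’s published method, which relied on more sophisticated decoders:

```python
# Bare-bones linear decoder sketch: map neural features recorded during
# imagined movement to 2D cursor velocity. Shapes, feature counts, and the
# least-squares fit are assumptions for illustration; real systems typically
# use Kalman filters or neural networks.
import numpy as np

def fit_decoder(features, velocities):
    """features: (n_samples, n_features); velocities: (n_samples, 2).
    Returns weights mapping a feature vector to (vx, vy)."""
    weights, *_ = np.linalg.lstsq(features, velocities, rcond=None)
    return weights

def step_cursor(position, feature_vector, weights, dt=0.02):
    """Advance the cursor by one 20 ms tick using the decoded velocity."""
    velocity = feature_vector @ weights
    return position + velocity * dt

# Usage with stand-in data: calibrate offline, then run one live update.
train_features = np.random.randn(5000, 32)   # e.g. smoothed activity per channel
train_velocity = np.random.randn(5000, 2)    # intended velocities during calibration
W = fit_decoder(train_features, train_velocity)
cursor = step_cursor(np.zeros(2), np.random.randn(32), W)
```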

At CTRL-Labs’ office, I tried my own cursor-moving experiment, minus the implanted electrodes. I stared at a dot on a screen, which I was supposed to move toward a series of targets. Reardon handed me a pen and suggested I point it at the monitor, as though I were using a stylus to drag the dot around. I imagined making the tiny adjustments this would require, but held my hand still. The dot began to slide. It was slow and unpredictable at first, then increasingly accurate. Instead of trying to think at the computer, I was imagining a hand-based interface for the task, then thinking about moving my hand.

A workstation at CTRL-Labs’ offices in Manhattan.

The demo switched to a clone of Fruit Ninja, where I used the imaginary interface to slash at flying produce. I wasn’t particularly good, and when I thought about moving my fingers, I would inadvertently tense or twitch my muscles. This made it feel more like a hypersensitive gesture controller than a mind-reading system.

For brief flashes, though, I felt like I’d developed digital telekinesis. I’d think hard about sending the dot in a direction, and it would shoot over without my hand using a muscle. It felt like an exaggerated version of touch-typing on a keyboard, a process where I understand that I’m hitting keys with my fingers, but I’m barely conscious of moving them — except that here, my fingers really weren’t moving.

This control scheme could theoretically get far more complicated, as with a CTRL-Labs version of Asteroids where you move and shoot with one hand. You don’t have to use a specific metaphor like a stylus, either; I initially tried to imagine a board that I was tipping in various directions for Fruit Ninja. (It wasn’t very accurate, but it wasn’t terrible for a first try.) Some of CTRL-Labs’ goals are mind-bendingly exotic, like training a model for controlling extra fingers. At the very least, the company imagines replacing QWERTY typing with a superfast single-handed alternative, so you could type anywhere without a keyboard.

Imagine typing with one hand in your pocket or drawing pictures with your mind

But as futuristic as CTRL-Labs’ ideas can get, it’s worth noting that many controller startups have tackled some of its goals, and largely failed to conquer them. One of CTRL-Labs’ simpler demos is an air-typing program, where a pair of bands analyzes users’ normal typing patterns, then translates those movements into letters. I saw demo videos that looked fantastic, and I’d buy CTRL-kit for that system alone — except that I’ve already been disappointed by several promising “wearable keyboards,” like the never-released Gest or the deeply flawed Tap.
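
At its core, the air-typing idea is a classification problem: record band features while someone types on a real keyboard, label each window of data with the key that was actually pressed, and train a model to predict keys from the band alone. The toy decoder below is purely an assumed illustration of that loop; none of Gest, Tap, or CTRL-Labs has detailed its models, and a real system would need something far more capable than nearest-centroid matching:

```python
# Toy key decoder for the air-typing idea (an assumed illustration only).
# Collect band feature windows while someone types, label each with the key
# pressed, and predict keys from band data alone afterward.
import numpy as np

class NearestCentroidKeyDecoder:
    """One averaged feature vector per key; predict by nearest centroid."""

    def fit(self, windows, keys):
        self.labels = sorted(set(keys))
        self.centroids = np.stack([
            np.mean([w for w, k in zip(windows, keys) if k == label], axis=0)
            for label in self.labels
        ])
        return self

    def predict(self, window):
        distances = np.linalg.norm(self.centroids - window, axis=1)
        return self.labels[int(np.argmin(distances))]

# Usage with stand-in data: 64-dimensional feature windows, one per keystroke.
windows = [np.random.randn(64) for _ in range(300)]
keys = list("the quick brown fox " * 15)        # 300 labels, one per window
decoder = NearestCentroidKeyDecoder().fit(windows, keys)
print(decoder.predict(np.random.randn(64)))
```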

Similarly, CTRL-Labs wants to offer an alternative to camera-based VR tracking systems, which are limited by a lens’s field of view. These kinds of tracking systems, though, are confined to a hyper-niche market unless big headset companies adopt them. CTRL-Labs hasn’t revealed a firm release plan yet; after CTRL-kit, it could either build its own product or partner with bigger companies. (For what it’s worth, this uncertainty is good, because rushing an experimental controller straight to consumers is usually a terrible idea.) The prototype armband is still tethered to a small box by an umbilical cable, and we’ve seen only mockups of potential finished designs.

It’s tough to find an interface more reliable than old-fashioned keys and buttons, and anybody claiming to have one deserves skepticism. But CTRL-Labs’ idea makes more sense than many visions of near-future computing. It’s based on established technology rather than speculative breakthroughs, isn’t prohibitively invasive, and doesn’t look completely ridiculous. Unlike a brain chip or headset, I can actually imagine using a device like CTRL-kit in my daily life, despite all the practical hurdles it faces. And it’s offering an experience that feels genuinely incredible — even if it’s not replacing my mouse and keyboard just yet.