
7 ways we’ll interact with computers in the future

An asteroid of new interfaces will wipe out the prehistoric mouse and keyboard

Fifty years ago, pioneering computer scientist Doug Engelbart showed off a series of breathtaking new technologies in one astonishing keynote that’s referred to as “The Mother of All Demos.” Demonstrating the computer mouse, the graphical user interface, hypertext, video conferencing and more, it was the equivalent of a modern Apple event unveiling the Macintosh, the iPhone, the iPad and the iPod all at the same time.

Half a century after Engelbart’s demo, we’re still relying on a lot of the computer interactions he helped to pioneer. But the means by which we interact with computers are changing, slowly but surely. So put down your mouse and keyboard, because here are seven of the ways we’ll interact with machines in the decades to come:

Voice control


We’ll start with an obvious one. Just a few years ago, voice control was incredibly limited. While it was decent enough for transcribing text, and useful as an accessibility tool for people with impaired vision, few folks were going to voluntarily give up their mouse to speak to their computer instead.

Today, this sci-fi dream has finally come true. Aided by breakthroughs in artificial intelligence, smart speakers like Google Home and Amazon Echo not only understand what we are saying, but can make sense of it, too. Voice control can greatly speed up our interactions with computers, and it frees us from having to be physically in front of a machine in order to use it.

The technology also lowers the barrier to entry, since asking a machine to perform a task in everyday words is a whole lot simpler than expecting people to learn their way around different operating systems and software layouts.
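To make the idea concrete, here's a minimal sketch of voice input in Python using the open-source SpeechRecognition library (with a microphone backend such as PyAudio installed). It's an illustration of the general pattern only, not how Google Home or Amazon Echo work under the hood, and the "lights on" command is a made-up example.

```python
# A minimal sketch of voice input using the open-source SpeechRecognition
# library (an illustration only; not how Google Home or Amazon Echo work).
import speech_recognition as sr

recognizer = sr.Recognizer()

# Capture a short phrase from the default microphone.
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)  # calibrate for background noise
    print("Say something...")
    audio = recognizer.listen(source)

# Hand the audio off to a speech-to-text service and act on the result.
try:
    text = recognizer.recognize_google(audio)
    print(f"You said: {text}")
    if "lights on" in text.lower():
        print("Pretending to switch the lights on...")  # stand-in for a real smart-home call
except sr.UnknownValueError:
    print("Sorry, I didn't catch that.")
```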

Emotion sensing


It’s great if a machine can do what you ask of it. Even better is when a machine can predict what you want before you even have to ask. That’s where emotion tracking technology could help change things.

While it’s more a way of improving existing interfaces than an interface in its own right, emotion sensing could assist users by pulling up relevant suggestions based on how they’re feeling at that precise moment.

Knowing the optimal time for you to do work based on your productivity levels? Analyzing your typing to ascertain your mood and pull up the right apps accordingly? Emotion sensing will help with all of this.
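As a purely hypothetical sketch of that last idea, the snippet below guesses a mood from typing rhythm and maps it to a suggestion. Every threshold, label, and mapping here is invented for illustration; real emotion-sensing systems would be trained on far richer signals.

```python
# A purely hypothetical sketch of "emotion sensing" from typing rhythm.
# The thresholds and mood labels below are invented for illustration.

def estimate_mood(keystroke_gaps_ms, backspace_ratio):
    """Guess a rough mood from the average time between keystrokes (ms)
    and the fraction of keypresses that were backspaces."""
    avg_gap = sum(keystroke_gaps_ms) / len(keystroke_gaps_ms)

    if backspace_ratio > 0.25:
        return "frustrated"   # lots of corrections
    if avg_gap < 120:
        return "energetic"    # fast, fluent typing
    if avg_gap > 400:
        return "distracted"   # long pauses between keys
    return "neutral"

def suggest_app(mood):
    # Hypothetical mapping from mood to a suggestion the interface might surface.
    suggestions = {
        "frustrated": "a short break reminder",
        "energetic": "your most demanding project",
        "distracted": "a focus / do-not-disturb mode",
        "neutral": "whatever you opened last",
    }
    return suggestions[mood]

mood = estimate_mood([90, 110, 105, 95, 130], backspace_ratio=0.05)
print(mood, "->", suggest_app(mood))
```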

Gestural sensing


We already use gestures to control our devices, but there’s so much more that can be done in this area. Machines could use image recognition to track hand and body motions far more precisely, for instance, even when we’re not physically in contact with a screen.

Devices like the Kinect have already explored this in the gaming space, but companies such as Apple have also investigated it for (presumably) more serious, productivity-oriented applications.
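As a rough sketch of the camera-based approach, the snippet below uses OpenCV with Google's open-source MediaPipe library to track an index fingertip in webcam frames. It illustrates the general idea only; it isn't how Kinect or Apple's systems work.

```python
# A rough sketch of camera-based hand tracking with OpenCV and MediaPipe
# (an illustration of the general idea, not any vendor's actual system).
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1)
camera = cv2.VideoCapture(0)  # default webcam

while True:
    ok, frame = camera.read()
    if not ok:
        break

    # MediaPipe expects RGB images; OpenCV delivers BGR.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

    if results.multi_hand_landmarks:
        # Landmark 8 is the tip of the index finger (x and y are normalized 0-1).
        tip = results.multi_hand_landmarks[0].landmark[8]
        print(f"Index fingertip at x={tip.x:.2f}, y={tip.y:.2f}")
        # A real interface would map this position to a cursor or a gesture.

    cv2.imshow("camera", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break

camera.release()
cv2.destroyAllWindows()
```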

Aside from image recognition, embedded implants might be another way to let us interact with smart environments with little more than the wave of a hand. Don’t fancy getting a chip injected into your body? Then maybe consider technology like…

Touch surfaces everywhere


Remember rapper Trinidad James’ 2012 song “All Gold Everything?” Well, in the future it seems that “All touch-sensitive everything” is going to be the name of the game.

Researchers at places like Carnegie Mellon have been working on ways to turn just about any surface you can think of — from desks to human limbs to entire walls of your home — into smart touch surfaces. Why limit your touch interactions to the tiny form factor of a smartwatch, or even a tablet computer, when virtually everything can be made smart with the right paint job?

Particularly as the “smart home” comes of age, this tech will allow us to control our surroundings with assorted virtual buttons and the like. The results will be the most complete realization of the late computer visionary Mark Weiser’s observation that the most profound technologies are those that “weave themselves into the fabric of everyday life until they are indistinguishable from it.”

Pre-touch


In today’s busy world, who has time to actually touch a touchscreen? That’s right: nobody. Fortunately, smartphone makers everywhere — from Samsung to Apple — are actively investigating pre-touch sensing. (Samsung’s current Air Gesture tech is one early implementation.)

The idea is to track your fingers as they hover over a display, and then trigger interactions accordingly. In terms of functionality it could work a bit like Apple’s 3D Touch feature for the iPhone, with apps or files able to offer a sneak preview of what’s inside before you actually open them up. Except without the indignity of actually having to touch the display to do it.
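Here's a hypothetical sketch of that hover-then-preview logic. The distance units, thresholds, and the HoverEvent structure are all made up for illustration; real hover data would come from device-specific sensors and APIs.

```python
# A hypothetical sketch of pre-touch logic: if a finger hovers close enough
# to an item for long enough, show a preview before any physical touch.
# The thresholds and the HoverEvent shape are invented for illustration.
from dataclasses import dataclass

HOVER_DISTANCE_MM = 15   # how close the finger must be to the screen
HOVER_DWELL_MS = 300     # how long it must linger before previewing

@dataclass
class HoverEvent:
    item_id: str         # which on-screen item the finger is over
    distance_mm: float   # estimated finger-to-screen distance
    dwell_ms: int        # how long the finger has lingered over the item

def handle_hover(event: HoverEvent):
    if event.distance_mm <= HOVER_DISTANCE_MM and event.dwell_ms >= HOVER_DWELL_MS:
        print(f"Previewing {event.item_id} without a touch")
    else:
        print("Keep tracking; no preview yet")

handle_hover(HoverEvent(item_id="inbox", distance_mm=10, dwell_ms=450))
handle_hover(HoverEvent(item_id="photos", distance_mm=40, dwell_ms=450))
```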

Virtual and augmented reality


Virtual and augmented reality technology opens up a whole new world of ways to interface with our devices. Want to surround yourself with infinite macOS screens for some bonkers multitasking? Fancy designing three-dimensional objects in the virtual world? Dream of being able to summon information about an object or device simply by looking at it? AR and VR will make all of this commonplace.

Add in a wave of breakthrough haptic controllers designed to make the virtual experience even more lifelike, and this is one of the most exciting options on this list.

Brain interface


The ultimate computer interface would surely be one that doesn’t require us to do any more than think about a task and have it performed immediately for us. Brain interfaces could effortlessly carry out certain tasks for us, while also allowing us to tap into the devices around us to access an enormous amount of information.

Groups such as DARPA have investigated brain interfaces, while real-life Iron Man Elon Musk’s Neuralink aims to create consumer-facing cybernetic implants that could turn us all into actual cyborgs.
