
Smartcars Of The Future

My father thought a lot about cars.  He even did experiments with taped lines on the windshield to show the width of the car ahead when it was at the right following distance for a particular speed.  He had nested brackets marking a safe distance at 40, 50, 60, and 70 miles per hour, each allowing a rational amount of time to apply the brakes and avoid a crash.

The experiment was actually the development of an algorithm by means of data gathering, the most basic initial process in system design.  A similar algorithm now resides in collision-avoidance systems in some cars today, making use of camera and speedometer input.
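The windshield-bracket idea amounts to a stopping-distance calculation.  A minimal sketch of such an algorithm follows; the reaction time and deceleration figures are illustrative assumptions, not numbers from the article.

```python
# Sketch of a safe-following-distance calculation: distance covered while
# the driver reacts, plus distance covered while braking to a stop.
# reaction_time_s and decel_mps2 are assumed values for illustration.

def safe_following_distance(speed_mph, reaction_time_s=1.5, decel_mps2=6.0):
    """Return the distance in feet needed to react and brake to a stop."""
    speed_mps = speed_mph * 0.44704                    # mph -> meters/second
    reaction_dist = speed_mps * reaction_time_s        # travel before braking
    braking_dist = speed_mps ** 2 / (2 * decel_mps2)   # v^2 / 2a
    return (reaction_dist + braking_dist) / 0.3048     # meters -> feet

for mph in (40, 50, 60, 70):
    print(f"{mph} mph: {safe_following_distance(mph):.0f} ft")
```

A collision-avoidance system would run this continuously against the measured gap to the car ahead, flagging the driver (or braking) when the gap falls below the computed distance.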

But the thing my father bemoaned the most was the simplistic vocabulary available to drivers for speaking with each other.  They were (and still are) confined to various lengths of honk, which might mean different things, light-flashing without an agreement on Morse Code or other patterns, and perhaps gesticulating grandly through the windshield, as in “Non, après vous, Monsieur,” or, more likely, a shaken fist or flipped bird.

And this idea has stayed with me: the idea of better communication in traffic.

The car of the future, then, would have better intercommunications.

The technology for all this stuff exists now: Google Maps, voice-over-IP, and a whole host of sensors, actuators, and programs that make use of them.  Nodes could come into the driver’s field of view on a dashboard screen as they became active (i.e., nearby in meatspace), and the driver could say, “Hey, over there, do you notice me, are you aware?”  And that node could respond, “Hi, I see you” in some fairly simple way.
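That discovery-and-ping interaction can be sketched in a few lines.  Everything here is hypothetical: the class names, the 100-meter visibility threshold, and the message format are assumptions for illustration, not a real vehicle protocol.

```python
# Hypothetical sketch of proximity-based node discovery: each car is a node
# with a position; nodes within range appear on the dashboard and can be
# "pinged" for a simple acknowledgment.

import math

class CarNode:
    def __init__(self, node_id, x, y):
        self.node_id = node_id
        self.x, self.y = x, y

    def distance_to(self, other):
        return math.hypot(self.x - other.x, self.y - other.y)

    def nearby(self, nodes, range_m=100.0):
        """Nodes close enough to appear on the dashboard screen."""
        return [n for n in nodes
                if n is not self and self.distance_to(n) <= range_m]

    def ping(self, other):
        """'Hey, over there, do you notice me?'"""
        return other.respond(self)

    def respond(self, sender):
        """'Hi, I see you.'"""
        return f"{self.node_id}: Hi {sender.node_id}, I see you"

me = CarNode("car-1", 0, 0)
others = [CarNode("car-2", 30, 40), CarNode("car-3", 500, 0)]
visible = me.nearby([me] + others)
print([n.node_id for n in visible])   # only car-2 is within 100 m
print(me.ping(visible[0]))
```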

These interactions could be represented graphically, and a protocol could be established for interaction.  For example, everyone’s car could be represented by an avatar.  People could “skin” their own vehicles or take the default, which might include a number.  Communication could be initiated voluntarily, and the vocabulary could be limited to stock phrases to help avoid abuse, or that limit could be lifted by mutual consent, allowing people to actually talk, as in a phone call.
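The stock-phrase rule is easy to sketch: messages stay within an agreed vocabulary unless both parties opt into an open channel.  The phrase list and the opt-in mechanism below are assumptions, not a real messaging protocol.

```python
# Sketch of a limited-vocabulary messaging rule: stock phrases by default,
# free text only when both drivers have opted into an open channel.

STOCK_PHRASES = {
    "after_you": "Non, après vous, Monsieur.",
    "thanks": "Thanks!",
    "watch_out": "Heads up, hazard ahead.",
}

def send_message(phrase_key, open_channel=False, free_text=None):
    """Return the text to deliver, enforcing the vocabulary limit."""
    if open_channel and free_text:
        return free_text                    # both parties opted in
    if phrase_key in STOCK_PHRASES:
        return STOCK_PHRASES[phrase_key]
    raise ValueError("not in the agreed vocabulary")

print(send_message("after_you"))
print(send_message("", open_channel=True, free_text="Merging left, thanks!"))
```

Restricting the default vocabulary is what keeps the channel civil; the open channel is the voluntary override the paragraph above describes.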

And if the driver was focusing on something else, he or she could ignore the requests and just use visual navigation.

It’s important to keep in mind that for a federated sociological system to work, it has to be voluntary.

We have all the subsystems we need today: geolocation, digital voice communications, short messaging, graphical representations, touch navigation as exemplified by Apple's iOS, hands-free actuation, large icons for glanceable use.

To get a sense of how well this could work, we need look no further than the 2012 Olympics in London.  In the tense semifinal tennis match between Roger Federer (Switzerland) and Juan Martin del Potro (Argentina) that I managed to watch, disputed calls were replayed instantly with graphical input from multiple cameras.  The output was a digital representation of the line, the angle of the shot, and the exact footprint of the ball, with practically every blade of grass represented by a pixel.  The key here was that these results were displayed instantly and to everyone: the players, the live and televised audiences, and the judges.  Since everyone already agreed on the rules, disputes were settled on the spot and to everyone’s satisfaction.

Thus, driving communications could be facilitated if everyone had all the information instantly — along with a reminder of the operative rules.

To support better driver communications, driving itself could be divided into low-level functions, autonomic elements that could be handed off to the machine, and higher decision making, left up to the human driver.  Optimal driving (that which moves all the cars on the road in the most efficient manner) could be managed through an onboard computer constantly in touch with the cloud and input from other cars.
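The division of labor described above might be sketched as a simple task router.  The task names, and the rule that anything ambiguous defaults to the human, are assumptions for illustration.

```python
# Hypothetical split of driving tasks: autonomic, low-level functions go to
# the machine; higher decision making stays with the human driver.

AUTONOMIC = {"lane_keeping", "speed_control", "following_distance"}
HUMAN = {"route_choice", "parking_spot", "manual_override"}

def handler(task):
    """Route a driving task to 'machine' or 'driver'."""
    if task in AUTONOMIC:
        return "machine"
    # Anything not explicitly autonomic defaults to the driver, keeping
    # the system voluntary, as argued above.
    return "driver"

print(handler("lane_keeping"))   # machine
print(handler("parking_spot"))   # driver
```

Defaulting to the driver reflects the earlier point that a federated system has to be voluntary: the machine takes only what it has been explicitly given.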

This same autonomic system could help drivers.  Let’s say four drivers arrive at a four-way stop in some close sequence.  A glanceable screen could show the exact right-of-way order, along with a visual overlay of the relevant traffic laws.  If a car attempted to go out of order, a visual or aural signal could warn all the relevant drivers that something irregular was happening.
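The four-way-stop arbiter reduces to sorting by arrival time and flagging out-of-turn moves.  The timestamps and warning format below are illustrative assumptions.

```python
# Sketch of a four-way-stop arbiter: right-of-way follows arrival order,
# and a car moving out of turn triggers a warning to all drivers.

def right_of_way_order(arrivals):
    """arrivals: {car_id: arrival_time_s} -> car IDs in go order."""
    return sorted(arrivals, key=arrivals.get)

def check_move(car_id, order, already_gone):
    """Approve a move, or warn everyone if it is out of turn."""
    expected = next(c for c in order if c not in already_gone)
    if car_id != expected:
        return f"WARNING: {car_id} moving out of turn; {expected} has right of way"
    return f"OK: {car_id} may proceed"

order = right_of_way_order({"A": 10.0, "B": 10.4, "C": 10.9, "D": 11.2})
print(order)                          # ['A', 'B', 'C', 'D']
print(check_move("C", order, set()))  # warning: A has right of way
print(check_move("A", order, set()))  # OK
```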

So, cars could drive themselves for the most part, although drivers could take over merely by grabbing a control (e.g., the steering wheel).  In circumstances in which a lot of decision-making was involved (like finding a parking place), the driver would tend to be in charge.  In others (like driving a hundred miles on a freeway), the machine could take the helm, like the automatic pilot on long-haul commercial aircraft.

In Tom Wolfe’s book “The Right Stuff,” he tells of a moment when engineers in the early U.S. space program wanted to make the space capsules largely automatic, operated by internal systems and controls on the ground.  But the astronauts in training, almost all former fighter pilots, complained that it would be demeaning to sit there like the monkeys used in earlier tests.  They wanted to drive.  So, the spaceship design was changed.

But it would actually be to our benefit to leave low-level driving to the machine.  We could then be free to develop, with a better vocabulary than honks and flashes, a more civil relationship with our fellow humans, knowing that the system is making the best decisions for all of us.

© 2012 Endpoint Technologies Associates, Inc.  All rights reserved.

Twitter: RogerKay