
Good News, Humans: AI Still Needs Us (for Now)

Companies developing AI and machine learning systems need to acknowledge that they're not infallible and remain 'teachable' via human intervention.

May 23, 2018

I had a run-in with the power and limitations of AI last week when I ordered a Lyft home. A new driver pulled up to the curb, but forgot to tell the Lyft "brain" that I was in the car. We drove off but the driver's system soon buzzed with an alert.

"You're not here," said the driver, confused. "I've been assigned a new passenger."

As I always order an ecologically friendly "Lyft Line" with up to two other passengers, I wasn't too bothered, until I looked down at my phone, which said I wasn't actually in the car. The driver was apologetic but said there was nothing he could do: "I have to follow the route mapped out for this new passenger."

Infuriated, I got out of the Lyft, as another bemused passenger got in.

Here's where it got interesting (IMHO). I instantly contested the $5 fine from Lyft for "not being ready for the Lyft Line" and ordered another driver. Then I could almost "see" how the Lyft system went through its AI "thought processes" for risk assessment on handling my case.


First, it would look at my history as a rider (excellent: always on time, no payment issues). Then (I assume), it ascertained my "score" by seeing how many rides I'd taken (frequency) coupled with the revenue I'd generated. This would give it a baseline "model" of my participation in the Lyft service and a unique risk-assessment "score" for handling any issues on my account.

The complaint process was handled by the AI pretty smoothly—until I disputed the charge and selected the option to have a human take over. It all ended well: the representative had access to the same "score" as the AI, so there was no delay in going over my details. But that's because Lyft built a "human-in-the-loop" into its AI-powered system.
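To make that speculation concrete, here's a minimal sketch of how such a dispute handler might fold a rider's history, frequency, and revenue into a single score and decide when to hand the case to a person. Every field, weight, and threshold below is my own assumption; Lyft hasn't published how its system actually works.

```python
from dataclasses import dataclass

@dataclass
class RiderProfile:
    on_time_rate: float      # fraction of rides the rider was ready on time
    payment_issues: int      # count of failed or disputed payments
    ride_count: int          # total rides taken (frequency)
    lifetime_revenue: float  # revenue from this rider, in dollars

def rider_score(p: RiderProfile) -> float:
    """Combine history, frequency, and revenue into one score (0-1, higher = lower risk)."""
    history = p.on_time_rate * (0.0 if p.payment_issues else 1.0)
    frequency = min(p.ride_count / 100, 1.0)
    revenue = min(p.lifetime_revenue / 1000, 1.0)
    return 0.5 * history + 0.3 * frequency + 0.2 * revenue

def handle_dispute(p: RiderProfile, rider_requests_human: bool) -> str:
    """Auto-resolve low-risk disputes; escalate when asked or when the score is low."""
    score = rider_score(p)
    if rider_requests_human or score < 0.6:
        return "escalate_to_human"   # the human-in-the-loop path
    return "auto_refund"

# A rider with a long, clean history who asks for a person still gets one.
print(handle_dispute(RiderProfile(0.98, 0, 240, 1800.0), rider_requests_human=True))
```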

The lesson for me (and hopefully for you too) is that companies developing systems that run on AI and machine learning need to acknowledge that those systems are not infallible and must remain "teachable" via human intervention.

Algorithms Make Life Decisions

Why is this important? Increasingly, these algorithms determine the treatment and terms we'll receive, from creditworthiness to health, car, and life insurance policies.

I've been to several "better living through algorithms" symposiums recently, but few get beyond "bias is bad" and "something must be done." Simply put, we need the ability to train AI, but how?

I put in a call to Dr. Jason Mars, a computer science professor at the University of Michigan. He's currently on leave as director of the university's Clarity Lab and is the co-founder and CEO of Clinc, a conversational AI startup for the financial industry.

Clinc"One of the greatest challenges in this age of AI is enabling the masses to wield and train the types of machine learning models that only the top computer science experts of the world have been using," said Dr. Mars. "At Clinc, we invented a new class of training platform to address this exact problem."

Clinc's platform, known as Spotlight, lets users "train and retrain the best AI models on the planet without having a computer science or AI background," Dr. Mars said.

Essentially, Clinc built a front-end tool disguised as a conversational AI bot. Through natural language processing, it allows customers to investigate and change what is known about their financial patterns.
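Clinc hasn't published Spotlight's internals, but the general pattern—mapping a customer's natural-language question onto an account query—can be sketched with a toy intent classifier. The example phrases, intent labels, and model choice below are illustrative assumptions, not Clinc's implementation.

```python
# Toy intent classifier: map spending-related questions to account actions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

examples = [
    ("how much did I spend on coffee last month", "spend_by_category"),
    ("what were my biggest purchases this week", "top_transactions"),
    ("stop counting my rent as discretionary spending", "recategorize"),
    ("show my restaurant spending since January", "spend_by_category"),
    ("which merchants charged me the most", "top_transactions"),
    ("move that gym charge into health expenses", "recategorize"),
]
texts, intents = zip(*examples)

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, intents)

# A new question gets routed to the closest known intent.
print(model.predict(["how much went to takeout in March"])[0])  # likely "spend_by_category"
```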

"This is a hard science problem," but advancements in the space mean "users can create new capabilities in managing and observing their financial accounts and spending patterns," he said.

Watching an AI Think

In January I sat in a basement at UCLA and saw an AI called TEVI "think." It was remarkable to get a view into an artificial "brain" as it extrapolated "meaning" from human-level inputs. So I went back to TEVI's creator, Ray Christian, founder and CEO of Textpert, and asked him how they "train" TEVI.

"AI models are subject to concept drift," Christian explained. "Which means the model needs to be retrained to take into account new data that has 'drifted' away from what initially trained the model. Every time AI models—including TEVI's—are retrained, you could argue that the users have re-calibrated the model."

However, as he pointed out: "Peeking into the AI black box to see its rationale is a more difficult proposition. Cutting-edge research is experimenting with masking certain layers of the neural network in order to isolate variables and understand how the model is perceiving certain features. But it may be a while before we fully understand what's happening behind the curtain."
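The masking idea can be illustrated in its simplest form: hide one variable at a time and watch how the model's prediction moves. The research Christian refers to masks layers or activations inside the network; the input-level version below is only an analogue, with the dataset and model chosen arbitrarily for the sketch.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.neural_network import MLPClassifier

data = load_breast_cancer()
X, y = data.data, data.target
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0).fit(X, y)

sample = X[:1]
baseline = model.predict_proba(sample)[0, 1]   # the model's confidence on the untouched example

for i, name in enumerate(data.feature_names[:5]):   # probe the first few features
    masked = sample.copy()
    masked[0, i] = 0.0                              # "mask" this variable
    drop = baseline - model.predict_proba(masked)[0, 1]
    print(f"{name}: prediction shifts by {drop:+.3f} when masked")
```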

Changing the Machine Learning Methods

Also at UCLA is Dr. Miryung Kim, an Associate Professor of Computer Science and an expert in software engineering, who suggested that "current artificial intelligence (AI) and machine learning (ML) technologies are not sufficiently democratized.

"Building complex AI and ML systems requires deep expertise in computer science and extensive programming skills to work with various machine reasoning and learning techniques at a rather low level of abstraction," she said. "It also requires extensive trial and error exploration for model selection, data cleaning, feature selection, and parameter tuning."

In her opinion, the computer science research community must rethink software development tools such as debugging, testing, and verification tools for complex AI- and ML-based systems.

According to Dr. Rana el Kaliouby, founder and CEO of Affectiva, building effective, quality AI begins and ends with carefully designed data collection.

"You start by digging into the specific use cases of the AI you're designing, and then focus on collecting large amounts of real-world data that is representative of these use cases. This is crucial in order to ensure that algorithms perform accurately in the real world," she said.

"For example, when building a driver drowsiness detector, you need a lot of examples of people getting drowsy behind the wheel. We do not think it is ethical to sleep deprive people and send them down the highway. Instead, we collect large amounts of driving data 'in the wild' so we can mine for natural occurrences of drowsiness. Once the AI is deployed, it is important that data comes back to R&D in a continuous feedback loop, so that you can validate and, if necessary, retrain your models."


About S.C. Stuart

Contributing Writer


S. C. Stuart is an award-winning digital strategist and technology commentator for ELLE China, Esquire Latino, Singularity Hub, and PCMag, covering: artificial intelligence; augmented, virtual, and mixed reality; DARPA; NASA; US Army Cyber Command; sci-fi in Hollywood (including interviews with Spike Jonze and Ridley Scott); and robotics (real-life encounters with over 27 robots and counting).
