
Will AI ever ‘understand’ satire?

Machines don't truly understand much of anything, least of all ironic speech, but machine learning may still assist humanity by counting the linguistic and semantic constructions that signal satire or misleading news, according to a new study by tech startup AdVerifai, in partnership with George Washington University and Amazon's AWS.
Written by Tiernan Ray, Senior Contributing Writer

A lot of nuances of writing are lost on the internet -- things such as irony. 

That's why satirical material such as the writing of Andy Borowitz on the website of The New Yorker magazine has to be labeled as satire, to make sure we know.  

Scientists in recent years have become concerned: What about writing that isn't properly understood, such as satire mistaken for the truth, or, conversely, deliberate disinformation campaigns that are disguised as innocent satire?

And so began a quest to divine some form of machine learning technology that could automatically identify satire as such and distinguish it from deliberate lies. 

In truth, a machine can't understand much of anything, really, and it certainly can't understand satire. But it may be able to quantify aspects of satirical writing, which might help to deal with the flood of fake news on the Internet. 

Case in point: A paper presented this week at the 2019 Conference on Empirical Methods in Natural Language Processing, in Hong Kong, authored by researchers from the tech startup AdVerifai, The George Washington University in Washington, DC, and Amazon's AWS cloud division.


The paper, "Identifying Nuances in Fake News vs. Satire: Using Semantic and Linguistic Cues," builds upon years of work modeling differences between misleading, factually inaccurate news articles, on the one hand, and satire on the other hand. (There's also a slide deck prepared for EMNLP.) The paper was presented in the workshop on "censorship, disinformation and propaganda."

The pressing concern, as lead author Or Levi of AdVerifai and his colleagues write, is that it can be difficult in practice to tell satire from fake news. That means legitimate satire can get banned while misleading information may get undeserved attention because it masquerades as satire. 

"For users, incorrectly classifying satire as fake news may deprive them from desirable entertainment content, while identifying a fake news story as legitimate satire may expose them to misinformation," is how Levi and colleagues describe the situation. 

The idea of all this research is that, although a person with a modicum of sense and topical knowledge should be able to recognize satire, society may need to more precisely articulate and measure the aspects of satirical writing in a machine-readable fashion.

Past efforts to distinguish satire from genuinely misleading news have employed simple machine learning approaches, such as a "bag of words" representation, in which a "support vector machine," or SVM, classifies a text based on little more than the counts of the words it contains. 
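
For readers curious what such a baseline looks like in code, here is a minimal sketch using the scikit-learn library; the toy texts and labels are placeholders for illustration, not data from any of the studies discussed here.

```python
# Minimal sketch of a bag-of-words SVM text classifier (scikit-learn),
# the kind of baseline used in earlier satire-detection studies.
# The texts and labels below are placeholders, not real study data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

train_texts = [
    "Senate passes sweeping bill after secret midnight session",   # fake-news-style placeholder
    "Area man heroically finishes entire to-do list before lunch",  # satire-style placeholder
]
train_labels = ["fake", "satire"]

# CountVectorizer turns each document into raw word counts (the "bag of words");
# LinearSVC then fits a separating hyperplane over those counts.
model = make_pipeline(CountVectorizer(), LinearSVC())
model.fit(train_texts, train_labels)

print(model.predict(["Congress to debate new bill next week"]))
```
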


For example, a study in 2016 by researchers at the University of Western Ontario, cited by Levi and colleagues, aimed to produce what they called an "automatic satire detection system." That approach looked at things like whether the final sentence of an article contained references to persons, organizations, and locations -- what are known as "named entities" -- that are at variance with the entities mentioned in the rest of the article. The hunch was that the sudden, surprising references could be a measure of "absurdity," according to the authors, which could be a clue to satiric intent. 

That kind of approach, in other words, involves simply counting occurrences of words, and is based on expert linguists' theories about what makes up satire. 
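
As a rough illustration of that kind of hand-crafted feature, the sketch below uses the spaCy library to count named entities that show up for the first time in an article's closing sentence. The 2016 study's actual features differ in detail, so treat this only as a flavor of the idea.

```python
# Sketch of an "absurdity"-style feature: how many named entities in the
# final sentence never appeared earlier in the article? A loose
# illustration of the general idea, not the 2016 study's feature set.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def absurdity_feature(text: str) -> int:
    doc = nlp(text)
    sentences = list(doc.sents)
    if len(sentences) < 2:
        return 0
    body_entities = {ent.text.lower() for sent in sentences[:-1] for ent in sent.ents}
    final_entities = {ent.text.lower() for ent in sentences[-1].ents}
    # Entities introduced only in the closing sentence count toward "absurdity."
    return len(final_entities - body_entities)
```
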

In the approach of Levi and colleagues, machine learning moves a little bit beyond that kind of human feature engineering. They employ Google's very popular "BERT" natural language processing tool, a deep learning network that has posted impressive scores on a variety of language-understanding benchmarks in recent years. 

They took a "pre-trained" version of BERT, and then they "fine-tuned" it by running it through another training session based on a special corpus made up of published articles of both satire and fake news. The dataset was built last year by researchers at the University of Maryland and includes 283 fake news articles and 203 satirical articles from January 2016 to October 2017 on the topic of US politics. The articles were curated by humans and labeled as either fake or satirical. The Onion was a source of satirical texts, but they included other sources so that the system wouldn't simply be picking up cues in the style of the source.
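
Fine-tuning a pre-trained BERT model for a two-way classification task follows a fairly standard recipe. The sketch below uses the Hugging Face Transformers library; the placeholder texts, labels, and hyperparameters are illustrative assumptions, not the authors' actual code.

```python
# Sketch of fine-tuning a pre-trained BERT model as a two-class
# (fake news vs. satire) sequence classifier with Hugging Face Transformers.
# The texts, labels, and settings here are illustrative assumptions.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

texts = ["placeholder fake-news article ...", "placeholder satirical article ..."]
labels = torch.tensor([0, 1])  # 0 = fake news, 1 = satire

# Tokenize the batch and take one illustrative gradient step.
batch = tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
outputs = model(**batch, labels=labels)  # returns loss and logits
outputs.loss.backward()
optimizer.step()
```
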

Levi and colleagues found that BERT does a pretty good job of accurately classifying articles as satire or fake news in the test set -- better, in fact, than the simple SVM approach of the kind used in the earlier research. 


Problem is, how it does that is mysterious. "While the pre-trained model of BERT gives the best result, it is not easily interpretable," they write. There is some kind of semantic pattern detection going on inside BERT, they hypothesize, but they can't say what it is. 

To deal with that, the authors also ran another analysis, where they classified the two kinds of writing based on a set of rules put together a decade ago by psychologist Danielle McNamara and colleagues, then at the University of Memphis, called "Coh-Metrix." The tool is meant to assess how easy or hard a given text is for a human to understand given the level of "cohesion" and "coherence" in the text. It's based on insights from the field of computational linguistics. 

The Coh-Metrix rules allow Levi and colleagues to count how many times in each document a certain kind of writing convention occurs. So, for example, the use of the first person singular pronoun is one of the most highly correlated elements in a satirical text. By contrast, at the top of the list of common constructions for fake news is what they call "agentless passive voice density." They use a technique called "principal component analysis," a mainstay of older machine learning, to pick out these occurrences, and then run the occurrences through a logistic regression classifier that separates satire from fake news.
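
A bare-bones version of that second pipeline, assuming a matrix of Coh-Metrix-style feature counts is already in hand (producing one requires the Coh-Metrix tool itself), might be sketched with scikit-learn as follows; random numbers stand in for the real features and labels.

```python
# Sketch of the second analysis: reduce per-document linguistic feature
# counts with principal component analysis, then separate satire from
# fake news with logistic regression. Random numbers stand in for the
# Coh-Metrix feature matrix and the article labels.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 100))    # stand-in for 100 Coh-Metrix features per article
y = rng.integers(0, 2, size=200)   # stand-in labels: 0 = fake news, 1 = satire

pipeline = make_pipeline(PCA(n_components=10), LogisticRegression(max_iter=1000))
pipeline.fit(X, y)
print(pipeline.predict(X[:5]))
```
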

This approach is less accurate as a classifier than BERT, they write, but it has the virtue of being more transparent. Hence, the common trade-off between accuracy and explainability is operating here just as it often is in today's deep learning. 

Levi and colleagues plan to pursue the research further, but this time with a much larger dataset of satirical and fake news articles, according to a communication between Levi and ZDNet.

What does all this mean? Maybe it will be a help to institutions that might want to properly separate satire from fake news, such as Facebook. The authors conclude that their findings "carry great implications with regard to the delicate balance of fighting misinformation while protecting free speech." 

At the very least, BERT can score better than prior methods as a classifier of satire versus fake news. 

Just don't confuse this for understanding on the part of machines. Some humans might not "get" satire, but plenty will. In the case of machines, they never really "get" it; we can only hope they can be made to count the salient patterns of satire and place it in the right bin.
