
Google aiming to improve Deepfake detection by releasing example videos

Google is aiming to curb the rise of Deepfake videos on the web by releasing a series of example videos to help researchers develop more accurate detection techniques.

The move was announced in a Google AI blog post, where the company confirmed that it has worked in collaboration with Jigsaw, a firm created within Google's incubator program.

Google is taking Deepfake detection seriously, as the video tech could be exploited in nefarious ways. We've already seen Deepfake videos used to mimic politicians, and this could be just the start.

Google has worked directly with actors for over a year to create a series of Deepfake videos solely for this research. Looking at the examples, some fakes are very easy to spot, while others are remarkably realistic, especially if you had no point of reference.

Google explained in the blog post:

"Google considers these issues seriously. As we published in our AI Principles last year, we are committed to developing AI best practices to mitigate the potential for harm and abuse.

"Last January, we announced the release of a dataset of synthetic speech in support of an international challenge to develop high-performance fake audio detectors. The dataset was downloaded by more than 150 research and industry organizations as part of the challenge, and it is now freely available to the public.

"Today, in collaboration with Jigsaw, we're announcing the release of a large dataset of visual deepfakes we've produced that has been incorporated into the new FaceForensics benchmark from the Technical University of Munich and the University Federico II of Naples, an effort that Google co-sponsors.

"The incorporation of these data into the FaceForensics video benchmark is in partnership with leading researchers, including Prof. Matthias Niessner, Prof. Luisa Verdoliva, and the FaceForensics team. You can download the data on the FaceForensics GitHub page."

It shouldn't take an expert to tell you that technology capable of making it look as though you said something on camera that you never did could damage your reputation or personal safety. By releasing its own datasets and example videos, Google clearly wants to help researchers develop detection systems for this impressive video AI tech.

Google has also confirmed that the video dataset will continue to grow, helping researchers build ever more accurate Deepfake detection systems as the technology evolves.



You're reading 9to5Google, where experts break news about Google and its surrounding ecosystem, day after day.


Author

Damien Wilde

Damien is a UK-based video producer for 9to5Google. Find him on Twitter: @iamdamienwilde. Email: damien@9to5mac.com

