Artificial Intelligence helps detect deepfake videos

Researchers tested the tool with an AI-based neural network on videos of former U.S. President Barack Obama. The neural network spotted over 90% of lip-sync fakes of Obama.


Researchers at Stanford University and UC Berkeley have devised a programme that uses artificial intelligence (AI) to detect deepfake videos.

The programme is said to spot 80% of fakes by recognising minute mismatches between the sounds people make and the shapes of their mouths, according to the study titled ‘Detecting Deep-Fake Videos from Phoneme-Viseme Mismatches’.

Deepfake videos can be made using face-swapping or lip-sync technologies. Face-swap videos can be convincing, but the technique is relatively crude and leaves digital or visual artifacts that a computer can detect.

Lip syncing is subtler and harder to spot. The technology manipulates a smaller part of the image and then synthesises lip movements that closely match the way a person’s mouth would move if they had said particular words. With enough samples of a person’s image and voice, a deepfake video-maker can manipulate an image to “say” anything, the team said.

The team built a tool that looks for mismatches between ‘visemes’, or mouth formations, and ‘phonemes’, or phonetic sounds.
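The idea can be illustrated with a minimal sketch, not the researchers’ actual code: sounds such as ‘P’, ‘B’ and ‘M’ normally require the lips to close, so if the audio contains one of these phonemes while the video shows an open mouth, the frame is suspect. The inputs below (an aligned phoneme sequence and a per-frame lip-gap measure) are assumed for illustration.

```python
# Illustrative sketch of phoneme-viseme mismatch detection (hypothetical inputs).
from typing import List, Tuple

BILABIALS = {"P", "B", "M"}   # phonemes whose viseme requires closed lips
OPEN_THRESHOLD = 0.15         # assumed normalised lip-gap above which lips count as open

def find_mismatches(
    phonemes: List[Tuple[float, float, str]],   # (start_sec, end_sec, phoneme) from audio alignment
    mouth_openness: List[Tuple[float, float]],  # (time_sec, normalised lip gap) from video tracking
) -> List[float]:
    """Return timestamps where the audio implies closed lips but the video shows them open."""
    mismatches = []
    for start, end, ph in phonemes:
        if ph not in BILABIALS:
            continue
        for t, gap in mouth_openness:
            if start <= t <= end and gap > OPEN_THRESHOLD:
                mismatches.append(t)
    return mismatches
```

A large number of such mismatches in a clip would suggest the lip movements were synthesised rather than recorded.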


The team tested the tool with an AI-based neural network on videos of former U.S. President Barack Obama. The neural network spotted over 90% of lip-sync fakes of Obama.

Although the programme may help detect visual anomalies, deepfake detection is a cat-and-mouse game. As deepfake techniques improve, fewer clues will be left behind, the team said.

Deepfakes could also lead to a spike in misinformation that is much harder to spot.
