Satire or harmful deception?

Fake videos — they’re just a bit of fun that we’re happy to spread around on social media, right? Whilst they help to sway a general election in the BBC’s dystopian future drama Years and Years, we’re not really fooled by them, are we?

Well, perhaps not yet, but they’ve got US politicians worried enough about the upcoming 2020 presidential election to hold official hearings on the matter.

Congress grapples with how to regulate deepfakes
“Now is the time for social media companies to put in place policies to protect users from this kind of misinformation, not in 2021 after viral deepfakes have polluted the 2020 elections,” Schiff said. “By then it will be too late.”

At the outset of the hearing, Schiff challenged the “immunity” given to platforms under Section 230 of the Communications Decency Act, asking panelists whether Congress should make changes to the law, which does not currently hold social media companies liable for the content on their platforms.

Another example.

Deepfakes: Imagine All the People
Of course this isn’t real. The video was made by a company called Canny AI, which offers services like “replace the dialogue in any footage” and “lip-sync your dubbed content in any language”. That’s cool and all — picture episodes of Game of Thrones or Fleabag where the actors automagically lip-sync along to dubbed French or Chinese — but the same technique can also be used to easily create what are referred to as deepfakes: videos made using AI techniques in which people appear to convincingly say and do things they never actually said or did.

A ‘fake’ arms race, for real

This essay from Cailin O’Connor, co-author of The Misinformation Age: How False Beliefs Spread, frames the issue of online misinformation as an arms race.

The information arms race can’t be won, but we have to keep fighting
What makes this problem particularly thorny is that internet media changes at dizzying speed. When radio was first invented, it too was subject to misinformation as a new form of media. But regulators quickly adapted, managing, for the most part, to subdue such attempts. Today, even as Facebook fights Russian meddling, WhatsApp has become host to rampant misinformation in India, leading to the deaths of 31 people in rumour-fuelled mob attacks over two years.

Participating in an informational arms race is exhausting, but sometimes there are no good alternatives. Public misinformation has serious consequences. For this reason, we should be devoting the same level of resources to fighting misinformation that interest groups are devoting to producing it. All social-media sites need dedicated teams of researchers whose full-time jobs are to hunt down and combat new kinds of misinformation attempts.

I know I’m a pretty pessimistic person generally, but this all sounds quite hopeless. Here’s how one group of people is responding to the challenge of misinformation and fake videos — by producing their own.

This deepfake of Mark Zuckerberg tests Facebook’s fake video policies
The video, created by artists Bill Posters and Daniel Howe in partnership with advertising company Canny, shows Mark Zuckerberg sitting at a desk, seemingly giving a sinister speech about Facebook’s power. The video is framed with broadcast chyrons that say “We’re increasing transparency on ads,” to make it look like it’s part of a news segment.

“We will treat this content the same way we treat all misinformation on Instagram,” a spokesperson for Instagram told Motherboard. “If third-party fact-checkers mark it as false, we will filter it from Instagram’s recommendation surfaces like Explore and hashtag pages.”