Online ‘truth decay’

Fake news is old news, but I came across a new phrase today — well, new to me, anyway.

You thought fake news was bad? Deep fakes are where truth goes to die
Citron, along with her colleague Bobby Chesney, began working on a report outlining the extent of the potential danger. As well as considering the threat to privacy and national security, both scholars became increasingly concerned that the proliferation of deep fakes could catastrophically erode trust between different factions of society in an already polarized political climate.

In particular, they could foresee deep fakes being exploited by purveyors of “fake news”. Anyone with access to this technology – from state-sanctioned propagandists to trolls – would be able to skew information, manipulate beliefs, and in so doing, push ideologically opposed online communities deeper into their own subjective realities.

“The marketplace of ideas already suffers from truth decay as our networked information environment interacts in toxic ways with our cognitive biases,” the report reads. “Deep fakes will exacerbate this problem significantly.”

Maybe I need to stop reading about fake news; it’s not good for my blood pressure. Just a couple more, then I’ll stop.

After murder and violence, here’s how WhatsApp will fight fake news
WhatsApp has announced it is giving 20 different research groups $50,000 to help it understand the ways that rumours and fake news spread on its platform. The groups are based around the world and will be responsible for producing reports on how the messaging app has impacted certain regions.

The range of areas being studied highlights the scale of the misinformation problem WhatsApp faces. One set of researchers from the UK and US will examine how misinformation can lead to disease outbreaks among elderly people, another will look at how information was shared on WhatsApp during the 2018 Brazilian elections, and a third is examining how posts can go viral on the messaging service.

Inside the British Army’s secret information warfare machine
This new warfare poses a problem that neither the 77th Brigade, the military, nor any democratic state has come close to answering yet. It is easy to work out how to deceive foreign publics, but far, far harder to know how to protect our own. Whether it is Russia’s involvement in the US elections, over Brexit, during the novichok poisoning or the dozens of other instances that we already know about, the cases are piling up. In information warfare, offence beats defence almost by design. It’s far easier to put out lies than to convince everyone that they’re lies. Disinformation is cheap; debunking it is expensive and difficult.

Even worse, this kind of warfare benefits authoritarian states more than liberal democratic ones. For states and militaries, manipulating the internet is trivially cheap and easy to do. The limiting factor isn’t technical, it’s legal. And whatever the overreaches of Western intelligence, they still operate in legal environments that tend to constrain where, and how widely, information warfare can be deployed. China and Russia have no such legal hindrances.

