James Bridle’s essay about those horrible YouTube videos went viral last year. It formed part of his new book, New Dark Age.
James Bridle on why technology is creating a new dark age
Bridle is already well-known for his creative critiques of modern technology, including the 2012 drone-tracking project Dronestagram, a salt circle that traps self-driving cars, and last year’s influential essay about creepy YouTube kids’ videos. New Dark Age integrates these critiques into a larger argument about the dangers of trusting computers to explain (and, increasingly, run) the world. As Bridle writes, “We know more and more about the world, while being less and less able to do anything about it.”
And, as he explains in this piece for The Observer, the problem with those YouTube videos has not gone away.
How Peppa Pig became a video nightmare for children
As a result, while many videos have since been removed from the website, uncountable numbers still remain. In March, Wired catalogued a slew of violent accounts and demonstrated that it was possible to go from a popular children’s alphabet video to a Minnie Mouse snuff film in 14 steps, just by following YouTube’s own recommendations. As of last week, Googling the title of one of the now-removed videos mentioned in the New York Times article (“PAW Patrol Babies Pretend to Die Suicide by Annabelle Hypnotized”) results in a link to a near-identical video still hosted on the site (“PAW PATROL Babies Pretend To Die MONSTER HANDS From MIRROR! Paw Patrol Animation Pups Save For Kids”), in which the adorable pups don a freakish clip-art monster mask to terrify one another before being lured off a rooftop by a haunted doll. Is “Save For Kids” supposed to read “Safe For Kids”? Either way, it is not. Playing whack-a-mole with search terms and banned accounts will never solve the entangled problems of copyright infringement, algorithmic recommendation, and ad-driven monetary incentives on a billion-view platform with no meaningful human oversight.
The weirdness of YouTube videos, the extremism of Facebook and Twitter mobs, the latent biases of algorithmic systems: all of these have one thing in common with the internet itself, which is that – with a few dirty exceptions – nobody intentionally designed them this way. This is perhaps the strangest and most salutary lesson we can learn from these examples, if we choose to learn at all. The weirdness and violence they produce seem to be in direct correlation to how little we understand their workings – and how much is hidden from us, deliberately or otherwise, by the demands of efficiency and ease of use, corporate and national secrecy, and sheer, planet-spanning scale. We live in an age characterised by the violence and breakdown of such systems, from global capitalism to the balance of the climate. If there is any hope for those exposed to their excesses from the cradle, it might be that they will be the first generation capable of thinking about global complexity in ways that increase, rather than reduce, the agency of all of us.