It’s obvious, when you think about it. Of course not all Neanderthals were ‘cavemen’ — half were women.
Sheanderthal – Aeon Essays Archaeology is no exception to biases against women’s interests across science and the humanities. Since the early days, a tendency to conceptualise humanity’s deep origins as populated literally by ‘cavemen’ has led to presumed male activities being presented as most visible and interesting. … In fact, for most of the subsequent 160 years, female Neanderthals – if featured at all – have tended to be fewer in number, peripherally located, and limited to ‘domesticated’ activities including childcare and skin-working. They are essentially scenery, in the words of the anthropologist Diane Gifford-Gonzalez, rather than active providers working on stone knapping or hunting and, in addition, they’re often fearfully lurking, hidden in dark grottos.
The world is a very different place now.
Why eye-catching graphics are vital for getting to grips with climate change – The Conversation One misconception about the climate crisis is that warming will be uniform across the world. Deniers cite cold fronts or blizzards as evidence that warming is exaggerated, or hark back to past heatwaves – such as that experienced by the UK in 1976 when temperatures exceeded 35°C – as proof that the scientists have got it wrong. Apart from this misleading conflation of weather (daily conditions) and climate (long-term conditions), this kind of argument misses the complex patchwork of effects that interact to create what gets reported in the headline figures. Maps can be an invaluable weapon against this misunderstanding. … [W]hat is needed are more universally accessible visualisations that are able to show where we’re heading in no uncertain terms.
How on earth would you protect future generations from something with a half-life of over 700 million years? Use your imagination.
The art of pondering Earth’s distant future – Scientific American We do not, of course, live in these imagined worlds. In this sense, they are unreal—merely fictions. However, our capacities to envision potential futures, and to feel empathy for those who may inhabit them, are very real. Depictions of tomorrow can have powerful, concrete effects on the world today. This is why deep time thought experiments are not playful games, but serious acts of intellectual problem-solving. It is why the safety case experts’ models of far future nuclear waste risks are uniquely valuable, even if they are, at the end of the day, mere approximations.
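To get a feel for just how slowly such material decays (a back-of-the-envelope sketch; I'm assuming the essay's "over 700 million years" refers to uranium-235, whose half-life is roughly 704 million years):

```python
# Back-of-the-envelope decay calculation: what fraction of a radioactive
# sample remains after t years, given its half-life?
# N(t)/N0 = 0.5 ** (t / half_life)

def fraction_remaining(years: float, half_life: float) -> float:
    """Fraction of the original sample left after `years` years."""
    return 0.5 ** (years / half_life)

# Assumption: the isotope in question is uranium-235 (~704 million years).
HALF_LIFE_U235 = 7.04e8  # years

# After 100,000 years -- far beyond the lifespan of any human
# institution, language, or warning sign -- essentially none of
# the material has decayed.
print(fraction_remaining(1e5, HALF_LIFE_U235))  # still ~99.99% remaining
```

In other words, any safety case has to hold over timescales on which whole civilisations come and go while the hazard barely diminishes, which is exactly why the thought experiments above matter.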
Remember when virtual reality was supposed to be the next all-encompassing, technological paradigm? Or the Internet of Things? Well, hold on to your VR goggles, because the metaverse is coming! Mark says so.
Facebook wants us to live in the metaverse – The New Yorker In a Facebook earnings call last week, Mark Zuckerberg outlined the future of his company. The vision he put forth wasn’t based on advertising, which provides the bulk of Facebook’s current profits, or on an increase in the over-all size of the social network, which already has nearly three billion monthly active users. Instead, Zuckerberg said that his goal is for Facebook to help build the “metaverse,” a Silicon Valley buzzword that has become an obsession for anyone trying to predict, and thus profit from, the next decade of technology.
Mark Zuckerberg wants to turn Facebook into a ‘metaverse company’ – what does that mean? – The Conversation In his quest to turn Facebook into a metaverse company, Zuckerberg is seeking to build a system where people move between virtual reality (VR), AR and even 2D devices, using realistic avatars of themselves where appropriate. Here they will work, socialise, share things and have other experiences, while still probably using the internet for some tasks such as searches which are similar to how we use it now. Owning not only the Facebook platform but also WhatsApp, Instagram and VR headset maker Oculus gives Zuckerberg a big head start in making this a reality.
Mark in the metaverse: Facebook’s CEO on why the social network is becoming ‘a metaverse company’ – The Verge The metaverse is a vision that spans many companies — the whole industry. You can think about it as the successor to the mobile internet. And it’s certainly not something that any one company is going to build, but I think a big part of our next chapter is going to hopefully be contributing to building that, in partnership with a lot of other companies and creators and developers. But you can think about the metaverse as an embodied internet, where instead of just viewing content — you are in it. And you feel present with other people as if you were in other places, having different experiences that you couldn’t necessarily do on a 2D app or webpage, like dancing, for example, or different types of fitness.
For context, it would be helpful to read Neal Stephenson’s 1992 Snow Crash or Ernest Cline’s Ready Player One from 2011, recently made into a movie of the same name. Exciting, dynamic sci-fi thrillers, but not futures that I’d like as my present.
The metaverse has always been a dystopian idea – VICE If it is coming, and if it is a big deal, then surprisingly few have paused to carefully consider the actual source of the metaverse, an undertaking which seems like a good idea, especially because that source is a deeply dystopian novel about a collapsed America that is overrun by violence and poverty. The metaverse was born in Neal Stephenson’s 1992 Snow Crash, where it serves as entertainment and an economic underbelly to a poor, desperate nation that is literally governed by corporate franchises. […]
Both books’ metaverses get at a common truism: there is something inherently dystopian in a future where humans abandon the real world in favor of an escapist and consumerist-oriented fully immersive digital one. To want to spend any serious amount of time in a metaverse, it must be made more appealing than reality, a feat which can be accomplished in one of two ways—either the world outside is already shitty enough to drive you into a glitch-prone, murder-filled alternative, or the fantasy of becoming someone else is compelling enough to consume you totally.
Is this all hype at the moment? Is there any real substance to these aspirations?
But as usual with such amorphous concepts and platform aspirations, there’s very little there. None of these luminaries, from Zuck to Nadella to Boz, seem capable of painting a coherent vision for what their particular metaverse will look or feel like, beyond gesturing at “presence” and a collection of apps, keywords, and old science fiction tropes. It is an odd vision built from a compendium of juvenile fantasies, perceived market opportunities, and overt dystopias.
Well, the author of that article might think so, but that’s not a view shared by venture capitalist Matthew Ball. He first wrote about the beginnings of the metaverse in 2018 …
The Metaverse: What it is, where to find it, who will build it, and Fortnite – MatthewBall.vc This is why considering Fortnite as video game or interactive experience is to think too small and too immediately. Fortnite began as a game, but it quickly evolved into a social square. Its players aren’t logging in to “play”, per se, but to be with their virtual and real-world friends. Teenagers in the 1970s to 2010s would come home and spend three hours talking on the phone. Now they talk to their friends on Fortnite, but not about Fortnite. Instead, they talk about school, movies, sports, news, boys, girls and more. After all, Fortnite doesn’t have a story or IP – the plot is what happens on it and who is there.
A framework for the metaverse – MatthewBall.vc Since [the 2020 update], a lot has happened. COVID-19 forced hundreds of millions into Zoomschool and remote work. Roblox became one of the most popular entertainment experiences in history. Google Trends’ index on the phrase “The Metaverse” set a new “100” in March 2021. Against this baseline, use of the term never exceeded seven from January 2005 through to December 2020. With that in mind, I thought it was time to do an update – one that reflects how my thinking has changed over the past 18 months and addresses the questions I’ve received during this time, such as “Is the Metaverse here?”, “When will it arrive?”, and “What does it need to grow?”.
Each of these buckets is critical to the development of the Metaverse. In many cases, we have a good sense of how each one needs to develop, or at least where there’s a critical threshold (say, VR resolution and frame rates, or network latency). But recent history warns us not to be dogmatic about any specific path to, or idealized vision of, a fully functioning Metaverse. The internet was once envisioned as the ‘Information Superhighway’ and ‘World Wide Web’. Neither of these descriptions were particularly helpful in planning for 2010 or 2020, least of all in understanding how the world and almost every industry would be transformed by the internet.
Very extensive, and I can’t say I follow even half of it, but it all sounds very exciting. It’s nice to see Second Life getting a mention as a “proto-metaverse”, but I wish it was more involved.
Second Life 2021 review, documentary from inside the social metaverse – YouTube Second Life is an open world 3D social virtual world, the precursor of the virtual reality or VR platforms we see today. But is it really on its way out of the Metaverse game as some believe? Or does it hold the keys to realizing the Metaverse as it is envisioned by many futurists and sci-fi authors? This short film seeks to answer those questions.
Hopefully this next social internet will result in a more positive future than the one envisaged in Keiichi Matsuda’s video, Hyper-reality, that I shared some time back.
Anyway, to round all this off, here are a couple of links from Dezeen on what real estate in this new digital universe might look like.
Artist Krista Kim sells “first NFT digital house in the world” for over $500,000 – Dezeen Kim designed the home in 2020 to be a space that embodied her philosophy of meditative design and worked with an architect to render the house using Unreal Engine, software that is commonly used to create video games. She describes the house, which overlooks a moody mountain range and features an open-plan design and floor to ceiling glass walls, as a “light sculpture”.
As we come to terms with the latest numbers and updates about everyone’s favourite virus, we can be thankful, at least, that we don’t have this to worry about:
Wasp-76b: The exotic inferno planet where it ‘rains iron’ – BBC News
Wasp-76b, as it’s known, orbits so close in to its host star, its dayside temperatures exceed 2,400C – hot enough to vaporise metals. The planet’s nightside, on the other hand, is 1,000 degrees cooler, allowing those metals to condense and rain out. It’s a bizarre environment, according to Dr David Ehrenreich from the University of Geneva. “Imagine instead of a drizzle of water droplets, you have iron droplets splashing down,” he told BBC News.
ESO telescope observes exoplanet where it rains iron – ESO
This strange phenomenon happens because the ‘iron rain’ planet only ever shows one face, its day side, to its parent star, its cooler night side remaining in perpetual darkness. Like the Moon on its orbit around the Earth, WASP-76b is ‘tidally locked’: it takes as long to rotate around its axis as it does to go around the star.
The most dangerous people on the internet this decade – Wired
In some cases these figures represent dangers not so much to public safety, but to the status quo. We’ve also highlighted actual despots, terrorists, and saboteurs who pose a serious threat to lives around the world. As the decade comes to a close, here’s our list of the people we believe best characterize the dangers that emerged from the online world in the last 10 years—many of whom show no signs of becoming any less dangerous in the decade to come.
It’s not just the people that are alarming, it’s the technology too, and what can be done with it, like this investigation into the smartphone tracking industry. (I didn’t even realise there was such an industry.)
Twelve million phones, one dataset, zero privacy – The New York Times
Every minute of every day, everywhere on the planet, dozens of companies — largely unregulated, little scrutinized — are logging the movements of tens of millions of people with mobile phones and storing the information in gigantic data files. The Times Privacy Project obtained one such file, by far the largest and most sensitive ever to be reviewed by journalists. It holds more than 50 billion location pings from the phones of more than 12 million Americans as they moved through several major cities, including Washington, New York, San Francisco and Los Angeles.
Each piece of information in this file represents the precise location of a single smartphone over a period of several months in 2016 and 2017.
But perhaps there’s some room for optimism? Here’s the New York Times again, gazing into their crystal ball.
No more phones and other tech predictions for the next decade – The New York Times
There has been a lot of gnashing and wailing about screen addiction, “sharenting” and the myriad other negative effects of all the devices we have come to rely on. (I am guilty as charged.) These gadgets have been designed to hook you, not unlike sugar or cigarettes or gambling or opiates. The well known techie Tristan Harris calls it “human downgrading” — and he’s right. But there is yet another opportunity here to push for design ethics, a movement that I think will gain traction as we all assess what our dives into digital have done to humanity. While our tech devices have, on the whole, been good for most people, there is a true business opportunity in making them work more efficiently and without a reliance on addiction. Whether we move toward more intuitively created tech that surrounds us or that incorporates into our bodies (yes, that’s coming), I am going to predict that carrying around a device in our hand and staring at it will be a thing of the past by 2030. And like the electrical grid we rely on daily, most tech will become invisible.
Cyborgs. So much promise, so little follow-through.
Transhumanism is tempting—until you remember Inspector Gadget – Wired
It’s comforting to think of the body as a machine we can trick out. It helps us ignore the strange fleshy aches that come with having a meat cage. It makes a fickle system—one we truly don’t understand—feel conquerable. To admit that the body (and mind that sits within it) might be far more complex than our most delicate, intricate inventions endangers all kinds of things: the medical industrial complex, the wellness industry, countless startups. But it might also open up new doors for better relationships with our bodies too: Disability scholars have long argued that the way we see bodies as “fixable” ultimately serves to further marginalize people who will never have the “standard operating system,” no matter how many times their parts are replaced or tinkered with.
I remember Professor Reading from Warwick University/Professor Warwick from Reading University being the talk of the town back in the 90s, when I was a student researching interactive art.
The Cyborg: Kevin Warwick is the world’s first human-robot hybrid – Vice
This isn’t just for fun: Warwick is certain that without upgrading, humans will someday fall behind the advances of the robots they’re building – or worse. “Someday we’ll switch on that machine, and we won’t be able to switch it off.” That might explain why he has very little technology at home, and counts The Terminator among his biggest influences. He doesn’t want to become a robot; he wants to be a better human.
It got me thinking about Stelarc, the Cypriot/Australian performance artist who visited our campus one day to deliver a most bizarre lecture. He demoed his extra hand and talked about the new ear he was planning on installing/implanting/growing.
Here’s Wired’s profile of him, from 2012.
For extreme artist Stelarc, body mods hint at humans’ possible future – Wired
He speaks excitedly about potential future applications for the ear. “The ear also might be a kind of distributed Bluetooth system, where if you telephone me on your cellphone, I’ll be able to speak to you through my ear,” Stelarc said. “But because the small speaker and the small receiver would be implanted in a gap between my teeth, I would hear your voice in my head. If I keep my mouth closed, only I hear your voice. If I open my mouth and someone else is close by, they might hear your voice seemingly coming from my mouth. And if I lip-sync, I’d look like some bad foreign movie.”
Several years and surgical procedures later, he’s still battling away.
Stelarc — Making art out of the human body – Labiotech
The final procedure will re-implant the microphone, which will be wirelessly connected to the Internet. The goal is to use it to listen in to what’s happening in other places of the world. “The ear is not for me. I’ve got two good ears to hear with,” the artist says. “For example, someone in Venice could listen to what my ear is hearing in Melbourne.”
Redefining the human body as “meat, metal and code”: An interview with Stelarc – Sleek Magazine
I left our meeting in awe of a man that, at the age of 71, is still at the foreground of technological art and posthumanist thought. Stelarc was making interactive internet art before the invention of Google (and dare I say it, before I could talk). Decades into his work and exploration of the limits of the human body, Stelarc continues to break and bend our conceptions of what constitutes a body, and fundamentally, what it means to be human.
Books won’t die – The Paris Review
In hindsight, we can see how rarely one technology supersedes another: the rise of the podcast makes clear that video didn’t doom audio any more than radio ended reading. Yet in 1913, a journalist interviewing Thomas Edison on the future of motion pictures recounted the inventor declaring confidently that “books … will soon be obsolete in the public schools.” By 1927 a librarian could observe that “pessimistic defenders of the book … are wont to contrast the actual process of reading with the lazy and passive contemplation of the screen or listening to wireless, and to prophecy the death of the book.” And in 1966, Marshall McLuhan stuck books into a list of outdated antiques: “clotheslines, seams in stockings, books and jobs—all are obsolete.”
Throughout the nineteenth century and again in the twentieth, every generation rewrote the book’s epitaph. All that changes is whodunnit.
And here’s a somewhat related article, asking us to see our current worries about technology ruining everything in a wider, historical context.
Pessimism v progress – The Economist
The New York Times sums up the encroaching gloom. “A mood of pessimism”, it writes, has displaced “the idea of inevitable progress born in the scientific and industrial revolutions.” Except those words are from an article published in 1979. Back then the paper fretted that the anxiety was “fed by growing doubts about society’s ability to rein in the seemingly runaway forces of technology”. …
The most important lesson is about technology itself. Any powerful technology can be used for good or ill. The internet spreads understanding, but it is also where videos of people being beheaded go viral. Biotechnology can raise crop yields and cure diseases—but it could equally lead to deadly weapons.
Technology itself has no agency: it is the choices people make about it that shape the world.
Well yes, to an extent. But are we completely free in our choices, or are we being manipulated a little?
I do think these Economist illustrations are very clever, though, like that one of Johnson’s V for victory sign.
An interesting visualisation of all the reasons why creating safe self-driving cars is harder than the hype would have us believe.
How does a self-driving car work? Not so great.
The autonomous vehicle industry has made lots of cheery projections: Robocars will increase efficiency and independence and greatly reduce traffic deaths, which occurred at the rate of about 100 a day for the past three years nationwide. But to deliver on those promises, the cars must work. Our reporting shows the technology remains riddled with problems.
There are flaws in how well cars can “see” and “hear,” and how smoothly they can filter conflicting information from different sensors and systems. But the biggest obstacle is that the vehicles struggle to predict how other drivers and pedestrians will behave among the fluid dynamics of daily traffic. […]
Gill Pratt, the head of the Toyota Research Institute, said in a speech earlier this year that it’s time to focus on explaining how hard it is to make a self-driving car work.
Octave Uzanne’s “The End of Books” (1894)
The end of books has been declared many times. Over a century before the invention of the e-reader and the meteoric rise of the audiobook and podcast, ardent French bibliophile Octave Uzanne (1851–1931) wrote a story, inspired by rapid advances in phonographic technology, imagining how printed text might disappear …
One of these men — called the Bibliophile — is asked his opinion on the future of books. He replies as follows:
If by books you are to be understood as referring to our innumerable collections of paper, printed, sewed, and bound in a cover announcing the title of the work, I own to you frankly that I do not believe (and the progress of electricity and modern mechanism forbids me to believe) that Gutenberg’s invention can do otherwise than sooner or later fall into desuetude as a means of current interpretation of our mental products.
“Printing”, he continues, “is…threatened with death by the various devices for registering sound which have lately been invented, and which little by little will go on to perfection.”
Check out these marvellous illustrations, or click through to read more for yourself in a digitised copy of Scribner’s Magazine.
Every restaurant-table will be provided with its phonographic collection; the public carriages, the waiting-rooms, the state-rooms of steamers, the halls and chambers of hotels will contain phonographotecks for the use of travellers. The railways will replace the parlor car by a sort of Pullman Circulating Library, which will cause travellers to forget the weariness of the way while leaving their eyes free to admire the landscapes through which they are passing.
Nothing wrong with indulging in a little nostalgia now and then, right?
Do you remember Suck.com, the web’s first and best snarky internet/pop-culture magazine? It owned the show in the 90s, and I was a huge fan. It stopped publishing in 2001, but for the last four years the “Suck, Again” project has been serialising its archives as a daily email newsletter, each article sent out twenty years to the day since the original.
Gen Xers rejoice: Suck.com comes back as a daily newsletter
Launched in 1995 by Wired staffers Joey Anuff and Carl Steadman — the same year as Salon.com and a year before Slate — Suck offered a daily riff on early Web culture, politics, pop culture and dating. It was done with a characteristically Gen X flair: arch, wry, ironic and smart. It was massively influential.
It’s fascinating to see just how deeply the internet and the other new technologies have become embedded into our societies since then — and just how ‘on the money’ the Suck.com team were in highlighting the issues that we’re still grappling with today, two decades later.
Like this from April 1999 — fifteen years before Alexa first appeared, for example.
In the December 1998 Wired, Negroponte – director of MIT’s Media Lab and sharp-dressed retailer of broader-bandwidth tomorrows to corporate America (and to the unwashed AOL millions in his best-selling book Being Digital) – announced that he was vacating his bully pulpit on the magazine’s end page. After six years there, the man, whose audio-animatronic prose is to literary style what the Parkinsonian tics of Disneyland’s Mr. Lincoln are to fluid human movement, had decided to step down.
Negroponte’s departure marks the end of an era when Magna Cartas for the Knowledge Age and Declarations of the Independence of Cyberspace were taken seriously, at least by the self-anointed “digital elite.” Oddly, Negroponte himself seems not to have noticed how retro his Jetsonian visions of digital butlers and supercomputing cufflinks seem in the politically turbulent, economically anxious late-’90s. At the end of a century that has witnessed acid rain and global warming, Bhopal and Chernobyl, he beckons us toward a future where technology never fails, corporations are always benign, and there’s a high-tech magic bullet for every social malady.
Here’s a more favourable piece on him for 21C magazine.
In his immaculate Italian suit, Nicholas Negroponte looks more like an international financier than one of the leading thinkers of the information age. His new book, Being Digital, may have propelled the head of MIT’s Media Lab into the spotlight, but is he a true visionary or just a well-connected hype merchant?
For all that I might now think that Nicholas Negroponte was a little wide of the mark politically, I’ve had his Being Digital book on my bookshelf since it was first published in 1995, just next to Douglas Coupland’s Microserfs. They’re still two of my favourites.
I loved the nostalgic/futuristic feel of these Wonders of the World Wide Web videos from Jo Luijten. They capture the look and feel of the technology of the time perfectly. Yes, it’s ludicrous to imagine these modern-day systems running this way, but if we jump ahead 30 years from now, what will we be laughing at then?
How about this for an unsettling glimpse into the future?
Hyper-reality Hyper-Reality presents a provocative and kaleidoscopic new vision of the future, where physical and virtual realities have merged, and the city is saturated in media.
It serves as the introduction to this fantastic overview of augmented reality in urban environments.
City Skins: Scenes from an augmented urban reality In one scene, the film’s protagonist-user (“Juliana”), becomes confused, even anxious, by a technical glitch which forces a reboot of her device while shopping for food, showing the viewer a brief glimpse of an un-augmented and totally featureless supermarket, clearly designed for the express purpose of accommodating a digital overlay. Matsuda’s film ultimately suggests that augmented reality may become so commonplace as to be essential to making sense of the world.
However futuristic it may seem, location-based augmented reality (virtual reality’s more successful but less hyped cousin) has been around for a while.
Growing interest in location-based AR projects, beginning in the late 1990s, can be in part attributed to the confluence of art and networking technologies which emerged out of the gradual popularization of the Internet and the influence of “net art.” Net art, according to critic Josephine Bosma, has often concerned itself with “the public domain as a virtual, mediated space consisting of both material and immaterial matter,” indicating a conceptual and ethical foundation for augmented reality’s radical leap from the space of the screen to a “hybrid space” mixing real and virtual elements.
Near the tail end of the 20th century, pseudonymous author and technologist Ben Russell released The Headmap Manifesto — a utopian vision of augmented reality referencing Australian aboriginal songlines and occult tomes, while pulling heavily from cybernetic theory and the Temporary Autonomous Zones of Hakim Bey. At turns both wildly hypothetical and eerily prescient, Headmap explores in-depth the implications of “location-aware” augmented reality as a kind of “parasitic architecture” affording ordinary people the chance to annotate and re-interpret their environment.
That might sound too abstract and theoretical, but here’s an example of a very real-world, poignant use of AR.
Following the release of the first iPhone and advancements in mobile phone cameras and processing power, AR began to move toward the more visually-dominant experiences we are familiar with today — in the process also opening up possibilities for more explicitly political projects. The group 4 Gentlemen, for instance, embraced AR as a tool for criticizing oppressive government policies in China. A collective of exiled Chinese artists and one American artist, 4 Gentlemen (taking their name from a group of intellectual dissidents central to the Tiananmen Student Protest in 1989) developed a series of works that digitally recreated in situ both the famous “Tank Man” image and the “Goddess of Democracy” statue — two symbols of the Tiananmen protest which have defined the struggle for democracy and human rights in China since.
Nothing but the truth: the legacy of George Orwell’s Nineteen Eighty-Four Orwell was both too pessimistic and not pessimistic enough. On the one hand, the west did not succumb to totalitarianism. Consumerism, not endless war, became the engine of the global economy. But he did not appreciate the tenacity of racism and religious extremism. Nor did he foresee that the common man and woman would embrace doublethink as enthusiastically as the intellectuals and, without the need for terror or torture, would choose to believe that two plus two was whatever they wanted it to be.
Nineteen Eighty-Four is about many things and its readers’ concerns dictate which one is paramount at any point in history. During the cold war, it was a book about totalitarianism. In the 1980s, it became a warning about technology. Today, it is most of all a defence of truth.
There will be preliminary hearings tomorrow, and then one of four things may happen: Johnson may appeal, the Crown Prosecution Service may allow Ball to continue with his own private proceedings, the CPS may take over the proceedings, or the CPS may shut them down on the basis that the prosecution is not in the public interest.
George Orwell jumped ahead 36 years. With his new TV series, Years and Years, Russell T. Davies only leaps from five to 15 years ahead, but his vision of the future feels likelier and far scarier as a result. Why do we, the audience, keep doing this to ourselves?
From Years and Years to Bird Box: why we turn to dystopian dramas in a crisis
Right now, it’s hard to think of a more prescient film than the 2006 thriller Children of Men with its depiction of environmental catastrophe and xenophobia; call me naive but not in a million years did I think we’d get so close to Alfonso Cuarón’s vision. Great art is supposed to reflect life, or so we are told. For me, the power of Years and Years lies not in its moments of high drama but in its more subtle drawing of the growing tensions between families, generations and cultures, and the line the series draws between now and the years to come. The future is here on TV, but the question is: have we got the stomach for it?
Pallasmaa, in his The Eyes of the Skin, noted that touch is a key part of remembering and understanding, that “tactile sense connects us with time and tradition: through impressions of touch we shake the hands of countless generations”. Is this reach for the switch merely functional, then? A light switch can stick around for decades, as with the doorhandle. When you touch the switch, you are subconsciously sensing the presence of others who have done so before you, and all those yet to do so. You are also directly touching infrastructure, the network of cables twisting out from our houses, from the writhing wires under our fingertips to the thicker fibres of cables, like limbs wrapped around each other, out into the countryside, into the National Grid.
If we always replace touch with voice activation, or simply by our presence entering a room, we are barely thinking or understanding, placing things out of mind. While data about those interactions exist, it is elsewhere, perceptible only to the eyes of the algorithm. We lose another element of our physicality, leaving no mark, literally. No sense of patina develops, except in invisible lines of code, datapoints feeding imperceptible learning systems of unknown provenance. As is often the case with unthinking smart systems, it is a highly individualising interface, revealing no trace of others.
I think I now need to re-read Bret Victor’s take on the future of interaction design that I mentioned earlier.
Following on from yesterday’s post about Joe Clark’s frustrations with various aspects of iPhone interface design (and smartphone design more broadly, I think), here are a few more.
First, Craig Mod on the new iPads — amazing hardware, infuriating software.
Getting the iPad to Pro The problems begin when you need multiple contexts. For example, you can’t open two documents in the same program side-by-side, allowing you to reference one set of edits, while applying them to a new document. Similarly, it’s frustrating that you can’t open the same document side-by-side. This is a weird use case, but until I couldn’t do it, I didn’t realize how often I did do it on my laptop. The best solution I’ve found is to use two writing apps, copy-and-paste, and open the two apps in split-screen mode.
Daily iPad use is riddled with these sorts of kludgey solutions.
Switching contexts is also cumbersome. If you’re researching in a browser and frequently jumping back and forth between, say, (the actually quite wonderful) Notes.app and Safari, you’ll sometimes find your cursor position lost, with the Notes.app document you were just editing occasionally resetting fully to the top of itself. For a long document, this is infuriating and makes every CMD-TAB feel dangerous. It doesn’t always happen, and the unpredictability of the behavior makes things worse. This interface “brittleness” makes you feel like you’re using an OS in the wrong way.
How we use the OS, the user interface, is key. Here’s Bret Victor on why future visions of interface design are missing a huge trick – our hands are more than just pointy fingers.
A brief rant on the future of interaction design Go ahead and pick up a book. Open it up to some page. Notice how you know where you are in the book by the distribution of weight in each hand, and the thickness of the page stacks between your fingers. Turn a page, and notice how you would know if you grabbed two pages together, by how they would slip apart when you rub them against each other.
Go ahead and pick up a glass of water. Take a sip. Notice how you know how much water is left, by how the weight shifts in response to you tipping it.
Almost every object in the world offers this sort of feedback. It’s so taken for granted that we’re usually not even aware of it. Take a moment to pick up the objects around you. Use them as you normally would, and sense their tactile response — their texture, pliability, temperature; their distribution of weight; their edges, curves, and ridges; how they respond in your hand as you use them.
There’s a reason that our fingertips have some of the densest areas of nerve endings on the body. This is how we experience the world close-up. This is how our tools talk to us. The sense of touch is essential to everything that humans have called “work” for millions of years.
Now, take out your favorite Magical And Revolutionary Technology Device. Use it for a bit. What did you feel? Did it feel glassy? Did it have no connection whatsoever with the task you were performing?
I call this technology Pictures Under Glass. Pictures Under Glass sacrifice all the tactile richness of working with our hands, offering instead a hokey visual facade.
And that was written in 2011. We’ve not got any further.
The YouTube video he links to isn’t there anymore, but this one from Microsoft works just as well.
The ‘future book’ is here, but it’s not what we expected
In the 1990s, Future Bookism hit a kind of beautiful fever pitch. We were so close. Brown University professor Robert Coover, in a 1992 New York Times op-ed titled “The End of Books,” wrote of the future of writing: “Fluidity, contingency, indeterminacy, plurality, discontinuity are the hypertext buzzwords of the day, and they seem to be fast becoming principles, in the same way that relativity not so long ago displaced the falling apple.”
Things didn’t quite work out that way; Amazon swallowed up pretty much all the burgeoning e-book market, with Kindles that are “as interactive as a potato”.
Yet here’s the surprise: We were looking for the Future Book in the wrong place. It’s not the form, necessarily, that needed to evolve—I think we can agree that, in an age of infinite distraction, one of the strongest assets of a “book” as a book is its singular, sustained, distraction-free, blissfully immutable voice. Instead, technology changed everything that enables a book, fomenting a quiet revolution. Funding, printing, fulfillment, community-building—everything leading up to and supporting a book has shifted meaningfully, even if the containers haven’t. Perhaps the form and interactivity of what we consider a “standard book” will change in the future, as screens become as cheap and durable as paper. But the books made today, held in our hands, digital or print, are Future Books, unfuturistic and inert as they may seem.
It’s an interesting take, for sure, but I can’t help but think this publishing revolution is marvellous for authors but, as a reader, I’m still pining for that promised interactivity. I don’t think it’s enough to say we’ve got Wikipedia and YouTube videos and email newsletters and somehow we can bundle them all up and consider the resulting unstructured, messy, unvalidated heap a Future Book.
Tim Carmody responds to Craig’s essay with a call to pursue an older approach.
Towards the Future Book
I think the utopian moment for the future of the book ended not when Amazon routed its vendors and competitors, although the Obama DOJ deserves some blame in retrospect for handing them that win. I think it ended when the Google Books settlement died, leading to Google Books becoming, basically, abandonware, when it was initially supposed to be the true Library of Babel.
For Tim, that goal—“the digitization of all printed matter, available for full-text search and full-image browsing on any device”—is where the future of the book should lie.
Will Self, meanwhile, is in a less positive mood.
The printed world in peril
As for my attempts to express the impact of the screen on the page, on the actual pages of literary novels, I now understand that these were altogether irrelevant to the requirement of the age that everything be easier, faster, and slicker in order to compel the attention of screen viewers. It strikes me that we’re now suffering collectively from a “tyranny of the virtual,” since we find ourselves unable to look away from the screens that mediate not just print but, increasingly, reality itself.
I’ve been a fan of his for many years now, his lack of optimism notwithstanding.
At the end of Bradbury’s Fahrenheit 451, the exiled hoboes return to the cities, which have been destroyed by the nuclear conflicts of the illiterate, bringing with them their head-borne texts, ready to restart civilization. And it’s this that seems to me the most prescient part of Bradbury’s menacing vision. For I see no future for the words printed on paper, or the art forms they enacted, if our civilization continues on this digital trajectory: there’s no way back to the future—especially not through the portal of a printed text.
Using Proterozoic geology as his unusual starting point, MIT Media Lab Director Joi Ito takes a look at the past, present and future of the web and cultural technology.
The next Great (Digital) Extinction As our modern dinosaurs crash down around us, I sometimes wonder what kind of humans will eventually walk out of this epic transformation. Trump and the populism that’s rampaging around the world today, marked by xenophobia, racism, sexism, and rising inequality, is greatly amplified by the forces the GDE has unleashed. For someone like me who saw the power of connection build a vibrant, technologically meshed ecosystem distinguished by peace, love, and understanding, the polarization and hatred empowered by the internet today is like watching your baby turning into the little girl in The Exorcist.
And here’s a look into the technological future with analyst Benedict Evans.
The end of the beginning The internet began as an open, ‘permissionless’, decentralized network, but then we got (and indeed needed) new centralised networks on top, and so we’ve spent a lot of the past decade talking about search and social. Machine learning and crypto give new and often decentralized, permissionless fundamental layers for looking at meaning, intent and preference, and for attaching value to those.
The End of the Beginning What’s the state of not just “the world of tech”, but tech in the world? The access story is now coming to an end, observes Evans, but the use story is just beginning: Most of the people are now online, but most of the money is still not. If we think we’re in a period of disruption right now, how will the next big platform shifts — like machine learning — impact huge swathes of retail, manufacturing, marketing, fintech, healthcare, entertainment, and more?
I’m getting impatient for the future; it’s not coming quickly enough.
Microsoft has been dreaming of a pocketable dual-screen Surface device for years
The Verge revealed last week that Microsoft wants to create a “new and disruptive” dual-screen device category to influence the overall Surface roadmap and blur the lines between what’s considered PC and mobile. Codenamed Andromeda, Microsoft’s project has been in development for at least two years and is designed to be a pocketable Surface device. Last week, Microsoft’s Surface chief, Panos Panay, appeared to tease just such a machine, built in collaboration with LG Display. We’re on the cusp of seeing the release of a folding, tablet-like device that Microsoft has actually been dreaming of for almost a decade.
That was earlier this month, but here’s something from 2015 — concepts from years ago and still years away.
Microsoft obsesses over giant displays and super thin tablets in future vision video
While everyone is busy flicking and swiping content from one device to another to get work done in the future, it’s nice to see there are still a few keyboards lying around. Microsoft also shows off a concept tablet that’s shaped like a book, complete with a stylus. The tablet features a bendable display that folds out into a bigger device. If such a tablet exists within the next 10 years, then I want to pre-order one right now.
But consider this:
Imagining Windows 95 running on a smartphone
Microsoft released its Windows 95 operating system to the world in 1995. 4096 created an amusing video that imagines a mobile edition of Windows 95 running on a Microsoft-branded smartphone. Move over Cortana, Clippy is making a comeback.
It’s all very amusing to think of such old technology in this new setting, but we’ll be laughing at how old-fashioned the iPhone X is soon enough, I’m sure.
Following on from that article about what it might be like to work until we’re 100, here’s another example of over-optimistic, blue-sky, work-based astrology, this time from Liselotte Lyngsø, a futurist from the Copenhagen-based consultancy Future Navigator.
This is what work will look like in 2100
Human potential, according to Lyngsø, is not best cultivated in today’s workplace structure, and many of the changes she predicts revolve around the ongoing effort to maximize the abilities of individuals. To that end, many of today’s workplace structures, such as the 9-to-5 workday, traditional offices, rigid hierarchies, and the very concept of retirement will change dramatically.
“I don’t think we’ll have work hours like we used to. Likewise I think we’ll replace retirement with breaks where we reorient and retrain, where the borders [of work] are blurred,” she says. “It’s also about creating a sustainable lifestyle so you don’t burn out, and you can keep working for longer.”
I’ve mentioned before that, when it comes to our time here, we don’t get long. But perhaps our lives — and our working lives, especially — will be longer than we think.
What if we have to work until we’re 100?
Retirement is becoming more and more expensive – and future generations may have to abandon the idea altogether. So what kinds of jobs will we do when we’re old and grey? Will we be well enough to work? And will anyone want to employ us?
Kindle v Glass, apps v text: the complicated future of books
It’s yet another way that our digital footprint is commercialised, marketed and analysed. Nothing is private anymore. Curling up on the couch with an e-book is not a solitary act but instead a way for corporations to learn about your habits and then sell you items you’ll think you need.
Despite it all, the book will survive and perhaps thrive, though our understanding of what a book can do and how it relates to the reader must change. Amazon remains a behemoth and yet a recent New Yorker feature on the company painted a picture of multinational disinterest in building a quality collection of books and literary culture (perhaps because they’re too busy selling garden tools, dildos and toys on their website).