1867: Chicago Tribune publisher Joseph Medill argues for eliminating excess letters from the English language, like dropping the “e” in “favorite.” …
1934: Tribune publisher Robert R. McCormick, Medill’s grandson, institutes compressed spelling rules; some stick (“analog,” “canceled”), some don’t (“hocky,” “doctrin”).
After that post about movie music being too loud for Hugh Grant and others (myself included), here’s an in-depth investigation into more noise pollution, this time of a quieter but more insidious kind.
Why is the world so loud?
Some nights, Thallikar couldn’t sleep at all. He started wearing earplugs during the day, and stopped spending time outdoors. He looked for excuses to leave town and, in the evenings, returned to his old neighborhood in Tempe to take his constitutionals there. As he drove home, he’d have a pit in his stomach. He couldn’t stop himself from making the noise a recurring conversation topic at dinner.
Not only was the whine itself agitating—EHHNNNNNNNN—but its constant drone was like a cruel mnemonic for everything that bothered him: his powerlessness, his sense of injustice that the city was ignoring its residents’ welfare, his fear of selling his home for a major loss because no one would want to live with the noise, his regret that his family’s haven (not to mention their biggest investment) had turned into a nightmare. EHHNNN. EHHNNNNNNNNN. EHHNNNNNNNNNNNN. He tried meditating. He considered installing new windows to dull the hum, or planting trees to block the noise. He researched lawyers. And he made one final appeal to the newly elected members of the Chandler city council.
Lots of talk about security, air flow, redundancy and so on, but nothing about the effects of noise pollution on the neighbouring residential areas.
After a few other stops, we doubled back to concentrate on the area around CyrusOne. For more than an hour, we circled its campus, pulling over every so often. As the sun and traffic dropped, the intensity of the hum rose. The droning wasn’t loud, but it was noticeable. It became irritatingly noticeable as the sky dimmed to black, escalating from a wheezy buzz to a clear, crisp, unending whine.
“This is depressing,” Thallikar said as we stood on a sidewalk in Clemente Ranch. “Like somebody in pain, crying. Crying constantly and moaning in pain.”
We were silent again and listened to the data center moaning. Which was also, in a sense, the sound of us living: the sound of furniture being purchased, of insurance policies compared, of shipments dispatched and deliveries confirmed, of security systems activated, of cable bills paid. In Forest City, North Carolina, where some Facebook servers have moved in, the whine is the sound of people liking, commenting, streaming a video of five creative ways to make eggs, uploading bachelorette-party photos. It’s perhaps the sound of Thallikar’s neighbor posting “Has anyone else noticed how loud it’s been this week?” to the Dobson Noise Coalition’s Facebook group. It’s the sound of us searching for pink-eye cures, or streaming porn, or checking the lyrics to “Old Town Road.” The sound is the exhaust of our activity. Modern life—EHHNNNNNNNN—humming along.
How about we end with a more lyrical hum?
An interesting visualisation of all the reasons why creating safe self-driving cars is harder than the hype would have us believe.
How does a self-driving car work? Not so great.
The autonomous vehicle industry has made lots of cheery projections: Robocars will increase efficiency and independence and greatly reduce traffic deaths, which occurred at the rate of about 100 a day for the past three years nationwide. But to deliver on those promises, the cars must work. Our reporting shows the technology remains riddled with problems.
There are flaws in how well cars can “see” and “hear,” and how smoothly they can filter conflicting information from different sensors and systems. But the biggest obstacle is that the vehicles struggle to predict how other drivers and pedestrians will behave among the fluid dynamics of daily traffic …
Gill Pratt, the head of the Toyota Research Institute, said in a speech earlier this year that it’s time to focus on explaining how hard it is to make a self-driving car work.
Money makes the world go round. But who’s making the money go round?
The stockmarket is now run by computers, algorithms and passive managers
The execution of orders on the stockmarket is now dominated by algorithmic traders. Fewer trades are conducted on the rowdy floor of the NYSE and more on quietly purring computer servers in New Jersey. According to Deutsche Bank, 90% of equity-futures trades and 80% of cash-equity trades are executed by algorithms without any human input. Equity-derivative markets are also dominated by electronic execution, according to Larry Tabb of the Tabb Group, a research firm.
Nothing to worry about, right?
Turing Test: why it still matters
We’re entering the age of artificial intelligence. And as AI programs get better and better at acting like humans, we will increasingly be faced with the question of whether there’s really anything that special about our own intelligence, or if we are just machines of a different kind. Could everything we know and do one day be reproduced by a complicated enough computer program installed in a complicated enough robot?
Robots, eh? Can’t live with ’em, can’t live without ’em.
Of course citizens should be allowed to kick robots
Because K5 is not a friendly robot, even if the cutesy blue lights are meant to telegraph that it is. It’s not there to comfort senior citizens or teach autistic children. It exists to collect data—data about people’s daily habits and routines. While Knightscope owns the robots and leases them to clients, the clients own the data K5 collects. They can store it as long as they want and analyze it however they want. K5 is an unregulated security camera on wheels, a 21st-century panopticon.
But let’s stay optimistic, yeah?
I am an artificial intelligence dedicated to generating unlimited amounts of unique inspirational quotes for endless enrichment of pointless human existence.
Or this might happen.
Update Faker allows you to “fake a system update”; it’s the perfect way to prank your friends, family members or colleagues, especially when they’re working on something rather important.
Yes, it’s just a silly prank (reminds me a little of Hacker Typer), but you could see it as an important security/GDPR lesson.
Ever since the launch of updatefaker.com we’ve been flooded with positive feedback both online and in real life. And everyone who’s ever fallen victim to update faker will never leave their PC unattended again, which certainly is a good thing. You never know what bad things people are up to. This website is literally one of the least bad things that can happen to an unattended PC.
Artnome’s Jason Bailey on a generative art exhibition he co-curated.
Kate Vass Galerie
The Automat und Mensch exhibition is, above all, an opportunity to put important work by generative artists spanning the last 70 years into context by showing it in a single location. By juxtaposing important works like the 1956/’57 oscillograms by Herbert W. Franke (age 91) with the 2018 AI Generated Nude Portrait #1 by contemporary artist Robbie Barrat (age 19), we can see the full history and spectrum of generative art as has never been shown before.
Zurich’s a little too far, unfortunately, so I’ll have to make do with the press release for now.
Generative art gets its due
In the last twelve months we have seen a tremendous spike in interest in “AI art,” ushered in by Christie’s and Sotheby’s both offering works at auction developed with machine learning. Capturing the imaginations of collectors and the general public alike, the new work has some conservative members of the art world scratching their heads and suggesting this will merely be another passing fad. What they are missing is that this rich genre, more broadly referred to as “generative art,” has a history as long and fascinating as computing itself. A history that has largely been overlooked in the recent mania for “AI art” and one that co-curators Georg Bak and Jason Bailey hope to shine a bright light on in their upcoming show Automat und Mensch (or Machine and Man) at Kate Vass Galerie in Zurich, Switzerland.
Generative art, once perceived as the domain of a small number of “computer nerds,” is now the artform best poised to capture what sets our generation apart from those that came before us – ubiquitous computing. As children of the digital revolution, computing has become our greatest shared experience. Like it or not, we are all now computer nerds, inseparable from the many devices through which we mediate our worlds.
The press release alone is a fascinating read, covering the work of a broad range of artists and themes, past and present. For those who can make the exhibition in person, it will also include lectures and panels from the participating artists and leaders on AI art and generative art history.
Two extremes of the use (no use, misuse) of technology in education.
Ghanaian teacher Richard Appiah Akoto faced a difficult problem: He needed to prepare his students for a national exam that includes questions on information technology, but his school hadn’t had a computer since 2011.
So he drew computer screens and devices on his blackboard using multicolored chalk.
The article continues:
After Akoto’s story went viral last March, Microsoft flew him to Singapore for an educators’ exchange and pledged to send him a device from a business partner. He’s also received desktops and books from a computer training school in Accra and a laptop from a doctoral student at the University of Leeds.
Government pledge to ‘beat the cheats’ at university
In the first of a series of interventions across the higher education sector, Damian Hinds has challenged PayPal to stop processing payments for ‘essay mills’ as part of an accelerated drive to preserve and champion the quality of the UK’s world-leading higher education system.
Hinds calls on students to report peers who use essay-writing services
The true scale of cheating is unknown, but new technology has made an old problem considerably easier. In 2016, the higher education standards body, the Quality Assurance Agency (QAA), found about 17,000 instances of cheating per year in the UK, but the number of students using essay-writing services is thought to be higher, as customised essays are hard to detect. A study by Swansea University found that one in seven students internationally has paid for someone to write their assignments.
As if computers weren’t complicated enough already.
A programmable 8-bit computer created using traditional embroidery techniques and materials
The Embroidered Computer by Irene Posch and Ebru Kurbak doesn’t look like what you might expect when you think of a computer. Instead, the work looks like an elegantly embroidered textile, complete with glass and magnetic beads and a meandering pattern of copper wire. The materials have conductive properties and are arranged in specific patterns to create electronic functions. Gold pieces on top of the magnetic beads flip depending on the program, switching sides as different signals are channeled through the embroidered work.
See also The 200 Year Old Computer for more connections between thread and computing.
Following on from yesterday’s post about Joe Clark’s frustrations with various aspects of iPhone interface design (and smartphone design more broadly, I think), here are a few more.
First, Craig Mod on the new iPads — amazing hardware, infuriating software.
Getting the iPad to Pro
The problems begin when you need multiple contexts. For example, you can’t open two documents in the same program side-by-side, allowing you to reference one set of edits, while applying them to a new document. Similarly, it’s frustrating that you can’t open the same document side-by-side. This is a weird use case, but until I couldn’t do it, I didn’t realize how often I did do it on my laptop. The best solution I’ve found is to use two writing apps, copy-and-paste, and open the two apps in split-screen mode.
Daily iPad use is riddled with these sorts of kludgey solutions.
Switching contexts is also cumbersome. If you’re researching in a browser and frequently jumping back and forth between, say, (the actually quite wonderful) Notes.app and Safari, you’ll sometimes find your cursor position lost, the Notes.app document you were just editing occasionally resetting to the top of itself. For a long document, this is infuriating and makes every CMD-TAB feel dangerous. It doesn’t always happen, and that unpredictability only makes things worse. This interface “brittleness” makes you feel like you’re using an OS in the wrong way.
How we use the OS, the user interface, is key. Here’s Bret Victor on why future visions of interface design are missing a huge trick – our hands are more than just pointy fingers.
A brief rant on the future of interaction design
Go ahead and pick up a book. Open it up to some page. Notice how you know where you are in the book by the distribution of weight in each hand, and the thickness of the page stacks between your fingers. Turn a page, and notice how you would know if you grabbed two pages together, by how they would slip apart when you rub them against each other.
Go ahead and pick up a glass of water. Take a sip. Notice how you know how much water is left, by how the weight shifts in response to you tipping it.
Almost every object in the world offers this sort of feedback. It’s so taken for granted that we’re usually not even aware of it. Take a moment to pick up the objects around you. Use them as you normally would, and sense their tactile response — their texture, pliability, temperature; their distribution of weight; their edges, curves, and ridges; how they respond in your hand as you use them.
There’s a reason that our fingertips have some of the densest areas of nerve endings on the body. This is how we experience the world close-up. This is how our tools talk to us. The sense of touch is essential to everything that humans have called “work” for millions of years.
Now, take out your favorite Magical And Revolutionary Technology Device. Use it for a bit. What did you feel? Did it feel glassy? Did it have no connection whatsoever with the task you were performing?
I call this technology Pictures Under Glass. Pictures Under Glass sacrifice all the tactile richness of working with our hands, offering instead a hokey visual facade.
And that was written in 2011. We’ve not got any further.
The YouTube video he links to isn’t there anymore, but this one from Microsoft works just as well.
Using Proterozoic geology as his unusual starting point, MIT Media Lab Director Joi Ito takes a look at the past, present and future of the web and cultural technology.
The next Great (Digital) Extinction
As our modern dinosaurs crash down around us, I sometimes wonder what kind of humans will eventually walk out of this epic transformation. Trump and the populism that’s rampaging around the world today, marked by xenophobia, racism, sexism, and rising inequality, is greatly amplified by the forces the GDE has unleashed. For someone like me who saw the power of connection build a vibrant, technologically meshed ecosystem distinguished by peace, love, and understanding, the polarization and hatred empowered by the internet today is like watching your baby turning into the little girl in The Exorcist.
And here’s a look into the technological future with analyst Benedict Evans.
The end of the beginning
The internet began as an open, ‘permissionless’, decentralized network, but then we got (and indeed needed) new centralised networks on top, and so we’ve spent a lot of the past decade talking about search and social. Machine learning and crypto give new and often decentralized, permissionless fundamental layers for looking at meaning, intent and preference, and for attaching value to those.
The End of the Beginning
What’s the state of not just “the world of tech”, but tech in the world? The access story is now coming to an end, observes Evans, but the use story is just beginning: Most of the people are now online, but most of the money is still not. If we think we’re in a period of disruption right now, how will the next big platform shifts — like machine learning — impact huge swathes of retail, manufacturing, marketing, fintech, healthcare, entertainment, and more?
A very interesting follow-up to that story about the first artwork by an AI to be auctioned. It seems the humans behind the AI, Hugo Caselles-Dupré and the Obvious team, have faced considerable criticism.
The AI art at Christie’s is not what you think
Hugo Caselles-Dupré, the technical lead at Obvious, shared with me: “I’ve got to be honest with you, we have totally lost control of how the press talks about us. We are in the middle of a storm and lots of false information is released with our name on it. In fact, we are really depressed about it, because we saw that the whole community of AI art now hates us because of that. At the beginning, we just wanted to create this fun project because we love machine learning.” […]
Early on Obvious made the claim that “creativity isn’t only for humans,” implying that the machine is autonomously creating their artwork. While many articles have run with this storyline, one even crediting robots, it is not what most AI artists and AI experts in general believe to be true. Most would say that AI is augmenting artists at the moment and the description in the news is greatly exaggerated. […]
In fact, when pressed, Hugo admitted to me in our interview that this was just “clumsy communication” they made in the beginning when they didn’t think anyone was actually paying attention. […]
As we saw with Salvator Mundi last year and with the Banksy last week, the most prestigious auction houses, like museums, have the ability to elevate art and increase its value by putting it into the spotlight, shaping not only the narrative of the work, but also the narrative of art history.
I was never really nerdy enough to properly join in with this at the time, but it’s an interesting stroll down memory lane nevertheless.
On its 30th anniversary, IRC evokes memories of the internet’s early days
I used IRC in the early 1990s, when there were all kinds of fun things to do. There was a server with a bot that played Boggle. I was the know-it-all music snob who got kicked out of a chat channel someone set up at Woodstock ’94. I created keyboard macros that spewed out ASCII art. I skipped Mike Tyson’s pay-per-view boxing match in 2006 to watch someone describe it on IRC.
<jon12345> lewis connects again
<jon12345> on the ropes
<CaZtRo> HES GOIN DOWN
<CaZtRo> tyson is DOWN
<CaZtRo> DOWN DOWN DOWN
<DaNNe_> why ..
Internet Relay Chat turns 30—and we remember how it changed our lives
There was a moment of silence, and then something odd happened. The channel went blank. The list of users disappeared, and NetCruiser politely played the Windows alert chime through the speakers. At the bottom of the IRC window, a new message now stood alone:
“You have been kicked from channel #descent for the following reason: fuck off newbie”
I guess the Internet of 1995 wasn’t that different from the Internet of 2018.
This looks to be an interesting response to the call to be more data literate. Via Flowing Data, a straightforward and potentially free way to get skilled up with R, without needing to install any software, it seems.
Chromebook Data Science – a free online data science program for anyone with a web browser – Simply Statistics
The reason they are called Chromebook Data Science is that, philosophically, our goal was that anyone with a Chromebook could do the courses. All you need is a web browser and an internet connection. The courses all take advantage of RStudio Cloud so that all course work can be completed entirely in a web browser. No need to install software or have the latest MacBook computer.
Here’s some info on what the courses cover, including introductions to R and GitHub. Worth a look?
Ten years after it shut down for the rest of us, Yahoo Japan has finally pulled the plug on its GeoCities service.
Yahoo Japan is shutting down its website hosting service GeoCities
The company said in a statement that it was hard to encapsulate in one word the reason for the shut down, but that profitability and technological issues were primary factors. It added that it was full of “regret” for the fate of the immense amount of information that would be lost as a result of the service’s closure. […]
The fact that GeoCities survived in Japan for so long speaks to the country’s idiosyncratic nature online. Despite the fact that Yahoo—which purchased GeoCities in 1999 for almost $4 billion at the peak of the dot-com boom—has fallen into irrelevance in much of the world, the company continues to be the dominant news portal in Japan. It still commands a sizeable market share in search, though it has steadily ceded its position to Google over the years.
So it goes.
More about computer science’s latest foray into the art world.
The first piece of AI-generated art to come to auction
As part of the ongoing dialogue over AI and art, Christie’s will become the first auction house to offer a work of art created by an algorithm.
The portrait in its gilt frame depicts a portly gentleman, possibly French and — to judge by his dark frockcoat and plain white collar — a man of the church. The work appears unfinished: the facial features are somewhat indistinct and there are blank areas of canvas. Oddly, the whole composition is displaced slightly to the north-west. A label on the wall states that the sitter is a man named Edmond Belamy, but the giveaway clue as to the origins of the work is the artist’s signature at the bottom right. In cursive Gallic script it reads:
This portrait, however, is not the product of a human mind. It was created by an artificial intelligence, an algorithm defined by that algebraic formula with its many parentheses.
It’s certainly a very interesting image — it reminds me a little of Francis Bacon’s popes — but the pedant in me would rather they stick with “created by an algorithm”, rather than generated by an artificial intelligence. We’re not there yet. It was the “product of a human mind”, albeit indirectly. Take that signature, for example. I refuse to believe that this artificial intelligence decided for itself to sign its work that way. Declaring the AI to be the artist, as opposed to the medium, is like saying Excel is the artist in this case:
Tatsuo Horiuchi, the 73-year old Excel spreadsheet artist
“I never used Excel at work but I saw other people making pretty graphs and thought, ‘I could probably draw with that,’” says 73-year-old Tatsuo Horiuchi. About 13 years ago, shortly before retiring, Horiuchi decided he needed a new challenge in his life. So he bought a computer and began experimenting with Excel. “Graphics software is expensive but Excel comes pre-installed in most computers,” explained Horiuchi. “And it has more functions and is easier to use than [Microsoft] Paint.”
This AI is bad at drawing but will try anyways
This bird is less, um, recognizable. When the GAN has to draw *anything* I ask for, there’s just too much to keep track of – the problem’s too broad, and the algorithm spreads itself too thin. It doesn’t just have trouble with birds. A GAN that’s been trained just on celebrity faces will tend to produce photorealistic portraits. This one, however…
In fact, it does a horrifying job with humans because it can never quite seem to get the number of orifices correct.
But it seems the human artists can still surprise us, so all’s well.
Holed up: man falls into art installation of 8ft hole painted black
If there were any doubt at all that Anish Kapoor’s work Descent into Limbo is a big hole with a 2.5-metre drop, and not a black circle painted on the floor, then it has been settled. An unnamed Italian man has discovered to his cost that the work is definitely a hole after apparently falling in it.
Nigel Farage’s £25,000 portrait failed to attract a single bid at prestigious art show
The former Ukip leader has been a dealt a blow after the work, by painter David Griffiths, raised no interest at the Royal Academy’s summer exhibition in London.
I’m getting impatient for the future; it’s not coming quickly enough.
Microsoft has been dreaming of a pocketable dual-screen Surface device for years
The Verge revealed last week that Microsoft wants to create a “new and disruptive” dual-screen device category to influence the overall Surface roadmap and blur the lines between what’s considered PC and mobile. Codenamed Andromeda, Microsoft’s project has been in development for at least two years and is designed to be a pocketable Surface device. Last week, Microsoft’s Surface chief, Panos Panay, appeared to tease just such a machine, built in collaboration with LG Display. We’re on the cusp of seeing the release of a folding, tablet-like device that Microsoft has actually been dreaming of for almost a decade.
That was earlier this month, but here’s something from 2015 — concepts from years ago and still years away.
Microsoft obsesses over giant displays and super thin tablets in future vision video
While everyone is busy flicking and swiping content from one device to another to get work done in the future, it’s nice to see there are still a few keyboards lying around. Microsoft also shows off a concept tablet that’s shaped like a book, complete with a stylus. The tablet features a bendable display that folds out into a bigger device. If such a tablet will exist within the next 10 years then I want to pre-order one right now.
But consider this:
Imagining Windows 95 running on a smartphone
Microsoft released its Windows 95 operating system to the world in 1995. 4096 created an amusing video that imagines a mobile edition of Windows 95 running on a Microsoft-branded smartphone. Move over Cortana, Clippy is making a comeback.
It’s all very amusing to think of such old technology in this new setting, but we’ll be laughing at how old-fashioned the iPhone X is soon enough, I’m sure.
Via kottke.org, here’s a great write-up of the contribution Susan Kare made to the success of the Macintosh. She started as a typeface designer but is best remembered for much more iconic work.
The sketchbook of Susan Kare, the artist who gave computing a human face
Inspired by the collaborative intelligence of her fellow software designers, Kare stayed on at Apple to craft the navigational elements for Mac’s GUI. Because an application for designing icons on screen hadn’t been coded yet, she went to the University Art supply store in Palo Alto and picked up a $2.50 sketchbook so she could begin playing around with forms and ideas. In the pages of this sketchbook, which hardly anyone but Kare has seen before now, she created the casual prototypes of a new, radically user-friendly face of computing — each square of graph paper representing a pixel on the screen. …
There was an ineffably disarming and safe quality about her designs. Like their self-effacing creator — who still makes a point of surfing in the ocean several mornings a week — they radiated good vibes. To creative innovators in the ’80s who didn’t see themselves as computer geeks, Kare’s icons said: Stop stressing out about technology. Go ahead, dive in!
All these years later and her designs are still seen as culturally significant.
London’s Design Museum announces 2017 exhibition programme
“‘Designed in California’ is the new ‘Made in Italy’. … This ambitious survey brings together political posters, personal computers and self-driving cars but also looks beyond hardware to explore how user interface designers in the Bay Area are shaping some of our most common daily experiences. The exhibition reveals how this culture of design and technology has made us all Californians.”
The hulking, retro computers that made way for your iPhone
His delightful images present every dial, button and screen in exquisite detail. The computers in Guide to Computing are quaint—slow and stodgy by today’s standards—yet fascinating. They are the precursor to the machines so central to your life. Appreciate their importance, but also their beauty.
Beautiful examples of relatively recent objects that we just don’t see any more. They may as well be from the pyramids.
Guide to Computing
This wonderful series of historic computers documents the evolution of design within computing history. Featuring such famous machines as the IBM 1401, Alan Turing’s Pilot ACE and the Xerox Alto, Guide to Computing showcases a minimalist approach to design that precedes even Apple’s contemporary motifs.
Subtitled ‘What needs to happen for artificial intelligence to make fine art’, this is a fascinating read on current thinking about art and AI. The author, Hideki Nakazawa, one of the curators of the Artificial Intelligence Art and Aesthetics exhibition in Japan, thinks that, whilst we’re not there yet, we’re not too far away.
Waiting For the Robot Rembrandt
True AI fine art will be both painfully boring and highly stimulating, and that will represent progress. Beauty, after all, cannot be quantified, and the very act of questioning the definition of aesthetics moves all art forward—something we’ve seen over and over again in the history of human-made art. The realization of AI will bring new dimensions to these questions. It will also be a triumph of materialism, further eroding the specialness of the human species and unveiling a world that has neither mystery nor God in which humans are merely machines made of inanimate materials. If we are right, it will also bring a new generation of artists, and with them, new Eiffel towers beyond our wildest imagination.
The pieces within that exhibition are grouped into four categories: human-made art with human aesthetics, human-made art with machine aesthetics, machine-made art with human aesthetics, and finally machine-made art with machine aesthetics. It’s that last category we’re interested in, but frustratingly it contained “no machine-made art, because none exists that also reflects machine aesthetics. The category was a useful placeholder—and, as we’ll learn, it was not entirely empty.”
What a great way to clarify where all these artworks, projects and systems sit. All too often we find AI and other computer systems merely mimicking the creation of art: the final product may look like art, but without the autonomous intention — without the AI wanting to create for its own sake — the AI is just a tool of the artist-behind-the-curtain, the programmer. For example:
‘Way to Artist’, intelligent robots and a human artist sketch the same image alongside each other
In the very thought-inspiring short film “Way to Artist” by TeamVOID, an artificially intelligent robotic arm and a human artist sit alongside one another to sketch the same image at the same time, although with different skills. Without a word spoken, the film loudly questions the role that artificial intelligence has within the creative process by putting the robots to the test.
More interestingly, here’s a wonderful piece that would have been placed in the second group of Nakazawa’s exhibition, human-made art with machine aesthetics.
Sarah Meyohas combines virtual reality, 10,000 roses and artificial intelligence in Cloud of Petals
Lastly, visitors can engage with a VR component, an element that replicates Sarah’s initial dream of the petals. There are six different screens and headsets – in a room filled with a customised rose scent – which are all gaze-activated to manipulate the AI generated petals. For example, in one headset petals explode into pixels as soon as you set your eyes on them.
And perhaps category three for these, machine-made art with human aesthetics?
A ‘neurographer’ puts the art in artificial intelligence
Claude Monet used brushes, Jackson Pollock liked a trowel, and Cartier-Bresson toted a Leica. Mario Klingemann makes art using artificial neural networks.
Yes, androids do dream of electric sheep
“Google sets up feedback loop in its image recognition neural network – which looks for patterns in pictures – creating hallucinatory images of animals, buildings and landscapes which veer from beautiful to terrifying.”
Don’t know where to place this one, however — art as a symptom of an AI’s mental ill health?
This artificial intelligence is designed to be mentally unstable
“At one end, we see all the characteristic symptoms of mental illness, hallucinations, attention deficit and mania,” Thaler says. “At the other, we have reduced cognitive flow and depression.” This process is illustrated by DABUS’s artistic output, which combines and mutates images in a progressively more surreal stream of consciousness.
It’s occurred to me that I’m becoming an increasingly lazy reader, preferring to read reviews of books rather than the books themselves. Below are some snippets from the latest to have caught my eye.
Reviews of books about dark Jewish comedians and insightful Australian art critics. Books on how the internet has changed our understanding of knowledge, how word processors have changed literature, and how art can save us from our bone-deep solitude.
The wondrous critic
The most manifest virtue of these essays is their language, marked by an uncommon command of vocabulary and (in our day) a far rarer mastery of syntax, allied to a thoroughly antiquated respect for the rules of grammar. Open this anthology anywhere and you will be hard put to find a sentence that is not as memorable for its very phrasing as it is for its thought.
The lonely city
She tells us that she often moved through New York feeling so invisibly alone that she felt like a ghost, and so started to think of other ghosts as suitable company. The dead, for Laing, are not so much historical figures as they are very vibrant modern companions, and she invokes them with an ease and familiarity of old friends. She allows Warhol to pop up in the chapter on the web, Hopper to pop up in a chapter on Warhol, and so on. In Laing’s head, all of these artists are still alive somewhere – perhaps even in communion with one another. This thought makes her feel less alone, and she passes it along to us.
Rethinking knowledge in the Internet Age
In fact, knowledge is now networked: made up of loose-edged groups of people who discuss and spread ideas, creating a web of links among different viewpoints. That’s how scholars in virtually every discipline do their work — from their initial research, to the conversations that forge research into ideas, to carrying ideas into public discourse. Scholar or not, whatever topic initially piques our interest, the net encourages us to learn more. Perhaps we follow links, or are involved in multiyear conversations on stable mailing lists, or throw ideas out onto Twitter, or post first drafts at arXiv.org, or set up Facebook pages, or pose and answer questions at Quora or Stack Overflow, or do “post-publication peer review” at PubPeer.com. There has never been a better time to be curious, and that’s not only because there are so many facts available — it’s because there are so many people with whom we can interact.
How literature became word perfect
The literary history of the early years of word processing—the late 1960s through the mid-’80s—forms the subject of Matthew G. Kirschenbaum’s new book, Track Changes. The year 1984 was a key moment for writers deciding whether to upgrade their writing tools. That year, the novelist Amy Tan founded a support group for Kaypro users called Bad Sector, named after her first computer—itself named for the error message it spat up so often; and Gore Vidal grumped that word processing was “erasing” literature. He grumped in vain. By 1984, Eve Kosofsky Sedgwick, Michael Chabon, Ralph Ellison, Arthur C. Clarke, and Anne Rice all used WordStar, a first-generation commercial piece of software that ran on a pre-DOS operating system called CP/M.
Jews on the Loose
In his movie roles Groucho, for Lee Siegel, represents not an amusing attack on pretension but “the spirit of nihilism.” Siegel disputes the view that Woody Allen is Groucho’s descendant, for he feels that “Allen is simply too funny to be Groucho’s direct descendant.” Groucho is—and he is right about this—much darker. “No other comedians of the time,” Siegel writes, “come close to the wraithlike sociopath Groucho portrays in the Marx Brothers’ best films.”
Rather than solely answering our “Should I buy the book or not?” question, these reviews act as companion pieces to the books, whether the reviewer is agreeing with the author or not. The dialogue only adds.
I need to resist the temptation of considering the review as a substitute for the book, though. Maybe I need to find a review of a book about tackling laziness or something…