Dalí’s back

Another art and AI post, but with a difference. An exhibition at the Dalí Museum in Florida, with a very special guest.

Deepfake Salvador Dalí takes selfies with museum visitors
The exhibition, called Dalí Lives, was made in collaboration with the ad agency Goodby, Silverstein & Partners (GS&P), which made a life-size re-creation of Dalí using the machine learning-powered video editing technique known as deepfakes. Using archival footage from interviews, GS&P pulled over 6,000 frames and used 1,000 hours of machine learning to train the AI algorithm on Dalí’s face. His facial expressions were then imposed over an actor with Dalí’s body proportions, and quotes from his interviews and letters were synced with a voice actor who could mimic his unique accent, a mix of French, Spanish, and English.

Behind the Scenes: Dali Lives

Whilst we’re talking of Dalí, let’s go behind the scenes of that famous portrait of him by Philippe Halsman. No flashy, cutting-edge technology this time, just wire, buckets and cats.

dalis-back-2

The story behind the surreal photograph of Salvador Dalí and three flying cats
The original, unretouched version of the photo reveals its secrets: An assistant held up the chair on the left side of the frame, wires suspended the easel and the painting, and the footstool was propped up off the floor. But there was no hidden trick to the flying cats or the stream of water. For each take, Halsman’s assistants—including his wife, Yvonne, and one of his daughters, Irene—tossed the cats and the contents of a full bucket across the frame. After each attempt, Halsman developed and printed the film while Irene herded and dried off the cats. The rejected photographs had notes such as “Water splashes Dalí instead of cat” and “Secretary gets into picture.”

dalis-back-3

Time.com have a great interview with Philippe Halsman’s daughter Irene on what that shoot was like.

The story behind the surrealist ‘Dali Atomicus’ photo
“Philippe would count to four. One, two, three… And the assistants threw the cats and the water. And on four, Dali jumped. My job at the time was to catch the cats and take them to the bathroom and dry them off with a big towel. My father would run upstairs where the darkroom was, develop the film, print it, run downstairs and he’d say not good, bad composition, this was wrong, that was wrong. It took 26 tries to do this. 26 throws, 26 wiping of the floors, and 26 times catching the cats. And then, there it was, finally, this composition.”

Coincidentally, Artnome’s Jason Bailey has been using AI and deep learning to colorize old black-and-white photos of artists, including that one of Dalí.

50 famous artists brought to life with AI
When I was growing up, artists, and particularly twentieth century artists, were my heroes. There is something about only ever having seen many of them in black and white that makes them feel mythical and distant. Likewise, something magical happens when you add color to the photo. These icons turn into regular people who you might share a pizza or beer with.

More fake face fun

More fun with fake faces. Here’s a breakdown from Eric Jang of Snapchat’s bizarre new filter.

Fun with Snapchat’s gender swapping filter
Snapchat’s new gender-bending filter is a source of endless fun and laughs at parties. The results are very pleasing to look at. As someone who is used to working with machine learning algorithms, it’s almost magical how robust this feature is.

I was so duly impressed that I signed up for Snapchat and fiddled around with it this morning to try and figure out what’s going on under the hood and how I might break it.

Eric takes it through its paces to learn a little more about how it’s generated. I hadn’t appreciated that it worked in real time like that.

beanie     turn     blonde

Update 15/05/2019

Classic FM got in on the act, running portraits of famous composers through the filter, with varying results.

We used Snapchat’s gender-swapping filter on famous composers… and the results are terrifying
5. Beethoven

more-fake-face-fun-5

Female Ludwig is a very sulky teenager.

Boy meets girl meets robot

So what to read next, after Dune? More sci-fi? Ian McEwan’s “retrofuturist family drama” seems to be getting some attention.

Man, woman, and robot in Ian McEwan’s new novel
It’s London, 1982. The Beatles have reunited (to mixed reviews), Margaret Thatcher has just lost the Falkland Islands to Argentina, and Sir Alan Turing, now seventy, is the presiding spirit of a preemie Information Age. People have already soured on the latest innovations, among them “speaking fridges with a sense of smell” and driverless cars that cause multinational gridlock. “The future kept arriving,” Charlie ruminates. “Our bright new toys began to rust before we could get them home, and life went on much as before.”

Buyer’s remorse is a recurring theme in Ian McEwan’s witty and humane new novel, “Machines Like Me” (Nan A. Talese), a retrofuturist family drama that doubles as a cautionary fable about artificial intelligence, consent, and justice. Though steeped in computer science, from the P-versus-NP problem to DNA-inspired neural networks, the book is not meant to be a feat of hard-sci-fi imagineering; McEwan’s aim is to probe the moral consequences of what philosophers call “the problem of other minds.”

In “Machines Like Me”, Ian McEwan asks an age-old question
Amid all the action, there are sober passages of philosophical discussion between Charlie and Adam. But in parts the novel is funny, too. To Charlie’s disgust, Adam’s encyclopedic recall of Shakespeare makes him seem the better catch to Miranda’s father, a writer, who assumes Charlie is the robot, because he isn’t interested in books.

Late in the story it emerges that other androids around the world are committing suicide in horror at the behaviour of their flesh-and-blood masters. Adam wonders about the “mystery of the self” and his fear that he is “subject to a form of Cartesian error”. Strip away the counterfactual wrapping and “Machines Like Me” is ultimately about the age-old question of what makes people human. The reader is left baffled and beguiled.

Machines Like Me by Ian McEwan review – intelligent mischief
This is the mode of exposition in which he [Kipling] seems to address the reader from a position of shared knowledge, sketching out an unfamiliar reality through hints and allusions, but never explaining it too completely. This inside-out style is the default mode of modern SF. It is economical and of special usefulness to makers of strange worlds, plunging a reader into a new reality and leaving them space to feel like a participant in its creation. It’s the opposite technique to that of McEwan’s narrator, who explicitly sets out his world, overexplains the historical context and never turns down a chance to offer an essayistic digression.

To my taste, this is a flat-footed way of doing sci-fi.

‘It drives writers mad’: why are authors still sniffy about sci-fi?
Machines Like Me is not, however, science fiction, at least according to its author. “There could be an opening of a mental space for novelists to explore this future,” McEwan said in a recent interview, “not in terms of travelling at 10 times the speed of light in anti-gravity boots, but in actually looking at the human dilemmas.” There is, as many readers noticed, a whiff of genre snobbery here, with McEwan drawing an impermeable boundary between literary fiction and science fiction, and placing himself firmly on the respectable side of the line.

But perhaps we’ve had enough about robots and AI recently.

Never mind killer robots—here are six real AI dangers to watch out for in 2019
The latest AI methods excel at perceptual tasks such as classifying images and transcribing speech, but the hype and excitement over these skills have disguised how far we really are from building machines as clever as we are. Six controversies from 2018 stand out as warnings that even the smartest AI algorithms can misbehave, or that carelessly applying them can have dire consequences.

Generative art’s rich history on show

Artnome’s Jason Bailey on a generative art exhibition he co-curated.

Kate Vass Galerie
The Automat und Mensch exhibition is, above all, an opportunity to put important work by generative artists spanning the last 70 years into context by showing it in a single location. By juxtaposing important works like the 1956/’57 oscillograms by Herbert W. Franke (age 91) with the 2018 AI Generated Nude Portrait #1 by contemporary artist Robbie Barrat (age 19), we can see the full history and spectrum of generative art as has never been shown before.

Zurich’s a little too far, unfortunately, so I’ll have to make do with the press release for now.

Generative art gets its due
In the last twelve months we have seen a tremendous spike in the interest of “AI art,” ushered in by Christie’s and Sotheby’s both offering works at auction developed with machine learning. Capturing the imaginations of collectors and the general public alike, the new work has some conservative members of the art world scratching their heads and suggesting this will merely be another passing fad. What they are missing is that this rich genre, more broadly referred to as “generative art,” has a history as long and fascinating as computing itself. A history that has largely been overlooked in the recent mania for “AI art” and one that co-curators Georg Bak and Jason Bailey hope to shine a bright light on in their upcoming show Automat und Mensch (or Machine and Man) at Kate Vass Galerie in Zurich, Switzerland.

Generative art, once perceived as the domain of a small number of “computer nerds,” is now the artform best poised to capture what sets our generation apart from those that came before us – ubiquitous computing. As children of the digital revolution, computing has become our greatest shared experience. Like it or not, we are all now computer nerds, inseparable from the many devices through which we mediate our worlds.

The press release alone is a fascinating read, covering the work of a broad range of artists and themes, past and present. For those who can make the exhibition in person, it will also include lectures and panels from the participating artists and leaders on AI art and generative art history.

generative-arts-rich-history-on-show-1

Process Compendium (Introduction) on Vimeo

generative-arts-rich-history-on-show-2

generative-arts-rich-history-on-show-3

A new kind of electronic music

Nowadays it’s mostly classical, but when I was younger I was a big fan of electronic music — though by that I mean Underworld and Brian Eno, not … whatever this is.

What will happen when machines write songs just as well as your favorite musician?
It would take a human composer at least an hour to create such a piece—Jukedeck did it in less than a minute. All of which raises some thorny questions. We’ve all heard about how AI is getting progressively better at accomplishing eerily lifelike tasks: driving cars, recognizing faces, translating languages. But when a machine can compose songs as well as a talented musician can, the implications run deep—not only for people’s livelihoods, but for the very notion of what makes human beings unique.

That future is just around the corner.

Warner Music signs first ever record deal with an algorithm
Mood music app Endel, which creates bespoke soundscapes for users, is expected to produce 20 albums this year. […]

“I’m certain listeners enjoying these new albums will benefit from reduced anxiety and improved mood,” said Kevin Gore, president of Warner Music Group’s arts music division, described as “a new umbrella of labels focused on signing, developing and marketing releases across under-served genres”.

Generative, ambient background music is an “under-served genre” now?

Here’s another write-up from Classic FM of the same story. I especially liked their choice of image and caption to accompany the piece.

Warner Music becomes first record label to partner with an algorithm
The algorithm uses musical phrases created by composer and sound designer Dmitry Evgrafov to create pieces of music tailored to specific users.

Founder and CEO of Endel, Oleg Stavitsky said: “We are focused on creating personalised and adaptive real-time sound environments, but we are happy to share those pre-recorded albums to demonstrate the power of sound and our technology.”

Articles about Replika are replicating

Another piece on Eugenia Kuyda and how Replika came about, following an attempt to create a bot that could discuss restaurant recommendations.

This AI has sparked a budding friendship with 2.5 million people
Kuyda had high hopes for the service because chatbots were becoming all the rage in Silicon Valley at the time. But it didn’t take off. Only about 100,000 people downloaded Luka. Kuyda and her team realized that people preferred looking for restaurants on a graphical interface, and seeing lots of options at once.

Then in November 2015, Kuyda’s best friend, a startup founder named Roman Mazurenko, died in a car accident in Russia.

I’ve heard it before, but it’s still a sad start to the story.

Replika’s growing popularity among young people in particular (its main users are aged between 18 and 25) represents a renaissance in chatbots, which became overhyped a few years ago but are finding favor again as more app developers can use free machine-learning tools like Google’s TensorFlow.

It also marks an intriguing use case for AI in all the worry about job destruction: a way to talk through emotional problems when other human beings aren’t available. In Japan the idea of an artificial girlfriend, like the one voiced by Scarlett Johansson in the movie Her, has already become commonplace among many young men.

You must check out that last link, about those Japanese artificial girlfriends. It’s hard to believe the manufacturers, Gatebox, are suggesting you can have a relationship with an alarm clock.

A holographic virtual girlfriend lives inside Japan’s answer to the Amazon Echo
Instead of a simple, cylindrical speaker design, Gatebox has a screen and a projector, which brings Hikari — her name, appropriately, means “light” — to life inside the gadget. On the outside are microphones, cameras, and sensors to detect temperature and motion, so she can interact with you on a more personal level, rather than being a voice on your phone.

The result is a fully interactive virtual girl, who at her most basic can control your smart home equipment. The sensors mean she can recognize your face and your voice, and is designed to be a companion who can wake you up in the morning, fill you in on your day’s activities, remind you of things to remember, and even welcome you back when you return home from work.

articles-about-replika

Happy birthday Bach

I love this Google Doodle, though even Bach can’t rescue my appalling lack of musical ability!

Google’s first AI-powered Doodle is a piano duet with Bach
Starting on March 21st, you’ll be able to play with the interactive Doodle, which will prompt you to compose a two-measure melody or pick one of the pre-existing choices. When you press the “Harmonize” button, it will use machine learning to give you a version of your melody that sounds like it was composed by Bach himself.

happy-birthday-bach-1

Various Google teams were involved in this project, including Google Magenta. There is an incredible amount of detail about the technologies behind the Bach harmonies on their own site.

Coconet: the ML model behind today’s Bach Doodle
Coconet is trained to restore Bach’s music from fragments: we take a piece from Bach, randomly erase some notes, and ask the model to guess the missing notes from context. The result is a versatile model of counterpoint that accepts arbitrarily incomplete scores as input and works out complete scores. This setup covers a wide range of musical tasks, such as harmonizing melodies, creating smooth transitions, rewriting and elaborating existing music, and composing from scratch.
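
That “erase some notes, guess them back” setup is a form of masked training. Here is a minimal sketch of the idea in PyTorch, with the caveat that the piano-roll shapes and the little network below are illustrative stand-ins, not Magenta’s actual Coconet code:

```python
# Coconet-style masked training, sketched: hide random notes from a real
# Bach piece and train the model to infer them from the surviving context.
# All shapes, the network, and hyperparameters here are toy placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

TIME, PITCH = 32, 46  # toy score: 32 time steps by 46 possible pitches

model = nn.Sequential(  # stand-in for Coconet's deeper convolutional net
    nn.Conv2d(2, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 1, 3, padding=1),
)

def training_step(score, optimizer):
    """score: (1, 1, TIME, PITCH) binary piano roll of a real piece."""
    mask = (torch.rand_like(score) > 0.5).float()   # 1 = note kept visible
    inp = torch.cat([score * mask, mask], dim=1)    # model sees the gaps too
    logits = model(inp)
    # score the model only on the notes that were erased
    bce = F.binary_cross_entropy_with_logits(logits, score, reduction="none")
    loss = (bce * (1 - mask)).sum() / (1 - mask).sum().clamp(min=1)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```

Harmonizing a melody is then just inference with a fixed mask: clamp your melody as “visible”, mask everything else, and repeatedly resample the model’s guesses (the Coconet paper does this with a blocked Gibbs sampling procedure).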

happy-birthday-bach-2

I cannot begin to understand what’s going on there, but it sounds good.

In praise of pessimism #2

Here’s a nice companion piece to that post earlier about accessing mindfulness therapies via chatbot apps. It includes a reminder of Buddhism’s Four Noble Truths and the Eightfold Path.

Want to be happy? Embrace being miserable
The path offers guidance on the elements of a principled existence, based on a cultivated perspective. But not necessarily a happy one.

Still, liberating yourself from the expectation of happiness lightens your load. It makes life a little easier when you are realistic but resolved, rather than deluded, desirous, and determined to have the impossible. By calculating discomfort and struggle into the mix, you can remain cautiously optimistic, knowing there’s surely trouble ahead, but that you will face it with grace.

As we saw earlier, there are a number of apps that can help us build up a solid sense of perspective. Here’s some more about Woebot.

This robot wants to help you control your emotions
A bot cannot really talk to you, of course, but it can call your attention to the way you converse with yourself, and perhaps in time shift your own relationship with angst. That’s the notion behind the Woebot, an app created by Stanford research psychologist Alison Darcy that aims to make emotional mindfulness available to the masses.

[…]

Next, it provided a brief lesson on the power of language in the context of cognitive behavioral therapy (CBT). This mode of treatment for anxiety and depression, CBT, calls attention to thinking patterns and teaches patients to recognize and address their negative tendencies and limiting beliefs with exercises.

It tries to literally change your mind by providing perspective and cultivating attention until you have replaced bad habits with better ones.

I loved the way the closing paragraph from that first Quartz article above was simultaneously downbeat and uplifting.

Know that you’ll fail, you will fall, you’ll feel pain, and be sad. You will be rejected. You will get sick. Your expectations will not be met, because reality is always more strange and complicated than imagination, which also means something more interesting than you know could yet be on the horizon. Know, too, that even so, dull moments will abound. Yet it can always get worse, which is why it’s worth remembering that every day, at least some things have to be going okay, or else you’d already be dead.

And let’s not forget Will Self’s take on all this.

Talk it over with an AI

We all need someone to talk to. A problem shared is a problem halved, they say. But is that still true if the person you’re talking to doesn’t actually exist?

Chatbot therapy
Since virtual therapy seems to work, some innovators have started to suspect they could offer patients the same benefits of CBT—without a human on the other end. Services like Replika (an app intended to provide an emotional connection, not necessarily therapy) and Woebot (a therapy service that started in Facebook Messenger before breaking out on its own) allow human patients to interact with artificially intelligent chatbots for the purpose of improving their mental health.

I gave Woebot a go some time back. It felt potentially useful but quite scripted, a little heavy-handed. I’ve just started with Replika and so far the conversations feel more natural, though a little random at times.

This app is trying to replicate you
Replika launched in March. At its core is a messaging app where users spend tens of hours answering questions to build a digital library of information about themselves. That library is run through a neural network to create a bot that, in theory, acts as the user would. Right now, it’s just a fun way for people to see how they sound in messages to others, synthesizing the thousands of messages you’ve sent into a distillate of your tone—rather like an extreme version of listening to recordings of yourself. But its creator, a San Francisco-based startup called Luka, sees a whole bunch of possible uses for it: a digital twin to serve as a companion for the lonely, a living memorial of the dead, created for those left behind, or even, one day, a version of ourselves that can carry out all the mundane tasks that we humans have to do, but never want to.

That line above, “a living memorial of the dead”, is key, as that’s how Replika started, with the story of Eugenia Kuyda and Roman Mazurenko.

Speak, memory
Modern life all but ensures that we leave behind vast digital archives — text messages, photos, posts on social media — and we are only beginning to consider what role they should play in mourning. In the moment, we tend to view our text messages as ephemeral. But as Kuyda found after Mazurenko’s death, they can also be powerful tools for coping with loss. Maybe, she thought, this “digital estate” could form the building blocks for a new type of memorial.

She’s not the only one wandering down this slightly morbid track.

Eternime and Replika: Giving life to the dead with new technology
At the moment, Eternime takes the form of an app which collects data about you. It does this in two ways: Automatically harvesting heaps of smartphone data, and by asking you questions through a chatbot.

The goal is to collect enough data about you so that when the technology catches up, it will be able to create a chatbot “avatar” of you after you die, which your loved ones can then interact with.

But would they want to? Grief is a very personal thing; I can’t imagine this approach being for everyone.

‘Have a good cry’: Chuckle Brother takes aim at the grief taboo
“It’s like when you are a kid and you fall over and you think it’s all right and then your mum comes and says, ‘Are you all right, love?’ You burst into tears,” he said. “It was the same when Barry died. Everybody was saying sorry about your brother.”

Replika seems less about leaving something behind for your family and friends when you’ve gone, and more about making a new friend whilst you’re still around.

The journey to create a friend
There is no doubt that friendship with a person and with an AI are two very different matters. And yet, they do have one thing in common: in both cases you need to know your soon-to-be friend really well to develop a bond.

But let’s not get carried away; we’re not talking HAL or Samantha yet.

Three myths about Replika
Social media has put forth a number of quite entertaining theories about Replika. Today we are listing some of the ideas that we love … even though they are not exactly true.

Though you never know how these things will progress.

This Y Combinator-backed AI firm trained its chatbot to call you on the phone, and it’s fun but a little creepy
Much like the text version of Replika, my conversation with the bot threw up some odd quirks. “I think you look lovely today,” it said, and when I pointed out that it doesn’t have eyes, it replied: “Are you sure I don’t?”

Strange, funny, and occasionally creepy non sequiturs are not new to Replika; in fact, there is a whole subreddit dedicated to weird exchanges with the bot. Overall, however, the bot seemed to follow the train of the conversation reasonably well, and even told me a joke when I asked it to.

Here comes nobody

Yesterday there were millions of us, today there’s nobody here at all.

AI fake face website launched
A software developer has created a website that generates fake faces, using artificial intelligence (AI). Thispersondoesnotexist.com generates a new lifelike image each time the page is refreshed, using technology developed by chipmaker Nvidia. Some visitors to the website say they have been amazed by the convincing nature of some of the fakes, although others are more clearly artificial.
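
Under the hood it could hardly be simpler: Nvidia’s StyleGAN maps a random latent vector to a photorealistic face, and each page refresh just draws a new vector. A toy sketch of that sampling loop, using a throwaway generator in place of the real trained StyleGAN:

```python
# "A new person on every refresh", sketched: sample a latent vector and
# push it through a trained GAN generator. The tiny Generator below is a
# placeholder; the real site uses Nvidia's StyleGAN and its trained weights.
import torch
import torch.nn as nn

LATENT = 512  # StyleGAN also happens to use a 512-dimensional latent space

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT, 4 * 4 * 128), nn.ReLU(),
            nn.Unflatten(1, (128, 4, 4)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)  # (batch, 3, 16, 16): a toy "image"

generator = Generator()     # in reality: load pretrained StyleGAN weights

def refresh():
    z = torch.randn(1, LATENT)  # a brand-new nobody, every single time
    return generator(z)
```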

here-comes-nobody-1

They look like us, and now they can write like us too.

AI text writing technology too dangerous to release, creators claim
Researchers at OpenAI say they have created an AI writer which “generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks and performs rudimentary reading comprehension, machine translation, question answering and summarisation — all without task-specific training.”

But they are withholding it from public use “due to our concerns about malicious applications of the technology”.
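
Smaller versions of the model were released, though, and sampling from one of those takes only a few lines if you reach for the (third-party) Hugging Face transformers library. A sketch, and to be clear this is the cut-down public model, not the withheld one:

```python
# Sampling from the small, publicly released version of OpenAI's language
# model via the third-party Hugging Face transformers library.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Salvador Dali once said"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# top-k sampling keeps the text coherent-ish without being deterministic
output = model.generate(input_ids, max_length=60, do_sample=True, top_k=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```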

Of course, it’s not just AI that’s trying to pull the wool over our eyes.

How to catch a catfisher
Last year, I found out someone was using my photos to catfish women. He stole dozens of my online photos – including selfies, family photos, baby photos, photos with my ex – and, pretending to be me, he would then approach women and spew a torrent of abuse at them.

It took me months to track him down, and now I’m about to call him.

Machines pretending to be people, people pretending to be other people. At least we’re truthful with ourselves, right?

Be honest, how much do you edit YOUR selfies?
“It’s time to acknowledge the damaging effects that social media has on people’s self-image,” says Rankin of the project, which is part of a wider initiative to explore the impact of imagery on our mental health.

“Social media has made everyone into their own brand. People are creating a two-dimensional version of themselves at the perfect angle, with the most flattering lighting and with any apparent ‘flaws’ removed. Mix this readily-available technology with the celebrities and influencers flaunting impossible shapes with impossible faces and we’ve got a recipe for disaster.”

here-comes-nobody-2

A little robot round-up

I don’t know about you, but I find things to do with AI, robots and automation quite confusing. Will the impact of these technologies really be as widespread as envisaged by the futurists? And what will the consequences and implications really be? Is humanity at stake, even?

Here are a number of articles I’m working through that will hopefully shed some light on it all. Let’s start with the robot uprising.

Social robots will become family members in the homes of the future
With fewer stay-at-home parents, social robots can serve as personalized practice partners to help with homework and reinforce what children have learned that day in school. Far beyond helping you find recipes and ordering groceries, they can be your personal sous-chef or even help you learn to cook. They can also act as personal health coaches to supplement nutrition and wellness programs recommended by doctors and specialists for an increasingly health-conscious population. As the number of aging-in-place boomers soars, social robots can provide a sense of companionship for retirees while also connecting seniors to the world and to their loved ones, as well as sending doctor-appointment and medication reminders.

Robots! A fantastic catalog of new species
IEEE Spectrum editor Erico Guizzo and colleagues have blown out their original Robots app into a fantastic catalog of 200 of today’s fantastic species of robots. They’re cleverly organized into fun categories like “Robots You Can Hug,” “Robots That Can Dance,” “Space Robots,” and “Factory Workers.” If they keep it updated, it’ll be very helpful for the robot uprising.

We need to have a very serious chat about Pepper’s pointless parliamentary pantomime
Had the Committee summoned a robotic arm, or a burger-flipping frame, they would have wound up with a worse PR stunt but a better idea of the dangers and opportunities of the robot revolution.

robot-round-up-1

Robots can look very cute, but it’s the implications of those faceless boxes housing the AIs that will be more important, I think.

Computer says no: why making AIs fair, accountable and transparent is crucial
Most AIs are made by private companies who do not let outsiders see how they work. Moreover, many AIs employ such complex neural networks that even their designers cannot explain how they arrive at answers. The decisions are delivered from a “black box” and must essentially be taken on trust. That may not matter if the AI is recommending the next series of Game of Thrones. But the stakes are higher if the AI is driving a car, diagnosing illness, or holding sway over a person’s job or prison sentence.

Last month, the AI Now Institute at New York University, which researches the social impact of AI, urged public agencies responsible for criminal justice, healthcare, welfare and education, to ban black box AIs because their decisions cannot be explained.

Artificial intelligence has got some explaining to do
Most simply put, Explainable AI (also referred to as XAI) are artificial intelligence systems whose actions humans can understand. Historically, the most common approach to AI is the “black box” line of thinking: human input goes in, AI-made action comes out, and what happens in between can be studied, but never totally or accurately explained. Explainable AI might not be necessary for, say, understanding why Netflix or Amazon recommended that movie or that desk organizer for you (personally interesting, sure, but not necessary). But when it comes to deciphering answers about AI in fields like health care, personal finances, or the justice system, it becomes more important to understand an algorithm’s actions.

The only way is ethics.

Why teach drone pilots about ethics when it’s robots that will kill us?
For the most part, armies are keen to maintain that there will always be humans in charge when lethal decisions are taken. This is only partly window dressing. One automated system is dangerous only to its enemies; two are dangerous to each other, and out of anyone’s control. We have seen what happens on stock markets when automatic trading programs fall into a destructive pattern and cause “flash crashes”. In October 2016 the pound lost 6% of its value, with blame in part put down to algorithmic trading. If two hi-tech armies were in a standoff where hair-trigger algorithms faced each other on both sides, the potential for disaster might seem unlimited.

Nuclear war has been averted on at least one occasion by a heroic Russian officer overriding the judgment of computers that there was an incoming missile attack from the US. But he had 25 minutes to decide. Battlefield time is measured in seconds.

The Pentagon’s plans to program soldiers’ brains
DARPA has dreamed for decades of merging human beings and machines. Some years ago, when the prospect of mind-controlled weapons became a public-relations liability for the agency, officials resorted to characteristic ingenuity. They recast the stated purpose of their neurotechnology research to focus ostensibly on the narrow goal of healing injury and curing illness. The work wasn’t about weaponry or warfare, agency officials claimed. It was about therapy and health care. Who could object?

Let’s hope nothing goes wrong.

Machine learning confronts the elephant in the room
Then the researchers introduced something incongruous into the scene: an image of an elephant in semiprofile. The neural network started getting its pixels crossed. In some trials, the elephant led the neural network to misidentify the chair as a couch. In others, the system overlooked objects, like a row of books, that it had correctly detected in earlier trials. These errors occurred even when the elephant was far from the mistaken objects.

Snafus like those extrapolate in unsettling ways to autonomous driving. A computer can’t drive a car if it might go blind to a pedestrian just because a second earlier it passed a turkey on the side of the road.

So yes, things can go wrong. But AI and automation will all be good for jobs, right?

Artificial intelligence to create 58 million new jobs by 2022, says report
Machines and algorithms in the workplace are expected to create 133 million new roles, but cause 75 million jobs to be displaced by 2022 according to a new report from the World Economic Forum (WEF) called “The Future of Jobs 2018.” This means that the growth of artificial intelligence could create 58 million net new jobs in the next few years.

With this net positive job growth, there is expected to be a major shift in quality, location and permanency for the new roles. And companies are expected to expand the use of contractors doing specialized work and utilize remote staffing.

robot-round-up-2

AI may not be bad news for workers
Some jobs could be made a lot easier by AI. One example is lorry-driving. Some fear that truck drivers will be replaced by autonomous vehicles. But manoeuvring a lorry around busy streets is far harder than driving down the motorway. So the driver could switch into automatic mode (and get some rest) when outside the big cities, and take over the wheel once again when nearing the destination. The obvious analogy is with jetliners, where the pilots handle take-off and landing but turn on the computer to cruise at 35,000 feet. Using AI may prevent tired drivers from causing accidents.

Ok, yes, I can see that. But then it goes on…

And the report argues that AI can produce better decision-making by offering a contrarian opinion so that teams can avoid the danger of groupthink. A program could analyse e-mails and meeting transcripts and issue alerts when potentially false assumptions are being made (rather like the boy in the Hans Christian Andersen tale who notices that the Emperor has no clothes). Or it can warn a team when it is getting distracted from the task in hand.

Really? That’s quite a jump from automated driving. Having a system read everything a company’s employees write to look for poor assumptions? I cannot see that happening. More over-selling.

But what else could AI do?

AI lie detector tests to get trial run at EU airports
Fliers will be asked a series of travel-related questions by a virtual border guard avatar, and artificial intelligence will monitor their faces to assess whether they are lying. The avatar will become “more skeptical” and change its tone of voice if it believes a person has lied, before referring suspect passengers to a human guard and allowing those believed to be honest to pass through, said Keeley Crockett of Manchester Metropolitan University in England, who was involved in the project.

AI anchors: Xinhua debuts digital doppelgangers for their journalists
The AI-powered news anchors, according to the outlet, will improve television reporting and be used to generate videos, especially for breaking news on its digital and social media platforms.

“I’m an English artificial intelligence anchor,” Zhang’s digital doppelganger said in introduction during his first news telecast, blinking his eyes and raising his eyebrows throughout the video. “This is my very first day in Xinhua News Agency … I will work tirelessly to keep you informed, as texts will be typed into my system uninterrupted.”

This is what the world’s first AI newsreader looks and sounds like [via the Guardian]

But let’s not get too carried away here. We’re talking about people’s jobs, their livelihoods.

The automation charade
Since the dawn of market society, owners and bosses have revelled in telling workers they were replaceable. Robots lend this centuries-old dynamic a troubling new twist: employers threaten employees with the specter of machine competition, shirking responsibility for their avaricious disposition through opportunistic appeals to tech determinism. A “jobless future” is inevitable, we are told, an irresistible outgrowth of innovation, the livelihood-devouring price of progress. …

Though automation is presented as a neutral process, the straightforward consequence of technological progress, one needn’t look that closely to see that this is hardly the case. Automation is both a reality and an ideology, and thus also a weapon wielded against poor and working people who have the audacity to demand better treatment, or just the right to subsist.

That article goes on to introduce a new term, “fauxtomation”, to describe this overselling of automation and the casualisation of low-skilled service work.

robot-round-up-3

But maybe we should all loosen up, and stop being so serious.

Love in the time of AI: meet the people falling for scripted robots
“Obviously as the technology gets better and the interactivity increases we’re going to be able to form closer connections to characters in games,” Reed said. “They will operate with greater flexibility and ultimately seem more lifelike and easier to connect to.”

But for Wild Rose and many of the other dating sims enthusiasts I spoke to, making the characters more “human” wasn’t particularly exciting or even desired. Saeran didn’t need to be real for her to care about him.

The HAL 9000 Christmas ornament
Fans of “2001: A Space Odyssey” will want to bring home this special Christmas ornament that celebrates 50 years of the science-fiction masterpiece. Press the button to see the ornament light up as HAL says several memorable phrases.

robot-round-up-5

A modern take on spirit photography

Those spirit photographs I mentioned just after Halloween? It seems AI is having a go at those, too.

An artificial intelligence populated these photos with glitchy humanoid ghosts
Two of the MIT researchers behind the provocative Deep Angel project, an algorithm that digitally erases objects from photos, have now delivered a strange and beautiful system to “conjure phantasms into being.” According to the project developers Matt Groh and Ziv Epstein, the phantasmagoric AI Spirits manifested by their code are meant to “commemorate those missing via algorithmic omission.”

modern-spirit-photography

Art and AI #3

A very interesting follow-up to that story about the first artwork by an AI to be auctioned. It seems the humans behind the AI, Hugo Caselles-Dupré and the Obvious team, have had to face some considerable criticism.

The AI art at Christie’s is not what you think
Hugo Caselles-Dupré, the technical lead at Obvious, shared with me: “I’ve got to be honest with you, we have totally lost control of how the press talks about us. We are in the middle of a storm and lots of false information is released with our name on it. In fact, we are really depressed about it, because we saw that the whole community of AI art now hates us because of that. At the beginning, we just wanted to create this fun project because we love machine learning.” […]

Early on Obvious made the claim that “creativity isn’t only for humans,” implying that the machine is autonomously creating their artwork. While many articles have run with this storyline, one even crediting robots, it is not what most AI artists and AI experts in general believe to be true. Most would say that AI is augmenting artists at the moment and the description in the news is greatly exaggerated. […]

In fact, when pressed, Hugo admitted to me in our interview that this was just “clumsy communication” they made in the beginning when they didn’t think anyone was actually paying attention. […]

As we saw with Salvator Mundi last year and with the Banksy last week, the most prestigious auction houses, like museums, have the ability to elevate art and increase its value by putting it into the spotlight, shaping not only the narrative of the work, but also the narrative of art history.

art-and-ai-3-2

Art and AI #2

More about computer science’s latest foray into the art world.

The first piece of AI-generated art to come to auction
As part of the ongoing dialogue over AI and art, Christie’s will become the first auction house to offer a work of art created by an algorithm.

art-and-ai-2-2

The portrait in its gilt frame depicts a portly gentleman, possibly French and — to judge by his dark frockcoat and plain white collar — a man of the church. The work appears unfinished: the facial features are somewhat indistinct and there are blank areas of canvas. Oddly, the whole composition is displaced slightly to the north-west. A label on the wall states that the sitter is a man named Edmond Belamy, but the giveaway clue as to the origins of the work is the artist’s signature at the bottom right. In cursive Gallic script it reads:

art-and-ai-2-3

This portrait, however, is not the product of a human mind. It was created by an artificial intelligence, an algorithm defined by that algebraic formula with its many parentheses.
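
That signature, incidentally, is the minimax objective from Ian Goodfellow’s original GAN paper, in which a generator G and a discriminator D play a two-player game:

$$\min_G \max_D \; \mathbb{E}_x[\log D(x)] + \mathbb{E}_z[\log(1 - D(G(z)))]$$

In words: D learns to tell real paintings from generated ones, while G learns to fool D.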

It’s certainly a very interesting image — it reminds me a little of Francis Bacon’s popes — but the pedant in me would rather they stick with “created by an algorithm” than “generated by an artificial intelligence”. We’re not there yet. It was the “product of a human mind”, albeit indirectly. Take that signature, for example. I refuse to believe that this artificial intelligence decided for itself to sign its work that way. Declaring the AI to be the artist, as opposed to the medium, is like saying Excel is the artist in this case:

Tatsuo Horiuchi, the 73-year old Excel spreadsheet artist
“I never used Excel at work but I saw other people making pretty graphs and thought, ‘I could probably draw with that,’” says 73-year old Tatsuo Horiuchi. About 13 years ago, shortly before retiring, Horiuchi decided he needed a new challenge in his life. So he bought a computer and began experimenting with Excel. “Graphics software is expensive but Excel comes pre-installed in most computers,” explained Horiuchi. “And it has more functions and is easier to use than [Microsoft] Paint.”

Those are amazing paintings, by the way. Colossal has more, as well as a link to an interview with Tatsuo. But anyway, here’s some more AI art.

This AI is bad at drawing but will try anyways
This bird is less, um, recognizable. When the GAN has to draw *anything* I ask for, there’s just too much to keep track of – the problem’s too broad, and the algorithm spreads itself too thin. It doesn’t just have trouble with birds. A GAN that’s been trained just on celebrity faces will tend to produce photorealistic portraits. But this one, however…

art-and-ai-2-4

In fact, it does a horrifying job with humans because it can never quite seem to get the number of orifices correct.

But it seems the human artists can still surprise us, so all’s well.

Holed up: man falls into art installation of 8ft hole painted black
If there were any doubt at all that Anish Kapoor’s work Descent into Limbo is a big hole with a 2.5-metre drop, and not a black circle painted on the floor, then it has been settled. An unnamed Italian man has discovered to his cost that the work is definitely a hole after apparently falling in it.

Nigel Farage’s £25,000 portrait failed to attract a single bid at prestigious art show
The former Ukip leader has been dealt a blow after the work, by painter David Griffiths, raised no interest at the Royal Academy’s summer exhibition in London.

Creative credit cards

Here in the UK, we’ve had credit cards since the 60s, though the term is thought to have been coined as far back as 1887 by the novelist Edward Bellamy. So perhaps this fresh look at their design is overdue.

Portrait bank cards are a thing now
Consider the ways you use your bank card on an everyday basis, whether handing it over to a cashier, swiping it to make contactless payments, or inserting it into an ATM. How are you holding the card as you do all those things? Vertically, I’m willing to bet, or in portrait orientation, to borrow a term. And yet, the vast majority of credit and debit cards are designed in landscape, sticking to a thoroughly outdated usage model. This is the senseless design inertia that the UK’s Starling Bank is rowing against with its newly unveiled portrait card design, which was spotted by Brand New.

There’s more info on the reasons behind the change on the bank’s website.

Introducing our new card
Design usually evolves to solve something or to meet new needs, and bank cards don’t look the way they do by accident. They were designed landscape because of the way old card machines worked, and they’re embossed with raised numbers so they could be printed onto a sales voucher.

But we don’t use those machines anymore, so when you think about it, a landscape card is just a solution to a ‘problem’ that no longer exists. At Starling, we think it’s important that we can justify every decision we make – and we just couldn’t find a reason good enough to carry on using a design based on antiquated needs.

That first article from The Verge identifies a number of other banks and companies that have gone vertical. I’ve had a portrait Co-op membership card in my wallet for ages now, since their rebrand.

creative-credit-cards-2

Speaking of credit cards, here’s an interesting article about how companies across the globe are turning to AI to help assess credit ratings in what they claim to be a fairer and more transparent way. That’s the idea, anyway…

Algorithms are making the same mistakes assessing credit scores that humans did a century ago

It used to be that credit card companies would just sneakily look at transaction data to infer creditworthiness:

In the US, every transaction processed by Visa or MasterCard is coded by a “merchant category”—5122 for drugs, for example; 7277 for debt, marriage, or personal counseling; 7995 for betting and wagers; or 7273 for dating and escort services. Some companies curtailed their customers’ credit if charges appeared for counseling, because depression and marital strife were signs of potential job loss or expensive litigation.

Now the data trawl is much wider:

ZestFinance’s patent describes the use of payments data, social behavior, browsing behaviors, and details of users’ social networks as well as “any social graph informational for any or all members of the borrower’s network.” Similarly, Branch’s privacy policy mentions such factors as personal data, text message logs, social media data, financial data, and handset details including make, model, and browser type.

In these situations it becomes hard to tell what data, or combinations of data, are important — and even harder to do anything about it if these automated decisions go against us.

AI to the rescue

In 2016 the RNIB announced a project between the NHS and DeepMind, Google’s artificial intelligence company.

Artificial intelligence to look for early signs of eye conditions humans might miss
With the number of people affected by sight loss in the UK predicted to double by 2050, Moorfields Eye Hospital NHS Foundation Trust and DeepMind Health have joined forces to explore how new technologies can help medical research into eye diseases.

This wasn’t the only collaboration with the NHS that Google was involved in. There was another project, to help staff monitor patients with kidney disease, that had people concerned about the amount of medical information being handed over.

Revealed: Google AI has access to huge haul of NHS patient data
Google says that since there is no separate dataset for people with kidney conditions, it needs access to all of the data in order to run Streams effectively. In a statement, the Royal Free NHS Trust says that it “provides DeepMind with NHS patient data in accordance with strict information governance rules and for the purpose of direct clinical care only.”

Still, some are likely to be concerned by the amount of information being made available to Google. It includes logs of day-to-day hospital activity, such as records of the location and status of patients – as well as who visits them and when. The hospitals will also share the results of certain pathology and radiology tests.

The Google-owned company tried to reassure us that everything was being done appropriately, that all those medical records would be safe with them.

DeepMind hits back at criticism of its NHS data-sharing deal
DeepMind co-founder Mustafa Suleyman has said negative headlines surrounding his company’s data-sharing deal with the NHS are being “driven by a group with a particular view to peddle”. […]

All the data shared with DeepMind will be encrypted and parent company Google will not have access to it. Suleyman said the company was holding itself to “an unprecedented level of oversight”.

That didn’t seem to cut it though.

DeepMind’s data deal with the NHS broke privacy law
“The Royal Free did not have a valid basis for satisfying the common law duty of confidence and therefore the processing of that data breached that duty,” the ICO said in its letter to the Royal Free NHS Trust. “In this light, the processing was not lawful under the Act.” […]

“The Commissioner is not persuaded that it was necessary and proportionate to process 1.6 million partial patient records in order to test the clinical safety of the application. The processing of these records was, in the Commissioner’s view, excessive,” the ICO said.

And now here we are, some years later, and that eye project is a big hit.

Artificial intelligence equal to experts in detecting eye diseases
The breakthrough research, published online by Nature Medicine, describes how machine-learning technology has been successfully trained on thousands of historic de-personalised eye scans to identify features of eye disease and recommend how patients should be referred for care.

Researchers hope the technology could one day transform the way professionals carry out eye tests, allowing them to spot conditions earlier and prioritise patients with the most serious eye diseases before irreversible damage sets in.

That’s from UCL, one of the project’s partners. I like the use of the phrase ‘historic de-personalised eye scans’. And it doesn’t mention Google once.

Other reports also now seem to be pushing the ‘AI will rescue us’ angle, rather than the previous ‘Google will misuse our data’ line.

DeepMind AI matches health experts at spotting eye diseases
DeepMind’s ultimate aim is to develop and implement a system that can assist the UK’s National Health Service with its ever-growing workload. Accurate AI judgements would lead to faster diagnoses and, in theory, treatment that could save patients’ vision.

Artificial intelligence ‘did not miss a single urgent case’
He told the BBC: “I think this will make most eye specialists gasp because we have shown this algorithm is as good as the world’s leading experts in interpreting these scans.” […]

He said: “Every eye doctor has seen patients go blind due to delays in referral; AI should help us to flag those urgent cases and get them treated early.”

And it seems AI can help with the really tricky problems too.

This robot uses AI to find Waldo, thereby ruining Where’s Waldo
To me, this is like the equivalent of cheating on your math homework by looking for the answers at the back of your textbook. Or worse, like getting a hand-me-down copy of Where’s Waldo and when you open the book, you find that your older cousin has already circled the Waldos in red marker. It’s about the journey, not the destination — the process of methodically scanning pages with your eyes is entirely lost! But of course, no one is actually going to use this robot to take the fun out of Where’s Waldo, it’s just a demonstration of what AutoML can do.

There’s Waldo is a robot that finds Waldo

Dumbing down the chatbots

A quite different take on Google’s AI demo from the other day. Rather than be impressed at how clever the bots appear, because they sound like us, we should be sad at how inefficient we’ve made them, because they sound like us.

Chatbots are saints
Pichai played a recording of Duplex calling a salon to schedule a haircut. This is an informational transaction that a couple of computers could accomplish in a trivial number of microseconds — bip! bap! done! — but with a human on one end of the messaging bus, it turned into a slow-motion train wreck. Completing the transaction required 17 separate data transmissions over the course of an entire minute — an eternity in the machine world. And the human in this case was operating at pretty much peak efficiency. I won’t even tell you what happened when Duplex called a restaurant to reserve a table. You could almost hear the steam coming out of the computer’s ears.

In our arrogance, we humans like to think of natural language processing as a technique aimed at raising the intelligence of machines to the point where they’re able to converse with us. Pichai’s demo suggests the reverse is true. Natural language processing is actually a technique aimed at dumbing down computers to the point where they’re able to converse with us. Google’s great breakthrough with Duplex came in its realization that by sprinkling a few monosyllabic grunts into computer-generated speech — um, ah, mmm — you could trick a human into feeling kinship with the machine. You ace the Turing test by getting machines to speak baby-talk.

Google’s creeping us out again

But it only wants to help, it’s for our own good.

Google wants to cure our phone addiction. How about that for irony?
This is Google doing what it always does. It is trying to be the solution to every aspect of our lives. It already wants to be our librarian, our encyclopedia, our dictionary, our map, our navigator, our wallet, our postman, our calendar, our newsagent, and now it wants to be our therapist. It wants us to believe it’s on our side.

There is something suspect about deploying more technology to use less technology. And something ironic about a company that fuels our tech addiction telling us that it holds the key to weaning us off it. It doubles as good PR, and pre-empts any future criticism about corporate irresponsibility.

And then there’s this. How many times have we had cause to say, ‘just because we can, doesn’t mean we should’?

Google’s new voice bot sounds, um, maybe too real
“Google Assistant making calls pretending to be human not only without disclosing that it’s a bot, but adding ‘ummm’ and ‘aaah’ to deceive the human on the other end with the room cheering it… horrifying. Silicon Valley is ethically lost, rudderless and has not learned a thing,” tweeted Zeynep Tufekci, a professor at the University of North Carolina at Chapel Hill who studies the social impacts of technology.

“As digital technologies become better at doing human things, the focus has to be on how to protect humans, how to delineate humans and machines, and how to create reliable signals of each—see 2016. This is straight up, deliberate deception. Not okay,” she added.

No time for friends?

It takes 90 hours to make a new friend
The report, published in the Journal of Social and Personal Relationships, found that it usually takes roughly 50 hours of time together to go from acquaintance to “casual friend” (think drinking buddies, or friends of friends that you see at parties); around 90 hours to become a true-to-form “friend” (you both carve out time to specifically hang out with one another); and over 200 hours to form a BFF-type bond (you feel an emotional connection with this friend).

My first thought, when I read that, was to think ‘haha I don’t have time for that goodness me that’s absolutely ages I’m very busy 90 hours that’s crazy I’ve got a spare 30 minutes next Thursday is next Thursday good for you?’

But really? We’re measuring building relationships in hours? Friendships that can last years and enrich a lifetime?

(Perhaps we should grow our own AI friends.)

Art and AI #1

Subtitled ‘What needs to happen for artificial intelligence to make fine art’, this is a fascinating read on current thinking about art and AI. The author, Hideki Nakazawa, one of the curators of the Artificial Intelligence Art and Aesthetics exhibition in Japan, thinks that, whilst we’re not there yet, we’re not too far away.

Waiting For the Robot Rembrandt
True AI fine art will be both painfully boring and highly stimulating, and that will represent progress. Beauty, after all, cannot be quantified, and the very act of questioning the definition of aesthetics moves all art forward—something we’ve seen over and over again in the history of human-made art. The realization of AI will bring new dimensions to these questions. It will also be a triumph of materialism, further eroding the specialness of the human species and unveiling a world that has neither mystery nor God, in which humans are merely machines made of inanimate materials. If we are right, it will also bring a new generation of artists, and with them, new Eiffel towers beyond our wildest imagination.

The pieces within that exhibition are grouped into four categories: human-made art with human aesthetics, human-made art with machine aesthetics, machine-made art with human aesthetics, and finally machine-made art with machine aesthetics. It’s that last category we’re interested in, but frustratingly it contained “no machine-made art, because none exists that also reflects machine aesthetics. The category was a useful placeholder—and, as we’ll learn, it was not entirely empty.”

What a great way to clarify where all these artworks, projects and systems sit. All too often we find AI and other computer systems merely mimicking the creation of art: the final product may look like art, but without the autonomous intention — without the AI wanting to create for its own sake — the AI is just a tool of the artist-behind-the-curtain, the programmer. For example:

‘Way to Artist’, intelligent robots and a human artist sketch the same image alongside each other
In the very thought-inspiring short film “Way to Artist” by TeamVOID, an artificially intelligent robotic arm and a human artist sit alongside one another to sketch the same image at the same time, although with different skills. Without a word spoken, the film loudly questions the role that artificial intelligence has within the creative process by putting the robots to the test.

More interestingly, here’s a wonderful piece that would have been placed in the second group of Nakazawa’s exhibition, human-made art with machine aesthetics.

Sarah Meyohas combines virtual reality, 10,000 roses and artificial intelligence in Cloud of Petals
Lastly, visitors can engage with a VR component, an element that replicates Sarah’s initial dream of the petals. There are six different screens and headsets – in a room filled with a customised rose scent – which are all gaze-activated to manipulate the AI generated petals. For example, in one headset petals explode into pixels as soon as you set your eyes on them.

And perhaps category three for these, machine-made art with human aesthetics?

A ‘neurographer’ puts the art in artificial intelligence
Claude Monet used brushes, Jackson Pollock liked a trowel, and Cartier-Bresson toted a Leica. Mario Klingemann makes art using artificial neural networks.

Yes, androids do dream of electric sheep
“Google sets up feedback loop in its image recognition neural network – which looks for patterns in pictures – creating hallucinatory images of animals, buildings and landscapes which veer from beautiful to terrifying.”
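
That “feedback loop” is gradient ascent on the image itself: instead of updating the network’s weights, you repeatedly nudge the pixels to excite a chosen layer, so whatever that layer faintly detects gets amplified. A minimal DeepDream-style sketch with a pretrained torchvision classifier (the layer cut-off and step size are arbitrary choices, not Google’s settings):

```python
# DeepDream in miniature: maximise one layer's activations by gradient
# ascent on the input image. Layer choice and hyperparameters are
# illustrative; Google's original used Inception and many refinements.
import torch
from torchvision import models

net = models.vgg16(pretrained=True).features[:16].eval()  # up to a mid conv layer
for p in net.parameters():
    p.requires_grad_(False)

def dream(img, steps=20, lr=0.05):
    img = img.clone().requires_grad_(True)
    for _ in range(steps):
        activations = net(img)
        loss = activations.norm()  # "whatever you see in there, show me more"
        loss.backward()
        with torch.no_grad():
            img += lr * img.grad / (img.grad.abs().mean() + 1e-8)
            img.grad.zero_()
    return img.detach()

hallucination = dream(torch.rand(1, 3, 224, 224))  # start from noise, or a photo
```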

Don’t know where to place this one, however — art as a symptom of an AI’s mental ill health?

This artificial intelligence is designed to be mentally unstable
“At one end, we see all the characteristic symptoms of mental illness, hallucinations, attention deficit and mania,” Thaler says. “At the other, we have reduced cognitive flow and depression.” This process is illustrated by DABUS’s artistic output, which combines and mutates images in a progressively more surreal stream of consciousness.