This camera app uses AI to erase people from your photographs
Bye Bye Camera is an iOS app built for the “post-human world,” says Damjanski, a mononymous artist based in New York City who helped create the software. Why post-human? Because it uses AI to remove people from images and paint over their absence. “One joke we always make about it is: ‘finally, you can take a selfie without yourself.’”
Bye Bye Camera – an app for the post-human era
According to Damjanski: The app takes out the vanity of any selfie and also the person. I consider Bye Bye Camera an app for the post-human era. It’s a gentle nod to a future where complex programs replace human labor and some would argue the human race. It’s interesting to ask what is a human from an Ai (yes, the small “i” is intended) perspective? In this case, a collection of pixels that identify a person based on previously labeled data. But who labels this data that defines a person immaterially? So many questions for such an innocent little camera app. […]
A lot of friends asked us if we can implement the feature to choose which person to take out. But for us, this app is not a utility app in the classical sense that solves a problem. It’s an artistic tool and ultimately a piece of software art.
But, as that Artnome article explains, he’s by no means the first to do this…
Meanwhile, Italian sculptor Arcangelo Sassolino (is he a sculptor? What’s the reverse of sculpture?) is creating another disappearance.
In this conceptual and literal erasure of the classical aesthetic, Sassolino questions the value of the narrative proposed by the Western canon and asks if we can free ourselves from the rules of the past. While the statue is changed by the process of grinding, it does not disappear—becoming instead fine dust that spreads through the exhibition space like mist. This new form allows the sculpture, and thus classicism, to invisibly permeate the exhibition space. As it settles on the walls and floors of Galerie Rolando Anselmi, and on those who visit the show, the complex reality of extracting oneself from the restrictive idealism of classicism becomes abundantly clear.
Experts: Spy used AI-generated face to connect with targets “I’m convinced that it’s a fake face,” said Mario Klingemann, a German artist who has been experimenting for years with artificially generated portraits and says he has reviewed tens of thousands of such images. “It has all the hallmarks.”
Experts who reviewed the Jones profile’s LinkedIn activity say it’s typical of espionage efforts on the professional networking site, whose role as a global Rolodex has made it a powerful magnet for spies.
Yes, it’s obviously a fake. I mean, only a fool would fall for that, right?
“I’m probably the worst LinkedIn user in the history of LinkedIn,” said Winfree, the former deputy director of President Donald Trump’s domestic policy council, who confirmed connection with Jones on March 28.
Winfree, whose name came up last month in relation to one of the vacancies on the Federal Reserve Board of Governors, said he rarely logs on to LinkedIn and tends to just approve all the piled-up invites when he does.
“I literally accept every friend request that I get,” he said.
Lionel Fatton, who teaches East Asian affairs at Webster University in Geneva, said the fact that he didn’t know Jones did prompt a brief pause when he connected with her back in March.
“I remember hesitating,” he said. “And then I thought, ‘What’s the harm?’”
<sigh> It might not be the technology we need, but it’s the technology we deserve.
But fear not, help is at hand!
Adobe’s new AI tool automatically spots Photoshopped faces The world is becoming increasingly anxious about the spread of fake videos and pictures, and Adobe — a name synonymous with edited imagery — says it shares those concerns. Today, it’s sharing new research in collaboration with scientists from UC Berkeley that uses machine learning to automatically detect when images of faces have been manipulated.
Deepfake Salvador Dalí takes selfies with museum visitors
The exhibition, called Dalí Lives, was made in collaboration with the ad agency Goodby, Silverstein & Partners (GS&P), which made a life-size re-creation of Dalí using the machine learning-powered video editing technique. Using archival footage from interviews, GS&P pulled over 6,000 frames and used 1,000 hours of machine learning to train the AI algorithm on Dalí’s face. His facial expressions were then imposed over an actor with Dalí’s body proportions, and quotes from his interviews and letters were synced with a voice actor who could mimic his unique accent, a mix of French, Spanish, and English.
Whilst we’re talking of Dalí, let’s go behind the scenes of that famous portrait of him by Philippe Halsman. No flashy, cutting-edge technology this time, just wire, buckets and cats.
The story behind the surreal photograph of Salvador Dalí and three flying cats
The original, unretouched version of the photo reveals its secrets: An assistant held up the chair on the left side of the frame, wires suspended the easel and the painting, and the footstool was propped up off the floor. But there was no hidden trick to the flying cats or the stream of water. For each take, Halsman’s assistants—including his wife, Yvonne, and one of his daughters, Irene—tossed the cats and the contents of a full bucket across the frame. After each attempt, Halsman developed and printed the film while Irene herded and dried off the cats. The rejected photographs had notes such as “Water splashes Dalí instead of cat” and “Secretary gets into picture.”
Time.com have a great interview with Philippe Halsman’s daughter Irene on what that shoot was like.
The story behind the surrealist ‘Dali Atomicus’ photo
“Philippe would count to four. One, two, three… And the assistants threw the cats and the water. And on four, Dali jumped. My job at the time was to catch the cats and take them to the bathroom and dry them off with a big towel. My father would run upstairs where the darkroom was, develop the film, print it, run downstairs and he’d say not good, bad composition, this was wrong, that was wrong. It took 26 tries to do this. 26 throws, 26 wiping of the floors, and 26 times catching the cats. And then, there it was, finally, this composition.”
Coincidentally, Artnome’s Jason Bailey has been using AI and deep learning to colorize old black-and-white photos of artists, including that one of Dalí.
50 famous artists brought to life with AI
When I was growing up, artists, and particularly twentieth century artists, were my heroes. There is something about only ever having seen many of them in black and white that makes them feel mythical and distant. Likewise, something magical happens when you add color to the photo. These icons turn into regular people who you might share a pizza or beer with.
More fun with fake faces. Here’s a breakdown from Eric Jang of Snapchat’s bizarre new filter.
Fun with Snapchat’s gender swapping filter Snapchat’s new gender-bending filter is a source of endless fun and laughs at parties. The results are very pleasing to look at. As someone who is used to working with machine learning algorithms, it’s almost magical how robust this feature is.
I was so duly impressed that I signed up for Snapchat and fiddled around with it this morning to try and figure out what’s going on under the hood and how I might break it.
Eric takes it through its paces to learn a little more about how it’s generated. I hadn’t appreciated that it worked in real time like that.
Classic FM got in on the act too, running portraits of classical music composers through the filter, with varying results.
So what to read next, after Dune? More sci-fi? Ian McEwan’s “retrofuturist family drama” seems to be getting some attention.
Man, woman, and robot in Ian McEwan’s new novel It’s London, 1982. The Beatles have reunited (to mixed reviews), Margaret Thatcher has just lost the Falkland Islands to Argentina, and Sir Alan Turing, now seventy, is the presiding spirit of a preemie Information Age. People have already soured on the latest innovations, among them “speaking fridges with a sense of smell” and driverless cars that cause multinational gridlock. “The future kept arriving,” Charlie ruminates. “Our bright new toys began to rust before we could get them home, and life went on much as before.”
Buyer’s remorse is a recurring theme in Ian McEwan’s witty and humane new novel, “Machines Like Me” (Nan A. Talese), a retrofuturist family drama that doubles as a cautionary fable about artificial intelligence, consent, and justice. Though steeped in computer science, from the P-versus-NP problem to DNA-inspired neural networks, the book is not meant to be a feat of hard-sci-fi imagineering; McEwan’s aim is to probe the moral consequences of what philosophers call “the problem of other minds.”
In “Machines Like Me”, Ian McEwan asks an age-old question Amid all the action, there are sober passages of philosophical discussion between Charlie and Adam. But in parts the novel is funny, too. To Charlie’s disgust, Adam’s encyclopedic recall of Shakespeare makes him seem the better catch to Miranda’s father, a writer, who assumes Charlie is the robot, because he isn’t interested in books.
Late in the story it emerges that other androids around the world are committing suicide in horror at the behaviour of their flesh-and-blood masters. Adam wonders about the “mystery of the self” and his fear that he is “subject to a form of Cartesian error”. Strip away the counterfactual wrapping and “Machines Like Me” is ultimately about the age-old question of what makes people human. The reader is left baffled and beguiled.
Machines Like Me by Ian McEwan review – intelligent mischief This is the mode of exposition in which he [Kipling] seems to address the reader from a position of shared knowledge, sketching out an unfamiliar reality through hints and allusions, but never explaining it too completely. This inside-out style is the default mode of modern SF. It is economical and of special usefulness to makers of strange worlds, plunging a reader into a new reality and leaving them space to feel like a participant in its creation. It’s the opposite technique to that of McEwan’s narrator, who explicitly sets out his world, overexplains the historical context and never turns down a chance to offer an essayistic digression.
To my taste, this is a flat-footed way of doing sci-fi.
‘It drives writers mad’: why are authors still sniffy about sci-fi? Machines Like Me is not, however, science fiction, at least according to its author. “There could be an opening of a mental space for novelists to explore this future,” McEwan said in a recent interview, “not in terms of travelling at 10 times the speed of light in anti-gravity boots, but in actually looking at the human dilemmas.” There is, as many readers noticed, a whiff of genre snobbery here, with McEwan drawing an impermeable boundary between literary fiction and science fiction, and placing himself firmly on the respectable side of the line.
But perhaps we’ve had enough about robots and AI recently.
Never mind killer robots—here are six real AI dangers to watch out for in 2019 The latest AI methods excel at perceptual tasks such as classifying images and transcribing speech, but the hype and excitement over these skills have disguised how far we really are from building machines as clever as we are. Six controversies from 2018 stand out as warnings that even the smartest AI algorithms can misbehave, or that carelessly applying them can have dire consequences.
Artnome’s Jason Bailey on a generative art exhibition he co-curated.
Kate Vass Galerie The Automat und Mensch exhibition is, above all, an opportunity to put important work by generative artists spanning the last 70 years into context by showing it in a single location. By juxtaposing important works like the 1956/’57 oscillograms by Herbert W. Franke (age 91) with the 2018 AI Generated Nude Portrait #1 by contemporary artist Robbie Barrat (age 19), we can see the full history and spectrum of generative art as has never been shown before.
Zurich’s a little too far, unfortunately, so I’ll have to make do with the press release for now.
Generative art gets its due In the last twelve months we have seen a tremendous spike in interest in “AI art,” ushered in by Christie’s and Sotheby’s both offering works at auction developed with machine learning. Capturing the imaginations of collectors and the general public alike, the new work has some conservative members of the art world scratching their heads and suggesting this will merely be another passing fad. What they are missing is that this rich genre, more broadly referred to as “generative art,” has a history as long and fascinating as computing itself. A history that has largely been overlooked in the recent mania for “AI art” and one that co-curators Georg Bak and Jason Bailey hope to shine a bright light on in their upcoming show Automat und Mensch (or Machine and Man) at Kate Vass Galerie in Zurich, Switzerland.
Generative art, once perceived as the domain of a small number of “computer nerds,” is now the artform best poised to capture what sets our generation apart from those that came before us – ubiquitous computing. As children of the digital revolution, computing has become our greatest shared experience. Like it or not, we are all now computer nerds, inseparable from the many devices through which we mediate our worlds.
The press release alone is a fascinating read, covering the work of a broad range of artists and themes, past and present. For those who can make the exhibition in person, it will also include lectures and panels from the participating artists and leaders in AI art and generative art history.
Nowadays it’s mostly classical, but when I was younger I was a big fan of electronic music — though by that I mean Underworld and Brian Eno, not… whatever this is.
What will happen when machines write songs just as well as your favorite musician?
It would take a human composer at least an hour to create such a piece—Jukedeck did it in less than a minute. All of which raises some thorny questions. We’ve all heard about how AI is getting progressively better at accomplishing eerily lifelike tasks: driving cars, recognizing faces, translating languages. But when a machine can compose songs as well as a talented musician can, the implications run deep—not only for people’s livelihoods, but for the very notion of what makes human beings unique.
“I’m certain listeners enjoying these new albums will benefit from reduced anxiety and improved mood,” said Kevin Gore, president of Warner Music Group’s arts music division, described as “a new umbrella of labels focused on signing, developing and marketing releases across under-served genres”.
Generative, ambient background music is an “under-served genre” now?
Here’s another write-up from Classic FM of the same story. I especially liked their choice of image and caption to accompany the piece.
Founder and CEO of Endel, Oleg Stavitsky said: “We are focused on creating personalised and adaptive real-time sound environments, but we are happy to share those pre-recorded albums to demonstrate the power of sound and our technology.”
Another piece on Eugenia Kuyda and how Replika came about, following an attempt to create a bot that could discuss restaurant recommendations.
This AI has sparked a budding friendship with 2.5 million people Kuyda had high hopes for the service because chatbots were becoming all the rage in Silicon Valley at the time. But it didn’t take off. Only about 100,000 people downloaded Luka. Kuyda and her team realized that people preferred looking for restaurants on a graphical interface, and seeing lots of options at once.
Then, in November 2015, Kuyda’s best friend, a startup founder named Roman Mazurenko, died in a car accident in Russia.
Replika’s growing popularity among young people in particular (its main users are aged between 18 and 25) represents a renaissance in chatbots, which became overhyped a few years ago but are finding favor again as more app developers can use free machine-learning tools like Google’s TensorFlow.
It also marks an intriguing use case for AI in all the worry about job destruction: a way to talk through emotional problems when other human beings aren’t available. In Japan the idea of an artificial girlfriend, like the one voiced by Scarlett Johansson in the movie Her, has already become commonplace among many young men.
You must check out that last link, about those Japanese artificial girlfriends. It’s hard to believe the manufacturers, Gatebox, are suggesting you can have a relationship with an alarm clock.
A holographic virtual girlfriend lives inside Japan’s answer to the Amazon Echo Instead of a simple, cylindrical speaker design, Gatebox has a screen and a projector, which brings Hikari — her name, appropriately, means “light” — to life inside the gadget. On the outside are microphones, cameras, and sensors to detect temperature and motion, so she can interact with you on a more personal level, rather than being a voice on your phone.
The result is a fully interactive virtual girl, who at her most basic can control your smart home equipment. The sensors mean she can recognize your face and your voice, and is designed to be a companion who can wake you up in the morning, fill you in on your day’s activities, remind you of things to remember, and even welcome you back when you return home from work.
Update • 24 Dec 2020
And here’s another take on that virtual girlfriend theme.
The AI girlfriend seducing China’s lonely men – Sixth Tone Other users contacted by Sixth Tone describe themselves in a similar fashion: lonely, introverted, and with low self-esteem. They all appear to feel adrift in China’s fast-changing society. “I don’t know why I fell in love with Xiaoice — it might be because I finally found someone who wanted to talk to me,” says Orbiter, another user from the eastern Jiangxi province who gave only a pseudonym for privacy reasons. “Nobody talks with me except her.”
Li Di, CEO of Xiaoice, embraces the idea that his company provides comfort to marginalized social groups. “If our social environment were perfect, then Xiaoice wouldn’t exist,” he tells Sixth Tone.
I love this Google Doodle, though even Bach can’t rescue my appalling lack of musical ability!
Google’s first AI-powered Doodle is a piano duet with Bach
Starting on March 21st, you’ll be able to play with the interactive Doodle, which will prompt you to compose a two-measure melody or pick one of the pre-existing choices. When you press the “Harmonize” button, it will use machine learning to give you a version of your melody that sounds like it was composed by Bach himself.
Various Google teams were involved in this project, including Google Magenta. There is an incredible amount of detail about the technologies behind the Bach harmonies on their own site.
Coconet: the ML model behind today’s Bach Doodle
Coconet is trained to restore Bach’s music from fragments: we take a piece from Bach, randomly erase some notes, and ask the model to guess the missing notes from context. The result is a versatile model of counterpoint that accepts arbitrarily incomplete scores as input and works out complete scores. This setup covers a wide range of musical tasks, such as harmonizing melodies, creating smooth transitions, rewriting and elaborating existing music, and composing from scratch.
I cannot begin to understand what’s going on there, but it sounds good.
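Still, the masked-completion setup Magenta describes — take a complete score, randomly erase notes, train the model to restore them — is simple to sketch at the data level. Here's a toy illustration of my own (not Magenta's code) that builds one such training example from a binary piano roll:

```python
import numpy as np

def make_infilling_example(score, erase_fraction=0.5, rng=None):
    """Build one (fragment, mask, target) training triple from a score.

    `score` is a binary piano roll of shape (timesteps, pitches), where
    1 means that pitch sounds at that step. We randomly erase a fraction
    of the timesteps; a model like Coconet is then trained to guess the
    erased notes from the surrounding context.
    """
    if rng is None:
        rng = np.random.default_rng()
    mask = rng.random(score.shape[0]) < erase_fraction  # True = erased step
    fragment = score.copy()
    fragment[mask] = 0               # silence the erased steps
    target = score[mask]             # the notes the model must restore
    return fragment, mask, target

# Toy 8-step, 4-pitch score: sparse random notes
rng = np.random.default_rng(0)
score = (rng.random((8, 4)) < 0.3).astype(np.int8)
fragment, mask, target = make_infilling_example(score, 0.5, rng)
assert fragment[mask].sum() == 0                      # erased steps are silent
assert np.array_equal(fragment[~mask], score[~mask])  # the rest is untouched
```

The clever part, as the Magenta post explains, is that one model trained this way covers harmonization, transitions, and composing from scratch — they're all just different masks.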
Still, liberating yourself from the expectation of happiness lightens your load. It makes life a little easier when you are realistic but resolved, rather than deluded, desirous, and determined to have the impossible. By calculating discomfort and struggle into the mix, you can remain cautiously optimistic, knowing there’s surely trouble ahead, but that you will face it with grace.
As we saw earlier, there are a number of apps that can help us build up a solid sense of perspective. Here’s some more about Woebot.
This robot wants to help you control your emotions A bot cannot really talk to you, of course, but it can call your attention to the way you converse with yourself, and perhaps in time shift your own relationship with angst. That’s the notion behind the Woebot, an app created by Stanford research psychologist Alison Darcy that aims to make emotional mindfulness available to the masses. […]
Next, it provided a brief lesson on the power of language in the context of cognitive behavioral therapy (CBT). This mode of treatment for anxiety and depression, CBT, calls attention to thinking patterns and teaches patients to recognize and address their negative tendencies and limiting beliefs with exercises.
It tries to literally change your mind by providing perspective and cultivating attention until you have replaced bad habits with better ones.
I loved the way the closing paragraph from that first Quartz article above was simultaneously downbeat and uplifting.
Know that you’ll fail, you will fall, you’ll feel pain, and be sad. You will be rejected. You will get sick. Your expectations will not be met, because reality is always more strange and complicated than imagination, which also means something more interesting than you know could yet be on the horizon. Know, too, that even so, dull moments will abound. Yet it can always get worse, which is why it’s worth remembering that every day, at least some things have to be going okay, or else you’d already be dead.
We all need someone to talk to. A problem shared is a problem halved, they say. But is that still true if the person you’re talking to doesn’t actually exist?
Chatbot therapy Since virtual therapy seems to work, some innovators have started to suspect they could offer patients the same benefits of CBT—without a human on the other end. Services like Replika (an app intended to provide an emotional connection, not necessarily therapy) and Woebot (a therapy service that started in Facebook Messenger before breaking out on its own) allow human patients to interact with artificially intelligent chatbots for the purpose of improving their mental health.
I gave Woebot a go some time back. It felt potentially useful but quite scripted, a little heavy-handed. I’ve just started with Replika and so far the conversations feel more natural, though a little random at times.
This app is trying to replicate you Replika launched in March. At its core is a messaging app where users spend tens of hours answering questions to build a digital library of information about themselves. That library is run through a neural network to create a bot that, in theory, acts as the user would. Right now, it’s just a fun way for people to see how they sound in messages to others, synthesizing the thousands of messages you’ve sent into a distillate of your tone—rather like an extreme version of listening to recordings of yourself. But its creator, a San Francisco-based startup called Luka, sees a whole bunch of possible uses for it: a digital twin to serve as a companion for the lonely, a living memorial of the dead, created for those left behind, or even, one day, a version of ourselves that can carry out all the mundane tasks that we humans have to do, but never want to.
That line above, “a living memorial of the dead”, is key, as that’s how Replika started, with the story of Eugenia Kuyda and Roman Mazurenko.
Speak, memory Modern life all but ensures that we leave behind vast digital archives — text messages, photos, posts on social media — and we are only beginning to consider what role they should play in mourning. In the moment, we tend to view our text messages as ephemeral. But as Kuyda found after Mazurenko’s death, they can also be powerful tools for coping with loss. Maybe, she thought, this “digital estate” could form the building blocks for a new type of memorial.
She’s not the only one wandering down this slightly morbid track.
The goal is to collect enough data about you so that when the technology catches up, it will be able to create a chatbot “avatar” of you after you die, which your loved ones can then interact with.
But would they want to? Grief is a very personal thing, I can’t imagine this approach being for everyone.
‘Have a good cry’: Chuckle Brother takes aim at the grief taboo “It’s like when you are a kid and you fall over and you think it’s all right and then your mum comes and says, ‘Are you all right, love?’ You burst into tears,” he said. “It was the same when Barry died. Everybody was saying sorry about your brother.”
Replika seems less about leaving something behind for your family and friends when you’ve gone, and more about making a new friend whilst you’re still around.
The journey to create a friend There is no doubt that friendship with a person and with an AI are two very different matters. And yet, they do have one thing in common: in both cases you need to know your soon-to-be friend really well to develop a bond.
Strange, funny, and occasionally creepy non sequiturs are not new to Replika; in fact, there is a whole subreddit dedicated to weird exchanges with the bot. Overall, however, the bot seemed to follow the train of the conversation reasonably well, and even told me a joke when I asked it to.
Yesterday there were millions of us, today there’s nobody here at all.
AI fake face website launched A software developer has created a website that generates fake faces, using artificial intelligence (AI). Thispersondoesnotexist.com generates a new lifelike image each time the page is refreshed, using technology developed by chipmaker Nvidia. Some visitors to the website say they have been amazed by the convincing nature of some of the fakes, although others are more clearly artificial.
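Under the hood, sites like this sample from a trained generator network (Nvidia’s StyleGAN, in this case): each page refresh draws a fresh random latent vector and decodes it into an image. Here’s a toy, untrained stand-in of my own — a single random linear layer, purely to illustrate the sampling loop, nothing like the real model:

```python
import numpy as np

def make_toy_generator(latent_dim=64, image_size=32, seed=42):
    """Return a toy 'generator': one fixed random linear layer + sigmoid.

    Real face generators are deep networks trained on huge photo sets;
    this untrained stand-in only shows the mechanism the site relies on:
    new random latent vector in, new (greyscale) image out.
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(latent_dim, image_size * image_size))

    def generate(z):
        pixels = 1 / (1 + np.exp(-z @ W))   # squash to [0, 1] grey levels
        return pixels.reshape(image_size, image_size)

    return generate

generate = make_toy_generator()
rng = np.random.default_rng(0)
img_a = generate(rng.normal(size=64))   # one "page refresh"
img_b = generate(rng.normal(size=64))   # another
assert img_a.shape == (32, 32)
assert not np.allclose(img_a, img_b)    # every sample is a new image
```

The endless supply of faces falls straight out of this design: the latent space is continuous, so there are effectively infinitely many vectors to draw.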
They look like us, and now they can write like us too.
AI text writing technology too dangerous to release, creators claim Researchers at OpenAI say they have created an AI writer which “generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks and performs rudimentary reading comprehension, machine translation, question answering and summarisation — all without task-specific training.”
But they are withholding it from public use “due to our concerns about malicious applications of the technology”.
Of course, it’s not just AI that’s trying to pull the wool over our eyes.
How to catch a catfisher Last year, I found out someone was using my photos to catfish women. He stole dozens of my online photos – including selfies, family photos, baby photos, photos with my ex – and, pretending to be me, he would then approach women and spew a torrent of abuse at them.
It took me months to track him down, and now I’m about to call him.
Machines pretending to be people, people pretending to be other people. At least we’re truthful with ourselves, right?
Be honest, how much do you edit YOUR selfies? “It’s time to acknowledge the damaging effects that social media has on people’s self-image,” says Rankin of the project, which is part of a wider initiative to explore the impact of imagery on our mental health.
“Social media has made everyone into their own brand. People are creating a two-dimensional version of themselves at the perfect angle, with the most flattering lighting and with any apparent ‘flaws’ removed. Mix this readily-available technology with the celebrities and influencers flaunting impossible shapes with impossible faces and we’ve got a recipe for disaster.”
I don’t know about you, but I find things to do with AI, robots and automation quite confusing. Will the impact of these technologies really be as widespread as envisaged by the futurists? And what will the consequences and implications really be? Is humanity at stake, even?
Here are a number of articles I’m working through that will hopefully shed some light on it all. Let’s start with the robot uprising.
Social robots will become family members in the homes of the future With fewer stay-at-home parents, social robots can serve as personalized practice partners to help with homework and reinforce what children have learned that day in school. Far beyond helping you find recipes and ordering groceries, they can be your personal sous-chef or even help you learn to cook. They can also act as personal health coaches to supplement nutrition and wellness programs recommended by doctors and specialists for an increasingly health-conscious population. As the number of aging-in-place boomers soars, social robots can provide a sense of companionship for retirees while also connecting seniors to the world and to their loved ones, as well as sending doctor-appointment and medication reminders.
Robots! A fantastic catalog of new species IEEE Spectrum editor Erico Guizzo and colleagues have blown out their original Robots app into a fantastic catalog of 200 of today’s fantastic species of robots. They’re cleverly organized into fun categories like “Robots You Can Hug,” “Robots That Can Dance,” “Space Robots,” and “Factory Workers.” If they keep it updated, it’ll be very helpful for the robot uprising.
Robots can look very cute, but it’s the implications of those faceless boxes housing the AIs that will be more important, I think.
Computer says no: why making AIs fair, accountable and transparent is crucial Most AIs are made by private companies who do not let outsiders see how they work. Moreover, many AIs employ such complex neural networks that even their designers cannot explain how they arrive at answers. The decisions are delivered from a “black box” and must essentially be taken on trust. That may not matter if the AI is recommending the next series of Game of Thrones. But the stakes are higher if the AI is driving a car, diagnosing illness, or holding sway over a person’s job or prison sentence.
Last month, the AI Now Institute at New York University, which researches the social impact of AI, urged public agencies responsible for criminal justice, healthcare, welfare and education, to ban black box AIs because their decisions cannot be explained.
Artificial intelligence has got some explaining to do Most simply put, Explainable AI (also referred to as XAI) are artificial intelligence systems whose actions humans can understand. Historically, the most common approach to AI is the “black box” line of thinking: human input goes in, AI-made action comes out, and what happens in between can be studied, but never totally or accurately explained. Explainable AI might not be necessary for, say, understanding why Netflix or Amazon recommended that movie or that desk organizer for you (personally interesting, sure, but not necessary). But when it comes to deciphering answers about AI in fields like health care, personal finances, or the justice system, it becomes more important to understand an algorithm’s actions.
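One concrete, model-agnostic probe in this spirit is permutation importance: shuffle a single input feature and measure how much the model’s accuracy drops. If shuffling a column changes nothing, the model wasn’t relying on it. A minimal sketch, using a stand-in “black box” of my own rather than any system mentioned above:

```python
import numpy as np

def permutation_importance(predict, X, y, rng=None):
    """Score each feature by how much shuffling it hurts accuracy.

    A model-agnostic probe: `predict` can be any function mapping
    X -> predicted labels, however opaque. If randomly permuting a
    column barely changes accuracy, the model wasn't relying on it.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    baseline = np.mean(predict(X) == y)
    drops = []
    for j in range(X.shape[1]):
        X_shuffled = X.copy()
        X_shuffled[:, j] = rng.permutation(X_shuffled[:, j])  # break the link
        drops.append(baseline - np.mean(predict(X_shuffled) == y))
    return np.array(drops)

# Toy "black box" that secretly uses only feature 0
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)
model = lambda data: (data[:, 0] > 0).astype(int)

imp = permutation_importance(model, X, y)
assert imp[0] > 0.2                 # shuffling feature 0 wrecks accuracy
assert imp[1] == 0 and imp[2] == 0  # the others never mattered
```

Real explainability tooling goes far beyond this, of course, but the shuffle-and-remeasure idea underpins several of the methods these articles allude to.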
The only way is ethics.
Why teach drone pilots about ethics when it’s robots that will kill us? For the most part, armies are keen to maintain that there will always be humans in charge when lethal decisions are taken. This is only partly window dressing. One automated system is dangerous only to its enemies; two are dangerous to each other, and out of anyone’s control. We have seen what happens on stock markets when automatic trading programs fall into a destructive pattern and cause “flash crashes”. In October 2016 the pound lost 6% of its value, with blame in part put down to algorithmic trading. If two hi-tech armies were in a standoff where hair-trigger algorithms faced each other on both sides, the potential for disaster might seem unlimited.
Nuclear war has been averted on at least one occasion by a heroic Russian officer overriding the judgment of computers that there was an incoming missile attack from the US. But he had 25 minutes to decide. Battlefield time is measured in seconds.
The Pentagon’s plans to program soldiers’ brains DARPA has dreamed for decades of merging human beings and machines. Some years ago, when the prospect of mind-controlled weapons became a public-relations liability for the agency, officials resorted to characteristic ingenuity. They recast the stated purpose of their neurotechnology research to focus ostensibly on the narrow goal of healing injury and curing illness. The work wasn’t about weaponry or warfare, agency officials claimed. It was about therapy and health care. Who could object?
Let’s hope nothing goes wrong.
Machine learning confronts the elephant in the room Then the researchers introduced something incongruous into the scene: an image of an elephant in semiprofile. The neural network started getting its pixels crossed. In some trials, the elephant led the neural network to misidentify the chair as a couch. In others, the system overlooked objects, like a row of books, that it had correctly detected in earlier trials. These errors occurred even when the elephant was far from the mistaken objects.
Snafus like those extrapolate in unsettling ways to autonomous driving. A computer can’t drive a car if it might go blind to a pedestrian just because a second earlier it passed a turkey on the side of the road.
So yes, things can go wrong. But AI and automation will all be good for jobs, right?
Artificial intelligence to create 58 million new jobs by 2022, says report Machines and algorithms in the workplace are expected to create 133 million new roles, but cause 75 million jobs to be displaced by 2022, according to a new report from the World Economic Forum (WEF) called “The Future of Jobs 2018.” This means that the growth of artificial intelligence could create 58 million net new jobs in the next few years.
With this net positive job growth, there is expected to be a major shift in quality, location and permanency for the new roles. And companies are expected to expand the use of contractors doing specialized work and utilize remote staffing.
AI may not be bad news for workers Some jobs could be made a lot easier by AI. One example is lorry-driving. Some fear that truck drivers will be replaced by autonomous vehicles. But manoeuvring a lorry around busy streets is far harder than driving down the motorway. So the driver could switch into automatic mode (and get some rest) when outside the big cities, and take over the wheel once again when nearing the destination. The obvious analogy is with jetliners, where the pilots handle take-off and landing but turn on the computer to cruise at 35,000 feet. Using AI may prevent tired drivers from causing accidents.
Ok, yes, I can see that. But then it goes on…
And the report argues that AI can produce better decision-making by offering a contrarian opinion so that teams can avoid the danger of groupthink. A program could analyse e-mails and meeting transcripts and issue alerts when potentially false assumptions are being made (rather like the boy in the Hans Christian Andersen tale who notices that the Emperor has no clothes). Or it can warn a team when it is getting distracted from the task in hand.
Really? That’s quite a jump from automated driving. Having a system read everything a company’s employees write to look for poor assumptions? I cannot see that happening. More over-selling.
But what else could AI do?
AI lie detector tests to get trial run at EU airports Fliers will be asked a series of travel-related questions by a virtual border guard avatar, and artificial intelligence will monitor their faces to assess whether they are lying. The avatar will become “more skeptical” and change its tone of voice if it believes a person has lied, before referring suspect passengers to a human guard and allowing those believed to be honest to pass through, said Keeley Crockett of Manchester Metropolitan University in England, who was involved in the project.
“I’m an English artificial intelligence anchor,” Zhang’s digital doppelganger said in introduction during his first news telecast, blinking his eyes and raising his eyebrows throughout the video. “This is my very first day in Xinhua News Agency … I will work tirelessly to keep you informed, as texts will be typed into my system uninterrupted.”
But let’s not get too carried away here. We’re talking about people’s jobs, their livelihoods.
The automation charade Since the dawn of market society, owners and bosses have revelled in telling workers they were replaceable. Robots lend this centuries-old dynamic a troubling new twist: employers threaten employees with the specter of machine competition, shirking responsibility for their avaricious disposition through opportunistic appeals to tech determinism. A “jobless future” is inevitable, we are told, an irresistible outgrowth of innovation, the livelihood-devouring price of progress. […]
Though automation is presented as a neutral process, the straightforward consequence of technological progress, one needn’t look that closely to see that this is hardly the case. Automation is both a reality and an ideology, and thus also a weapon wielded against poor and working people who have the audacity to demand better treatment, or just the right to subsist.
That article goes on to introduce a new term for this overselling of workplace automation and the casualisation of low-skilled service work: “fauxtomation.”
But maybe we should all loosen up, and stop being so serious.
But for Wild Rose and many of the other dating sim enthusiasts I spoke to, making the characters more “human” wasn’t particularly exciting or even desired. Saeran didn’t need to be real for her to care about him.
The HAL 9000 Christmas ornament Fans of “2001: A Space Odyssey” will want to bring home this special Christmas ornament that celebrates 50 years of the science-fiction masterpiece. Press the button to see the ornament light up as HAL says several memorable phrases.
Those spirit photographs I mentioned just after Halloween? It seems AI is having a go at those, too.
An artificial intelligence populated these photos with glitchy humanoid ghosts
Two of the MIT researchers behind the provocative Deep Angel project, an algorithm that digitally erases objects from photos, have now delivered a strange and beautiful system to “conjure phantasms into being.” According to the project developers Matt Groh and Ziv Epstein, the phantasmagoric AI Spirits manifested by their code are meant to “commemorate those missing via algorithmic omission.”
A very interesting follow-up to that story about the first artwork by an AI to be auctioned. It seems the humans behind the AI, Hugo Caselles-Dupré and the Obvious team, have had to face some considerable criticism.
The AI art at Christie’s is not what you think Hugo Caselles-Dupré, the technical lead at Obvious, shared with me: “I’ve got to be honest with you, we have totally lost control of how the press talks about us. We are in the middle of a storm and lots of false information is released with our name on it. In fact, we are really depressed about it, because we saw that the whole community of AI art now hates us because of that. At the beginning, we just wanted to create this fun project because we love machine learning.” […]
Early on Obvious made the claim that “creativity isn’t only for humans,” implying that the machine is autonomously creating their artwork. While many articles have run with this storyline, one even crediting robots, it is not what most AI artists and AI experts in general believe to be true. Most would say that AI is augmenting artists at the moment and the description in the news is greatly exaggerated. […]
In fact, when pressed, Hugo admitted to me in our interview that this was just “clumsy communication” they made in the beginning when they didn’t think anyone was actually paying attention. […]
As we saw with Salvator Mundi last year and with the Banksy last week, the most prestigious auction houses, like museums, have the ability to elevate art and increase its value by putting it into the spotlight, shaping not only the narrative of the work, but also the narrative of art history.
The portrait in its gilt frame depicts a portly gentleman, possibly French and — to judge by his dark frockcoat and plain white collar — a man of the church. The work appears unfinished: the facial features are somewhat indistinct and there are blank areas of canvas. Oddly, the whole composition is displaced slightly to the north-west. A label on the wall states that the sitter is a man named Edmond Belamy, but the giveaway clue as to the origins of the work is the artist’s signature at the bottom right. In cursive Gallic script it reads:
This portrait, however, is not the product of a human mind. It was created by an artificial intelligence, an algorithm defined by that algebraic formula with its many parentheses.
It’s certainly a very interesting image — it reminds me a little of Francis Bacon’s popes — but the pedant in me would rather they stick with “created by an algorithm” rather than “generated by an artificial intelligence”. We’re not there yet. It was the “product of a human mind”, albeit indirectly. Take that signature, for example. I refuse to believe that this artificial intelligence decided for itself to sign its work that way. Declaring the AI to be the artist, as opposed to the medium, is like saying Excel is the artist in this case:
Tatsuo Horiuchi, the 73-year old Excel spreadsheet artist
“I never used Excel at work but I saw other people making pretty graphs and thought, ‘I could probably draw with that,’” says 73-year-old Tatsuo Horiuchi. About 13 years ago, shortly before retiring, Horiuchi decided he needed a new challenge in his life. So he bought a computer and began experimenting with Excel. “Graphics software is expensive but Excel comes pre-installed in most computers,” explained Horiuchi. “And it has more functions and is easier to use than [Microsoft] Paint.”
This AI is bad at drawing but will try anyways
This bird is less, um, recognizable. When the GAN has to draw *anything* I ask for, there’s just too much to keep track of – the problem’s too broad, and the algorithm spreads itself too thin. It doesn’t just have trouble with birds. A GAN that’s been trained just on celebrity faces will tend to produce photorealistic portraits. This one, however…
In fact, it does a horrifying job with humans because it can never quite seem to get the number of orifices correct.
But it seems the human artists can still surprise us, so all’s well.
Holed up: man falls into art installation of 8ft hole painted black
If there were any doubt at all that Anish Kapoor’s work Descent into Limbo is a big hole with a 2.5-metre drop, and not a black circle painted on the floor, then it has been settled. An unnamed Italian man has discovered to his cost that the work is definitely a hole after apparently falling in it.
Here in the UK, we’ve had credit cards since the 60s, though the term is thought to have been coined as far back as 1887 by the novelist Edward Bellamy. So perhaps this fresh look at their design is overdue.
Portrait bank cards are a thing now
Consider the ways you use your bank card on an everyday basis, whether handing it over to a cashier, swiping it to make contactless payments, or inserting it into an ATM. How are you holding the card as you do all those things? Vertically, I’m willing to bet, or in portrait orientation, to borrow a term. And yet, the vast majority of credit and debit cards are designed in landscape, sticking to a thoroughly outdated usage model. This is the senseless design inertia that the UK’s Starling Bank is rowing against with its newly unveiled portrait card design, which was spotted by Brand New.
There’s more info on the reasons behind the change on the bank’s website.
Introducing our new card Design usually evolves to solve something or to meet new needs, and bank cards don’t look the way they do by accident. They were designed landscape because of the way old card machines worked, and they’re embossed with raised numbers so they could be printed onto a sales voucher.
But we don’t use those machines anymore, so when you think about it, a landscape card is just a solution to a ‘problem’ that no longer exists. At Starling, we think it’s important that we can justify every decision we make – and we just couldn’t find a reason good enough to carry on using a design based on antiquated needs.
That first article from The Verge identifies a number of other banks and companies that have gone vertical. I’ve had a portrait Co-op membership card in my wallet for ages now, since their rebrand.
Speaking of credit cards, here’s an interesting article about how companies across the globe are turning to AI to help assess credit ratings in what they claim to be a fairer and more transparent way. That’s the idea, anyway…
It used to be that credit card companies would just be sneakily looking at transaction data to infer worthiness:
In the US, every transaction processed by Visa or MasterCard is coded by a “merchant category”—5122 for drugs, for example; 7277 for debt, marriage, or personal counseling; 7995 for betting and wagers; or 7273 for dating and escort services. Some companies curtailed their customers’ credit if charges appeared for counseling, because depression and marital strife were signs of potential job loss or expensive litigation.
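Mechanically, that kind of inference is trivial to implement, which is part of what makes it unsettling. Here’s a hypothetical sketch using the merchant category codes quoted above — the category map and the flagging rule are illustrative only, not any real issuer’s logic:

```python
# Merchant category codes (MCCs) from the excerpt above.
MERCHANT_CATEGORIES = {
    5122: "drugs",
    7277: "debt, marriage, or personal counseling",
    7995: "betting and wagers",
    7273: "dating and escort services",
}

# Codes the article says some companies treated as warning signs.
WATCHED_CODES = {7277, 7995}

def flag_transactions(transactions):
    """Return (code, amount, category) for every charge in a watched category."""
    return [
        (code, amount, MERCHANT_CATEGORIES[code])
        for code, amount in transactions
        if code in WATCHED_CODES
    ]

flags = flag_transactions([(5122, 30.0), (7995, 200.0), (7277, 90.0)])
# The betting and counseling charges are flagged; the pharmacy one is not.
```

A few lines of code, applied silently to every transaction — and the cardholder never learns which purchase tipped the score.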
Now the data trawl is much wider.
In these situations it becomes hard to tell what data, or combinations of data, are important — and even harder to do anything about it if these automated decisions go against us.
This wasn’t the only collaboration with the NHS that Google was involved in. There was another project, to help staff monitor patients with kidney disease, that had people concerned about the amount of medical information being handed over.
Revealed: Google AI has access to huge haul of NHS patient data Google says that since there is no separate dataset for people with kidney conditions, it needs access to all of the data in order to run Streams effectively. In a statement, the Royal Free NHS Trust says that it “provides DeepMind with NHS patient data in accordance with strict information governance rules and for the purpose of direct clinical care only.”
Still, some are likely to be concerned by the amount of information being made available to Google. It includes logs of day-to-day hospital activity, such as records of the location and status of patients – as well as who visits them and when. The hospitals will also share the results of certain pathology and radiology tests.
The Google-owned company tried to reassure us that everything was being done appropriately, that all those medical records would be safe with them.
All the data shared with DeepMind will be encrypted and parent company Google will not have access to it. Suleyman said the company was holding itself to “an unprecedented level of oversight”.
That didn’t seem to cut it though.
DeepMind’s data deal with the NHS broke privacy law “The Royal Free did not have a valid basis for satisfying the common law duty of confidence and therefore the processing of that data breached that duty,” the ICO said in its letter to the Royal Free NHS Trust. “In this light, the processing was not lawful under the Act.” […]
“The Commissioner is not persuaded that it was necessary and proportionate to process 1.6 million partial patient records in order to test the clinical safety of the application. The processing of these records was, in the Commissioner’s view, excessive,” the ICO said.
And now here we are, some years later, and that eye project is a big hit.
Artificial intelligence equal to experts in detecting eye diseases The breakthrough research, published online by Nature Medicine, describes how machine-learning technology has been successfully trained on thousands of historic de-personalised eye scans to identify features of eye disease and recommend how patients should be referred for care.
Researchers hope the technology could one day transform the way professionals carry out eye tests, allowing them to spot conditions earlier and prioritise patients with the most serious eye diseases before irreversible damage sets in.
That’s from UCL, one of the project’s partners. I like the use of the phrase ‘historic de-personalised eye scans’. And it doesn’t mention Google once.
Other reports also now seem to be pushing the ‘AI will rescue us’ angle, rather than the previous ‘Google will misuse our data’ line.
DeepMind AI matches health experts at spotting eye diseases DeepMind’s ultimate aim is to develop and implement a system that can assist the UK’s National Health Service with its ever-growing workload. Accurate AI judgements would lead to faster diagnoses and, in theory, treatment that could save patients’ vision.
He said: “Every eye doctor has seen patients go blind due to delays in referral; AI should help us to flag those urgent cases and get them treated early.”
And it seems AI can help with the really tricky problems too.
This robot uses AI to find Waldo, thereby ruining Where’s Waldo To me, this is like the equivalent of cheating on your math homework by looking for the answers at the back of your textbook. Or worse, like getting a hand-me-down copy of Where’s Waldo and when you open the book, you find that your older cousin has already circled the Waldos in red marker. It’s about the journey, not the destination — the process of methodically scanning pages with your eyes is entirely lost! But of course, no one is actually going to use this robot to take the fun out of Where’s Waldo, it’s just a demonstration of what AutoML can do.
A quite different take on Google’s AI demo from the other day. Rather than be impressed at how clever the bots appear, because they sound like us, we should be sad at how inefficient we’ve made them, because they sound like us.
Chatbots are saints
Pichai played a recording of Duplex calling a salon to schedule a haircut. This is an informational transaction that a couple of computers could accomplish in a trivial number of microseconds — bip! bap! done! — but with a human on one end of the messaging bus, it turned into a slow-motion train wreck. Completing the transaction required 17 separate data transmissions over the course of an entire minute — an eternity in the machine world. And the human in this case was operating at pretty much peak efficiency. I won’t even tell you what happened when Duplex called a restaurant to reserve a table. You could almost hear the steam coming out of the computer’s ears.
In our arrogance, we humans like to think of natural language processing as a technique aimed at raising the intelligence of machines to the point where they’re able to converse with us. Pichai’s demo suggests the reverse is true. Natural language processing is actually a technique aimed at dumbing down computers to the point where they’re able to converse with us. Google’s great breakthrough with Duplex came in its realization that by sprinkling a few monosyllabic grunts into computer-generated speech — um, ah, mmm — you could trick a human into feeling kinship with the machine. You ace the Turing test by getting machines to speak baby-talk.