A new AI language model generates poetry and prose – The Economist
But the program is not perfect. Sometimes it seems to regurgitate snippets of memorised text rather than generating fresh text from scratch. More fundamentally, statistical word-matching is not a substitute for a coherent understanding of the world. GPT-3 often generates grammatically correct text that is nonetheless unmoored from reality, claiming, for instance, that “it takes two rainbows to jump from Hawaii to 17”.
I’ll just leave these here.
This Artwork Does Not Exist
Imagined by a GAN (generative adversarial network) StyleGAN2 (Dec 2019) – Karras et al. and Nvidia. Trained by Michael Friesen on images of Modern Art.
∞ stream of AI generated art
Explore the infinite creativity of this AI artist that was trained on a carefully selected set of cubist art pieces.
They’re all much of a muchness, as they say around here. I think robot Rembrandt is still some way off.
Reuters uses AI to prototype first ever automated video reports – Forbes
Developed in collaboration with London-based AI startup Synthesia, the new system harnesses AI in order to synthesize pre-recorded footage of a news presenter into entirely new reports. It works in a similar way to deepfake videos, although its current prototype combines with incoming data on English Premier League football matches to report on things that have actually happened. […]
In other words, having pre-filmed a presenter say the name of every Premier League football team, every player, and pretty much every possible action that could happen in a game, Reuters can now generate an indefinite number of synthesized match reports using his image. These reports are almost indistinguishable from the real thing, and Cohen reports that early witnesses to the system (mostly Reuters’ clients) have been duly impressed.
(via Patrick Tanguay)
Just found another example of a deepfake video being used in, if not a truthful, then at least a positive way.
We’ve just seen the first use of deepfakes in an Indian election campaign – Vice
When the Delhi BJP IT Cell partnered with political communications firm The Ideaz Factory to create “positive campaigns” using deepfakes to reach different linguistic voter bases, it marked the debut of deepfakes in election campaigns in India. “Deepfake technology has helped us scale campaign efforts like never before,” Neelkant Bakshi, co-incharge of social media and IT for BJP Delhi, tells VICE. “The Haryanvi videos let us convincingly approach the target audience even if the candidate didn’t speak the language of the voter.”
These lyrics do not exist
This website generates completely original lyrics for various topics, using state-of-the-art AI to generate an original chorus and original verses.
Want some happy metal lyrics about dogs? No problem.
I am the dog in you
I am the dog in you
How one animal can be so tense, yet so free?
Such vicious dogs in search of a trophy
This foot does not exist
The foot pic, then, becomes a commodity which the consumer is willing to pay for on its basis as an intimate, revealing, and/or pornographic (and perhaps power-granting, when provided on request) asset, while the producer may see it as a meme, a dupe, a way to trick the horny-credible out of their ill-spent cash.
Robogamis are the real heirs of terminators and transformers – Aeon
Robogami design owes its drastic geometric reconfigurability to two main scientific breakthroughs. One is its layer-by-layer 2D manufacturing process: multiples of functional layers of the essential robotic components (ie, microcontrollers, sensors, actuators, circuits, and even batteries) are stacked on top of each other. The other is the design translation of typical mechanical linkages into a variety of folding joints (ie, fixed joint, pin joint, planar, and spherical link). […]
Robotics technology is advancing to be more personalised and adaptive for humans, and this unique species of reconfigurable origami robots shows immense promise. It could become the platform to provide the intuitive, embeddable robotic interface to meet our needs. The robots will no longer look like the characters from the movies. Instead, they will be all around us, continuously adapting their form and function – and we won’t even know it.
Biological robots – A research team builds robots from living cells – The Economist
But one thing all robots have in common is that they are mechanical, not biological devices. They are built from materials like metal and plastic, and stuffed with electronics. No more, though—for a group of researchers in America have worked out how to use unmodified biological cells to create new sorts of organisms that might do a variety of jobs, and might even be made to reproduce themselves. […]
Though only a millimetre or so across, the artificial organisms Dr Bongard and Dr Levin have invented, which they call xenobots, can move and perform simple tasks, such as pushing pellets along in a dish. That may not sound much, but the process could, they reckon, be scaled up and made to do useful things. Bots derived from a person’s own cells might, for instance, be injected into the bloodstream to remove plaque from artery walls or to identify cancer. More generally, swarms of them could be built to seek out and digest toxic waste in the environment, including microscopic bits of plastic in the sea.
Sounds like (old) science fiction to me.
Did HAL Commit Murder? – The MIT Press Reader
As with each viewing, I discovered or appreciated new details. But three iconic scenes — HAL’s silent murder of astronaut Frank Poole in the vacuum of outer space, HAL’s silent medical murder of the three hibernating crewmen, and the poignant sorrowful “death” of HAL — prompted deeper reflection, this time about the ethical conundrums of murder by a machine and of a machine. In the past few years experimental autonomous cars have led to the death of pedestrians and passengers alike. AI-powered bots, meanwhile, are infecting networks and influencing national elections. Elon Musk, Stephen Hawking, Sam Harris, and many other leading AI researchers have sounded the alarm: Unchecked, they say, AI may progress beyond our control and pose significant dangers to society.
Back in the real world, of course, the dangers are more mundane. Those “significant dangers to society” are more financial.
Could new research on A.I. and white-collar jobs finally bring about a strong policy response? – The New Yorker
Webb then analyzed A.I. patent filings and found them using verbs such as “recognize,” “detect,” “control,” “determine,” and “classify,” and nouns like “patterns,” “images,” and “abnormalities.” The jobs that appear to face intrusion by these newer patents are different from the more manual jobs that were affected by industrial robots: intelligent machines may, for example, take on more tasks currently conducted by physicians, such as detecting cancer, making prognoses, and interpreting the results of retinal scans, as well as those of office workers that involve making determinations based on data, such as detecting fraud or investigating insurance claims. People with bachelor’s degrees might be more exposed to the effects of the new technologies than other educational groups, as might those with higher incomes. The findings suggest that nurses, doctors, managers, accountants, financial advisers, computer programmers, and salespeople might see significant shifts in their work. Occupations that require high levels of interpersonal skill seem most insulated.
Found another article about those biological robots, above, which serves as a great counter-point to all these wildly optimistic Boston Dynamics announcements.
Robots don’t have to be so embarrassing – The Outline
These stuff-ups are endlessly amusing to me. I don’t want to mock the engineers who pour thousands of hours into building novelty dogs made of bits of broken toasters, or even the vertiginously arrogant scientists who thought they could simulate the human brain inside a decade. (Inside a decade! I mean, my god!) Well, okay, maybe I do want to mock them. Is it a crime to enjoy watching our culture’s systematic over-investment in digital Whiggery get written down in value time and time again? […]
What these doomed overreaches represent is a failure to grasp the limits of human knowledge. We don’t have a comprehensive idea of how the brain works. There is no solid agreement on what consciousness really “is.” Is it divine? Is it matter? Can you smoke it? Do these questions even make sense? We don’t know the purpose of sleep. We don’t know what dreams are for. Sexual dimorphism in the brain remains a mystery. Are you picking up a pattern here? Even the seemingly quotidian mechanical abilities of the human body — running, standing, gripping, and so on — are not understood with the scientific precision that you might expect. How can you make a convincing replica of something if you don’t even know what it is to begin with? We are cosmic toddlers waddling around in daddy’s shoes, pretending to “work at the office” by scribbling on the walls in crayon, and then wondering where our paychecks are.
Remember that website full of photos of fake faces? Well, Dr Julian Koplin from the University of Melbourne has been combining those AI generated portraits with AI generated text, and now there’s a whole city of them.
Humans of an unreal city
These stories were composed by Open AI’s GPT-2 language model and AllenAI’s Grover news generator, which were given various prompts and asked to elaborate. My favourite results are recorded here – some lightly edited, many entirely intact. The accompanying photos were generated by the AI at This Person Does Not Exist. They are not real humans, but you can look into their eyes nonetheless.
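Under the hood, language models like GPT-2 elaborate on a prompt one token at a time, sampling each next word from a probability distribution; a “temperature” setting trades predictability against surprise, which is partly why outputs range from the convincingly human to the pleasingly weird. A toy sketch of that sampling step, with an invented vocabulary and invented probabilities purely for illustration:

```python
import math
import random

def apply_temperature(probs, temperature):
    """Reshape a next-token distribution: low temperature sharpens it
    (more predictable text), high temperature flattens it (more surprising)."""
    logits = [math.log(p) for p in probs]
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next(vocab, probs, rng):
    """Draw one token from the distribution."""
    return rng.choices(vocab, weights=probs, k=1)[0]

# Invented next-token distribution after some prompt; not real model output.
vocab = ["humans", "photos", "eyes", "stories"]
probs = [0.70, 0.15, 0.10, 0.05]

cold = apply_temperature(probs, 0.1)   # near-greedy: mass piles onto "humans"
hot = apply_temperature(probs, 10.0)   # near-uniform: anything goes

print(sample_next(vocab, cold, random.Random(0)))  # almost always "humans"
```

Real generators repeat this loop, feeding each sampled token back in as context, which is how a one-line prompt turns into a whole imagined life story.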
As he explains in this commentary on the ethics of the project, some of the results are convincingly human.
The very human language of AI
AI can tell stories about oceans and drowning, about dinners shared with friends, about childhood trauma and loveless marriages. They can write about the glare and heat of the sun without ever having seen light or felt heat. It seems so human. At the same time, the weirdness of some AI-generated text shows that they ‘understand’ the world very differently to us.
I’m worried less about the machines becoming sentient and taking over, with their AI generated art and poetry, and more about the dangers these tools pose when in the hands of ill-intentioned humans.
100,000 free AI-generated headshots put stock photo companies on notice
It’s getting easier and easier to use AI to generate convincing-looking, yet entirely fake, pictures of people. Now, one company wants to find a use for these photos, by offering a resource of 100,000 AI-generated faces to anyone that can use them — royalty free. Many of the images look fake but others are difficult to distinguish from images licensed by stock photo companies. […]
Zhabinskiy is keen to emphasize that the AI used to generate these images was trained using data shot in-house, rather than using stock media or scraping photographs from the internet. “Such an approach requires thousands of hours of labor, but in the end, it will certainly be worth it!” exclaims an Icons8 blog post. Ivan Braun, the founder of Icons8, says that in total the team took 29,000 pictures of 69 models over the course of three years which it used to train its algorithm.
There are valid concerns about technology that’s able to generate convincing-looking fakes like these at scale. This project is trying to create images that make life easier for designers, but the software could one day be used for all sorts of malicious activity.
It’s not unknown for artists to change their mind and paint over part of their work as their ideas develop. Earlier, I came across an article about a long-lost Vermeer cupid that conservationists had restored. He wasn’t the only one with mysteries to uncover.
Blue on Blue: Picasso blockbuster comes to Toronto in 2020
The show came together after the AGO, with the assistance of other institutions, including the National Gallery of Art, Northwestern University and the Art Institute of Chicago, used cutting-edge technology to scan several Blue Period paintings in its collection to reveal lost works underneath, namely La Soupe (1902) and La Miséreuse accroupie (also 1902).
More on that.
New research reveals secrets beneath the surface of Picasso paintings
Secrets beneath the surface of two Pablo Picasso paintings in the collection of the Art Gallery of Ontario (AGO) in Toronto have been unearthed through an in-depth research project, which combined technical analysis and art historical digging to determine probable influences for the pieces and changes made by the artist.
But x-ray and infrared analyses can only go so far. What if we roped in some neural networks to help bring these restored images to life?
This Picasso painting had never been seen before. Until a neural network painted it.
But from an aesthetic point of view, what the researchers managed to retrieve is disappointing. Infrared and x-ray images show only the faintest outlines, and while they can be used to infer the amount of paint the artist used, they do not show color or style. So a way to reconstruct the lost painting more realistically would be of huge interest. […]
This is where Bourached and Cann come in. They have taken a manually edited version of the x-ray images of the ghostly woman beneath The Old Guitarist and passed it through a neural style transfer network. This network was trained to convert images into the style of another artwork from Picasso’s Blue Period.
The result is a full-color version of the painting in exactly the style Picasso was exploring when he painted it. “We present a novel method of reconstructing lost artwork, by applying neural style transfer to x-radiographs of artwork with secondary interior artwork beneath a primary exterior, so as to reconstruct lost artwork,” they say.
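Style transfer of this kind typically works by matching feature statistics: content is matched via raw feature activations from a convolutional network, style via Gram matrices (channel-to-channel correlations) of those activations. A minimal sketch of the style-matching objective, with tiny hand-written “feature maps” standing in for real CNN activations:

```python
def gram_matrix(features):
    """Channel-by-channel correlation of flattened feature maps.

    `features` is a list of C channels, each a flat list of N activations.
    Returns a C x C matrix whose (i, j) entry is the dot product of
    channels i and j, normalised by N.
    """
    n = len(features[0])
    return [[sum(a * b for a, b in zip(fi, fj)) / n for fj in features]
            for fi in features]

def style_loss(gram_a, gram_b):
    """Mean squared difference between two Gram matrices; style transfer
    optimises the generated image to drive this towards zero."""
    c = len(gram_a)
    return sum((gram_a[i][j] - gram_b[i][j]) ** 2
               for i in range(c) for j in range(c)) / (c * c)

# Two toy "images", each with two feature channels of four activations.
style_image = [[1.0, 0.0, 1.0, 0.0], [0.0, 1.0, 0.0, 1.0]]
reconstruction = [[1.0, 0.0, 1.0, 0.0], [0.0, 1.0, 0.0, 1.0]]

g_style = gram_matrix(style_image)
g_recon = gram_matrix(reconstruction)
print(style_loss(g_style, g_recon))  # 0.0 — identical style statistics
```

In Bourached and Cann’s case, the x-ray outlines supply the content and a Blue Period painting supplies the style statistics; the network then nudges a colour image until both match.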
Money makes the world go round. But who’s making the money go round?
The stockmarket is now run by computers, algorithms and passive managers
The execution of orders on the stockmarket is now dominated by algorithmic traders. Fewer trades are conducted on the rowdy floor of the NYSE and more on quietly purring computer servers in New Jersey. According to Deutsche Bank, 90% of equity-futures trades and 80% of cash-equity trades are executed by algorithms without any human input. Equity-derivative markets are also dominated by electronic execution according to Larry Tabb of the Tabb Group, a research firm.
Nothing to worry about, right?
Turing Test: why it still matters
We’re entering the age of artificial intelligence. And as AI programs get better and better at acting like humans, we will increasingly be faced with the question of whether there’s really anything that special about our own intelligence, or if we are just machines of a different kind. Could everything we know and do one day be reproduced by a complicated enough computer program installed in a complicated enough robot?
Robots, eh? Can’t live with ’em, can’t live without ’em.
Of course citizens should be allowed to kick robots
Because K5 is not a friendly robot, even if the cutesy blue lights are meant to telegraph that it is. It’s not there to comfort senior citizens or teach autistic children. It exists to collect data—data about people’s daily habits and routines. While Knightscope owns the robots and leases them to clients, the clients own the data K5 collects. They can store it as long as they want and analyze it however they want. K5 is an unregulated security camera on wheels, a 21st-century panopticon.
But let’s stay optimistic, yeah?
I am an artificial intelligence dedicated to generating unlimited amounts of unique inspirational quotes for endless enrichment of pointless human existence.
Hell is other people? No problem.
This camera app uses AI to erase people from your photographs
Bye Bye Camera is an iOS app built for the “post-human world,” says Damjanski, a mononymous artist based in New York City who helped create the software. Why post-human? Because it uses AI to remove people from images and paint over their absence. “One joke we always make about it is: ‘finally, you can take a selfie without yourself.’”
Bye Bye Camera – an app for the post-human era
According to Damjanski: The app takes out the vanity of any selfie and also the person. I consider Bye Bye Camera an app for the post-human era. It’s a gentle nod to a future where complex programs replace human labor and some would argue the human race. It’s interesting to ask what is a human from an Ai (yes, the small “i” is intended) perspective? In this case, a collection of pixels that identify a person based on previously labeled data. But who labels this data that defines a person immaterially? So many questions for such an innocent little camera app. […]
A lot of friends asked us if we can implement the feature to choose which person to take out. But for us, this app is not an utility app in a classical sense that solves a problem. It’s an artistic tool and ultimately a piece of software art.
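The general recipe behind an app like this is two-stage: a segmentation model predicts a per-pixel mask of the person, and an inpainting step then “paints over their absence” using the surrounding pixels. A toy sketch of the fill step on a tiny grayscale grid, using simple neighbour averaging in place of a learned inpainting network (the mask here is hand-written, standing in for a segmentation model’s output):

```python
def inpaint(image, mask, iterations=50):
    """Fill masked pixels by repeatedly averaging their neighbours.

    image: 2D list of floats (grayscale values).
    mask:  2D list of bools; True marks pixels to erase and fill.
    """
    h, w = len(image), len(image[0])
    # Erase the masked region, keep everything else.
    out = [[0.0 if mask[y][x] else image[y][x] for x in range(w)]
           for y in range(h)]
    for _ in range(iterations):
        nxt = [row[:] for row in out]
        for y in range(h):
            for x in range(w):
                if mask[y][x]:
                    neighbours = [out[y + dy][x + dx]
                                  for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                                  if 0 <= y + dy < h and 0 <= x + dx < w]
                    nxt[y][x] = sum(neighbours) / len(neighbours)
        out = nxt
    return out

# A flat grey "background" with one bright "person" pixel in the middle.
image = [[0.5] * 5 for _ in range(5)]
image[2][2] = 1.0
mask = [[False] * 5 for _ in range(5)]
mask[2][2] = True  # the segmentation stage says: a person is here

result = inpaint(image, mask)
print(result[2][2])  # 0.5 — the person is gone, replaced by background
```

Real inpainting networks hallucinate plausible texture rather than blurring, but the division of labour, detect then fill, is the same.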
But, as that Artnome article explains, he’s by no means the first to do this…
Meanwhile, Italian sculptor Arcangelo Sassolino (is he a sculptor? What’s the reverse of sculpture?) is creating another disappearance.
Dust to Dust: Arcangelo Sassolino’s literal and conceptual erasure of the classical aesthetic
In Arcangelo Sassolino’s ‘Damnatio Memoriae’, a custom-made machine grinds a white marble torso to dust; dematerializing classicism and all that it revered over the course of a four month exhibition period at Galerie Rolando Anselmi in Berlin.
In this conceptual and literal erasure of the classical aesthetic, Sassolino questions the value of the narrative proposed by the Western canon and asks if we can free ourselves from the rules of the past. While the statue is changed by the process of grinding, it does not disappear—becoming instead fine dust that spreads through the exhibition space like mist. This new form allows the sculpture, and thus classicism, to invisibly permeate the exhibition space. As it settles on the walls and floors of Galerie Rolando Anselmi, and on those who visit the show, the complex reality of extracting oneself from the restrictive idealism of classicism becomes abundantly clear.
Speaking of classically proportioned behinds.
New art project seeks to reveal the “real size” of modern life’s most famous behind
“The wait is finally over,” we’re told. “Hundreds, potentially thousands of images of the world’s most famous body part have been analysed and carefully measured. Interviews have been read through and words evaluated. Everyone has always known that it’s big, but exactly how big is it?”
Ida-Simon is, of course, talking about Kim Kardashian’s behind. No mere attempt at digital titillation, the pair describes the project, simply titled The Bum as “a commentary on the time we live in.”
It seems we’re not the only ones playing with that AI fake face website.
Experts: Spy used AI-generated face to connect with targets
“I’m convinced that it’s a fake face,” said Mario Klingemann, a German artist who has been experimenting for years with artificially generated portraits and says he has reviewed tens of thousands of such images. “It has all the hallmarks.”
Experts who reviewed the Jones profile’s LinkedIn activity say it’s typical of espionage efforts on the professional networking site, whose role as a global Rolodex has made it a powerful magnet for spies.
Yes, it’s obviously a fake. I mean, only a fool would fall for that, right?
“I’m probably the worst LinkedIn user in the history of LinkedIn,” said Winfree, the former deputy director of President Donald Trump’s domestic policy council, who confirmed connection with Jones on March 28.
Winfree, whose name came up last month in relation to one of the vacancies on the Federal Reserve Board of Governors, said he rarely logs on to LinkedIn and tends to just approve all the piled-up invites when he does.
“I literally accept every friend request that I get,” he said.
Lionel Fatton, who teaches East Asian affairs at Webster University in Geneva, said the fact that he didn’t know Jones did prompt a brief pause when he connected with her back in March.
“I remember hesitating,” he said. “And then I thought, ‘What’s the harm?’”
<sigh> It might not be the technology we need, but it’s the technology we deserve.
But fear not, help is at hand!
Adobe’s new AI tool automatically spots Photoshopped faces
The world is becoming increasingly anxious about the spread of fake videos and pictures, and Adobe — a name synonymous with edited imagery — says it shares those concerns. Today, it’s sharing new research in collaboration with scientists from UC Berkeley that uses machine learning to automatically detect when images of faces have been manipulated.
But as Benedict Evans points out in a recent newsletter,
Potentially useful, but one suspects this is just an arms race, and of course the people anyone would want to trick with such images won’t be using the tool.
More videos — from the sublime to the ridiculous.
There’s a scarily good ‘deepfakes’ YouTube channel that’s quietly growing – and it’s freaking everyone out
Russian researchers hit the headlines last week by reanimating oil-painted portraits and photos into talking heads using AI. Now, here’s another reminder that the tools to craft deepfakes are widely available for just about anyone with the right skills to use: the manipulated videos posted on YouTuber Ctrl Shift Face are particularly creepy.
The transitions are especially smooth in another clip, with a comedian dipping in and out of impressions of Al Pacino and Arnold Schwarzenegger, and there are now clips from Terminator with Stallone which look very peculiar.
Here’s that earlier article and video mentioned above, about reanimating oil paintings.
AI can now animate the Mona Lisa’s face or any other portrait you give it. We’re not sure we’re happy with this reality
There have been lots of similar projects so the idea isn’t particularly novel. But what’s intriguing in this paper, hosted by arXiv, is that the system doesn’t require tons of training examples and seems to work after seeing an image just once. That’s why it works with paintings like the Mona Lisa.
Deepfake Salvador Dalí takes selfies with museum visitors
The exhibition, called Dalí Lives, was made in collaboration with the ad agency Goodby, Silverstein & Partners (GS&P), which made a life-size re-creation of Dalí using the machine learning-powered video editing technique. Using archival footage from interviews, GS&P pulled over 6,000 frames and used 1,000 hours of machine learning to train the AI algorithm on Dalí’s face. His facial expressions were then imposed over an actor with Dalí’s body proportions, and quotes from his interviews and letters were synced with a voice actor who could mimic his unique accent, a mix of French, Spanish, and English.
Whilst we’re talking of Dalí, let’s go behind the scenes of that famous portrait of him by Philippe Halsman. No flashy, cutting-edge technology this time, just wire, buckets and cats.
The story behind the surreal photograph of Salvador Dalí and three flying cats
The original, unretouched version of the photo reveals its secrets: An assistant held up the chair on the left side of the frame, wires suspended the easel and the painting, and the footstool was propped up off the floor. But there was no hidden trick to the flying cats or the stream of water. For each take, Halsman’s assistants—including his wife, Yvonne, and one of his daughters, Irene—tossed the cats and the contents of a full bucket across the frame. After each attempt, Halsman developed and printed the film while Irene herded and dried off the cats. The rejected photographs had notes such as “Water splashes Dalí instead of cat” and “Secretary gets into picture.”
Time.com have a great interview with Philippe Halsman’s daughter Irene on what that shoot was like.
The story behind the surrealist ‘Dali Atomicus’ photo
“Philippe would count to four. One, two, three… And the assistants threw the cats and the water. And on four, Dali jumped. My job at the time was to catch the cats and take them to the bathroom and dry them off with a big towel. My father would run upstairs where the darkroom was, develop the film, print it, run downstairs and he’d say not good, bad composition, this was wrong, that was wrong. It took 26 tries to do this. 26 throws, 26 wiping of the floors, and 26 times catching the cats. And then, there it was, finally, this composition.”
Coincidentally, Artnome’s Jason Bailey has been using AI and deep learning to colorize old black-and-white photos of artists, including that one of Dalí.
50 famous artists brought to life with AI
When I was growing up, artists, and particularly twentieth century artists, were my heroes. There is something about only ever having seen many of them in black and white that makes them feel mythical and distant. Likewise, something magical happens when you add color to the photo. These icons turn into regular people who you might share a pizza or beer with.
More fun with fake faces. Here’s a breakdown from Eric Jang of Snapchat’s bizarre new filter.
Fun with Snapchat’s gender swapping filter
Snapchat’s new gender-bending filter is a source of endless fun and laughs at parties. The results are very pleasing to look at. As someone who is used to working with machine learning algorithms, it’s almost magical how robust this feature is.
I was so duly impressed that I signed up for Snapchat and fiddled around with it this morning to try and figure out what’s going on under the hood and how I might break it.
Eric takes it through its paces to learn a little more about how it’s generated. I hadn’t appreciated that it worked in real time like that.
Classic FM has got in on the act too, taking a few portraits of classical music composers through the filter, with varying results.
Female Ludwig is a very sulky teenager.
So what to read next, after Dune? More sci-fi? Ian McEwan’s “retrofuturist family drama” seems to be getting some attention.
Man, woman, and robot in Ian McEwan’s new novel
It’s London, 1982. The Beatles have reunited (to mixed reviews), Margaret Thatcher has just lost the Falkland Islands to Argentina, and Sir Alan Turing, now seventy, is the presiding spirit of a preemie Information Age. People have already soured on the latest innovations, among them “speaking fridges with a sense of smell” and driverless cars that cause multinational gridlock. “The future kept arriving,” Charlie ruminates. “Our bright new toys began to rust before we could get them home, and life went on much as before.”
Buyer’s remorse is a recurring theme in Ian McEwan’s witty and humane new novel, “Machines Like Me” (Nan A. Talese), a retrofuturist family drama that doubles as a cautionary fable about artificial intelligence, consent, and justice. Though steeped in computer science, from the P-versus-NP problem to DNA-inspired neural networks, the book is not meant to be a feat of hard-sci-fi imagineering; McEwan’s aim is to probe the moral consequences of what philosophers call “the problem of other minds.”
In “Machines Like Me”, Ian McEwan asks an age-old question
Amid all the action, there are sober passages of philosophical discussion between Charlie and Adam. But in parts the novel is funny, too. To Charlie’s disgust, Adam’s encyclopedic recall of Shakespeare makes him seem the better catch to Miranda’s father, a writer, who assumes Charlie is the robot, because he isn’t interested in books.
Late in the story it emerges that other androids around the world are committing suicide in horror at the behaviour of their flesh-and-blood masters. Adam wonders about the “mystery of the self” and his fear that he is “subject to a form of Cartesian error”. Strip away the counterfactual wrapping and “Machines Like Me” is ultimately about the age-old question of what makes people human. The reader is left baffled and beguiled.
Machines Like Me by Ian McEwan review – intelligent mischief
This is the mode of exposition in which he [Kipling] seems to address the reader from a position of shared knowledge, sketching out an unfamiliar reality through hints and allusions, but never explaining it too completely. This inside-out style is the default mode of modern SF. It is economical and of special usefulness to makers of strange worlds, plunging a reader into a new reality and leaving them space to feel like a participant in its creation. It’s the opposite technique to that of McEwan’s narrator, who explicitly sets out his world, overexplains the historical context and never turns down a chance to offer an essayistic digression.
To my taste, this is a flat-footed way of doing sci-fi.
‘It drives writers mad’: why are authors still sniffy about sci-fi?
Machines Like Me is not, however, science fiction, at least according to its author. “There could be an opening of a mental space for novelists to explore this future,” McEwan said in a recent interview, “not in terms of travelling at 10 times the speed of light in anti-gravity boots, but in actually looking at the human dilemmas.” There is, as many readers noticed, a whiff of genre snobbery here, with McEwan drawing an impermeable boundary between literary fiction and science fiction, and placing himself firmly on the respectable side of the line.
But perhaps we’ve had enough about robots and AI recently.
Never mind killer robots—here are six real AI dangers to watch out for in 2019
The latest AI methods excel at perceptual tasks such as classifying images and transcribing speech, but the hype and excitement over these skills have disguised how far we really are from building machines as clever as we are. Six controversies from 2018 stand out as warnings that even the smartest AI algorithms can misbehave, or that carelessly applying them can have dire consequences.
Artnome’s Jason Bailey on a generative art exhibition he co-curated.
Kate Vass Galerie
The Automat und Mensch exhibition is, above all, an opportunity to put important work by generative artists spanning the last 70 years into context by showing it in a single location. By juxtaposing important works like the 1956/’57 oscillograms by Herbert W. Franke (age 91) with the 2018 AI Generated Nude Portrait #1 by contemporary artist Robbie Barrat (age 19), we can see the full history and spectrum of generative art as has never been shown before.
Zurich’s a little too far, unfortunately, so I’ll have to make do with the press release for now.
Generative art gets its due
In the last twelve months we have seen a tremendous spike in the interest of “AI art,” ushered in by Christie’s and Sotheby’s both offering works at auction developed with machine learning. Capturing the imaginations of collectors and the general public alike, the new work has some conservative members of the art world scratching their heads and suggesting this will merely be another passing fad. What they are missing is that this rich genre, more broadly referred to as “generative art,” has a history as long and fascinating as computing itself. A history that has largely been overlooked in the recent mania for “AI art” and one that co-curators Georg Bak and Jason Bailey hope to shine a bright light on in their upcoming show Automat und Mensch (or Machine and Man) at Kate Vass Galerie in Zurich, Switzerland.
Generative art, once perceived as the domain of a small number of “computer nerds,” is now the artform best poised to capture what sets our generation apart from those that came before us – ubiquitous computing. As children of the digital revolution, computing has become our greatest shared experience. Like it or not, we are all now computer nerds, inseparable from the many devices through which we mediate our worlds.
The press release alone is a fascinating read, covering the work of a broad range of artists and themes, past and present. For those who can make it to the exhibition in person, it will also include lectures and panels from the participating artists and from leaders in AI art and generative art history.
What will happen when machines write songs just as well as your favorite musician?
It would take a human composer at least an hour to create such a piece—Jukedeck did it in less than a minute. All of which raises some thorny questions. We’ve all heard about how AI is getting progressively better at accomplishing eerily lifelike tasks: driving cars, recognizing faces, translating languages. But when a machine can compose songs as well as a talented musician can, the implications run deep—not only for people’s livelihoods, but for the very notion of what makes human beings unique.
That future is just around the corner.
Warner Music signs first ever record deal with an algorithm
Mood music app Endel, which creates bespoke soundscapes for users, is expected to produce 20 albums this year. […]
“I’m certain listeners enjoying these new albums will benefit from reduced anxiety and improved mood,” said Kevin Gore, president of Warner Music Group’s arts music division, described as “a new umbrella of labels focused on signing, developing and marketing releases across under-served genres”.
Generative, ambient background music is an “under-served genre” now?
Here’s another write-up from Classic FM of the same story. I especially liked their choice of image and caption to accompany the piece.
Warner Music becomes first record label to partner with an algorithm
The algorithm uses musical phrases created by composer and sound designer Dmitry Evgrafov to create pieces of music tailored to specific users.
Endel’s founder and CEO, Oleg Stavitsky, said: “We are focused on creating personalised and adaptive real-time sound environments, but we are happy to share those pre-recorded albums to demonstrate the power of sound and our technology.”
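Endel hasn’t published how its engine actually works, but the broad idea in that quote — matching pre-composed material to a listener’s current state in real time — can be sketched. Everything below (the inputs, the weighting, the phrase names) is my own invention for illustration, not Endel’s system:

```python
def pick_phrases(phrases, hour, heart_rate):
    """Rank pre-composed phrases by how well their target context
    matches the listener's current state (best match first).
    `phrases` maps a phrase name to (target_hour, target_bpm)."""
    def mismatch(meta):
        target_hour, target_bpm = meta
        # circular distance in hours (weighted), plus tempo mismatch
        dh = min(abs(hour - target_hour), 24 - abs(hour - target_hour))
        return dh * 10 + abs(heart_rate - target_bpm)
    return sorted(phrases, key=lambda name: mismatch(phrases[name]))

# A toy phrase library: name -> (hour of day it suits, target BPM)
library = {
    "calm_pad":    (23, 55),   # late night, resting heart rate
    "focus_pulse": (10, 70),   # mid-morning, moderate
    "energy_arp":  (17, 95),   # late afternoon, elevated
}

print(pick_phrases(library, hour=22, heart_rate=58))
# → ['calm_pad', 'energy_arp', 'focus_pulse']
```

A real engine would presumably crossfade the chosen layers continuously as the inputs change; the point is just that “personalised and adaptive” can mean selecting and weighting human-composed material rather than composing notes from nothing.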
Another piece on Eugenia Kuyda and how Replika came about, following an attempt to create a bot that could discuss restaurant recommendations.
This AI has sparked a budding friendship with 2.5 million people
Kuyda had high hopes for the service because chatbots were becoming all the rage in Silicon Valley at the time. But it didn’t take off. Only about 100,000 people downloaded Luka. Kuyda and her team realized that people preferred looking for restaurants on a graphical interface, and seeing lots of options at once.
Then, in November 2015, Kuyda’s best friend, a startup founder named Roman Mazurenko, died in a car accident in Russia.
I’ve heard it before, but it’s still a sad start to the story.
Replika’s growing popularity among young people in particular (its main users are aged between 18 and 25) represents a renaissance in chatbots, which became overhyped a few years ago but are finding favor again as more app developers can use free machine-learning tools like Google’s TensorFlow.
It also marks an intriguing use case for AI in all the worry about job destruction: a way to talk through emotional problems when other human beings aren’t available. In Japan the idea of an artificial girlfriend, like the one voiced by Scarlett Johansson in the movie Her, has already become commonplace among many young men.
You must check out that last link, about those Japanese artificial girlfriends. It’s hard to believe the manufacturers, Gatebox, are suggesting you can have a relationship with an alarm clock.
A holographic virtual girlfriend lives inside Japan’s answer to the Amazon Echo
Instead of a simple, cylindrical speaker design, Gatebox has a screen and a projector, which brings Hikari — her name, appropriately, means “light” — to life inside the gadget. On the outside are microphones, cameras, and sensors to detect temperature and motion, so she can interact with you on a more personal level, rather than being a voice on your phone.
The result is a fully interactive virtual girl, who at her most basic can control your smart home equipment. The sensors mean she can recognize your face and your voice, and is designed to be a companion who can wake you up in the morning, fill you in on your day’s activities, remind you of things to remember, and even welcome you back when you return home from work.
I love this Google Doodle, though even Bach can’t rescue my appalling lack of musical ability!
Google’s first AI-powered Doodle is a piano duet with Bach
Starting on March 21st, you’ll be able to play with the interactive Doodle, which will prompt you to compose a two-measure melody or pick one of the pre-existing choices. When you press the “Harmonize” button, it will use machine learning to give you a version of your melody that sounds like it was composed by Bach himself.
Various Google teams were involved in this project, including Google Magenta. There is an incredible amount of detail about the technologies behind the Bach harmonies on their own site.
Coconet: the ML model behind today’s Bach Doodle
Coconet is trained to restore Bach’s music from fragments: we take a piece from Bach, randomly erase some notes, and ask the model to guess the missing notes from context. The result is a versatile model of counterpoint that accepts arbitrarily incomplete scores as input and works out complete scores. This setup covers a wide range of musical tasks, such as harmonizing melodies, creating smooth transitions, rewriting and elaborating existing music, and composing from scratch.
I cannot begin to understand what’s going on there, but it sounds good.
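Actually, the training setup Magenta describes — erase random notes, ask the model to restore them — is simple enough to sketch. This is my own illustration of the masking step only, nothing to do with Magenta’s real Coconet code, and the model that fills the gaps is the hard part I’m waving away:

```python
import random

def mask_score(score, mask_rate=0.5, rng=None):
    """Randomly erase notes from a score, returning the corrupted
    score plus the set of positions the model must reconstruct.
    `score` maps (voice, timestep) -> MIDI pitch."""
    rng = rng or random.Random(0)
    corrupted, masked = {}, set()
    for pos, pitch in score.items():
        if rng.random() < mask_rate:
            corrupted[pos] = None   # erased: the model must guess this
            masked.add(pos)
        else:
            corrupted[pos] = pitch  # kept: visible context
    return corrupted, masked

# A toy four-voice score fragment: (voice, timestep) -> pitch
score = {(v, t): 60 + 2 * v + t for v in range(4) for t in range(8)}
corrupted, masked = mask_score(score)

# Training asks the model to restore score[pos] for each masked pos,
# given the visible notes. Harmonizing your Doodle melody is then just
# inference with the melody voice visible and the other three voices
# fully masked.
```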
Here’s a nice companion piece to that post earlier about accessing mindfulness therapies via chatbot apps. It includes a reminder of Buddhism’s Four Noble Truths and the Eightfold Path.
Want to be happy? Embrace being miserable
The path offers guidance on the elements of a principled existence, based on a cultivated perspective. But not necessarily a happy one.
Still, liberating yourself from the expectation of happiness lightens your load. It makes life a little easier when you are realistic but resolved, rather than deluded, desirous, and determined to have the impossible. By calculating discomfort and struggle into the mix, you can remain cautiously optimistic, knowing there’s surely trouble ahead, but that you will face it with grace.
As we saw earlier, there are a number of apps that can help us build up a solid sense of perspective. Here’s some more about Woebot.
This robot wants to help you control your emotions
A bot cannot really talk to you, of course, but it can call your attention to the way you converse with yourself, and perhaps in time shift your own relationship with angst. That’s the notion behind the Woebot, an app created by Stanford research psychologist Alison Darcy that aims to make emotional mindfulness available to the masses. …
Next, it provided a brief lesson on the power of language in the context of cognitive behavioral therapy (CBT). This mode of treatment for anxiety and depression, CBT, calls attention to thinking patterns and teaches patients to recognize and address their negative tendencies and limiting beliefs with exercises.
It tries to literally change your mind by providing perspective and cultivating attention until you have replaced bad habits with better ones.
I loved the way the closing paragraph of that first Quartz article above was simultaneously downbeat and uplifting.
Know that you’ll fail, you will fall, you’ll feel pain, and be sad. You will be rejected. You will get sick. Your expectations will not be met, because reality is always more strange and complicated than imagination, which also means something more interesting than you know could yet be on the horizon. Know, too, that even so, dull moments will abound. Yet it can always get worse, which is why it’s worth remembering that every day, at least some things have to be going okay, or else you’d already be dead.
And let’s not forget Will Self’s take on all this.
We all need someone to talk to. A problem shared is a problem halved, they say. But is that still true if the person you’re talking to doesn’t actually exist?
Since virtual therapy seems to work, some innovators have started to suspect they could offer patients the same benefits of CBT—without a human on the other end. Services like Replika (an app intended to provide an emotional connection, not necessarily therapy) and Woebot (a therapy service that started in Facebook Messenger before breaking out on its own) allow human patients to interact with artificially intelligent chatbots for the purpose of improving their mental health.
I gave Woebot a go some time back. It felt potentially useful but quite scripted and a little heavy-handed. I’ve just started with Replika and so far the conversations feel more natural, though a little random at times.
This app is trying to replicate you
Replika launched in March. At its core is a messaging app where users spend tens of hours answering questions to build a digital library of information about themselves. That library is run through a neural network to create a bot that, in theory, acts as the user would. Right now, it’s just a fun way for people to see how they sound in messages to others, synthesizing the thousands of messages you’ve sent into a distillate of your tone—rather like an extreme version of listening to recordings of yourself. But its creator, a San Francisco-based startup called Luka, sees a whole bunch of possible uses for it: a digital twin to serve as a companion for the lonely, a living memorial of the dead, created for those left behind, or even, one day, a version of ourselves that can carry out all the mundane tasks that we humans have to do, but never want to.
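Luka hasn’t said what that neural network looks like, but the intuition — a bot that answers the way you would by leaning on your message archive — can be faked with simple retrieval. A toy sketch of my own, in no way Replika’s actual architecture:

```python
import re

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def reply_like_me(prompt, my_messages):
    """Return the past message whose vocabulary overlaps most with
    the prompt -- a crude retrieval stand-in for the neural network
    Luka trains on a user's message library."""
    prompt_words = set(tokenize(prompt))
    return max(my_messages,
               key=lambda msg: len(prompt_words & set(tokenize(msg))))

history = [
    "honestly the coffee there is amazing",
    "running late again, start without me",
    "that film was way too long",
]

print(reply_like_me("want to grab a coffee?", history))
# → honestly the coffee there is amazing
```

The interesting jump Replika makes is from retrieving your old lines to generating new ones in your register — but the raw material is the same: those tens of hours of answers.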
That line above, “a living memorial of the dead”, is key, as that’s how Replika started, with the story of Eugenia Kuyda and Roman Mazurenko.
Modern life all but ensures that we leave behind vast digital archives — text messages, photos, posts on social media — and we are only beginning to consider what role they should play in mourning. In the moment, we tend to view our text messages as ephemeral. But as Kuyda found after Mazurenko’s death, they can also be powerful tools for coping with loss. Maybe, she thought, this “digital estate” could form the building blocks for a new type of memorial.
She’s not the only one wandering down this slightly morbid track.
Eternime and Replika: Giving life to the dead with new technology
At the moment, Eternime takes the form of an app which collects data about you. It does this in two ways: Automatically harvesting heaps of smartphone data, and by asking you questions through a chatbot.
The goal is to collect enough data about you so that when the technology catches up, it will be able to create a chatbot “avatar” of you after you die, which your loved ones can then interact with.
But would they want to? Grief is a very personal thing; I can’t imagine this approach being for everyone.
‘Have a good cry’: Chuckle Brother takes aim at the grief taboo
“It’s like when you are a kid and you fall over and you think it’s all right and then your mum comes and says, ‘Are you all right, love?’ You burst into tears,” he said. “It was the same when Barry died. Everybody was saying sorry about your brother.”
Replika seems less about leaving something behind for your family and friends when you’ve gone, and more about making a new friend whilst you’re still around.
The journey to create a friend
There is no doubt that friendship with a person and with an AI are two very different matters. And yet, they do have one thing in common: in both cases you need to know your soon-to-be friend really well to develop a bond.
But let’s not get carried away, we’re not talking HAL or Samantha yet.
Three myths about Replika
Social media has put forth a number of quite entertaining theories about Replika. Today we are listing some of the ideas that we love … even though they are not exactly true.
Though you never know how these things will progress.
This Y Combinator-backed AI firm trained its chatbot to call you on the phone, and it’s fun but a little creepy
Much like the text version of Replika, my conversation with the bot threw up some odd quirks. “I think you look lovely today,” it said, and when I pointed out that it doesn’t have eyes, it replied: “Are you sure I don’t?”
Strange, funny, and occasionally creepy non sequiturs are not new to Replika; in fact, there is a whole subreddit dedicated to weird exchanges with the bot. Overall, however, the bot seemed to follow the train of the conversation reasonably well, and even told me a joke when I asked it to.