AI movie posters
Each of these images was generated by AI based on a brief text description of a movie. Can you guess the movie from the image?
I guessed Mad Max and Black Swan correctly, the rest — not a clue!
Many of us are feeling the pinch these days, as the pandemic continues to take its toll on jobs and livelihoods. But there are still people out there more than happy to keep spending.
Instagram rules but don’t expect loyalty: new report analyses our online art buying behaviour – The Art Newspaper
The online art market has been a rare winner during the Covid-19 pandemic, with rising totals and many new buyers starting their collections digitally. […] Art collectors have also spent more money online, increasing the average spend—29% paid an average of $10,000+ per painting, up from 20% in 2019. Those spending over $50,000 on a work went up to 11% (4% in 2019).
A little out of my league, but have you seen this? Unique, original art for under £100. Generative art has a rich background, and I know I’ve highlighted new ways of buying art before, but does this feel a little scammy to anyone else?
ART AI – AI generated paintings
We use artificial intelligence to create a vast variety of original artworks. This allows us to sell each artwork once, making one of a kind art accessible to all. […] When you find something you really love, you don’t always want to share it. We find that we are emotionally connected to the art we make and the art we buy – we want it just for ourselves. Thanks to our advanced artificial intelligence, ART AI makes owning one of a kind AI art accessible to everyone, for the first time ever.
I mean, these types of images are ten-a-penny now, aren’t they?
GANksy – A.I. street artist
We trained a StyleGAN2 neural network using the portfolio of a certain street artist to create GANksy, a twisted visual genius whose work reflects our unsettled times. 256 masterpieces are for sale starting at £1, rising by a pound as each one is purchased.
This Fucked Up Homer Does Not Exist
Created by Thomas Dimson (@turtlesoupy), based on Lightweight GAN from lucidrains.
That’s crying out to be monetised. The way one Bartkrustyhomer transitions to the next would make for a nightmarishly soothing screensaver, for instance.
I’ll just leave these here.
This Artwork Does Not Exist
Imagined by a GAN (generative adversarial network) StyleGAN2 (Dec 2019) – Karras et al. and Nvidia. Trained by Michael Friesen on images of Modern Art.
∞ stream of AI generated art
Explore the infinite creativity of this AI artist that was trained on a carefully selected set of cubist art pieces.
They’re all much-of-a-muchness, as they say around here. I think robot Rembrandt is still some way off.
It’s not unknown for artists to change their mind and paint over part of their work as their ideas develop. Earlier, I came across an article about a long-lost Vermeer cupid that conservators had restored. He wasn’t the only one with mysteries to uncover.
Blue on Blue: Picasso blockbuster comes to Toronto in 2020
The show came together after the AGO, with the assistance of other institutions, including the National Gallery of Art, Northwestern University and the Art Institute of Chicago, used cutting-edge technology to scan several Blue Period paintings in its collection to reveal lost works underneath, namely La Soupe (1902) and La Miséreuse accroupie (also 1902).
More on that.
New research reveals secrets beneath the surface of Picasso paintings
Secrets beneath the surface of two Pablo Picasso paintings in the collection of the Art Gallery of Ontario (AGO) in Toronto have been unearthed through an in-depth research project, which combined technical analysis and art historical digging to determine probable influences for the pieces and changes made by the artist.
But x-ray and infrared analyses can only go so far. What if we roped in some neural networks to help bring these restored images to life?
This Picasso painting had never been seen before. Until a neural network painted it.
But from an aesthetic point of view, what the researchers managed to retrieve is disappointing. Infrared and x-ray images show only the faintest outlines, and while they can be used to infer the amount of paint the artist used, they do not show color or style. So a way to reconstruct the lost painting more realistically would be of huge interest. […]
This is where Bourached and Cann come in. They have taken a manually edited version of the x-ray images of the ghostly woman beneath The Old Guitarist and passed it through a neural style transfer network. This network was trained to convert images into the style of another artwork from Picasso’s Blue Period.
The result is a full-color version of the painting in exactly the style Picasso was exploring when he painted it. “We present a novel method of reconstructing lost artwork, by applying neural style transfer to x-radiographs of artwork with secondary interior artwork beneath a primary exterior, so as to reconstruct lost artwork,” they say.
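The “style” half of neural style transfer is usually captured by comparing Gram matrices of a network’s feature maps. As a toy numpy sketch of that style loss (not Bourached and Cann’s actual code, and with random arrays standing in for real CNN features):

```python
import numpy as np

def gram_matrix(features):
    """Correlations between feature channels: (C, H, W) -> (C, C)."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (c * h * w)

def style_loss(generated, style_ref):
    """Mean squared difference between the two Gram matrices."""
    g_gen = gram_matrix(generated)
    g_ref = gram_matrix(style_ref)
    return float(np.mean((g_gen - g_ref) ** 2))

# In real style transfer these 'features' come from a pretrained CNN
# (and the x-radiograph supplies the content image); random stand-ins
# here just show the shapes involved.
rng = np.random.default_rng(0)
gen = rng.standard_normal((8, 16, 16))
ref = rng.standard_normal((8, 16, 16))
loss = style_loss(gen, ref)
```

Minimising this loss (alongside a content loss) is what nudges the ghostly under-drawing toward a Blue Period palette.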
Generative art by Thomas Lin Pedersen
I’m a generative artist focusing mainly on exploring the beauty of dynamic systems. For me, the sweet spot of generative art lies in creating a system that you know well enough to set it up for success, but is so complex that you still get surprised when you see the result. The more I become familiar with a system I’ve developed, the more it feels like a (slightly unpredictable) brush to paint with.
I can’t begin to understand how he’s using R, software normally used for data analysis and statistics, to create such images.
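The gist, at least, is graspable: iterate a simple dynamic system and plot where it lands. Here’s a sketch in Python of one classic example, the Clifford attractor — to be clear, this is not one of Pedersen’s systems, and his actual tools are written in R:

```python
import math

def clifford_attractor(a, b, c, d, n=10000):
    """Iterate x' = sin(a*y) + c*cos(a*x), y' = sin(b*x) + d*cos(b*y).

    Each point depends on the previous one, so tiny parameter changes
    reshape the whole cloud -- the 'slightly unpredictable brush'
    generative artists talk about.
    """
    x, y = 0.0, 0.0
    points = []
    for _ in range(n):
        x, y = (math.sin(a * y) + c * math.cos(a * x),
                math.sin(b * x) + d * math.cos(b * y))
        points.append((x, y))
    return points

pts = clifford_attractor(a=-1.4, b=1.6, c=1.0, d=0.7)
# Plotting pts as thousands of tiny translucent dots gives the wispy,
# organic shapes typical of attractor-based generative art.
```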
A more traditional approach would be through the use of GANs, as we’ve seen before. (Strange to use the word ‘traditional’ with such a new and emerging field.) Here’s something from Joel Simon, who also takes inspiration from the systems of biology, computation and creativity.
Artbreeder — create beautiful, wild and weird images
Simply keep selecting the most interesting image to discover totally new images. Infinitely new random ‘children’ are made from each image. Artbreeder turns the simple act of exploration into creativity. […]
Artbreeder started as an experiment in using breeding and collaboration as methods of exploring high complexity spaces. GANs are the engine enabling this. Artbreeder is very similar to, and named after, Picbreeder. It is also inspired by an earlier project of mine, Facebook Graffiti, which demonstrated the creative capacity of crowds.
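“Breeding” two GAN images usually amounts to mixing their latent vectors and adding a little noise. A minimal numpy sketch of that crossover-plus-mutation step (Artbreeder’s real pipeline is, of course, more sophisticated than this):

```python
import numpy as np

def breed(parent_a, parent_b, mutation=0.05, rng=None):
    """Child latent = per-dimension random mix of the parents + noise.

    Feeding the child vector through the GAN's generator would render
    the 'offspring' image; keep picking interesting children to explore
    the latent space.
    """
    if rng is None:
        rng = np.random.default_rng()
    mask = rng.random(parent_a.shape)            # blend weights in [0, 1)
    child = mask * parent_a + (1 - mask) * parent_b
    return child + rng.normal(0.0, mutation, parent_a.shape)

rng = np.random.default_rng(42)
a = rng.standard_normal(512)    # 512-dimensional latents, as in StyleGAN
b = rng.standard_normal(512)
child = breed(a, b, rng=rng)
```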
Deepfake Salvador Dalí takes selfies with museum visitors
The exhibition, called Dalí Lives, was made in collaboration with the ad agency Goodby, Silverstein & Partners (GS&P), which made a life-size re-creation of Dalí using the machine learning-powered video editing technique. Using archival footage from interviews, GS&P pulled over 6,000 frames and used 1,000 hours of machine learning to train the AI algorithm on Dalí’s face. His facial expressions were then imposed over an actor with Dalí’s body proportions, and quotes from his interviews and letters were synced with a voice actor who could mimic his unique accent, a mix of French, Spanish, and English.
Whilst we’re talking of Dalí, let’s go behind the scenes of that famous portrait of him by Philippe Halsman. No flashy, cutting-edge technology this time, just wire, buckets and cats.
The story behind the surreal photograph of Salvador Dalí and three flying cats
The original, unretouched version of the photo reveals its secrets: An assistant held up the chair on the left side of the frame, wires suspended the easel and the painting, and the footstool was propped up off the floor. But there was no hidden trick to the flying cats or the stream of water. For each take, Halsman’s assistants—including his wife, Yvonne, and one of his daughters, Irene—tossed the cats and the contents of a full bucket across the frame. After each attempt, Halsman developed and printed the film while Irene herded and dried off the cats. The rejected photographs had notes such as “Water splashes Dalí instead of cat” and “Secretary gets into picture.”
Time.com have a great interview with Philippe Halsman’s daughter Irene on what that shoot was like.
The story behind the surrealist ‘Dali Atomicus’ photo
“Philippe would count to four. One, two, three… And the assistants threw the cats and the water. And on four, Dali jumped. My job at the time was to catch the cats and take them to the bathroom and dry them off with a big towel. My father would run upstairs where the darkroom was, develop the film, print it, run downstairs and he’d say not good, bad composition, this was wrong, that was wrong. It took 26 tries to do this. 26 throws, 26 wiping of the floors, and 26 times catching the cats. And then, there it was, finally, this composition.”
Coincidentally, Artnome’s Jason Bailey has been using AI and deep learning to colourise old black-and-white photos of artists, including that one of Dalí.
50 famous artists brought to life with AI
When I was growing up, artists, and particularly twentieth century artists, were my heroes. There is something about only ever having seen many of them in black and white that makes them feel mythical and distant. Likewise, something magical happens when you add color to the photo. These icons turn into regular people who you might share a pizza or beer with.
Artnome’s Jason Bailey on a generative art exhibition he co-curated.
Kate Vass Galerie
The Automat und Mensch exhibition is, above all, an opportunity to put important work by generative artists spanning the last 70 years into context by showing it in a single location. By juxtaposing important works like the 1956/’57 oscillograms by Herbert W. Franke (age 91) with the 2018 AI Generated Nude Portrait #1 by contemporary artist Robbie Barrat (age 19), we can see the full history and spectrum of generative art as has never been shown before.
Zurich’s a little too far, unfortunately, so I’ll have to make do with the press release for now.
Generative art gets its due
In the last twelve months we have seen a tremendous spike in the interest of “AI art,” ushered in by Christie’s and Sotheby’s both offering works at auction developed with machine learning. Capturing the imaginations of collectors and the general public alike, the new work has some conservative members of the art world scratching their heads and suggesting this will merely be another passing fad. What they are missing is that this rich genre, more broadly referred to as “generative art,” has a history as long and fascinating as computing itself. A history that has largely been overlooked in the recent mania for “AI art” and one that co-curators Georg Bak and Jason Bailey hope to shine a bright light on in their upcoming show Automat und Mensch (or Machine and Man) at Kate Vass Galerie in Zurich, Switzerland.
Generative art, once perceived as the domain of a small number of “computer nerds,” is now the artform best poised to capture what sets our generation apart from those that came before us – ubiquitous computing. As children of the digital revolution, computing has become our greatest shared experience. Like it or not, we are all now computer nerds, inseparable from the many devices through which we mediate our worlds.
The press release alone is a fascinating read, covering the work of a broad range of artists and themes, past and present. For those that can make the exhibition in person, it will also include lectures and panels from the participating artists and leaders on AI art and generative art history.
A very interesting follow-up to that story about the first artwork by an AI to be auctioned. It seems the humans behind the AI, Hugo Caselles-Dupré and the Obvious team, have had to face some considerable criticism.
The AI art at Christie’s is not what you think
Hugo Caselles-Dupré, the technical lead at Obvious, shared with me: “I’ve got to be honest with you, we have totally lost control of how the press talks about us. We are in the middle of a storm and lots of false information is released with our name on it. In fact, we are really depressed about it, because we saw that the whole community of AI art now hates us because of that. At the beginning, we just wanted to create this fun project because we love machine learning.” […]
Early on Obvious made the claim that “creativity isn’t only for humans,” implying that the machine is autonomously creating their artwork. While many articles have run with this storyline, one even crediting robots, it is not what most AI artists and AI experts in general believe to be true. Most would say that AI is augmenting artists at the moment and the description in the news is greatly exaggerated. […]
In fact, when pressed, Hugo admitted to me in our interview that this was just “clumsy communication” they made in the beginning when they didn’t think anyone was actually paying attention. […]
As we saw with Salvator Mundi last year and with the Banksy last week, the most prestigious auction houses, like museums, have the ability to elevate art and increase its value by putting it into the spotlight, shaping not only the narrative of the work, but also the narrative of art history.
More about computer science’s latest foray into the art world.
The first piece of AI-generated art to come to auction
As part of the ongoing dialogue over AI and art, Christie’s will become the first auction house to offer a work of art created by an algorithm.
The portrait in its gilt frame depicts a portly gentleman, possibly French and — to judge by his dark frockcoat and plain white collar — a man of the church. The work appears unfinished: the facial features are somewhat indistinct and there are blank areas of canvas. Oddly, the whole composition is displaced slightly to the north-west. A label on the wall states that the sitter is a man named Edmond Belamy, but the giveaway clue as to the origins of the work is the artist’s signature at the bottom right. In cursive Gallic script it reads: min_G max_D E_x[log(D(x))] + E_z[log(1 − D(G(z)))].
This portrait, however, is not the product of a human mind. It was created by an artificial intelligence, an algorithm defined by that algebraic formula with its many parentheses.
It’s certainly a very interesting image — it reminds me a little of Francis Bacon’s popes — but the pedant in me would rather they stuck with “created by an algorithm” than “generated by an artificial intelligence”. We’re not there yet. It was the “product of a human mind”, albeit indirectly. Take that signature, for example. I refuse to believe that this artificial intelligence decided for itself to sign its work that way. Declaring the AI to be the artist, as opposed to the medium, is like saying Excel is the artist in this case:
Tatsuo Horiuchi, the 73-year old Excel spreadsheet artist
“I never used Excel at work but I saw other people making pretty graphs and thought, ‘I could probably draw with that,’” says 73-year-old Tatsuo Horiuchi. About 13 years ago, shortly before retiring, Horiuchi decided he needed a new challenge in his life. So he bought a computer and began experimenting with Excel. “Graphics software is expensive but Excel comes pre-installed in most computers,” explained Horiuchi. “And it has more functions and is easier to use than [Microsoft] Paint.”
This AI is bad at drawing but will try anyways
This bird is less, um, recognizable. When the GAN has to draw *anything* I ask for, there’s just too much to keep track of – the problem’s too broad, and the algorithm spreads itself too thin. It doesn’t just have trouble with birds. A GAN that’s been trained just on celebrity faces will tend to produce photorealistic portraits. This one, however…
In fact, it does a horrifying job with humans because it can never quite seem to get the number of orifices correct.
But it seems the human artists can still surprise us, so all’s well.
Holed up: man falls into art installation of 8ft hole painted black
If there were any doubt at all that Anish Kapoor’s work Descent into Limbo is a big hole with a 2.5-metre drop, and not a black circle painted on the floor, then it has been settled. An unnamed Italian man has discovered to his cost that the work is definitely a hole after apparently falling in it.
Nigel Farage’s £25,000 portrait failed to attract a single bid at prestigious art show
The former Ukip leader has been dealt a blow after the work, by painter David Griffiths, raised no interest at the Royal Academy’s summer exhibition in London.
Subtitled ‘What needs to happen for artificial intelligence to make fine art’, this is a fascinating read on current thinking about art and AI. The author, Hideki Nakazawa, one of the curators of the Artificial Intelligence Art and Aesthetics exhibition in Japan, thinks that, whilst we’re not there yet, we’re not too far away.
Waiting For the Robot Rembrandt
True AI fine art will be both painfully boring and highly stimulating, and that will represent progress. Beauty, after all, cannot be quantified, and the very act of questioning the definition of aesthetics moves all art forward—something we’ve seen over and over again in the history of human-made art. The realization of AI will bring new dimensions to these questions. It will also be a triumph of materialism, further eroding the specialness of the human species and unveiling a world that has neither mystery nor God in which humans are merely machines made of inanimate materials. If we are right, it will also bring a new generation of artists, and with them, new Eiffel towers beyond our wildest imagination.
The pieces within that exhibition are grouped into four categories: human-made art with human aesthetics, human-made art with machine aesthetics, machine-made art with human aesthetics, and finally machine-made art with machine aesthetics. It’s that last category we’re interested in, but frustratingly it contained “no machine-made art, because none exists that also reflects machine aesthetics. The category was a useful placeholder—and, as we’ll learn, it was not entirely empty.”
What a great way to clarify where all these artworks, projects and systems sit. All too often we find AI and other computer systems merely mimicking the creation of art: the final product may look like art, but without the autonomous intention — without the AI wanting to create for its own sake — the AI is just a tool of the artist-behind-the-curtain, the programmer. For example:
‘Way to Artist’, intelligent robots and a human artist sketch the same image alongside each other
In the thought-provoking short film “Way to Artist” by TeamVOID, an artificially intelligent robotic arm and a human artist sit alongside one another to sketch the same image at the same time, although with different skills. Without a word spoken, the film loudly questions the role that artificial intelligence has within the creative process by putting the robots to the test.
More interestingly, here’s a wonderful piece that would have been placed in the second group of Nakazawa’s exhibition, human-made art with machine aesthetics.
Sarah Meyohas combines virtual reality, 10,000 roses and artificial intelligence in Cloud of Petals
Lastly, visitors can engage with a VR component, an element that replicates Sarah’s initial dream of the petals. There are six different screens and headsets – in a room filled with a customised rose scent – which are all gaze-activated to manipulate the AI generated petals. For example, in one headset petals explode into pixels as soon as you set your eyes on them.
And perhaps category three for these, machine-made art with human aesthetics?
A ‘neurographer’ puts the art in artificial intelligence
Claude Monet used brushes, Jackson Pollock liked a trowel, and Cartier-Bresson toted a Leica. Mario Klingemann makes art using artificial neural networks.
Yes, androids do dream of electric sheep
“Google sets up feedback loop in its image recognition neural network – which looks for patterns in pictures – creating hallucinatory images of animals, buildings and landscapes which veer from beautiful to terrifying.”
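That “feedback loop” is gradient ascent on the image itself: keep nudging the pixels to amplify whatever the network already sees. Stripped of the CNN, the loop is just this — a toy numpy version with a hand-written “activation” standing in for a real network layer:

```python
import numpy as np

def activation(img, pattern):
    """Stand-in for a CNN layer's response: overlap with a pattern."""
    return float(np.sum(img * pattern))

def dream(img, pattern, steps=50, lr=0.1):
    """Repeatedly nudge the image to increase the activation.

    For activation = sum(img * pattern), the gradient with respect to
    the image is simply `pattern`, so each step pushes the image toward
    the pattern -- the feedback loop the article describes. In DeepDream
    proper the gradient comes from backpropagating through the network.
    """
    img = img.copy()
    for _ in range(steps):
        grad = pattern                       # d(activation)/d(img)
        img += lr * grad / (np.abs(grad).max() + 1e-8)
    return img

rng = np.random.default_rng(1)
image = rng.standard_normal((8, 8))
pattern = np.ones((8, 8))    # pretend the net 'likes' bright pixels
dreamed = dream(image, pattern)
```

Run on a real network, the same loop is what turns clouds into dog faces.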
Don’t know where to place this one, however — art as a symptom of an AI’s mental ill health?
This artificial intelligence is designed to be mentally unstable
“At one end, we see all the characteristic symptoms of mental illness, hallucinations, attention deficit and mania,” Thaler says. “At the other, we have reduced cognitive flow and depression.” This process is illustrated by DABUS’s artistic output, which combines and mutates images in a progressively more surreal stream of consciousness.