Art and AI #2

More about computer science’s latest foray into the art world.

The first piece of AI-generated art to come to auction
As part of the ongoing dialogue over AI and art, Christie’s will become the first auction house to offer a work of art created by an algorithm.

[Image: Portrait of Edmond Belamy]

The portrait in its gilt frame depicts a portly gentleman, possibly French and — to judge by his dark frockcoat and plain white collar — a man of the church. The work appears unfinished: the facial features are somewhat indistinct and there are blank areas of canvas. Oddly, the whole composition is displaced slightly to the north-west. A label on the wall states that the sitter is a man named Edmond Belamy, but the giveaway clue as to the origins of the work is the artist’s signature at the bottom right. In cursive Gallic script it reads:

[Image: the signature formula]

This portrait, however, is not the product of a human mind. It was created by an artificial intelligence, an algorithm defined by that algebraic formula with its many parentheses.
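
That formula, for what it’s worth, is the minimax objective of a generative adversarial network (GAN): a generator G learns to produce images while a discriminator D learns to tell them from the real thing. Rendered in full, the signature reads:

$$
\min_G \max_D \; \mathbb{E}_x\big[\log D(x)\big] + \mathbb{E}_z\big[\log\big(1 - D(G(z))\big)\big]
$$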

It’s certainly a very interesting image — it reminds me a little of Francis Bacon’s popes — but the pedant in me would rather they stuck with “created by an algorithm” than “generated by an artificial intelligence”. We’re not there yet. It was the “product of a human mind”, albeit indirectly. Take that signature, for example. I refuse to believe that this artificial intelligence decided for itself to sign its work that way. Declaring the AI to be the artist, as opposed to the medium, is like saying Excel is the artist in this case:

Tatsuo Horiuchi, the 73-year-old Excel spreadsheet artist
“I never used Excel at work but I saw other people making pretty graphs and thought, ‘I could probably draw with that,’” says 73-year-old Tatsuo Horiuchi. About 13 years ago, shortly before retiring, Horiuchi decided he needed a new challenge in his life. So he bought a computer and began experimenting with Excel. “Graphics software is expensive but Excel comes pre-installed in most computers,” explained Horiuchi. “And it has more functions and is easier to use than [Microsoft] Paint.”

Those are amazing paintings, by the way. Colossal has more, as well as a link to an interview with Tatsuo. But anyway, here’s some more AI art.

This AI is bad at drawing but will try anyways
This bird is less, um, recognizable. When the GAN has to draw *anything* I ask for, there’s just too much to keep track of – the problem’s too broad, and the algorithm spreads itself too thin. It doesn’t just have trouble with birds. A GAN that’s been trained just on celebrity faces will tend to produce photorealistic portraits. This one, however…

[Image: the GAN’s attempt]

In fact, it does a horrifying job with humans because it can never quite seem to get the number of orifices correct.
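
If you want to see the adversarial game behind all this in miniature, here’s a toy sketch of a GAN training loop. It assumes PyTorch, and the “data” is a one-dimensional Gaussian rather than birds or faces; a real image GAN is this same loop at vastly greater scale.

```python
import torch
import torch.nn as nn

latent_dim = 8

# Generator: turns random noise z into a fake "sample".
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))

# Discriminator: scores how real a sample looks (1 = real, 0 = fake).
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data: samples from N(3, 0.5)
    z = torch.randn(64, latent_dim)
    fake = G(z)

    # Discriminator step: push D(real) towards 1 and D(fake) towards 0,
    # i.e. maximise the log D(x) + log(1 - D(G(z))) of the signature formula.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Generator step: try to fool the discriminator into scoring fakes as real.
    opt_g.zero_grad()
    g_loss = bce(D(G(z)), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()
```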

But it seems the human artists can still surprise us, so all’s well.

Holed up: man falls into art installation of 8ft hole painted black
If there were any doubt at all that Anish Kapoor’s work Descent into Limbo is a big hole with a 2.5-metre drop, and not a black circle painted on the floor, then it has been settled. An unnamed Italian man has discovered to his cost that the work is definitely a hole after apparently falling in it.

Nigel Farage’s £25,000 portrait failed to attract a single bid at prestigious art show
The former Ukip leader has been dealt a blow after the work, by painter David Griffiths, raised no interest at the Royal Academy’s summer exhibition in London.

Creative credit cards

Here in the UK, we’ve had credit cards since the 60s, though the term is thought to have been coined as far back as 1887 by the novelist Edward Bellamy. So perhaps this fresh look at their design is overdue.

Portrait bank cards are a thing now
Consider the ways you use your bank card on an everyday basis, whether handing it over to a cashier, swiping it to make contactless payments, or inserting it into an ATM. How are you holding the card as you do all those things? Vertically, I’m willing to bet, or in portrait orientation, to borrow a term. And yet, the vast majority of credit and debit cards are designed in landscape, sticking to a thoroughly outdated usage model. This is the senseless design inertia that the UK’s Starling Bank is rowing against with its newly unveiled portrait card design, which was spotted by Brand New.

There’s more info on the reasons behind the change on the bank’s website.

Introducing our new card
Design usually evolves to solve something or to meet new needs, and bank cards don’t look the way they do by accident. They were designed landscape because of the way old card machines worked, and they’re embossed with raised numbers so they could be printed onto a sales voucher.

But we don’t use those machines anymore, so when you think about it, a landscape card is just a solution to a ‘problem’ that no longer exists. At Starling, we think it’s important that we can justify every decision we make – and we just couldn’t find a reason good enough to carry on using a design based on antiquated needs.

That first article from The Verge identifies a number of other banks and companies that have gone vertical. I’ve had a portrait Co-op membership card in my wallet for ages now, since their rebrand.

[Image: portrait card designs]

Speaking of credit cards, here’s an interesting article about how companies across the globe are turning to AI to help assess credit ratings in what they claim to be a fairer and more transparent way. That’s the idea, anyway…

Algorithms are making the same mistakes assessing credit scores that humans did a century ago

It used to be that credit card companies would just be sneakily looking at transaction data to infer creditworthiness:

In the US, every transaction processed by Visa or MasterCard is coded by a “merchant category”—5122 for drugs, for example; 7277 for debt, marriage, or personal counseling; 7995 for betting and wagers; or 7273 for dating and escort services. Some companies curtailed their customers’ credit if charges appeared for counseling, because depression and marital strife were signs of potential job loss or expensive litigation.
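
As a purely hypothetical sketch of the kind of rule being described, in Python: the MCC values are the real codes quoted above, but the risk logic is invented for illustration.

```python
# Flag a customer whose transactions hit merchant categories a lender
# deems "risky". Hypothetical logic; real MCC values from the quote above.
RISKY_MCCS = {
    7277: "debt, marriage, or personal counseling",
    7995: "betting and wagers",
}

def credit_flags(transactions: list[dict]) -> list[str]:
    """Return the reasons, if any, a lender might curtail this customer's credit."""
    return [
        f"MCC {t['mcc']}: {RISKY_MCCS[t['mcc']]}"
        for t in transactions
        if t["mcc"] in RISKY_MCCS
    ]

# 5411 is groceries (fine); 7277 is the counseling category from the quote.
print(credit_flags([{"mcc": 5411, "amount": 42.10},
                    {"mcc": 7277, "amount": 90.00}]))
```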

Now the data trawl is much wider:

ZestFinance’s patent describes the use of payments data, social behavior, browsing behaviors, and details of users’ social networks as well as “any social graph information for any or all members of the borrower’s network.” Similarly, Branch’s privacy policy mentions such factors as personal data, text message logs, social media data, financial data, and handset details including make, model, and browser type.

In these situations it becomes hard to tell what data, or combinations of data, are important — and even harder to do anything about it if these automated decisions go against us.

AI to the rescue

In 2016 the RNIB announced a project between the NHS and DeepMind, Google’s artificial intelligence company.

Artificial intelligence to look for early signs of eye conditions humans might miss
With the number of people affected by sight loss in the UK predicted to double by 2050, Moorfields Eye Hospital NHS Foundation Trust and DeepMind Health have joined forces to explore how new technologies can help medical research into eye diseases.

This wasn’t the only collaboration with the NHS that Google was involved in. There was another project, to help staff monitor patients with kidney disease, that had people concerned about the amount of medical information being handed over.

Revealed: Google AI has access to huge haul of NHS patient data
Google says that since there is no separate dataset for people with kidney conditions, it needs access to all of the data in order to run Streams effectively. In a statement, the Royal Free NHS Trust says that it “provides DeepMind with NHS patient data in accordance with strict information governance rules and for the purpose of direct clinical care only.”

Still, some are likely to be concerned by the amount of information being made available to Google. It includes logs of day-to-day hospital activity, such as records of the location and status of patients – as well as who visits them and when. The hospitals will also share the results of certain pathology and radiology tests.

The Google-owned company tried to reassure us that everything was being done appropriately, that all those medical records would be safe with them.

DeepMind hits back at criticism of its NHS data-sharing deal
DeepMind co-founder Mustafa Suleyman has said negative headlines surrounding his company’s data-sharing deal with the NHS are being “driven by a group with a particular view to peddle”. […]

All the data shared with DeepMind will be encrypted and parent company Google will not have access to it. Suleyman said the company was holding itself to “an unprecedented level of oversight”.

That didn’t seem to cut it though.

DeepMind’s data deal with the NHS broke privacy law
“The Royal Free did not have a valid basis for satisfying the common law duty of confidence and therefore the processing of that data breached that duty,” the ICO said in its letter to the Royal Free NHS Trust. “In this light, the processing was not lawful under the Act.” […]

“The Commissioner is not persuaded that it was necessary and proportionate to process 1.6 million partial patient records in order to test the clinical safety of the application. The processing of these records was, in the Commissioner’s view, excessive,” the ICO said.

And now here we are, some years later, and that eye project is a big hit.

Artificial intelligence equal to experts in detecting eye diseases
The breakthrough research, published online by Nature Medicine, describes how machine-learning technology has been successfully trained on thousands of historic de-personalised eye scans to identify features of eye disease and recommend how patients should be referred for care.

Researchers hope the technology could one day transform the way professionals carry out eye tests, allowing them to spot conditions earlier and prioritise patients with the most serious eye diseases before irreversible damage sets in.

That’s from UCL, one of the project’s partners. I like the use of the phrase ‘historic de-personalised eye scans’. And it doesn’t mention Google once.

Other reports also now seem to be pushing the ‘AI will rescue us’ angle, rather than the previous ‘Google will misuse our data’ line.

DeepMind AI matches health experts at spotting eye diseases
DeepMind’s ultimate aim is to develop and implement a system that can assist the UK’s National Health Service with its ever-growing workload. Accurate AI judgements would lead to faster diagnoses and, in theory, treatment that could save patients’ vision.

Artificial intelligence ‘did not miss a single urgent case’
He told the BBC: “I think this will make most eye specialists gasp because we have shown this algorithm is as good as the world’s leading experts in interpreting these scans.” […]

He said: “Every eye doctor has seen patients go blind due to delays in referral; AI should help us to flag those urgent cases and get them treated early.”

And it seems AI can help with the really tricky problems too.

This robot uses AI to find Waldo, thereby ruining Where’s Waldo
To me, this is like the equivalent of cheating on your math homework by looking for the answers at the back of your textbook. Or worse, like getting a hand-me-down copy of Where’s Waldo and when you open the book, you find that your older cousin has already circled the Waldos in red marker. It’s about the journey, not the destination — the process of methodically scanning pages with your eyes is entirely lost! But of course, no one is actually going to use this robot to take the fun out of Where’s Waldo, it’s just a demonstration of what AutoML can do.

There’s Waldo is a robot that finds Waldo
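
There’s Waldo uses Google’s AutoML Vision to spot its target. As a much cruder stand-in for the same idea, classic template matching can locate a known Waldo crop in a larger scene; here’s a sketch assuming OpenCV and hypothetical file names.

```python
# Slide a known Waldo crop over the page image and box the best match.
# Much cruder than AutoML Vision, but the same "find him for me" idea.
import cv2

scene = cv2.imread("wheres_waldo_page.png")   # hypothetical file names
waldo = cv2.imread("waldo_template.png")

# Score every position of the template against the scene.
result = cv2.matchTemplate(scene, waldo, cv2.TM_CCOEFF_NORMED)
_, score, _, top_left = cv2.minMaxLoc(result)

# Circle the Waldo in (red) marker, older-cousin style.
h, w = waldo.shape[:2]
cv2.rectangle(scene, top_left, (top_left[0] + w, top_left[1] + h), (0, 0, 255), 2)
cv2.imwrite("found_waldo.png", scene)
print(f"Best match at {top_left} with score {score:.2f}")
```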

Dumbing down the chatbots

A quite different take on Google’s AI demo from the other day. Rather than be impressed at how clever the bots appear, because they sound like us, we should be sad at how inefficient we’ve made them, because they sound like us.

Chatbots are saints
Pichai played a recording of Duplex calling a salon to schedule a haircut. This is an informational transaction that a couple of computers could accomplish in a trivial number of microseconds — bip! bap! done! — but with a human on one end of the messaging bus, it turned into a slow-motion train wreck. Completing the transaction required 17 separate data transmissions over the course of an entire minute — an eternity in the machine world. And the human in this case was operating at pretty much peak efficiency. I won’t even tell you what happened when Duplex called a restaurant to reserve a table. You could almost hear the steam coming out of the computer’s ears.

In our arrogance, we humans like to think of natural language processing as a technique aimed at raising the intelligence of machines to the point where they’re able to converse with us. Pichai’s demo suggests the reverse is true. Natural language processing is actually a technique aimed at dumbing down computers to the point where they’re able to converse with us. Google’s great breakthrough with Duplex came in its realization that by sprinkling a few monosyllabic grunts into computer-generated speech — um, ah, mmm — you could trick a human into feeling kinship with the machine. You ace the Turing test by getting machines to speak baby-talk.
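
For contrast, here’s roughly what Carr’s “bip! bap! done!” version could look like: the entire booking as one structured request between two machines. The schema and details are invented purely for illustration.

```python
# The same haircut appointment as a single machine-to-machine message,
# instead of 17 transmissions over a minute. Hypothetical schema.
import json

booking_request = {
    "service": "haircut",
    "date": "2018-05-03",
    "window": {"earliest": "12:00", "latest": "13:00"},
    "customer": "Lisa",
}

# One transmission, a few hundred bytes, microseconds to parse.
print(json.dumps(booking_request))
```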

Google’s creeping us out again

But it only wants to help, it’s for our own good.

Google wants to cure our phone addiction. How about that for irony?
This is Google doing what it always does. It is trying to be the solution to every aspect of our lives. It already wants to be our librarian, our encyclopedia, our dictionary, our map, our navigator, our wallet, our postman, our calendar, our newsagent, and now it wants to be our therapist. It wants us to believe it’s on our side.

There is something suspect about deploying more technology to use less technology. And something ironic about a company that fuels our tech addiction telling us that it holds the key to weaning us off it. It doubles as good PR, and pre-empts any future criticism about corporate irresponsibility.

And then there’s this. How many times have we had cause to say, ‘just because we can, doesn’t mean we should’?

Google’s new voice bot sounds, um, maybe too real
“Google Assistant making calls pretending to be human not only without disclosing that it’s a bot, but adding ‘ummm’ and ‘aaah’ to deceive the human on the other end with the room cheering it… horrifying. Silicon Valley is ethically lost, rudderless and has not learned a thing,” tweeted Zeynep Tufekci, a professor at the University of North Carolina at Chapel Hill who studies the social impacts of technology.

“As digital technologies become better at doing human things, the focus has to be on how to protect humans, how to delineate humans and machines, and how to create reliable signals of each—see 2016. This is straight up, deliberate deception. Not okay,” she added.

No time for friends?

It takes 90 hours to make a new friend
The report, published in the Journal of Social and Personal Relationships, found that it usually takes roughly 50 hours of time together to go from acquaintance to “casual friend” (think drinking buddies, or friends of friends that you see at parties); around 90 hours to become a true-to-form “friend” (you both carve out time to specifically hang out with one another); and over 200 hours to form a BFF-type bond (you feel an emotional connection with this friend).

My first thought, when I read that, was to think ‘haha I don’t have time for that goodness me that’s absolutely ages I’m very busy 90 hours that’s crazy I’ve got a spare 30 minutes next Thursday is next Thursday good for you?’

But really? We’re measuring building relationships in hours? Friendships that can last years and enrich a lifetime?

(Perhaps we should grow our own AI friends.)

Art and AI

Subtitled ‘What needs to happen for artificial intelligence to make fine art’, this is a fascinating read on current thinking about art and AI. The author, Hideki Nakazawa, one of the curators of the Artificial Intelligence Art and Aesthetics exhibition in Japan, thinks that, whilst we’re not there yet, we’re not too far away.

Waiting For the Robot Rembrandt
True AI fine art will be both painfully boring and highly stimulating, and that will represent progress. Beauty, after all, cannot be quantified, and the very act of questioning the definition of aesthetics moves all art forward—something we’ve seen over and over again in the history of human-made art. The realization of AI will bring new dimensions to these questions. It will also be a triumph of materialism, further eroding the specialness of the human species and unveiling a world that has neither mystery nor God, in which humans are merely machines made of inanimate materials. If we are right, it will also bring a new generation of artists, and with them, new Eiffel towers beyond our wildest imagination.

The pieces within that exhibition are grouped into four categories: human-made art with human aesthetics, human-made art with machine aesthetics, machine-made art with human aesthetics, and finally machine-made art with machine aesthetics. It’s that last category we’re interested in, but frustratingly it contained “no machine-made art, because none exists that also reflects machine aesthetics. The category was a useful placeholder—and, as we’ll learn, it was not entirely empty.”

What a great way to clarify where all these artworks, projects and systems sit. All too often we find AI and other computer systems merely mimicking the creation of art: the final product may look like art, but without the autonomous intention — without the AI wanting to create for its own sake — the AI is just a tool of the artist-behind-the-curtain, the programmer. For example:

‘Way to Artist’, intelligent robots and a human artist sketch the same image alongside each other
In the very thought-provoking short film “Way to Artist” by TeamVOID, an artificially intelligent robotic arm and a human artist sit alongside one another to sketch the same image at the same time, although with different skills. Without a word spoken, the film loudly questions the role that artificial intelligence has within the creative process by putting the robots to the test.

More interestingly, here’s a wonderful piece that would have been placed in the second group of Nakazawa’s exhibition, human-made art with machine aesthetics.

Sarah Meyohas combines virtual reality, 10,000 roses and artificial intelligence in Cloud of Petals
Lastly, visitors can engage with a VR component, an element that replicates Sarah’s initial dream of the petals. There are six different screens and headsets – in a room filled with a customised rose scent – which are all gaze-activated to manipulate the AI generated petals. For example, in one headset petals explode into pixels as soon as you set your eyes on them.

And perhaps category three for these, machine-made art with human aesthetics?

A ‘neurographer’ puts the art in artificial intelligence
Claude Monet used brushes, Jackson Pollock liked a trowel, and Cartier-Bresson toted a Leica. Mario Klingemann makes art using artificial neural networks.

Yes, androids do dream of electric sheep
“Google sets up feedback loop in its image recognition neural network – which looks for patterns in pictures – creating hallucinatory images of animals, buildings and landscapes which veer from beautiful to terrifying.”

Don’t know where to place this one, however — art as a symptom of an AI’s mental ill health?

This artificial intelligence is designed to be mentally unstable
“At one end, we see all the characteristic symptoms of mental illness, hallucinations, attention deficit and mania,” Thaler says. “At the other, we have reduced cognitive flow and depression.” This process is illustrated by DABUS’s artistic output, which combines and mutates images in a progressively more surreal stream of consciousness.