What’s on my mind? Cars and dogs

Tesla’s new ‘mind of car’ UI signals a future we’re not prepared for – UX Collective
As far as we’re concerned, everything we need to know and understand about empathy extends only towards sentient life — from stepping inside the shoes of real people we look to understand their needs, goals, pain points and desires. However, that’s beginning to change. In the same way we’ve seen in the example above, we have to stomach the idea of extending that same patience, understanding and empathy towards an AI system. Does it sound crazy? A little bit, yes. But, like a child, a new AI system learns through trial and error in an effort to reach a mature understanding to discern what is right and wrong.

A dog’s inner life: what a robot pet taught me about consciousness – The Guardian
I spent the afternoon reading the instruction booklet while Aibo wandered around the apartment, occasionally circling back and urging me to play. He came with a pink ball that he nosed around the living room, and when I threw it, he would run to retrieve it. Aibo had sensors all over his body, so he knew when he was being petted, plus cameras that helped him learn and navigate the layout of the apartment, and microphones that let him hear voice commands. This sensory input was then processed by facial recognition software and deep-learning algorithms that allowed the dog to interpret vocal commands, differentiate between members of the household, and adapt to the temperament of its owners. According to the product website, all of this meant that the dog had “real emotions and instinct” – a claim that was apparently too ontologically thorny to have flagged the censure of the Federal Trade Commission.

Saying goodbye

Another article about grief and chatbots, and another one about the end of the web.

The Jessica Simulation: Love and loss in the age of A.I. – San Francisco Chronicle
As Joshua continued to experiment, he realized there was no rule preventing him from simulating real people. What would happen, he wondered, if he tried to create a chatbot version of his dead fiancee? There was nothing strange, he thought, about wanting to reconnect with the dead: People do it all the time, in prayers and in dreams. In the last year and a half, more than 600,000 people in the U.S. and Canada have died of COVID-19, often suddenly, without closure for their loved ones, leaving a raw landscape of grief. How many survivors would gladly experiment with a technology that lets them pretend, for a moment, that their dead loved one is alive again — and able to text?

The day the good internet died – The Ringer
The internet lasts forever, the internet never forgets. And yet it is also a place in which I feel confronted with an almost unbearable volume of daily reminders of its decay: broken links, abandoned blogs, apps gone by, deleted tweets, too-cutesy 404 messages, vanished Vines, videos whose copyright holders have requested removal, lost material that the Wayback Machine never crawled, things I know I’ve read somewhere and want to quote in my work but just can’t seem to resurface the same way I used to be able to. Some of these losses are silly and tiny, but others over the years have felt more monumental and telling. And when Google Reader disappeared in 2013, it wasn’t just a tale of dwindling user numbers or of what one engineer later described as a rotted codebase. It was a sign of the crumbling of the very foundation upon which it had been built: the era of the Good Internet.

Creating something from something

Let’s try that phrase ‘from the sublime to the ridiculous’ in reverse.

Generating images from an internet grab bag – AI Weirdness
Here’s “a toaster”

The toaster is partially made of toast, so I tried to get it to generate a toaster made of chrome instead. Turns out I can’t get it to do a toaster made of chrome without in some way incorporating the logo of Google Chrome. General internet training seems to poison certain keywords.

Ok, never mind all that.

“Bound Species” by Photographer Jennifer Latour – Booooooom
The first lockdown in 2020 gave Vancouver photographer Jennifer Latour a chance to develop a beautiful new body of work, and the inspiration actually came from her work as an FX makeup artist. “It was only when I started visualizing the plants and flowers as an extension of my work in special effect makeup that it all started coming together and the splicing began. I now see each piece as a kind of Frankenstein of sorts with so many fun variations to come!”

A new breed of robots?

Robots have fascinated us for years, but are we looking at them all wrong? Kate Darling, robot ethicist at MIT Media Lab, shows us a different way.

Robots are animals, not humans – WIRED UK
Automation has, and will continue to have, huge impacts on labour markets – those in factories and farming are already feeling the after-shocks. There’s no question that we will continue to see industry disruptions as robotic technology develops, but in our mainstream narratives, we’re leaning too hard on the idea that robots are a one-to-one replacement for humans. Despite the AI pioneers’ original goal of recreating human intelligence, our current robots are fundamentally different. They’re not less-developed versions of us that will eventually catch up as we increase their computing power; like animals, they have a different type of intelligence entirely. […]

While there are many socioeconomic factors that influence how individual countries and societies view robots, the narrative is fluid, and our western view of robots versus humans isn’t the only one. Some of our western views can be directly attributed to our love of dystopian sci-fi. How much automation disrupts and shifts the labour market is an incredibly complicated question, but it’s striking how much of our conversations mirror speculative fiction rather than what’s currently happening on the ground, especially when our language places agency on the robots themselves, with pithy headlines like “No Jobs? Blame the Robots” instead of the more accurate “No Jobs? Blame Company Decisions Driven by Unbridled Corporate Capitalism”.

Comparing robots to animals helps us see that robots don’t necessarily replace jobs, but instead are helping us with specific tasks, like plowing fields, delivering packages by ground or air, cleaning pipes, and guarding the homestead. … [W]hen we broaden our thinking to consider what skills might complement our abilities instead of replacing them, we can better envision what’s possible with this new breed.

The New Breed – Penguin
Kate Darling, a world-renowned expert in robot ethics, shows that in order to understand the new robot world, we must first move beyond the idea that this technology will be something like us. Instead, she argues, we should look to our relationship with animals. Just as we have harnessed the power of animals to aid us in war and work, so too will robots supplement – rather than replace – our own skills and abilities.

We’ve seen what happens when you add technology to animals, but the other way round sounds much more promising. One to add to the to-read list.

The past, present and future of data analysis

Some interesting reads, courtesy of The Economist’s data analysis newsletter, Off The Charts. Let’s start with this question — are glasses-wearers really less conscientious than those who wear a headscarf?

Objective or Biased: On the questionable use of Artificial Intelligence for job applications – BR24
Software programs promise to identify the personality traits of job candidates based on short videos. With the help of Artificial Intelligence (AI) they are supposed to make the selection process of candidates more objective and faster. An exclusive data analysis shows that an AI scrutinized by BR (Bavarian Broadcasting) journalists can be swayed by appearances. This might perpetuate stereotypes while potentially costing candidates the job.

Here, Stephanie Evergreen makes a solid, essential case for broadening our view of data visualisation and its history. I’ve mentioned khipus here before, but not within this context.

Decolonizing Data Viz – Evergreen Data
When we talked about these khipu and other forms of indigenous data visualization in a recent panel (with January O’Connor (Tlingit, Kake, Alaska), Mark Parman (Cherokee), & Nicky Bowman (Lunaape/Mohican)), someone in the audience commented, “It made me reflect on traditional Hmong clothing and how my ancestors have embroidered certain motifs on traditional clothing to communicate one’s clanship, what dialect of Hmong one spoke, marital status, everyday life, etc.” And this is one reason why it is so critically important to decolonize data visualization. When white men decide what counts (and doesn’t count) in terms of data, and what counts (and doesn’t count) as data visualization, and what counts (and doesn’t count) as data visualization history, they are actively gaslighting Black and Brown people about their legacy as data visualizers. When we shine a light on indigenous data visualization, we are intentionally saying the circle is much much wider and, as Nicky Bowman said, “There’s room for everyone in the lodge.”

After reconciling the past, let’s look to the future.

Who will shape the future of data visualization? Today’s kids! – Nightingale
Graphs are everywhere. So, with the proper instruction, I’d expect today’s kids to become adults that are more proficient at visualizing and interpreting data than today’s adults. Besides parents, teachers, or friends, news organizations also play a role in shaping today’s kids. As Jon pointed out, news organizations can do a great job explaining to us how to read more advanced graphs.

On the other hand, as Sharon and Michael mentioned, because graphs are everywhere, there’s a danger for kids to start thinking that graphs are objective. So it is important for adults to start teaching kids how to think critically, to not necessarily accept the graph and the data at face value. In other words, it’s essential for kids to develop a toolbox. This is good for them and good for democracy — eventually, today’s kids will become more informed citizens.

Something I’m sure Jonathan Schwabish would agree with.

A Christmas singalong like no other

Missing live music? Make some yourself, with another interactive musical thing from Google.

Google’s Blob Opera lets you conduct a quartet of singing blobs for instant festive joy – It’s Nice That
Whatever you’re doing right now, it can wait – because Blob Opera is probably the most fun you’ll have today. A new machine learning experiment by David Li for Google Arts & Culture, the online interactive instrument features four animated blob characters which you can conduct to create your own music.

Try it for yourself!

Blob Opera – Google Arts & Culture
Create your own opera-inspired song with Blob Opera – no music skills required! A machine learning experiment by David Li in collaboration with Google Arts & Culture.

It’s all very silly, but you have to admit, they do make a wonderful sound. That’s due, no doubt, to some clever coding, but also to the skills of the real humans behind these machine-learned voices.

You can now create your own 4-part ‘Blob Opera’ with this addictive Google app – Classic FM
The voices are those of real-life opera singers, tenor Christian Joel, bass Frederick Tong, mezzo-soprano Joanna Gamble and soprano Olivia Doutney, who recorded many hours of singing for the experiment. You don’t hear their actual voices in the tool, but rather the machine learning model’s understanding of what opera singing sounds like, based on what it learned from the four vocalists.

It’s all great fun. And I hadn’t realised how extensive the Google Arts & Culture site is. Lots to play with, whilst we wait for all the real galleries and museums to get back to normal.

They’re taking over

Yesterday, upon the stair, I met a man who wasn’t there.

I’ve shared articles about these fake, engineered nobodies before, but the transitions, animations and sliders in this piece from the New York Times are very effective, and great fun — a genuine individual on every frame.

Designed to deceive: Do these people look real to you? – The New York Times
Given the pace of improvement, it’s easy to imagine a not-so-distant future in which we are confronted with not just single portraits of fake people but whole collections of them — at a party with fake friends, hanging out with their fake dogs, holding their fake babies. It will become increasingly difficult to tell who is real online and who is a figment of a computer’s imagination.

You think you know someone …

… but they turn out to be …

… someone completely different.

All these fakes — people, art, feet — it’s hard to keep track. Well, not any more.

This X Does Not Exist
Using generative adversarial networks (GANs), we can learn how to create realistic-looking fake versions of almost anything, as shown by this collection of sites that have sprung up in the past month.
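
If you’re wondering how all these sites work under the hood, the adversarial idea is simple: a generator network invents images while a discriminator network tries to tell them from real ones, and each improves against the other. Here’s a deliberately tiny PyTorch sketch of that loop; the real sites use far bigger convolutional models (StyleGAN2 and friends), and the sizes, data and hyperparameters below are purely illustrative.

```python
# A toy GAN training loop (illustrative only; real "does not exist" sites
# use large convolutional models such as StyleGAN2 trained on huge datasets).
import torch
import torch.nn as nn

LATENT, IMG = 64, 28 * 28  # toy dimensions

# Generator: noise in, flattened "image" out
G = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(), nn.Linear(256, IMG), nn.Tanh())
# Discriminator: image in, real/fake logit out
D = nn.Sequential(nn.Linear(IMG, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.rand(32, IMG) * 2 - 1  # stand-in for a batch of real images
    fake = G(torch.randn(32, LATENT))

    # Train the discriminator: real images labelled 1, fakes labelled 0
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train the generator: try to make the discriminator call the fakes real
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```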

A more poetic AI

I’ve a number of posts here tagged AI and art, but not so many about its impact on music or poetry. Let’s put that right. But first (via It’s Nice That), a quick recap.

The A-Z of AI – With Google
This beginner’s A-Z guide is a collaboration between the Oxford Internet Institute (OII) at the University of Oxford and Google, intended to break a complex area of computer science down into entry-level explanations that will help anyone get their bearings and understand the basics.

Topics include bias and ethics, as well as quantum computing and the Turing test. Nothing about Shakespeare though.

This AI poet mastered rhythm, rhyme, and natural language to write like Shakespeare – IEEE Spectrum
Deep-speare’s creation is nonsensical when you read it closely, but it certainly “scans well,” as an English teacher would say—its rhythm, rhyme scheme, and the basic grammar of its individual lines all seem fine at first glance. As our research team discovered when we showed our AI’s poetry to the world, that’s enough to fool quite a lot of people; most readers couldn’t distinguish the AI-generated poetry from human-written works.

I think they’re better off sticking to the visuals.

Beck launches Hyperspace: AI Exploration, a visual album with NASA – It’s Nice That
The project was made possible by AI architects and directors OSK, founded by artists Jon Ray and Isabelle Albuquerque, who began the project by asking, “How would artificial intelligence imagine our universe?” In answering this question it allowed the directors to create “a unique AI utilising computer vision, machine learning and Generative Adversarial neural Networks (GAN) to learn from NASA’s vast archives.” The AI then trained itself through these thousands of images, data and videos, to then begin “creating its own visions of our universe.”

Some of them can really hold a tune, though.

What do machines sing of? – Martin Backes
“What do machines sing of?” is a fully automated machine, which endlessly sings number-one ballads from the 1990s. As the computer program performs these emotionally loaded songs, it attempts to apply the appropriate human sentiments. This behavior of the device seems to reflect a desire, on the part of the machine, to become sophisticated enough to have its very own personality.

Lastly, it’s good to see that you can still be silly with technology and music.

Human authors have nothing to fear—for now

A new AI language model generates poetry and prose – The Economist
But the program is not perfect. Sometimes it seems to regurgitate snippets of memorised text rather than generating fresh text from scratch. More fundamentally, statistical word-matching is not a substitute for a coherent understanding of the world. GPT-3 often generates grammatically correct text that is nonetheless unmoored from reality, claiming, for instance, that “it takes two rainbows to jump from Hawaii to 17”.

More AI art

I’ll just leave these here.

This Artwork Does Not Exist
Imagined by a GAN (generative adversarial network) StyleGAN2 (Dec 2019) – Karras et al. and Nvidia. Trained by Michael Friesen on images of Modern Art.

∞ stream of AI generated art
Explore the infinite creativity of this AI artist that was trained on a carefully selected set of cubist art pieces.

They’re all much-of-a-muchness, as they say around here. I think robot Rembrandt is still some way off.

Top flight fakery

A while ago I shared news of the world’s first AI presenter. And there’s lots here about fake news. But what about using deepfake-style technology to produce true news?

Reuters uses AI to prototype first ever automated video reports – Forbes
Developed in collaboration with London-based AI startup Synthesia, the new system harnesses AI in order to synthesize pre-recorded footage of a news presenter into entirely new reports. It works in a similar way to deepfake videos, although its current prototype combines with incoming data on English Premier League football matches to report on things that have actually happened. […]

In other words, having pre-filmed a presenter say the name of every Premier League football team, every player, and pretty much every possible action that could happen in a game, Reuters can now generate an indefinite number of synthesized match reports using his image. These reports are almost indistinguishable from the real thing, and Cohen reports that early witnesses to the system (mostly Reuters’ clients) have been duly impressed.
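
That quote is really describing a template system: structured match data in, a sequence of presenter fragments out. Here’s a toy Python sketch of that idea; the event schema and clip names are invented for illustration, and the real Reuters/Synthesia system synthesizes the presenter’s video rather than splicing files.

```python
# Toy illustration of the template idea: structured match events map to
# presenter clips, and a report is just the resulting clip sequence.
# (Event schema and clip names are invented for illustration.)
events = [
    {"type": "kickoff", "home": "Arsenal", "away": "Chelsea"},
    {"type": "goal", "team": "Arsenal", "player": "Aubameyang", "minute": 23},
    {"type": "fulltime", "score": "1-0"},
]

def clips_for(event):
    """Return the pre-recorded fragments that narrate one event."""
    if event["type"] == "kickoff":
        return [f"team_{event['home']}.mp4", "versus.mp4", f"team_{event['away']}.mp4"]
    if event["type"] == "goal":
        return [f"player_{event['player']}.mp4", "scores.mp4", f"minute_{event['minute']}.mp4"]
    if event["type"] == "fulltime":
        return ["full_time.mp4", f"score_{event['score']}.mp4"]
    return []

playlist = [clip for event in events for clip in clips_for(event)]
print(playlist)  # hand this sequence to a video-concatenation step
```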

(via Patrick Tanguay)

Update 26/02/2020

Just found another example of a deepfake video being used in, if not a true, then at least a positive sense.

We’ve just seen the first use of deepfakes in an Indian election campaign – Vice
When the Delhi BJP IT Cell partnered with political communications firm The Ideaz Factory to create “positive campaigns” using deepfakes to reach different linguistic voter bases, it marked the debut of deepfakes in election campaigns in India. “Deepfake technology has helped us scale campaign efforts like never before,” Neelkant Bakshi, co-incharge of social media and IT for BJP Delhi, tells VICE. “The Haryanvi videos let us convincingly approach the target audience even if the candidate didn’t speak the language of the voter.”

What else doesn’t exist?

I admit I had fun with those people who don’t exist and their related websites, but it’s getting a little silly now. An artificially intelligent songwriter? AI feet??

These lyrics do not exist
This website generates completely original lyrics for various topics, using state-of-the-art AI to generate an original chorus and original verses.

Want some happy metal lyrics about dogs? No problem.

I am the dog in you
I am the dog in you
How one animal can be so tense, yet so free?
Such vicious dogs in search of a trophy

This foot does not exist
The foot pic, then, becomes a commodity which the consumer is willing to pay for on its basis as an intimate, revealing, and/or pornographic (and perhaps power-granting, when provided on request) asset, while the producer may see it as a meme, a dupe, a way to trick the horny-credible out of their ill-spent cash.

A little robot round-up #2

Another quick look at what our new robot overlords are up to.

Robogamis are the real heirs of terminators and transformers – Aeon
Robogami design owes its drastic geometric reconfigurability to two main scientific breakthroughs. One is its layer-by-layer 2D manufacturing process: multiples of functional layers of the essential robotic components (ie, microcontrollers, sensors, actuators, circuits, and even batteries) are stacked on top of each other. The other is the design translation of typical mechanical linkages into a variety of folding joints (ie, fixed joint, pin joint, planar, and spherical link). […]

Robotics technology is advancing to be more personalised and adaptive for humans, and this unique species of reconfigurable origami robots shows immense promise. It could become the platform to provide the intuitive, embeddable robotic interface to meet our needs. The robots will no longer look like the characters from the movies. Instead, they will be all around us, continuously adapting their form and function – and we won’t even know it.

Biological robots – A research team builds robots from living cells – The Economist
But one thing all robots have in common is that they are mechanical, not biological devices. They are built from materials like metal and plastic, and stuffed with electronics. No more, though—for a group of researchers in America have worked out how to use unmodified biological cells to create new sorts of organisms that might do a variety of jobs, and might even be made to reproduce themselves. […]

Though only a millimetre or so across, the artificial organisms Dr Bongard and Dr Levin have invented, which they call xenobots, can move and perform simple tasks, such as pushing pellets along in a dish. That may not sound much, but the process could, they reckon, be scaled up and made to do useful things. Bots derived from a person’s own cells might, for instance, be injected into the bloodstream to remove plaque from artery walls or to identify cancer. More generally, swarms of them could be built to seek out and digest toxic waste in the environment, including microscopic bits of plastic in the sea.

Sounds like (old) science fiction to me.

Did HAL Commit Murder? – The MIT Press Reader
As with each viewing, I discovered or appreciated new details. But three iconic scenes — HAL’s silent murder of astronaut Frank Poole in the vacuum of outer space, HAL’s silent medical murder of the three hibernating crewmen, and the poignant sorrowful “death” of HAL — prompted deeper reflection, this time about the ethical conundrums of murder by a machine and of a machine. In the past few years experimental autonomous cars have led to the death of pedestrians and passengers alike. AI-powered bots, meanwhile, are infecting networks and influencing national elections. Elon Musk, Stephen Hawking, Sam Harris, and many other leading AI researchers have sounded the alarm: Unchecked, they say, AI may progress beyond our control and pose significant dangers to society.

Back in the real world, of course, the dangers are more mundane. Those “significant dangers to society” are more financial.

Could new research on A.I. and white-collar jobs finally bring about a strong policy response? – The New Yorker
Webb then analyzed A.I. patent filings and found them using verbs such as “recognize,” “detect,” “control,” “determine,” and “classify,” and nouns like “patterns,” “images,” and “abnormalities.” The jobs that appear to face intrusion by these newer patents are different from the more manual jobs that were affected by industrial robots: intelligent machines may, for example, take on more tasks currently conducted by physicians, such as detecting cancer, making prognoses, and interpreting the results of retinal scans, as well as those of office workers that involve making determinations based on data, such as detecting fraud or investigating insurance claims. People with bachelor’s degrees might be more exposed to the effects of the new technologies than other educational groups, as might those with higher incomes. The findings suggest that nurses, doctors, managers, accountants, financial advisers, computer programmers, and salespeople might see significant shifts in their work. Occupations that require high levels of interpersonal skill seem most insulated.

Update 31/01/2020

Found another article about those biological robots, above, which serves as a great counterpoint to all these wildly optimistic Boston Dynamics announcements.

Robots don’t have to be so embarrassing – The Outline
These stuff-ups are endlessly amusing to me. I don’t want to mock the engineers who pour thousands of hours into building novelty dogs made of bits of broken toasters, or even the vertiginously arrogant scientists who thought they could simulate the human brain inside a decade. (Inside a decade! I mean, my god!) Well, okay, maybe I do want to mock them. Is it a crime to enjoy watching our culture’s systematic over-investment in digital Whiggery get written down in value time and time again? […]

What these doomed overreaches represent is a failure to grasp the limits of human knowledge. We don’t have a comprehensive idea of how the brain works. There is no solid agreement on what consciousness really “is.” Is it divine? Is it matter? Can you smoke it? Do these questions even make sense? We don’t know the purpose of sleep. We don’t know what dreams are for. Sexual dimorphism in the brain remains a mystery. Are you picking up a pattern here? Even the seemingly quotidian mechanical abilities of the human body — running, standing, gripping, and so on — are not understood with the scientific precision that you might expect. How can you make a convincing replica of something if you don’t even know what it is to begin with? We are cosmic toddlers waddling around in daddy’s shoes, pretending to “work at the office” by scribbling on the walls in crayon, and then wondering where our paychecks are.

More people who aren’t there

Remember that website full of photos of fake faces? Well, Dr Julian Koplin from the University of Melbourne has been combining those AI generated portraits with AI generated text, and now there’s a whole city of them.

Humans of an unreal city
These stories were composed by OpenAI’s GPT-2 language model and AllenAI’s Grover news generator, which were given various prompts and asked to elaborate. My favourite results are recorded here – some lightly edited, many entirely intact. The accompanying photos were generated by the AI at This Person Does Not Exist. They are not real humans, but you can look into their eyes nonetheless.

As he explains in this commentary on the ethics of the project, some of the results are convincingly human.
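
The GPT-2 half of that recipe is easy to reproduce at home with the openly released model. A minimal sketch using the Hugging Face transformers library; that’s my choice of tooling, not necessarily Koplin’s, and the prompt is invented.

```python
# Prompted story generation with the openly released GPT-2
# (a sketch using Hugging Face transformers; the prompt is invented).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "She had lived in the unreal city all her life, and"
for result in generator(prompt, max_length=80, num_return_sequences=3, do_sample=True):
    print(result["generated_text"])
    print("---")
```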

The very human language of AI
AI can tell stories about oceans and drowning, about dinners shared with friends, about childhood trauma and loveless marriages. They can write about the glare and heat of the sun without ever having seen light or felt heat. It seems so human. At the same time, the weirdness of some AI-generated text shows that they ‘understand’ the world very differently to us.

I’m worried less about the machines becoming sentient and taking over, with their AI generated art and poetry, and more about the dangers these tools pose when in the hands of ill-intentioned humans.

Meanwhile.

100,000 free AI-generated headshots put stock photo companies on notice
It’s getting easier and easier to use AI to generate convincing-looking, yet entirely fake, pictures of people. Now, one company wants to find a use for these photos, by offering a resource of 100,000 AI-generated faces to anyone that can use them — royalty free. Many of the images look fake but others are difficult to distinguish from images licensed by stock photo companies. […]

Zhabinskiy is keen to emphasize that the AI used to generate these images was trained using data shot in-house, rather than using stock media or scraping photographs from the internet. “Such an approach requires thousands of hours of labor, but in the end, it will certainly be worth it!” exclaims an Icons8 blog post. Ivan Braun, the founder of Icons8, says that in total the team took 29,000 pictures of 69 models over the course of three years which it used to train its algorithm.

There are valid concerns about technology that’s able to generate convincing-looking fakes like these at scale. This project is trying to create images that make life easier for designers, but the software could one day be used for all sorts of malicious activity.

A new Picasso?

It’s not unknown for artists to change their mind and paint over part of their work as their ideas develop. Earlier, I came across an article about a long-lost Vermeer cupid that conservationists had restored. He wasn’t the only one with mysteries to uncover.

Blue on Blue: Picasso blockbuster comes to Toronto in 2020
The show came together after the AGO, with the assistance of other institutions, including the National Gallery of Art, Northwestern University and the Art Institute of Chicago, used cutting-edge technology to scan several Blue Period paintings in its collection to reveal lost works underneath, namely La Soupe (1902) and La Miséreuse accroupie (also 1902).

More on that.

New research reveals secrets beneath the surface of Picasso paintings
Secrets beneath the surface of two Pablo Picasso paintings in the collection of the Art Gallery of Ontario (AGO) in Toronto have been unearthed through an in-depth research project, which combined technical analysis and art historical digging to determine probable influences for the pieces and changes made by the artist.

But x-ray and infrared analyses can only go so far. What if we roped in some neural networks to help bring these restored images to life?

This Picasso painting had never been seen before. Until a neural network painted it.
But from an aesthetic point of view, what the researchers managed to retrieve is disappointing. Infrared and x-ray images show only the faintest outlines, and while they can be used to infer the amount of paint the artist used, they do not show color or style. So a way to reconstruct the lost painting more realistically would be of huge interest. […]

This is where Bourached and Cann come in. They have taken a manually edited version of the x-ray images of the ghostly woman beneath The Old Guitarist and passed it through a neural style transfer network. This network was trained to convert images into the style of another artwork from Picasso’s Blue Period.

The result is a full-color version of the painting in exactly the style Picasso was exploring when he painted it. “We present a novel method of reconstructing lost artwork, by applying neural style transfer to x-radiographs of artwork with secondary interior artwork beneath a primary exterior, so as to reconstruct lost artwork,” they say.
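
Their actual network was trained on Picasso’s Blue Period works, but you can get a feel for neural style transfer with an off-the-shelf model. Here’s a sketch using the public Magenta arbitrary-stylization module on TensorFlow Hub; the file names are placeholders, and this is a stand-in for, not a reproduction of, Bourached and Cann’s method.

```python
# Stand-in for the paper's approach: arbitrary neural style transfer using
# the public Magenta model on TensorFlow Hub (NOT the authors' network,
# which was trained specifically on Blue Period paintings). File names
# are placeholders.
import tensorflow as tf
import tensorflow_hub as hub

def load_image(path):
    img = tf.image.decode_image(tf.io.read_file(path), channels=3, dtype=tf.float32)
    return img[tf.newaxis, ...]  # add a batch dimension

content = load_image("xray_edited.png")  # the cleaned-up x-radiograph
style = tf.image.resize(load_image("blue_period.png"), (256, 256))  # style source

stylize = hub.load("https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2")
stylized = stylize(content, style)[0]  # returns a batch of stylized images

tf.keras.utils.save_img("reconstruction.png", stylized[0].numpy())
```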

Who’s really in charge?

Money makes the world go round. But who’s making the money go round?

The stockmarket is now run by computers, algorithms and passive managers
The execution of orders on the stockmarket is now dominated by algorithmic traders. Fewer trades are conducted on the rowdy floor of the NYSE and more on quietly purring computer servers in New Jersey. According to Deutsche Bank, 90% of equity-futures trades and 80% of cash-equity trades are executed by algorithms without any human input. Equity-derivative markets are also dominated by electronic execution, according to Larry Tabb of the Tabb Group, a research firm.

Nothing to worry about, right?

Turing Test: why it still matters
We’re entering the age of artificial intelligence. And as AI programs get better and better at acting like humans, we will increasingly be faced with the question of whether there’s really anything that special about our own intelligence, or if we are just machines of a different kind. Could everything we know and do one day be reproduced by a complicated enough computer program installed in a complicated enough robot?

Robots, eh? Can’t live with ’em, can’t live without ’em.

Of course citizens should be allowed to kick robots
Because K5 is not a friendly robot, even if the cutesy blue lights are meant to telegraph that it is. It’s not there to comfort senior citizens or teach autistic children. It exists to collect data—data about people’s daily habits and routines. While Knightscope owns the robots and leases them to clients, the clients own the data K5 collects. They can store it as long as they want and analyze it however they want. K5 is an unregulated security camera on wheels, a 21st-century panopticon.

But let’s stay optimistic, yeah?

InspiroBot
I am an artificial intelligence dedicated to generating unlimited amounts of unique inspirational quotes for endless enrichment of pointless human existence.

So long everybody

Hell is other people? No problem.

This camera app uses AI to erase people from your photographs
Bye Bye Camera is an iOS app built for the “post-human world,” says Damjanski, a mononymous artist based in New York City who helped create the software. Why post-human? Because it uses AI to remove people from images and paint over their absence. “One joke we always make about it is: ‘finally, you can take a selfie without yourself.’”

Bye Bye Camera – an app for the post-human era
According to Damjanski: The app takes out the vanity of any selfie and also the person. I consider Bye Bye Camera an app for the post-human era. It’s a gentle nod to a future where complex programs replace human labor and some would argue the human race. It’s interesting to ask what is a human from an Ai (yes, the small “i” is intended) perspective? In this case, a collection of pixels that identify a person based on previously labeled data. But who labels this data that defines a person immaterially? So many questions for such an innocent little camera app. […]

A lot of friends asked us if we can implement the feature to choose which person to take out. But for us, this app is not a utility app in the classical sense that solves a problem. It’s an artistic tool and ultimately a piece of software art.
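
The app’s exact pipeline isn’t public, but the standard recipe for this trick is a pre-trained person-segmentation model followed by inpainting to fill the hole. A rough sketch with torchvision and OpenCV; photo.jpg is a placeholder, and a learned inpainter would give much cleaner fills than the classical one used here.

```python
# One plausible recipe (not necessarily Bye Bye Camera's): find "person"
# pixels with a pre-trained segmentation model, then inpaint the hole.
# Assumes torchvision >= 0.13 and a local photo.jpg.
import cv2
import numpy as np
import torch
from torchvision import models, transforms

PERSON = 15  # index of the 'person' class in the label set the model uses

model = models.segmentation.deeplabv3_resnet50(weights="DEFAULT").eval()
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

bgr = cv2.imread("photo.jpg")
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)
with torch.no_grad():
    out = model(preprocess(rgb).unsqueeze(0))["out"][0]

mask = (out.argmax(0) == PERSON).numpy().astype(np.uint8) * 255
mask = cv2.dilate(mask, np.ones((15, 15), np.uint8))  # widen so edges vanish too

# Classical inpainting; a learned inpainter would paint over the absence
# far more convincingly, but the structure is the same: mask, then fill.
result = cv2.inpaint(bgr, mask, 5, cv2.INPAINT_TELEA)
cv2.imwrite("photo_no_people.png", result)
```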

But, as that Artnome article explains, he’s by no means the first to do this…

Meanwhile, Italian sculptor Arcangelo Sassolino (is he a sculptor? What’s the reverse of sculpture?) is creating another disappearance.

Dust to Dust: Arcangelo Sassolino’s literal and conceptual erasure of the classical aesthetic
In Arcangelo Sassolino’s ‘Damnatio Memoriae’, a custom-made machine grinds a white marble torso to dust; dematerializing classicism and all that it revered over the course of a four month exhibition period at Galerie Rolando Anselmi in Berlin.

In this conceptual and literal erasure of the classical aesthetic, Sassolino questions the value of the narrative proposed by the Western canon and asks if we can free ourselves from the rules of the past. While the statue is changed by the process of grinding, it does not disappear—becoming instead fine dust that spreads through the exhibition space like mist. This new form allows the sculpture, and thus classicism, to invisibly permeate the exhibition space. As it settles on the walls and floors of Galerie Rolando Anselmi, and on those who visit the show, the complex reality of extracting oneself from the restrictive idealism of classicism becomes abundantly clear.

Speaking of classically proportioned behinds.

New art project seeks to reveal the “real size” of modern life’s most famous behind
“The wait is finally over,” we’re told. “Hundreds, potentially thousands of images of the world’s most famous body part have been analysed and carefully measured. Interviews have been read through and words evaluated. Everyone has always known that it’s big, but exactly how big is it?”

Ida-Simon is, of course, talking about Kim Kardashian’s behind. No mere attempt at digital titillation, the pair describes the project, simply titled The Bum, as “a commentary on the time we live in.”

AI Spy

It seems we’re not the only ones playing with that AI fake face website.

Experts: Spy used AI-generated face to connect with targets
“I’m convinced that it’s a fake face,” said Mario Klingemann, a German artist who has been experimenting for years with artificially generated portraits and says he has reviewed tens of thousands of such images. “It has all the hallmarks.”

Experts who reviewed the Jones profile’s LinkedIn activity say it’s typical of espionage efforts on the professional networking site, whose role as a global Rolodex has made it a powerful magnet for spies.

Yes, it’s obviously a fake. I mean, only a fool would fall for that, right?

“I’m probably the worst LinkedIn user in the history of LinkedIn,” said Winfree, the former deputy director of President Donald Trump’s domestic policy council, who confirmed connection with Jones on March 28.

Winfree, whose name came up last month in relation to one of the vacancies on the Federal Reserve Board of Governors, said he rarely logs on to LinkedIn and tends to just approve all the piled-up invites when he does.

“I literally accept every friend request that I get,” he said.

Lionel Fatton, who teaches East Asian affairs at Webster University in Geneva, said the fact that he didn’t know Jones did prompt a brief pause when he connected with her back in March.

“I remember hesitating,” he said. “And then I thought, ‘What’s the harm?’”

<sigh> It might not be the technology we need, but it’s the technology we deserve.

But fear not, help is at hand!

Adobe’s new AI tool automatically spots Photoshopped faces
The world is becoming increasingly anxious about the spread of fake videos and pictures, and Adobe — a name synonymous with edited imagery — says it shares those concerns. Today, it’s sharing new research in collaboration with scientists from UC Berkeley that uses machine learning to automatically detect when images of faces have been manipulated.
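
At heart, that detector is a classifier trained on pairs of untouched and warped faces. Here’s a toy sketch of the same idea; the face_crops/ folder layout is a placeholder I’ve invented, and this small fine-tuned ResNet is nothing like Adobe and Berkeley’s actual model.

```python
# Toy version of the idea: fine-tune a small CNN to classify face crops as
# untouched vs. manipulated. The face_crops/ folder (with real/ and warped/
# subdirectories) is a placeholder; this is nothing like Adobe's actual model.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
data = datasets.ImageFolder("face_crops/", transform=tfm)  # real/ and warped/
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

net = models.resnet18(weights="DEFAULT")
net.fc = nn.Linear(net.fc.in_features, 2)  # two classes: real, manipulated

opt = torch.optim.Adam(net.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

net.train()
for epoch in range(3):
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(net(images), labels)
        loss.backward()
        opt.step()
```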

But as Benedict Evans points out in a recent newsletter,

Potentially useful, but one suspects this is just an arms race, and of course the people anyone would want to trick with such images won’t be using the tool.

More fun with fakes

More videos — from the sublime to the ridiculous.

There’s a scarily good ‘deepfakes’ YouTube channel that’s quietly growing – and it’s freaking everyone out
Russian researchers hit the headlines last week by reanimating oil-painted portraits and photos into talking heads using AI. Now, here’s another reminder that the tools to craft deepfakes are widely available for just about anyone with the right skills to use: the manipulated videos posted by YouTuber Ctrl Shift Face are particularly creepy.

Club Fight – Episode 2 [DeepFake]

The transitions are especially smooth in another clip, with a comedian dipping in and out of impressions of Al Pacino and Arnold Schwarzenegger, and there are now clips from Terminator with Stallone which look very peculiar.

Here’s that earlier article and video mentioned above, about reanimating oil paintings.

AI can now animate the Mona Lisa’s face or any other portrait you give it. We’re not sure we’re happy with this reality
There have been lots of similar projects so the idea isn’t particularly novel. But what’s intriguing in this paper, hosted by arXiv, is that the system doesn’t require tons of training examples and seems to work after seeing an image just once. That’s why it works with paintings like the Mona Lisa.

Few-Shot Adversarial Learning of Realistic Neural Talking Head Models

Jump to 4:18 to see their work with famous faces such as Marilyn Monroe and Salvador Dalí (who’s already no stranger to AI), and to 5:08 to see the Mona Lisa as you’ve never seen her before.

Dalí’s back

Another art and AI post, but with a difference. An exhibition at the Dalí Museum in Florida, with a very special guest.

Deepfake Salvador Dalí takes selfies with museum visitors
The exhibition, called Dalí Lives, was made in collaboration with the ad agency Goodby, Silverstein & Partners (GS&P), which made a life-size re-creation of Dalí using the machine learning-powered video editing technique. Using archival footage from interviews, GS&P pulled over 6,000 frames and used 1,000 hours of machine learning to train the AI algorithm on Dalí’s face. His facial expressions were then imposed over an actor with Dalí’s body proportions, and quotes from his interviews and letters were synced with a voice actor who could mimic his unique accent, a mix of French, Spanish, and English.

Behind the Scenes: Dali Lives

Whilst we’re talking of Dalí, let’s go behind the scenes of that famous portrait of him by Philippe Halsman. No flashy, cutting-edge technology this time, just wire, buckets and cats.

The story behind the surreal photograph of Salvador Dalí and three flying cats
The original, unretouched version of the photo reveals its secrets: An assistant held up the chair on the left side of the frame, wires suspended the easel and the painting, and the footstool was propped up off the floor. But there was no hidden trick to the flying cats or the stream of water. For each take, Halsman’s assistants—including his wife, Yvonne, and one of his daughters, Irene—tossed the cats and the contents of a full bucket across the frame. After each attempt, Halsman developed and printed the film while Irene herded and dried off the cats. The rejected photographs had notes such as “Water splashes Dalí instead of cat” and “Secretary gets into picture.”

Time.com have a great interview with Philippe Halsman’s daughter Irene on what that shoot was like.

The story behind the surrealist ‘Dali Atomicus’ photo
“Philippe would count to four. One, two, three… And the assistants threw the cats and the water. And on four, Dali jumped. My job at the time was to catch the cats and take them to the bathroom and dry them off with a big towel. My father would run upstairs where the darkroom was, develop the film, print it, run downstairs and he’d say not good, bad composition, this was wrong, that was wrong. It took 26 tries to do this. 26 throws, 26 wiping of the floors, and 26 times catching the cats. And then, there it was, finally, this composition.”

Coincidentally, Artnome’s Jason Bailey has been using AI and deep learning to colorize old black-and-white photos of artists, including that one of Dalí.

50 famous artists brought to life with AI
When I was growing up, artists, and particularly twentieth century artists, were my heroes. There is something about only ever having seen many of them in black and white that makes them feel mythical and distant. Likewise, something magical happens when you add color to the photo. These icons turn into regular people who you might share a pizza or beer with.