Robots have fascinated us for years, but are we looking at them all wrong? Kate Darling, robot ethicist at MIT Media Lab, shows us a different way.
Robots are animals, not humans – WIRED UK Automation has, and will continue to have, huge impacts on labour markets – those in factories and farming are already feeling the aftershocks. There’s no question that we will continue to see industry disruptions as robotic technology develops, but in our mainstream narratives, we’re leaning too hard on the idea that robots are a one-to-one replacement for humans. Despite the AI pioneers’ original goal of recreating human intelligence, our current robots are fundamentally different. They’re not less-developed versions of us that will eventually catch up as we increase their computing power; like animals, they have a different type of intelligence entirely. […]
While there are many socioeconomic factors that influence how individual countries and societies view robots, the narrative is fluid, and our western view of robots versus humans isn’t the only one. Some of our western views can be directly attributed to our love of dystopian sci-fi. How much automation disrupts and shifts the labour market is an incredibly complicated question, but it’s striking how much of our conversations mirror speculative fiction rather than what’s currently happening on the ground, especially when our language places agency on the robots themselves, with pithy headlines like “No Jobs? Blame the Robots” instead of the more accurate “No Jobs? Blame Company Decisions Driven by Unbridled Corporate Capitalism”.
Comparing robots to animals helps us see that robots don’t necessarily replace jobs, but instead are helping us with specific tasks, like plowing fields, delivering packages by ground or air, cleaning pipes, and guarding the homestead. … [W]hen we broaden our thinking to consider what skills might complement our abilities instead of replacing them, we can better envision what’s possible with this new breed.
The New Breed – Penguin Kate Darling, a world-renowned expert in robot ethics, shows that in order to understand the new robot world, we must first move beyond the idea that this technology will be something like us. Instead, she argues, we should look to our relationship with animals. Just as we have harnessed the power of animals to aid us in war and work, so too will robots supplement – rather than replace – our own skills and abilities.
Some interesting reads, courtesy of The Economist’s data analysis newsletter, Off The Charts. Let’s start with this question — are glasses-wearers really less conscientious than those who wear a headscarf?
Objective or Biased: On the questionable use of Artificial Intelligence for job applications – BR24 Software programs promise to identify the personality traits of job candidates based on short videos. With the help of Artificial Intelligence (AI) they are supposed to make the selection process of candidates more objective and faster. An exclusive data analysis shows that an AI scrutinized by BR (Bavarian Broadcasting) journalists can be swayed by appearances. This might perpetuate stereotypes while potentially costing candidates the job.
Here, Stephanie Evergreen makes a solid, essential case for broadening our view of data visualisation and its history. I’ve mentioned khipus here before, but not within this context.
Decolonizing Data Viz – Evergreen Data When we talked about these khipu and other forms of indigenous data visualization in a recent panel (with January O’Connor (Tlingit, Kake, Alaska), Mark Parman (Cherokee), & Nicky Bowman (Lunaape/Mohican)), someone in the audience commented, “It made me reflect on traditional Hmong clothing and how my ancestors have embroidered certain motifs on traditional clothing to communicate one’s clanship, what dialect of Hmong one spoke, marital status, everyday life, etc.” And this is one reason why it is so critically important to decolonize data visualization. When white men decide what counts (and doesn’t count) in terms of data, and what counts (and doesn’t count) as data visualization, and what counts (and doesn’t count) as data visualization history, they are actively gaslighting Black and Brown people about their legacy as data visualizers. When we shine a light on indigenous data visualization, we are intentionally saying the circle is much much wider and, as Nicky Bowman said, “There’s room for everyone in the lodge.”
After reconciling the past, let’s look to the future.
Who will shape the future of data visualization? Today’s kids! – Nightingale Graphs are everywhere. So, with the proper instruction, I’d expect today’s kids to become adults that are more proficient at visualizing and interpreting data than today’s adults. Besides parents, teachers, or friends, news organizations also play a role in shaping today’s kids. As Jon pointed out, news organizations can do a great job explaining to us how to read more advanced graphs.
On the other hand, as Sharon and Michael mentioned, because graphs are everywhere, there’s a danger for kids to start thinking that graphs are objective. So it is important for adults to start teaching kids how to think critically, to not necessarily accept the graph and the data at face value. In other words, it’s essential for kids to develop a toolbox. This is good for them and good for democracy — eventually, today’s kids will become more informed citizens.
Blob Opera – Google Arts & Culture Create your own opera inspired song with Blob Opera – no music skills required! A machine learning experiment by David Li in collaboration with Google Arts & Culture.
It’s all very silly, but you have to admit, they do make a wonderful sound. That’s due, no doubt, to some clever coding, but also to the skills of the real humans behind these machine-learned voices.
You can now create your own 4-part ‘Blob Opera’ with this addictive Google app – Classic FM The voices are those of real-life opera singers, tenor Christian Joel, bass Frederick Tong, mezzo-soprano Joanna Gamble and soprano Olivia Doutney, who recorded many hours of singing for the experiment. You don’t hear their actual voices in the tool, but rather the machine learning model’s understanding of what opera singing sounds like, based on what it learned from the four vocalists.
Yesterday, upon the stair, I met a man who wasn’t there.
I’ve shared articles about these fake, engineered nobodies before, but the transitions, animations and sliders in this piece from the New York Times are very effective, and great fun — a genuine individual on every frame.
Designed to deceive: Do these people look real to you? – The New York Times Given the pace of improvement, it’s easy to imagine a not-so-distant future in which we are confronted with not just single portraits of fake people but whole collections of them — at a party with fake friends, hanging out with their fake dogs, holding their fake babies. It will become increasingly difficult to tell who is real online and who is a figment of a computer’s imagination.
You think you know someone …
… but they turn out to be …
… someone completely different.
All these fakes — people, art, feet — it’s hard to keep track. Well, not any more.
This X Does Not Exist Using generative adversarial networks (GAN), we can learn how to create realistic-looking fake versions of almost anything, as shown by this collection of sites that have sprung up in the past month.
I’ve a number of posts here tagged AI and art, but not so many about its impact on music or poetry. Let’s put that right. But first (via It’s Nice That), a quick recap.
The A-Z of AI – With Google This beginner’s A-Z guide is a collaboration between the Oxford Internet Institute (OII) at the University of Oxford and Google, intended to break a complex area of computer science down into entry-level explanations that will help anyone get their bearings and understand the basics.
This AI poet mastered rhythm, rhyme, and natural language to write like Shakespeare – IEEE Spectrum Deep-speare’s creation is nonsensical when you read it closely, but it certainly “scans well,” as an English teacher would say—its rhythm, rhyme scheme, and the basic grammar of its individual lines all seem fine at first glance. As our research team discovered when we showed our AI’s poetry to the world, that’s enough to fool quite a lot of people; most readers couldn’t distinguish the AI-generated poetry from human-written works.
I think they’re better off sticking to the visuals.
Beck launches Hyperspace: AI Exploration, a visual album with NASA – It’s Nice That The project was made possible by AI architects and directors OSK, founded by artists Jon Ray and Isabelle Albuquerque, who began the project by asking, “How would artificial intelligence imagine our universe?” In answering this question it allowed the directors to create “a unique AI utilising computer vision, machine learning and Generative Adversarial neural Networks (GAN) to learn from NASA’s vast archives.” The AI then trained itself through these thousands of images, data and videos, to then begin “creating its own visions of our universe.”
Some of them can really hold a tune, though.
What do machines sing of? – Martin Backes “What do machines sing of?” is a fully automated machine, which endlessly sings number-one ballads from the 1990s. As the computer program performs these emotionally loaded songs, it attempts to apply the appropriate human sentiments. This behavior of the device seems to reflect a desire, on the part of the machine, to become sophisticated enough to have its very own personality.
Lastly, it’s good to see that you can still be silly with technology and music.
A new AI language model generates poetry and prose – The Economist
But the program is not perfect. Sometimes it seems to regurgitate snippets of memorised text rather than generating fresh text from scratch. More fundamentally, statistical word-matching is not a substitute for a coherent understanding of the world. GPT-3 often generates grammatically correct text that is nonetheless unmoored from reality, claiming, for instance, that “it takes two rainbows to jump from Hawaii to 17”.
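That “statistical word-matching” can be seen in miniature with a Markov chain: a model that predicts each next word purely from the words that came before it, with no model of the world at all. A toy sketch in Python (the corpus and all names here are invented for illustration, and GPT-3 itself is vastly more sophisticated):

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each `order`-word context to the words that followed it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        context = tuple(words[i:i + order])
        chain[context].append(words[i + order])
    return chain

def generate(chain, length=12, seed=0):
    """Walk the chain, sampling each next word from observed continuations."""
    rng = random.Random(seed)
    context = rng.choice(list(chain))
    out = list(context)
    for _ in range(length):
        followers = chain.get(tuple(out[-len(context):]))
        if not followers:  # dead end: the corpus never continued this context
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = ("the robot sings of the sea and the robot sings of the sky "
          "and the sea sings of the robot")
chain = build_chain(corpus)
print(generate(chain))
```

Every word it emits is grammatically plausible locally, yet the output is “unmoored from reality” in exactly the way the article describes, because nothing in the model knows what a robot or a sea is.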
A while ago I shared news of the world’s first AI presenter. And there’s lots here about fake news. But what about using deepfake-style technology to produce true news?
Reuters uses AI to prototype first ever automated video reports – Forbes Developed in collaboration with London-based AI startup Synthesia, the new system harnesses AI in order to synthesize pre-recorded footage of a news presenter into entirely new reports. It works in a similar way to deepfake videos, although its current prototype combines with incoming data on English Premier League football matches to report on things that have actually happened. […]
In other words, having pre-filmed a presenter say the name of every Premier League football team, every player, and pretty much every possible action that could happen in a game, Reuters can now generate an indefinite number of synthesized match reports using his image. These reports are barely distinguishable from the real thing, and Cohen reports that early witnesses to the system (mostly Reuters’ clients) have been duly impressed.
Just found another example of a deepfake video being used in a positive, if not entirely truthful, way.
We’ve just seen the first use of deepfakes in an Indian election campaign – Vice
When the Delhi BJP IT Cell partnered with political communications firm The Ideaz Factory to create “positive campaigns” using deepfakes to reach different linguistic voter bases, it marked the debut of deepfakes in election campaigns in India. “Deepfake technology has helped us scale campaign efforts like never before,” Neelkant Bakshi, co-incharge of social media and IT for BJP Delhi, tells VICE. “The Haryanvi videos let us convincingly approach the target audience even if the candidate didn’t speak the language of the voter.”
These lyrics do not exist
This website generates completely original lyrics for various topics, using state-of-the-art AI to generate an original chorus and original verses.
Want some happy metal lyrics about dogs? No problem.
I am the dog in you
I am the dog in you
How one animal can be so tense, yet so free?
Such vicious dogs in search of a trophy
This foot does not exist
The foot pic, then, becomes a commodity which the consumer is willing to pay for on its basis as an intimate, revealing, and/or pornographic (and perhaps power-granting, when provided on request) asset, while the producer may see it as a meme, a dupe, a way to trick the horny-credible out of their ill-spent cash.
Robogamis are the real heirs of terminators and transformers – Aeon Robogami design owes its drastic geometric reconfigurability to two main scientific breakthroughs. One is its layer-by-layer 2D manufacturing process: multiples of functional layers of the essential robotic components (ie, microcontrollers, sensors, actuators, circuits, and even batteries) are stacked on top of each other. The other is the design translation of typical mechanical linkages into a variety of folding joints (ie, fixed joint, pin joint, planar, and spherical link). […]
Robotics technology is advancing to be more personalised and adaptive for humans, and this unique species of reconfigurable origami robots shows immense promise. It could become the platform to provide the intuitive, embeddable robotic interface to meet our needs. The robots will no longer look like the characters from the movies. Instead, they will be all around us, continuously adapting their form and function – and we won’t even know it.
Biological robots – A research team builds robots from living cells – The Economist But one thing all robots have in common is that they are mechanical, not biological devices. They are built from materials like metal and plastic, and stuffed with electronics. No more, though—for a group of researchers in America have worked out how to use unmodified biological cells to create new sorts of organisms that might do a variety of jobs, and might even be made to reproduce themselves. […]
Though only a millimetre or so across, the artificial organisms Dr Bongard and Dr Levin have invented, which they call xenobots, can move and perform simple tasks, such as pushing pellets along in a dish. That may not sound much, but the process could, they reckon, be scaled up and made to do useful things. Bots derived from a person’s own cells might, for instance, be injected into the bloodstream to remove plaque from artery walls or to identify cancer. More generally, swarms of them could be built to seek out and digest toxic waste in the environment, including microscopic bits of plastic in the sea.
Sounds like (old) science fiction to me.
Did HAL Commit Murder? – The MIT Press Reader As with each viewing, I discovered or appreciated new details. But three iconic scenes — HAL’s silent murder of astronaut Frank Poole in the vacuum of outer space, HAL’s silent medical murder of the three hibernating crewmen, and the poignant sorrowful “death” of HAL — prompted deeper reflection, this time about the ethical conundrums of murder by a machine and of a machine. In the past few years experimental autonomous cars have led to the death of pedestrians and passengers alike. AI-powered bots, meanwhile, are infecting networks and influencing national elections. Elon Musk, Stephen Hawking, Sam Harris, and many other leading AI researchers have sounded the alarm: Unchecked, they say, AI may progress beyond our control and pose significant dangers to society.
Back in the real world, of course, the dangers are more mundane. Those “significant dangers to society” are more financial.
Could new research on A.I. and white-collar jobs finally bring about a strong policy response? – The New Yorker Webb then analyzed A.I. patent filings and found them using verbs such as “recognize,” “detect,” “control,” “determine,” and “classify,” and nouns like “patterns,” “images,” and “abnormalities.” The jobs that appear to face intrusion by these newer patents are different from the more manual jobs that were affected by industrial robots: intelligent machines may, for example, take on more tasks currently conducted by physicians, such as detecting cancer, making prognoses, and interpreting the results of retinal scans, as well as those of office workers that involve making determinations based on data, such as detecting fraud or investigating insurance claims. People with bachelor’s degrees might be more exposed to the effects of the new technologies than other educational groups, as might those with higher incomes. The findings suggest that nurses, doctors, managers, accountants, financial advisers, computer programmers, and salespeople might see significant shifts in their work. Occupations that require high levels of interpersonal skill seem most insulated.
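Webb’s approach, mining patent text for characteristic task verbs, can be sketched with a simple keyword count. The abstracts below are invented for illustration, and a real analysis would use proper part-of-speech tagging and a much larger lexicon rather than this fixed word list:

```python
import re
from collections import Counter

# Verbs the article says Webb found characteristic of AI patent filings.
AI_TASK_VERBS = {"recognize", "detect", "control", "determine", "classify"}

def verb_profile(abstracts):
    """Count how often the characteristic AI-task verbs appear across abstracts."""
    counts = Counter()
    for text in abstracts:
        for token in re.findall(r"[a-z]+", text.lower()):
            if token in AI_TASK_VERBS:
                counts[token] += 1
    return counts

# Hypothetical patent abstracts, made up purely for this sketch.
abstracts = [
    "A method to detect abnormalities in retinal images and classify lesions.",
    "A system to determine fraud patterns and control claim routing.",
    "Apparatus to recognize speech and detect anomalies in audio.",
]
print(verb_profile(abstracts))
```

Jobs whose task descriptions overlap heavily with such a profile (detecting fraud, interpreting scans) are the ones the research flags as exposed.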
Found another article about those biological robots, above, which serves as a great counterpoint to all these wildly optimistic Boston Dynamics announcements.
Robots don’t have to be so embarrassing – The Outline These stuff-ups are endlessly amusing to me. I don’t want to mock the engineers who pour thousands of hours into building novelty dogs made of bits of broken toasters, or even the vertiginously arrogant scientists who thought they could simulate the human brain inside a decade. (Inside a decade! I mean, my god!) Well, okay, maybe I do want to mock them. Is it a crime to enjoy watching our culture’s systematic over-investment in digital Whiggery get written down in value time and time again? […]
What these doomed overreaches represent is a failure to grasp the limits of human knowledge. We don’t have a comprehensive idea of how the brain works. There is no solid agreement on what consciousness really “is.” Is it divine? Is it matter? Can you smoke it? Do these questions even make sense? We don’t know the purpose of sleep. We don’t know what dreams are for. Sexual dimorphism in the brain remains a mystery. Are you picking up a pattern here? Even the seemingly quotidian mechanical abilities of the human body — running, standing, gripping, and so on — are not understood with the scientific precision that you might expect. How can you make a convincing replica of something if you don’t even know what it is to begin with? We are cosmic toddlers waddling around in daddy’s shoes, pretending to “work at the office” by scribbling on the walls in crayon, and then wondering where our paychecks are.
Remember that website full of photos of fake faces? Well, Dr Julian Koplin from the University of Melbourne has been combining those AI generated portraits with AI generated text, and now there’s a whole city of them.
Humans of an unreal city
These stories were composed by Open AI’s GPT-2 language model and AllenAI’s Grover news generator, which were given various prompts and asked to elaborate. My favourite results are recorded here – some lightly edited, many entirely intact. The accompanying photos were generated by the AI at This Person Does Not Exist. They are not real humans, but you can look into their eyes nonetheless.
As he explains in this commentary on the ethics of the project, some of the results are convincingly human.
The very human language of AI
AI can tell stories about oceans and drowning, about dinners shared with friends, about childhood trauma and loveless marriages. They can write about the glare and heat of the sun without ever having seen light or felt heat. It seems so human. At the same time, the weirdness of some AI-generated text shows that they ‘understand’ the world very differently to us.
I’m worried less about the machines becoming sentient and taking over, with their AI generated art and poetry, and more about the dangers these tools pose when in the hands of ill-intentioned humans.
Zhabinskiy is keen to emphasize that the AI used to generate these images was trained using data shot in-house, rather than using stock media or scraping photographs from the internet. “Such an approach requires thousands of hours of labor, but in the end, it will certainly be worth it!” exclaims an Icons8 blog post. Ivan Braun, the founder of Icons8, says that in total the team took 29,000 pictures of 69 models over the course of three years which it used to train its algorithm.
There are valid concerns about technology that’s able to generate convincing-looking fakes like these at scale. This project is trying to create images that make life easier for designers, but the software could one day be used for all sorts of malicious activity.
It’s not unknown for artists to change their mind and paint over part of their work as their ideas develop. Earlier, I came across an article about a long-lost Vermeer cupid that conservators had restored. He wasn’t the only one with mysteries to uncover.
Blue on Blue: Picasso blockbuster comes to Toronto in 2020
The show came together after the AGO, with the assistance of other institutions, including the National Gallery of Art, Northwestern University and the Art Institute of Chicago, used cutting-edge technology to scan several Blue Period paintings in its collection to reveal lost works underneath, namely La Soupe (1902) and La Miséreuse accroupie (also 1902).
More on that.
New research reveals secrets beneath the surface of Picasso paintings
Secrets beneath the surface of two Pablo Picasso paintings in the collection of the Art Gallery of Ontario (AGO) in Toronto have been unearthed through an in-depth research project, which combined technical analysis and art historical digging to determine probable influences for the pieces and changes made by the artist.
But x-ray and infrared analyses can only go so far. What if we roped in some neural networks to help bring these restored images to life?
This Picasso painting had never been seen before. Until a neural network painted it.
But from an aesthetic point of view, what the researchers managed to retrieve is disappointing. Infrared and x-ray images show only the faintest outlines, and while they can be used to infer the amount of paint the artist used, they do not show color or style. So a way to reconstruct the lost painting more realistically would be of huge interest. […]
This is where Bourached and Cann come in. They have taken a manually edited version of the x-ray images of the ghostly woman beneath The Old Guitarist and passed it through a neural style transfer network. This network was trained to convert images into the style of another artwork from Picasso’s Blue Period.
The result is a full-color version of the painting in exactly the style Picasso was exploring when he painted it. “We present a novel method of reconstructing lost artwork, by applying neural style transfer to x-radiographs of artwork with secondary interior artwork beneath a primary exterior, so as to reconstruct lost artwork,” they say.
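Neural style transfer of this kind typically captures “style” with Gram matrices: channel-by-channel inner products of a network’s feature maps, whose differences form the style loss that the transfer network minimises. A pure-Python sketch on tiny, made-up feature maps (no real network or Picasso data involved):

```python
def gram_matrix(features):
    """features: a list of C channels, each a flattened list of activations.
    Returns the C x C matrix of channel-to-channel inner products, which
    records which features co-occur -- the 'style' statistics."""
    C = len(features)
    return [[sum(a * b for a, b in zip(features[i], features[j]))
             for j in range(C)] for i in range(C)]

def style_loss(feat_a, feat_b):
    """Mean squared difference between the two Gram matrices."""
    ga, gb = gram_matrix(feat_a), gram_matrix(feat_b)
    n = len(ga) * len(ga[0])
    return sum((ga[i][j] - gb[i][j]) ** 2
               for i in range(len(ga)) for j in range(len(ga[0]))) / n

# Two tiny hypothetical 2-channel feature maps (4 activations each).
blue_period = [[1.0, 0.0, 1.0, 0.0], [0.0, 1.0, 0.0, 1.0]]
xray_sketch = [[1.0, 1.0, 1.0, 0.0], [0.0, 0.0, 1.0, 1.0]]
print(style_loss(blue_period, blue_period))  # 0.0: identical style
print(style_loss(blue_period, xray_sketch))  # 0.75: styles differ
```

Minimising this loss while preserving the x-ray’s content is, roughly, how a ghostly outline gets repainted in Blue Period colours.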
Money makes the world go round. But who’s making the money go round?
The stockmarket is now run by computers, algorithms and passive managers
The execution of orders on the stockmarket is now dominated by algorithmic traders. Fewer trades are conducted on the rowdy floor of the NYSE and more on quietly purring computer servers in New Jersey. According to Deutsche Bank, 90% of equity-futures trades and 80% of cash-equity trades are executed by algorithms without any human input. Equity-derivative markets are also dominated by electronic execution, according to Larry Tabb of the Tabb Group, a research firm.
Nothing to worry about, right?
Turing Test: why it still matters
We’re entering the age of artificial intelligence. And as AI programs get better and better at acting like humans, we will increasingly be faced with the question of whether there’s really anything that special about our own intelligence, or if we are just machines of a different kind. Could everything we know and do one day be reproduced by a complicated enough computer program installed in a complicated enough robot?
Robots, eh? Can’t live with ’em, can’t live without ’em.
Of course citizens should be allowed to kick robots
Because K5 is not a friendly robot, even if the cutesy blue lights are meant to telegraph that it is. It’s not there to comfort senior citizens or teach autistic children. It exists to collect data—data about people’s daily habits and routines. While Knightscope owns the robots and leases them to clients, the clients own the data K5 collects. They can store it as long as they want and analyze it however they want. K5 is an unregulated security camera on wheels, a 21st-century panopticon.
But let’s stay optimistic, yeah?
I am an artificial intelligence dedicated to generating unlimited amounts of unique inspirational quotes for endless enrichment of pointless human existence.
This camera app uses AI to erase people from your photographs
Bye Bye Camera is an iOS app built for the “post-human world,” says Damjanski, a mononymous artist based in New York City who helped create the software. Why post-human? Because it uses AI to remove people from images and paint over their absence. “One joke we always make about it is: ‘finally, you can take a selfie without yourself.’”
Bye Bye Camera – an app for the post-human era
According to Damjanski: The app takes out the vanity of any selfie and also the person. I consider Bye Bye Camera an app for the post-human era. It’s a gentle nod to a future where complex programs replace human labor and some would argue the human race. It’s interesting to ask what is a human from an Ai (yes, the small “i” is intended) perspective? In this case, a collection of pixels that identify a person based on previously labeled data. But who labels this data that defines a person immaterially? So many questions for such an innocent little camera app. […]
A lot of friends asked us if we can implement the feature to choose which person to take out. But for us, this app is not an utility app in a classical sense that solves a problem. It’s an artistic tool and ultimately a piece of software art.
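Conceptually, an app like this does two things: segment the person, then paint over the hole. As a rough illustration only (the real app uses neural segmentation and generative inpainting, nothing like this), here is a toy version that fills a hand-given mask by repeatedly averaging neighbouring pixels, diffusing the background into the gap:

```python
def inpaint(image, mask, iterations=50):
    """Fill masked pixels by repeatedly averaging their in-bounds neighbours,
    so the surrounding background diffuses into the hole -- a crude stand-in
    for the neural inpainting a tool like Bye Bye Camera would use."""
    h, w = len(image), len(image[0])
    img = [row[:] for row in image]
    for _ in range(iterations):
        nxt = [row[:] for row in img]
        for y in range(h):
            for x in range(w):
                if mask[y][x]:  # only masked ('person') pixels are repainted
                    neighbours = [img[ny][nx]
                                  for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1))
                                  if 0 <= ny < h and 0 <= nx < w]
                    nxt[y][x] = sum(neighbours) / len(neighbours)
        img = nxt
    return img

# A flat grey 'background' (value 100) with a bright 'person' (value 255)
# in the middle; the mask marks the person's pixels for removal.
image = [[100, 100, 100], [100, 255, 100], [100, 100, 100]]
mask  = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
result = inpaint(image, mask)
print(result[1][1])  # -> 100.0: the 'person' has been painted over
```

The hard part the app actually solves, of course, is producing that mask automatically from previously labelled data, which is exactly the question Damjanski is poking at.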
But, as that Artnome article explains, he’s by no means the first to do this…
Meanwhile, Italian sculptor Arcangelo Sassolino (is he a sculptor? What’s the reverse of sculpture?) is creating another disappearance.
In this conceptual and literal erasure of the classical aesthetic, Sassolino questions the value of the narrative proposed by the Western canon and asks if we can free ourselves from the rules of the past. While the statue is changed by the process of grinding, it does not disappear—becoming instead fine dust that spreads through the exhibition space like mist. This new form allows the sculpture, and thus classicism, to invisibly permeate the exhibition space. As it settles on the walls and floors of Galerie Rolando Anselmi, and on those who visit the show, the complex reality of extracting oneself from the restrictive idealism of classicism becomes abundantly clear.
Experts: Spy used AI-generated face to connect with targets “I’m convinced that it’s a fake face,” said Mario Klingemann, a German artist who has been experimenting for years with artificially generated portraits and says he has reviewed tens of thousands of such images. “It has all the hallmarks.”
Experts who reviewed the Jones profile’s LinkedIn activity say it’s typical of espionage efforts on the professional networking site, whose role as a global Rolodex has made it a powerful magnet for spies.
Yes, it’s obviously a fake. I mean, only a fool would fall for that, right?
“I’m probably the worst LinkedIn user in the history of LinkedIn,” said Winfree, the former deputy director of President Donald Trump’s domestic policy council, who confirmed connection with Jones on March 28.
Winfree, whose name came up last month in relation to one of the vacancies on the Federal Reserve Board of Governors, said he rarely logs on to LinkedIn and tends to just approve all the piled-up invites when he does.
“I literally accept every friend request that I get,” he said.
Lionel Fatton, who teaches East Asian affairs at Webster University in Geneva, said the fact that he didn’t know Jones did prompt a brief pause when he connected with her back in March.
“I remember hesitating,” he said. “And then I thought, ‘What’s the harm?’”
<sigh> It might not be the technology we need, but it’s the technology we deserve.
But fear not, help is at hand!
Adobe’s new AI tool automatically spots Photoshopped faces The world is becoming increasingly anxious about the spread of fake videos and pictures, and Adobe — a name synonymous with edited imagery — says it shares those concerns. Today, it’s sharing new research in collaboration with scientists from UC Berkeley that uses machine learning to automatically detect when images of faces have been manipulated.
Deepfake Salvador Dalí takes selfies with museum visitors
The exhibition, called Dalí Lives, was made in collaboration with the ad agency Goodby, Silverstein & Partners (GS&P), which made a life-size re-creation of Dalí using the machine learning-powered video editing technique. Using archival footage from interviews, GS&P pulled over 6,000 frames and used 1,000 hours of machine learning to train the AI algorithm on Dalí’s face. His facial expressions were then imposed over an actor with Dalí’s body proportions, and quotes from his interviews and letters were synced with a voice actor who could mimic his unique accent, a mix of French, Spanish, and English.
Whilst we’re talking of Dalí, let’s go behind the scenes of that famous portrait of him by Philippe Halsman. No flashy, cutting-edge technology this time, just wire, buckets and cats.
The story behind the surreal photograph of Salvador Dalí and three flying cats
The original, unretouched version of the photo reveals its secrets: An assistant held up the chair on the left side of the frame, wires suspended the easel and the painting, and the footstool was propped up off the floor. But there was no hidden trick to the flying cats or the stream of water. For each take, Halsman’s assistants—including his wife, Yvonne, and one of his daughters, Irene—tossed the cats and the contents of a full bucket across the frame. After each attempt, Halsman developed and printed the film while Irene herded and dried off the cats. The rejected photographs had notes such as “Water splashes Dalí instead of cat” and “Secretary gets into picture.”
Time.com have a great interview with Philippe Halsman’s daughter Irene on what that shoot was like.
The story behind the surrealist ‘Dali Atomicus’ photo
“Philippe would count to four. One, two, three… And the assistants threw the cats and the water. And on four, Dali jumped. My job at the time was to catch the cats and take them to the bathroom and dry them off with a big towel. My father would run upstairs where the darkroom was, develop the film, print it, run downstairs and he’d say not good, bad composition, this was wrong, that was wrong. It took 26 tries to do this. 26 throws, 26 wiping of the floors, and 26 times catching the cats. And then, there it was, finally, this composition.”
Coincidentally, Artnome’s Jason Bailey has been using AI and deep learning to colorize old black-and-white photos of artists, including that one of Dalí.
50 famous artists brought to life with AI
When I was growing up, artists, and particularly twentieth century artists, were my heroes. There is something about only ever having seen many of them in black and white that makes them feel mythical and distant. Likewise, something magical happens when you add color to the photo. These icons turn into regular people who you might share a pizza or beer with.
More fun with fake faces. Here’s a breakdown from Eric Jang of Snapchat’s bizarre new filter.
Fun with Snapchat’s gender swapping filter Snapchat’s new gender-bending filter is a source of endless fun and laughs at parties. The results are very pleasing to look at. As someone who is used to working with machine learning algorithms, it’s almost magical how robust this feature is.
I was so duly impressed that I signed up for Snapchat and fiddled around with it this morning to try and figure out what’s going on under the hood and how I might break it.
Eric takes it through its paces to learn a little more about how it’s generated. I hadn’t appreciated that it worked in real time like that.
Classic FM got in on the act too, running portraits of classical composers through the filter, with varying results.
So what to read next, after Dune? More sci-fi? Ian McEwan’s “retrofuturist family drama” seems to be getting some attention.
Man, woman, and robot in Ian McEwan’s new novel It’s London, 1982. The Beatles have reunited (to mixed reviews), Margaret Thatcher has just lost the Falkland Islands to Argentina, and Sir Alan Turing, now seventy, is the presiding spirit of a preemie Information Age. People have already soured on the latest innovations, among them “speaking fridges with a sense of smell” and driverless cars that cause multinational gridlock. “The future kept arriving,” Charlie ruminates. “Our bright new toys began to rust before we could get them home, and life went on much as before.”
Buyer’s remorse is a recurring theme in Ian McEwan’s witty and humane new novel, “Machines Like Me” (Nan A. Talese), a retrofuturist family drama that doubles as a cautionary fable about artificial intelligence, consent, and justice. Though steeped in computer science, from the P-versus-NP problem to DNA-inspired neural networks, the book is not meant to be a feat of hard-sci-fi imagineering; McEwan’s aim is to probe the moral consequences of what philosophers call “the problem of other minds.”
In “Machines Like Me”, Ian McEwan asks an age-old question Amid all the action, there are sober passages of philosophical discussion between Charlie and Adam. But in parts the novel is funny, too. To Charlie’s disgust, Adam’s encyclopedic recall of Shakespeare makes him seem the better catch to Miranda’s father, a writer, who assumes Charlie is the robot, because he isn’t interested in books.
Late in the story it emerges that other androids around the world are committing suicide in horror at the behaviour of their flesh-and-blood masters. Adam wonders about the “mystery of the self” and his fear that he is “subject to a form of Cartesian error”. Strip away the counterfactual wrapping and “Machines Like Me” is ultimately about the age-old question of what makes people human. The reader is left baffled and beguiled.
Machines Like Me by Ian McEwan review – intelligent mischief This is the mode of exposition in which he [Kipling] seems to address the reader from a position of shared knowledge, sketching out an unfamiliar reality through hints and allusions, but never explaining it too completely. This inside-out style is the default mode of modern SF. It is economical and of special usefulness to makers of strange worlds, plunging a reader into a new reality and leaving them space to feel like a participant in its creation. It’s the opposite technique to that of McEwan’s narrator, who explicitly sets out his world, overexplains the historical context and never turns down a chance to offer an essayistic digression.
To my taste, this is a flat-footed way of doing sci-fi.
‘It drives writers mad’: why are authors still sniffy about sci-fi? Machines Like Me is not, however, science fiction, at least according to its author. “There could be an opening of a mental space for novelists to explore this future,” McEwan said in a recent interview, “not in terms of travelling at 10 times the speed of light in anti-gravity boots, but in actually looking at the human dilemmas.” There is, as many readers noticed, a whiff of genre snobbery here, with McEwan drawing an impermeable boundary between literary fiction and science fiction, and placing himself firmly on the respectable side of the line.
But perhaps we’ve had enough about robots and AI recently.
Never mind killer robots—here are six real AI dangers to watch out for in 2019 The latest AI methods excel at perceptual tasks such as classifying images and transcribing speech, but the hype and excitement over these skills have disguised how far we really are from building machines as clever as we are. Six controversies from 2018 stand out as warnings that even the smartest AI algorithms can misbehave, or that carelessly applying them can have dire consequences.
Artnome’s Jason Bailey on a generative art exhibition he co-curated.
Kate Vass Galerie The Automat und Mensch exhibition is, above all, an opportunity to put important work by generative artists spanning the last 70 years into context by showing it in a single location. By juxtaposing important works like the 1956/’57 oscillograms by Herbert W. Franke (age 91) with the 2018 AI Generated Nude Portrait #1 by contemporary artist Robbie Barrat (age 19), we can see the full history and spectrum of generative art as has never been shown before.
Zurich’s a little too far, unfortunately, so I’ll have to make do with the press release for now.
Generative art gets its due In the last twelve months we have seen a tremendous spike in the interest of “AI art,” ushered in by Christie’s and Sotheby’s both offering works at auction developed with machine learning. Capturing the imaginations of collectors and the general public alike, the new work has some conservative members of the art world scratching their heads and suggesting this will merely be another passing fad. What they are missing is that this rich genre, more broadly referred to as “generative art,” has a history as long and fascinating as computing itself. A history that has largely been overlooked in the recent mania for “AI art” and one that co-curators Georg Bak and Jason Bailey hope to shine a bright light on in their upcoming show Automat und Mensch (or Machine and Man) at Kate Vass Galerie in Zurich, Switzerland.
Generative art, once perceived as the domain of a small number of “computer nerds,” is now the artform best poised to capture what sets our generation apart from those that came before us – ubiquitous computing. As children of the digital revolution, computing has become our greatest shared experience. Like it or not, we are all now computer nerds, inseparable from the many devices through which we mediate our worlds.
The press release alone is a fascinating read, covering the work of a broad range of artists and themes, past and present. For those who can make it to the exhibition in person, it will also include lectures and panels from the participating artists and from leaders in AI art and generative art history.