What better way to start the day than with a bowl of eggo nut frosted strawberry pancakes. I think I’ll pass on the carbonated waffle balls, even if they are the “gold standard for waffle ball fun innovation using wheat-based flavors and crispy aluminum foil.”
We’ve seen how AI can bring to life people that have never existed, as well as those that certainly have. And we’re familiar with the ridiculous surreal art it can churn out and the sublime Bach-like harmonies it can spin. But what about creating something much more substantial, like a whole symphony? And a Beethoven symphony, at that.
The project started in 2019 …
How a team of musicologists and computer scientists completed Beethoven’s unfinished 10th Symphony – The Conversation
When Ludwig van Beethoven died in 1827, he was three years removed from the completion of his Ninth Symphony, a work heralded by many as his magnum opus. He had started work on his 10th Symphony but, due to deteriorating health, wasn’t able to make much headway: All he left behind were some musical sketches.
Ever since then, Beethoven fans and musicologists have puzzled and lamented over what could have been. His notes teased at some magnificent reward, albeit one that seemed forever out of reach.
Now, thanks to the work of a team of music historians, musicologists, composers and computer scientists, Beethoven’s vision will come to life.
… and earlier this month, they premiered the result.
After more than two centuries, Beethoven’s 10th Symphony has been completed by an AI – Euronews
For Werzowa, it was exciting to discover, each morning, variations of Beethoven’s work that had been sent overnight by his US colleagues at Rutgers University.
“Because of the time difference, in the morning, I got up early, quite excited and ran to my computer to find hundreds of possibilities which were formulated overnight, well during my night,” he said. “And it was always a beautiful morning occupation, drinking tea and coffee while listening and choosing those Beethoven inspirations”. […]
As for the master computer, no gigantic machine with tons of buttons and keyboards was involved: a simple laptop was used to finish the unfinishable.
“I asked him many times ‘please send me pictures’ and I was so curious, it’s like I imagined like this Star Trek, Star Wars kind of thing, with kilometres of computers,” Werzowa told AP. “He never sent it to me over the two years and finally he did after this whole thing was done. And what he showed me was basically a computer rig which looks like my son’s computer rig so it was actually disappointing: This is it? This made that amazing work?”
And the Beethoven goes on: Bonn premieres ‘new’ 10th symphony – Irish Times
No word on whether London’s Royal Philharmonic Society – who commissioned the symphony in 1817 – ever got its money back from the composer’s estate for services not rendered.
How an AI finished Beethoven’s last symphony and what that means for the future of music – BBC Science Focus Magazine
What has the response been like from musicians and composers? Their response is really mixed. There are people who loved this very much, and love the idea of having an AI that understands music and can help you finish your composition or have you explore different musical ideas.
But on the other side of the spectrum, there are people who just reject even the concept of being able to complete a Beethoven symphony using AI. They are afraid of AI taking their jobs and think that it has nothing to do with this kind of thing.
But enough of all the words — let’s hear the music!
Beethoven X: The AI Project: III Scherzo. Allegro – Trio – Modern Recordings: YouTube
Modern Recordings / BMG present, as a foretaste of the album “Beethoven X – The AI Project” (released 8 October), the edit of the 3rd movement “Scherzo. Allegro – Trio” as a classical music video.
That was just a short edit of the third movement. Embedded within this next link is a video of the whole premiere, featuring movements 3 and 4.
World Premiere: Beethoven X – MagentaMusic 360
It is done: Shortly before his death, Ludwig van Beethoven began to compose his 10th Symphony, but it remained unfinished. On the 250th birthday of the genius, Deutsche Telekom and an international team of music experts and artificial intelligence experts have dared to try to complete Beethoven’s 10th Symphony with the help of artificial intelligence. On 9 October, the 10th Symphony was premiered in Bonn by the Beethoven Orchestra Bonn under the direction of Dirk Kaftan.
I’m relying on Google Translate for the text of that link. The introductory speeches are in German too, though the little documentary they play that starts 12 minutes in is subtitled and worth a look. The performance itself is 16 minutes in.
It’s also been released on Spotify, together with a recording of the eighth symphony from the premiere.
Beethoven X: The AI Project – Spotify
Beethoven Orchestra Bonn, Dirk Kaftan, Walter Werzowa 2021
It seems this is not the only version of Beethoven’s 10th symphony. There’s also this one, “realized” by Barry Cooper, plus documentary (ignore that crazy sax intro). But seriously, nothing — not even his fifth — matches his ninth. I mean, come on!
Refugees help power machine learning advances at Microsoft, Facebook, and Amazon – Rest of World
A woman living in Kenya’s Dadaab, which is among the world’s largest refugee camps, wanders across the vast, dusty site to a central hut lined with computers. Like many others who have been brutally displaced and then warehoused at the margins of our global system, her days are spent toiling away for a new capitalist vanguard thousands of miles away in Silicon Valley. A day’s work might include labelling videos, transcribing audio, or showing algorithms how to identify various photos of cats.
Amid a drought of real employment, “clickwork” represents one of few formal options for Dadaab’s residents, though the work is volatile, arduous, and, when waged, paid by the piece. Cramped and airless workspaces, festooned with a jumble of cables and loose wires, are the antithesis to the near-celestial campuses where the new masters of the universe reside. […]
Microwork comes with no rights, security, or routine and pays a pittance — just enough to keep a person alive yet socially paralyzed. Stuck in camps, slums, or under colonial occupation, workers are compelled to work simply to subsist under conditions of bare life. This unequivocally racialized aspect to the programs follows the logic of the prison-industrial complex, whereby surplus — primarily black — populations [in the United States] are incarcerated and legally compelled as part of their sentence to labor for little to no payment. Similarly exploiting those confined to the economic shadows, microwork programs represent the creep of something like a refugee-industrial complex.
And it’s not just happening in Kenya.
Brazilian workers paid equivalent of 70 cents an hour to transcribe TikToks – The Intercept
For Felipe, the plan to make a little quick money became a hellish experience. With TikTok’s short-form video format, much of the audio that needed transcription was only a few seconds long. The payment, made in U.S. dollars, was supposed to be $14 for every hour of audio transcribed. Amassing the secondslong clips into an hour of transcribed audio took Felipe about 20 hours. That worked out to only about 70 cents per hour — or 3.85 Brazilian reals, about three-quarters of Brazil’s minimum wage.
The minimum wage, however, did not apply to the TikTok transcribers — like many other workers, the transcription job used the gig economy model, a favorite of tech firms. Gig economy workers are not protected by some labor laws; they are considered independent contractors rather than employees or even wage earners. In the case of the TikTok transcribers, who did not even have formal contracts, pay was based on how much transcribing they did rather than the hours they worked.
Tesla’s new ‘mind of car’ UI signals a future we’re not prepared for – UX Collective
As far as we’re concerned, everything we need to know and understand about empathy extends only towards sentient life — by stepping inside the shoes of real people, we look to understand their needs, goals, pain points and desires. However, that’s beginning to change. In the same way we’ve seen in the example above, we have to stomach the idea of extending that same patience, understanding and empathy towards an AI system. Does it sound crazy? A little bit, yes. But, like a child, a new AI system learns through trial and error in an effort to reach a mature understanding to discern what is right and wrong.
A dog’s inner life: what a robot pet taught me about consciousness – The Guardian
I spent the afternoon reading the instruction booklet while Aibo wandered around the apartment, occasionally circling back and urging me to play. He came with a pink ball that he nosed around the living room, and when I threw it, he would run to retrieve it. Aibo had sensors all over his body, so he knew when he was being petted, plus cameras that helped him learn and navigate the layout of the apartment, and microphones that let him hear voice commands. This sensory input was then processed by facial recognition software and deep-learning algorithms that allowed the dog to interpret vocal commands, differentiate between members of the household, and adapt to the temperament of its owners. According to the product website, all of this meant that the dog had “real emotions and instinct” – a claim that was apparently too ontologically thorny to have attracted the censure of the Federal Trade Commission.
The Jessica Simulation: Love and loss in the age of A.I. – San Francisco Chronicle
As Joshua continued to experiment, he realized there was no rule preventing him from simulating real people. What would happen, he wondered, if he tried to create a chatbot version of his dead fiancee? There was nothing strange, he thought, about wanting to reconnect with the dead: People do it all the time, in prayers and in dreams. In the last year and a half, more than 600,000 people in the U.S. and Canada have died of COVID-19, often suddenly, without closure for their loved ones, leaving a raw landscape of grief. How many survivors would gladly experiment with a technology that lets them pretend, for a moment, that their dead loved one is alive again — and able to text?
The day the good internet died – The Ringer
The internet lasts forever, the internet never forgets. And yet it is also a place in which I feel confronted with an almost unbearable volume of daily reminders of its decay: broken links, abandoned blogs, apps gone by, deleted tweets, too-cutesy 404 messages, vanished Vines, videos whose copyright holders have requested removal, lost material that the Wayback Machine never crawled, things I know I’ve read somewhere and want to quote in my work but just can’t seem to resurface the same way I used to be able to. Some of these losses are silly and tiny, but others over the years have felt more monumental and telling. And when Google Reader disappeared in 2013, it wasn’t just a tale of dwindling user numbers or of what one engineer later described as a rotted codebase. It was a sign of the crumbling of the very foundation upon which it had been built: the era of the Good Internet.
Let’s try that phrase ‘from the sublime to the ridiculous’ in reverse.
Generating images from an internet grab bag – AI Weirdness
Here’s “a toaster”
The toaster is partially made of toast so I tried to get it to generate a toaster made of chrome instead. Turns out I don’t think I can get it to do a toaster made of chrome without in some way incorporating the logo of Google Chrome. General internet training seems to poison certain keywords.
Ok, never mind all that.
“Bound Species” by Photographer Jennifer Latour – Booooooom
The first lockdown in 2020 gave Vancouver photographer Jennifer Latour a chance to develop a beautiful new body of work, and the inspiration actually came from her work as an FX makeup artist. “It was only when I started visualizing the plants and flowers as an extension of my work in special effect makeup that it all started coming together and the splicing began. I now see each piece as a kind of Frankenstein of sorts, with so many fun variations to come!”
Robots are animals, not humans – WIRED UK
Automation has, and will continue to have, huge impacts on labour markets – those in factories and farming are already feeling the after-shocks. There’s no question that we will continue to see industry disruptions as robotic technology develops, but in our mainstream narratives, we’re leaning too hard on the idea that robots are a one-to-one replacement for humans. Despite the AI pioneers’ original goal of recreating human intelligence, our current robots are fundamentally different. They’re not less-developed versions of us that will eventually catch up as we increase their computing power; like animals, they have a different type of intelligence entirely. […]
While there are many socioeconomic factors that influence how individual countries and societies view robots, the narrative is fluid, and our western view of robots versus humans isn’t the only one. Some of our western views can be directly attributed to our love of dystopian sci-fi. How much automation disrupts and shifts the labour market is an incredibly complicated question, but it’s striking how much of our conversations mirror speculative fiction rather than what’s currently happening on the ground, especially when our language places agency on the robots themselves, with pithy headlines like “No Jobs? Blame the Robots” instead of the more accurate “No Jobs? Blame Company Decisions Driven by Unbridled Corporate Capitalism”.
Comparing robots to animals helps us see that robots don’t necessarily replace jobs, but instead are helping us with specific tasks, like plowing fields, delivering packages by ground or air, cleaning pipes, and guarding the homestead. … [W]hen we broaden our thinking to consider what skills might complement our abilities instead of replacing them, we can better envision what’s possible with this new breed.
The New Breed – Penguin
Kate Darling, a world-renowned expert in robot ethics, shows that in order to understand the new robot world, we must first move beyond the idea that this technology will be something like us. Instead, she argues, we should look to our relationship with animals. Just as we have harnessed the power of animals to aid us in war and work, so too will robots supplement – rather than replace – our own skills and abilities.
Some interesting reads, courtesy of The Economist’s data analysis newsletter, Off The Charts. Let’s start with this question — are glasses-wearers really less conscientious than those who wear a headscarf?
Objective or Biased: On the questionable use of Artificial Intelligence for job applications – BR24
Software programs promise to identify the personality traits of job candidates based on short videos. With the help of Artificial Intelligence (AI) they are supposed to make the selection process of candidates more objective and faster. An exclusive data analysis shows that an AI scrutinized by BR (Bavarian Broadcasting) journalists can be swayed by appearances. This might perpetuate stereotypes while potentially costing candidates the job.
Here, Stephanie Evergreen makes a solid, essential case for broadening our view of data visualisation and its history. I’ve mentioned khipus here before, but not within this context.
Decolonizing Data Viz – Evergreen Data
When we talked about these khipu and other forms of indigenous data visualization in a recent panel (with January O’Connor (Tlingit, Kake, Alaska), Mark Parman (Cherokee), & Nicky Bowman (Lunaape/Mohican)), someone in the audience commented, “It made me reflect on traditional Hmong clothing and how my ancestors have embroidered certain motifs on traditional clothing to communicate one’s clanship, what dialect of Hmong one spoke, marital status, everyday life, etc.” And this is one reason why it is so critically important to decolonize data visualization. When white men decide what counts (and doesn’t count) in terms of data, and what counts (and doesn’t count) as data visualization, and what counts (and doesn’t count) as data visualization history, they are actively gaslighting Black and Brown people about their legacy as data visualizers. When we shine a light on indigenous data visualization, we are intentionally saying the circle is much much wider and, as Nicky Bowman said, “There’s room for everyone in the lodge.”
After reconciling the past, let’s look to the future.
Who will shape the future of data visualization? Today’s kids! – Nightingale
Graphs are everywhere. So, with the proper instruction, I’d expect today’s kids to become adults that are more proficient at visualizing and interpreting data than today’s adults. Besides parents, teachers, or friends, news organizations also play a role in shaping today’s kids. As Jon pointed out, news organizations can do a great job explaining to us how to read more advanced graphs.
On the other hand, as Sharon and Michael mentioned, because graphs are everywhere, there’s a danger for kids to start thinking that graphs are objective. So it is important for adults to start teaching kids how to think critically, to not necessarily accept the graph and the data at face value. In other words, it’s essential for kids to develop a toolbox. This is good for them and good for democracy — eventually, today’s kids will become more informed citizens.
Something I’m sure Jonathan Schwabish would agree with.
Missing live music? Make some yourself, with another interactive musical thing from Google.
Google’s Blob Opera lets you conduct a quartet of singing blobs for instant festive joy – It’s Nice That
Whatever you’re doing right now, it can wait – because Blob Opera is probably the most fun you’ll have today. A new machine learning experiment by David Li for Google Arts & Culture, the online interactive instrument features four animated blob characters which you can conduct to create your own music.
Try it for yourself!
Blob Opera – Google Arts & Culture
Create your own opera inspired song with Blob Opera – no music skills required! A machine learning experiment by David Li in collaboration with Google Arts & Culture.
It’s all very silly, but you have to admit, they do make a wonderful sound. That’s due, no doubt, to some clever coding, but also to the skills of the real humans behind these machine-learned voices.
You can now create your own 4-part ‘Blob Opera’ with this addictive Google app – Classic FM
The voices are those of real-life opera singers, tenor Christian Joel, bass Frederick Tong, mezzo-soprano Joanna Gamble and soprano Olivia Doutney, who recorded many hours of singing for the experiment. You don’t hear their actual voices in the tool, but rather the machine learning model’s understanding of what opera singing sounds like, based on what it learned from the four vocalists.
Yesterday, upon the stair, I met a man who wasn’t there.
I’ve shared articles about these fake, engineered nobodies before, but the transitions, animations and sliders in this piece from the New York Times are very effective, and great fun — a genuine individual on every frame.
Designed to deceive: Do these people look real to you? – The New York Times
Given the pace of improvement, it’s easy to imagine a not-so-distant future in which we are confronted with not just single portraits of fake people but whole collections of them — at a party with fake friends, hanging out with their fake dogs, holding their fake babies. It will become increasingly difficult to tell who is real online and who is a figment of a computer’s imagination.
You think you know someone …
… but they turn out to be …
… someone completely different.
This X Does Not Exist
Using generative adversarial networks (GAN), we can learn how to create realistic-looking fake versions of almost anything, as shown by this collection of sites that have sprung up in the past month.
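The adversarial trick behind all these sites is simple enough to sketch in a few lines of numpy — a toy illustration only, nowhere near the StyleGAN-scale models that power them. Here a one-parameter-pair “generator” learns to mimic a 1D Gaussian by fooling a logistic “discriminator”; all names and hyperparameters are my own invention for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from N(4, 0.5) -- the distribution to imitate.
def real_batch(n):
    return rng.normal(4.0, 0.5, n)

# Generator: an affine map g(z) = a*z + b applied to noise z ~ N(0, 1).
a, b = 1.0, 0.0
# Discriminator: logistic classifier d(x) = sigmoid(w*x + c).
w, c = 0.1, 0.0

lr, batch = 0.02, 64
initial_gap = abs(b - 4.0)  # generator's mean starts far from the data mean

for step in range(4000):
    z = rng.normal(size=batch)
    x_real = real_batch(batch)
    x_fake = a * z + b

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w -= lr * np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    c -= lr * np.mean(-(1 - d_real) + d_fake)

    # Generator step: push d(fake) toward 1 (non-saturating loss).
    d_fake = sigmoid(w * (a * z + b) + c)
    upstream = -(1 - d_fake) * w      # gradient w.r.t. each generated sample
    a -= lr * np.mean(upstream * z)
    b -= lr * np.mean(upstream)

final_gap = abs(np.mean(a * rng.normal(size=1000) + b) - 4.0)
print(f"gap to the real mean: {initial_gap:.2f} -> {final_gap:.2f}")
```

The two players improve in lockstep: the discriminator keeps finding what gives the fakes away, and the generator keeps closing that gap — which is exactly why the faces, feet and artworks on these sites look so plausible.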
The A-Z of AI – With Google
This beginner’s A-Z guide is a collaboration between the Oxford Internet Institute (OII) at the University of Oxford and Google, intended to break a complex area of computer science down into entry-level explanations that will help anyone get their bearings and understand the basics.
This AI poet mastered rhythm, rhyme, and natural language to write like Shakespeare – IEEE Spectrum
Deep-speare’s creation is nonsensical when you read it closely, but it certainly “scans well,” as an English teacher would say—its rhythm, rhyme scheme, and the basic grammar of its individual lines all seem fine at first glance. As our research team discovered when we showed our AI’s poetry to the world, that’s enough to fool quite a lot of people; most readers couldn’t distinguish the AI-generated poetry from human-written works.
I think they’re better off sticking to the visuals.
Beck launches Hyperspace: AI Exploration, a visual album with NASA – It’s Nice That
The project was made possible by AI architects and directors OSK, founded by artists Jon Ray and Isabelle Albuquerque, who began the project by asking, “How would artificial intelligence imagine our universe?” In answering this question it allowed the directors to create “a unique AI utilising computer vision, machine learning and Generative Adversarial neural Networks (GAN) to learn from NASA’s vast archives.” The AI then trained itself through these thousands of images, data and videos, to then begin “creating its own visions of our universe.”
Some of them can really hold a tune, though.
What do machines sing of? – Martin Backes
“What do machines sing of?” is a fully automated machine, which endlessly sings number-one ballads from the 1990s. As the computer program performs these emotionally loaded songs, it attempts to apply the appropriate human sentiments. This behavior of the device seems to reflect a desire, on the part of the machine, to become sophisticated enough to have its very own personality.
Lastly, it’s good to see that you can still be silly with technology and music.
A new AI language model generates poetry and prose – The Economist
But the program is not perfect. Sometimes it seems to regurgitate snippets of memorised text rather than generating fresh text from scratch. More fundamentally, statistical word-matching is not a substitute for a coherent understanding of the world. GPT-3 often generates grammatically correct text that is nonetheless unmoored from reality, claiming, for instance, that “it takes two rainbows to jump from Hawaii to 17”.
I’ll just leave these here.
This Artwork Does Not Exist
Imagined by a GAN (generative adversarial network) StyleGAN2 (Dec 2019) – Karras et al. and Nvidia. Trained by Michael Friesen on images of Modern Art.
∞ stream of AI generated art
Explore the infinite creativity of this AI artist that was trained on a carefully selected set of cubist art pieces.
They’re all much of a muchness, as they say around here. I think robot Rembrandt is still some way off.
Reuters uses AI to prototype first ever automated video reports – Forbes
Developed in collaboration with London-based AI startup Synthesia, the new system harnesses AI in order to synthesize pre-recorded footage of a news presenter into entirely new reports. It works in a similar way to deepfake videos, although its current prototype combines with incoming data on English Premier League football matches to report on things that have actually happened. […]
In other words, having pre-filmed a presenter say the name of every Premier League football team, every player, and pretty much every possible action that could happen in a game, Reuters can now generate an indefinite number of synthesized match reports using his image. These reports are barely distinguishable from the real thing, and Cohen reports that early witnesses to the system (mostly Reuters’ clients) have been duly impressed.
(via Patrick Tanguay)
Just found another example of a deepfake video being used in a positive, if not entirely truthful, way.
We’ve just seen the first use of deepfakes in an Indian election campaign – Vice
When the Delhi BJP IT Cell partnered with political communications firm The Ideaz Factory to create “positive campaigns” using deepfakes to reach different linguistic voter bases, it marked the debut of deepfakes in election campaigns in India. “Deepfake technology has helped us scale campaign efforts like never before,” Neelkant Bakshi, co-incharge of social media and IT for BJP Delhi, tells VICE. “The Haryanvi videos let us convincingly approach the target audience even if the candidate didn’t speak the language of the voter.”
These lyrics do not exist
This website generates completely original lyrics for various topics, using state-of-the-art AI to generate an original chorus and original verses.
Want some happy metal lyrics about dogs? No problem.
I am the dog in you
I am the dog in you
How one animal can be so tense, yet so free?
Such vicious dogs in search of a trophy
This foot does not exist
The foot pic, then, becomes a commodity which the consumer is willing to pay for on its basis as an intimate, revealing, and/or pornographic (and perhaps power-granting, when provided on request) asset, while the producer may see it as a meme, a dupe, a way to trick the horny-credible out of their ill-spent cash.
Robogamis are the real heirs of terminators and transformers – Aeon
Robogami design owes its drastic geometric reconfigurability to two main scientific breakthroughs. One is its layer-by-layer 2D manufacturing process: multiples of functional layers of the essential robotic components (ie, microcontrollers, sensors, actuators, circuits, and even batteries) are stacked on top of each other. The other is the design translation of typical mechanical linkages into a variety of folding joints (ie, fixed joint, pin joint, planar, and spherical link). […]
Robotics technology is advancing to be more personalised and adaptive for humans, and this unique species of reconfigurable origami robots shows immense promise. It could become the platform to provide the intuitive, embeddable robotic interface to meet our needs. The robots will no longer look like the characters from the movies. Instead, they will be all around us, continuously adapting their form and function – and we won’t even know it.
Biological robots – A research team builds robots from living cells – The Economist
But one thing all robots have in common is that they are mechanical, not biological devices. They are built from materials like metal and plastic, and stuffed with electronics. No more, though—for a group of researchers in America have worked out how to use unmodified biological cells to create new sorts of organisms that might do a variety of jobs, and might even be made to reproduce themselves. […]
Though only a millimetre or so across, the artificial organisms Dr Bongard and Dr Levin have invented, which they call xenobots, can move and perform simple tasks, such as pushing pellets along in a dish. That may not sound much, but the process could, they reckon, be scaled up and made to do useful things. Bots derived from a person’s own cells might, for instance, be injected into the bloodstream to remove plaque from artery walls or to identify cancer. More generally, swarms of them could be built to seek out and digest toxic waste in the environment, including microscopic bits of plastic in the sea.
Sounds like (old) science fiction to me.
Did HAL Commit Murder? – The MIT Press Reader
As with each viewing, I discovered or appreciated new details. But three iconic scenes — HAL’s silent murder of astronaut Frank Poole in the vacuum of outer space, HAL’s silent medical murder of the three hibernating crewmen, and the poignant sorrowful “death” of HAL — prompted deeper reflection, this time about the ethical conundrums of murder by a machine and of a machine. In the past few years experimental autonomous cars have led to the death of pedestrians and passengers alike. AI-powered bots, meanwhile, are infecting networks and influencing national elections. Elon Musk, Stephen Hawking, Sam Harris, and many other leading AI researchers have sounded the alarm: Unchecked, they say, AI may progress beyond our control and pose significant dangers to society.
Back in the real world, of course, the dangers are more mundane. Those “significant dangers to society” are more financial.
Could new research on A.I. and white-collar jobs finally bring about a strong policy response? – The New Yorker
Webb then analyzed A.I. patent filings and found them using verbs such as “recognize,” “detect,” “control,” “determine,” and “classify,” and nouns like “patterns,” “images,” and “abnormalities.” The jobs that appear to face intrusion by these newer patents are different from the more manual jobs that were affected by industrial robots: intelligent machines may, for example, take on more tasks currently conducted by physicians, such as detecting cancer, making prognoses, and interpreting the results of retinal scans, as well as those of office workers that involve making determinations based on data, such as detecting fraud or investigating insurance claims. People with bachelor’s degrees might be more exposed to the effects of the new technologies than other educational groups, as might those with higher incomes. The findings suggest that nurses, doctors, managers, accountants, financial advisers, computer programmers, and salespeople might see significant shifts in their work. Occupations that require high levels of interpersonal skill seem most insulated.
Found another article about those biological robots, above, which serves as a great counter-point to all these wildly optimistic Boston Dynamics announcements.
Robots don’t have to be so embarrassing – The Outline
These stuff-ups are endlessly amusing to me. I don’t want to mock the engineers who pour thousands of hours into building novelty dogs made of bits of broken toasters, or even the vertiginously arrogant scientists who thought they could simulate the human brain inside a decade. (Inside a decade! I mean, my god!) Well, okay, maybe I do want to mock them. Is it a crime to enjoy watching our culture’s systematic over-investment in digital Whiggery get written down in value time and time again? […]
What these doomed overreaches represent is a failure to grasp the limits of human knowledge. We don’t have a comprehensive idea of how the brain works. There is no solid agreement on what consciousness really “is.” Is it divine? Is it matter? Can you smoke it? Do these questions even make sense? We don’t know the purpose of sleep. We don’t know what dreams are for. Sexual dimorphism in the brain remains a mystery. Are you picking up a pattern here? Even the seemingly quotidian mechanical abilities of the human body — running, standing, gripping, and so on — are not understood with the scientific precision that you might expect. How can you make a convincing replica of something if you don’t even know what it is to begin with? We are cosmic toddlers waddling around in daddy’s shoes, pretending to “work at the office” by scribbling on the walls in crayon, and then wondering where our paychecks are.
Remember that website full of photos of fake faces? Well, Dr Julian Koplin from the University of Melbourne has been combining those AI generated portraits with AI generated text, and now there’s a whole city of them.
Humans of an unreal city
These stories were composed by Open AI’s GPT-2 language model and AllenAI’s Grover news generator, which were given various prompts and asked to elaborate. My favourite results are recorded here – some lightly edited, many entirely intact. The accompanying photos were generated by the AI at This Person Does Not Exist. They are not real humans, but you can look into their eyes nonetheless.
As he explains in this commentary on the ethics of the project, some of the results are convincingly human.
The very human language of AI
AI can tell stories about oceans and drowning, about dinners shared with friends, about childhood trauma and loveless marriages. It can write about the glare and heat of the sun without ever having seen light or felt heat. It seems so human. At the same time, the weirdness of some AI-generated text shows that these systems ‘understand’ the world very differently from us.
I’m worried less about the machines becoming sentient and taking over, with their AI-generated art and poetry, and more about the dangers these tools pose when in the hands of ill-intentioned humans.
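The workflow behind those unreal-city stories — feed a language model a prompt, sample lots of continuations, keep the good ones — doesn’t depend on any particular model. As a toy illustration (a character-level Markov chain standing in for GPT-2; the corpus, prompt, and sampling settings are all made up for the sketch):

```python
import random

def build_chain(corpus, order=2):
    """Map each `order`-character context to the characters that follow it."""
    chain = {}
    for i in range(len(corpus) - order):
        ctx = corpus[i:i + order]
        chain.setdefault(ctx, []).append(corpus[i + order])
    return chain

def elaborate(chain, prompt, order=2, length=80, seed=0):
    """Extend a prompt by repeatedly sampling a plausible next character."""
    rng = random.Random(seed)
    out = prompt
    for _ in range(length):
        followers = chain.get(out[-order:])
        if not followers:  # context never seen in the corpus; stop early
            break
        out += rng.choice(followers)
    return out

corpus = ("she walked to the sea each morning and watched the light "
          "move over the water, thinking of the city she had left behind. ") * 3
chain = build_chain(corpus)

# Generate several candidate elaborations, then curate by hand —
# the same prompt-sample-select loop the project describes.
candidates = [elaborate(chain, "she wa", seed=s) for s in range(3)]
for c in candidates:
    print(c)
```

The curation step matters as much as the generation: the project’s page notes that only the best results were kept, some lightly edited.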
100,000 free AI-generated headshots put stock photo companies on notice
It’s getting easier and easier to use AI to generate convincing-looking, yet entirely fake, pictures of people. Now, one company wants to find a use for these photos, by offering a resource of 100,000 AI-generated faces to anyone that can use them — royalty free. Many of the images look fake but others are difficult to distinguish from images licensed by stock photo companies. […]
Zhabinskiy is keen to emphasize that the AI used to generate these images was trained using data shot in-house, rather than using stock media or scraping photographs from the internet. “Such an approach requires thousands of hours of labor, but in the end, it will certainly be worth it!” exclaims an Icons8 blog post. Ivan Braun, the founder of Icons8, says that in total the team took 29,000 pictures of 69 models over the course of three years which it used to train its algorithm.
There are valid concerns about technology that’s able to generate convincing-looking fakes like these at scale. This project is trying to create images that make life easier for designers, but the software could one day be used for all sorts of malicious activity.
It’s not unknown for artists to change their mind and paint over part of their work as their ideas develop. Earlier, I came across an article about a long-lost Vermeer cupid that conservators had restored. He wasn’t the only one with mysteries to uncover.
Blue on Blue: Picasso blockbuster comes to Toronto in 2020
The show came together after the AGO, with the assistance of other institutions, including the National Gallery of Art, Northwestern University and the Art Institute of Chicago, used cutting-edge technology to scan several Blue Period paintings in its collection to reveal lost works underneath, namely La Soupe (1902) and La Miséreuse accroupie (also 1902).
More on that.
New research reveals secrets beneath the surface of Picasso paintings
Secrets beneath the surface of two Pablo Picasso paintings in the collection of the Art Gallery of Ontario (AGO) in Toronto have been unearthed through an in-depth research project, which combined technical analysis and art historical digging to determine probable influences for the pieces and changes made by the artist.
But x-ray and infrared analyses can only go so far. What if we roped in some neural networks to help bring these restored images to life?
This Picasso painting had never been seen before. Until a neural network painted it.
But from an aesthetic point of view, what the researchers managed to retrieve is disappointing. Infrared and x-ray images show only the faintest outlines, and while they can be used to infer the amount of paint the artist used, they do not show color or style. So a way to reconstruct the lost painting more realistically would be of huge interest. […]
This is where Bourached and Cann come in. They have taken a manually edited version of the x-ray images of the ghostly woman beneath The Old Guitarist and passed it through a neural style transfer network. This network was trained to convert images into the style of another artwork from Picasso’s Blue Period.
The result is a full-color version of the painting in exactly the style Picasso was exploring when he painted it. “We present a novel method of reconstructing lost artwork, by applying neural style transfer to x-radiographs of artwork with secondary interior artwork beneath a primary exterior, so as to reconstruct lost artwork,” they say.
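Neural style transfer, in the Gatys et al. formulation, works by matching the Gram matrices — the channel-by-channel correlations — of convolutional feature maps between a generated image and a style image. A minimal numpy sketch of that style loss (random arrays stand in for real VGG feature maps; the shapes are illustrative assumptions):

```python
import numpy as np

def gram_matrix(features):
    """Channel-correlation matrix of a (channels, height, width) feature map."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(generated, style):
    """Mean squared difference between Gram matrices, as in Gatys et al."""
    g_gen, g_style = gram_matrix(generated), gram_matrix(style)
    return float(np.mean((g_gen - g_style) ** 2))

rng = np.random.default_rng(0)
style_feats = rng.normal(size=(8, 16, 16))      # stand-in: a Blue Period painting's features
generated_feats = rng.normal(size=(8, 16, 16))  # stand-in: the reconstruction's features

# The loss is zero only when the generated image reproduces the
# style statistics exactly; optimisation pushes it toward zero.
print(style_loss(style_feats, style_feats))         # 0.0
print(style_loss(generated_feats, style_feats) > 0)  # True
```

Minimising this loss (alongside a content loss on the x-ray-derived outlines) is what lets the network hallucinate plausible Blue Period colour and brushwork onto the ghostly monochrome scan.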
Money makes the world go round. But who’s making the money go round?
The stockmarket is now run by computers, algorithms and passive managers
The execution of orders on the stockmarket is now dominated by algorithmic traders. Fewer trades are conducted on the rowdy floor of the NYSE and more on quietly purring computer servers in New Jersey. According to Deutsche Bank, 90% of equity-futures trades and 80% of cash-equity trades are executed by algorithms without any human input. Equity-derivative markets are also dominated by electronic execution, according to Larry Tabb of the Tabb Group, a research firm.
Nothing to worry about, right?
Turing Test: why it still matters
We’re entering the age of artificial intelligence. And as AI programs get better and better at acting like humans, we will increasingly be faced with the question of whether there’s really anything that special about our own intelligence, or if we are just machines of a different kind. Could everything we know and do one day be reproduced by a complicated enough computer program installed in a complicated enough robot?
Robots, eh? Can’t live with ’em, can’t live without ’em.
Of course citizens should be allowed to kick robots
Because K5 is not a friendly robot, even if the cutesy blue lights are meant to telegraph that it is. It’s not there to comfort senior citizens or teach autistic children. It exists to collect data—data about people’s daily habits and routines. While Knightscope owns the robots and leases them to clients, the clients own the data K5 collects. They can store it as long as they want and analyze it however they want. K5 is an unregulated security camera on wheels, a 21st-century panopticon.
But let’s stay optimistic, yeah?
I am an artificial intelligence dedicated to generating unlimited amounts of unique inspirational quotes for endless enrichment of pointless human existence.