Keep an eye on the time

A mesmerising, meditative film introducing us to Faramarz, a London-based Iranian watchmaker. The world may seem chaotic, but “everything is in exactly the right place.”

The Watchmaker: A philosophy of craft and life
Filled with the pulses of numerous ticking watch hands, this short documentary from the UK filmmaker Marie-Cécile Embleton profiles a London-based Iranian watchmaker as he muses on the delicate and temporal nature of his work. As Faramarz meticulously polishes wood, shapes metal and positions springs, his personal philosophy emerges – one that values the minutiae of moment-to-moment experiences, and finds craft in all things.

Mitka Engebretsen is another watchmaker working in the UK. Here’s his set-up, somewhat shinier, though no less hypnotic.

Mitka’s vintage watch service

He lets us follow along on his blog when he’s servicing his clients’ vintage watches. The intricacy and precision are wonderful to see, however far out of my reach the watches themselves may be. Not that there’s anything wrong with my current watch — I love it!

What would he make of this video from Watchfinder & Co on the level of expertise that goes into producing fake watches these days, fakes that will still set you back £1,000?

This fake Rolex is the most accurate yet
Two years ago, we investigated just how far fake watches have come when we compared a real Rolex Submariner with a fake one. For anyone thinking that fake watches were the easy-to-spot domain of the seaside tat shop, we demonstrated that it’s harder to spot a fake than you might think. Two years on, and it’s got even harder.

One way round that, of course, is to not have a watch at all.

A Norwegian city wants to abolish time
“You have to go to work, and even after work, the clock takes up your time,” Hveding told Gizmodo. “I have to do this, I have to do that. My experience is that [people] have forgotten how to be impulsive, to decide that the weather is good, the Sun is shining, I can just live.” Even if it’s 3 a.m.

AI Spy

It seems we’re not the only ones playing with that AI fake face website.

Experts: Spy used AI-generated face to connect with targets
“I’m convinced that it’s a fake face,” said Mario Klingemann, a German artist who has been experimenting for years with artificially generated portraits and says he has reviewed tens of thousands of such images. “It has all the hallmarks.”

Experts who reviewed the Jones profile’s LinkedIn activity say it’s typical of espionage efforts on the professional networking site, whose role as a global Rolodex has made it a powerful magnet for spies.

Yes, it’s obviously a fake. I mean, only a fool would fall for that, right?

“I’m probably the worst LinkedIn user in the history of LinkedIn,” said Winfree, the former deputy director of President Donald Trump’s domestic policy council, who confirmed connection with Jones on March 28.

Winfree, whose name came up last month in relation to one of the vacancies on the Federal Reserve Board of Governors, said he rarely logs on to LinkedIn and tends to just approve all the piled-up invites when he does.

“I literally accept every friend request that I get,” he said.

Lionel Fatton, who teaches East Asian affairs at Webster University in Geneva, said the fact that he didn’t know Jones did prompt a brief pause when he connected with her back in March.

“I remember hesitating,” he said. “And then I thought, ‘What’s the harm?’”

<sigh> It might not be the technology we need, but it’s the technology we deserve.

But fear not, help is at hand!

Adobe’s new AI tool automatically spots Photoshopped faces
The world is becoming increasingly anxious about the spread of fake videos and pictures, and Adobe — a name synonymous with edited imagery — says it shares those concerns. Today, it’s sharing new research in collaboration with scientists from UC Berkeley that uses machine learning to automatically detect when images of faces have been manipulated.

But as Benedict Evans points out in a recent newsletter,

Potentially useful, but one suspects this is just an arms race, and of course the people anyone would want to trick with such images won’t be using the tool.

Satire or harmful deception?

Fake videos — they’re just a bit of fun that we’re happy to spread around on social media, right? Whilst they help sway a general election in the BBC’s dystopian drama Years and Years, we’re not really fooled by them, are we?

Well, perhaps not yet, but they’ve got US politicians worried enough about their upcoming presidential election in 2020 to officially look into it all.

Congress grapples with how to regulate deepfakes
“Now is the time for social media companies to put in place policies to protect users from this kind of misinformation, not in 2021 after viral deepfakes have polluted the 2020 elections,” Schiff said. “By then it will be too late.”

At the outset of the hearing, Schiff came out challenging the “immunity” given to platforms under Section 230 of the Communications Decency Act, asking panelists if Congress should make changes to the law that doesn’t currently hold social media companies liable for the content on their platforms.

Another example.

Deepfakes: Imagine All the People
Of course this isn’t real. The video was done by a company called Canny AI, which offers services like “replace the dialogue in any footage” and “lip-sync your dubbed content in any language”. That’s cool and all — picture episodes of Game of Thrones or Fleabag where the actors automagically lip-sync along to dubbed French or Chinese — but this technique can also be used to easily create what are referred to as deepfakes, videos made using AI techniques in which people convincingly say and do things they actually did not do or say.

A ‘fake’ arms race, for real

This essay from Cailin O’Connor, co-author of The Misinformation Age: How False Beliefs Spread, frames the issue of online misinformation as an arms race.

The information arms race can’t be won, but we have to keep fighting
What makes this problem particularly thorny is that internet media changes at dizzying speed. When the radio was first invented, as a new form of media, it was subject to misinformation. But regulators quickly adapted, managing, for the most part, to subdue such attempts. Today, even as Facebook fights Russian meddling, WhatsApp has become host to rampant misinformation in India, leading to the deaths of 31 people in rumour-fuelled mob attacks over two years.

Participating in an informational arms race is exhausting, but sometimes there are no good alternatives. Public misinformation has serious consequences. For this reason, we should be devoting the same level of resources to fighting misinformation that interest groups are devoting to producing it. All social-media sites need dedicated teams of researchers whose full-time jobs are to hunt down and combat new kinds of misinformation attempts.

I know I’m a pretty pessimistic person generally, but this all sounds quite hopeless. Here’s how one group of people is responding to the challenge of misuse of information and fake videos — by producing their own.

This deepfake of Mark Zuckerberg tests Facebook’s fake video policies
The video, created by artists Bill Posters and Daniel Howe in partnership with advertising company Canny, shows Mark Zuckerberg sitting at a desk, seemingly giving a sinister speech about Facebook’s power. The video is framed with broadcast chyrons that say “We’re increasing transparency on ads,” to make it look like it’s part of a news segment.

“We will treat this content the same way we treat all misinformation on Instagram,” a spokesperson for Instagram told Motherboard. “If third-party fact-checkers mark it as false, we will filter it from Instagram’s recommendation surfaces like Explore and hashtag pages.”

More fun with fakes

More videos — from the sublime to the ridiculous.

There’s a scarily good ‘deepfakes’ YouTube channel that’s quietly growing – and it’s freaking everyone out
Russian researchers hit the headlines last week by reanimating oil-painted portraits and photos into talking heads using AI. Now, here’s another reminder that the tools to craft deepfakes are widely available to just about anyone with the right skills: the manipulated videos posted by the YouTuber Ctrl Shift Face are particularly creepy.

Club Fight – Episode 2 [DeepFake]

The transitions are especially smooth in another clip, with a comedian dipping in and out of impressions of Al Pacino and Arnold Schwarzenegger, and there are now clips from Terminator with Stallone which look very peculiar.

Here’s that earlier article and video mentioned above, about reanimating oil paintings.

AI can now animate the Mona Lisa’s face or any other portrait you give it. We’re not sure we’re happy with this reality
There have been lots of similar projects so the idea isn’t particularly novel. But what’s intriguing in this paper, hosted by arXiv, is that the system doesn’t require tons of training examples and seems to work after seeing an image just once. That’s why it works with paintings like the Mona Lisa.

Few-Shot Adversarial Learning of Realistic Neural Talking Head Models

Jump to 4:18 to see their work with famous faces such as Marilyn Monroe and Salvador Dalí (who’s already no stranger to AI), and to 5:08 to see the Mona Lisa as you’ve never seen her before.

Dalí’s back

Another art and AI post, but with a difference. An exhibition at the Dalí Museum in Florida, with a very special guest.

Deepfake Salvador Dalí takes selfies with museum visitors
The exhibition, called Dalí Lives, was made in collaboration with the ad agency Goodby, Silverstein & Partners (GS&P), which made a life-size re-creation of Dalí using the machine learning-powered video editing technique. Using archival footage from interviews, GS&P pulled over 6,000 frames and used 1,000 hours of machine learning to train the AI algorithm on Dalí’s face. His facial expressions were then imposed over an actor with Dalí’s body proportions, and quotes from his interviews and letters were synced with a voice actor who could mimic his unique accent, a mix of French, Spanish, and English.

Behind the Scenes: Dali Lives

Whilst we’re talking of Dalí, let’s go behind the scenes of that famous portrait of him by Philippe Halsman. No flashy, cutting-edge technology this time, just wire, buckets and cats.


The story behind the surreal photograph of Salvador Dalí and three flying cats
The original, unretouched version of the photo reveals its secrets: An assistant held up the chair on the left side of the frame, wires suspended the easel and the painting, and the footstool was propped up off the floor. But there was no hidden trick to the flying cats or the stream of water. For each take, Halsman’s assistants—including his wife, Yvonne, and one of his daughters, Irene—tossed the cats and the contents of a full bucket across the frame. After each attempt, Halsman developed and printed the film while Irene herded and dried off the cats. The rejected photographs had notes such as “Water splashes Dalí instead of cat” and “Secretary gets into picture.”


Time.com have a great interview with Philippe Halsman’s daughter Irene on what that shoot was like.

The story behind the surrealist ‘Dali Atomicus’ photo
“Philippe would count to four. One, two, three… And the assistants threw the cats and the water. And on four, Dali jumped. My job at the time was to catch the cats and take them to the bathroom and dry them off with a big towel. My father would run upstairs where the darkroom was, develop the film, print it, run downstairs and he’d say not good, bad composition, this was wrong, that was wrong. It took 26 tries to do this. 26 throws, 26 wiping of the floors, and 26 times catching the cats. And then, there it was, finally, this composition.”

Coincidentally, Artnome’s Jason Bailey has been using AI and deep learning to colorize old black-and-white photos of artists, including that one of Dalí.

50 famous artists brought to life with AI
When I was growing up, artists, and particularly twentieth century artists, were my heroes. There is something about only ever having seen many of them in black and white that makes them feel mythical and distant. Likewise, something magical happens when you add color to the photo. These icons turn into regular people who you might share a pizza or beer with.

Political persuasion 2.0

I’ve been enjoying (if that’s the right word) Wired UK’s recent articles on how technology is being used against us.

A bitter turf war is raging on the Brexit Wikipedia page
Other debates revolve around the Brexit jargon and the page’s 19-word-strong glossary. Is Leaver the best way to refer to Brexit supporters, or is Brexiteer more common? And is “Remoaner” the remain-supporting version of “Brextremist” or is the latter somehow nastier? A recent question on the Brexit talk page, where editors discuss changes to the article, raises another question about the term Quitlings. Is it something to do with quislings, and if so, shouldn’t the glossary mention that? For now, the consensus is that yes, it is a reference to the Norwegian Nazi sympathiser Vidkun Quisling – whose name has evolved into a synonym for traitor – but that the term isn’t widely used enough to justify including it in the article.

The Brexit Party is winning social media. These numbers prove it
The extraordinary level of this online engagement is inextricable from the populist nature of Farage’s message. “Polarised content does brilliantly, hence Farage has significantly more reach than any of the main political figures of the UK,” says Harris. “His content will receive significant numbers of shares, comments (both positive and negative) and likes and negative dislikes, and will have more organic reach than content from mainstream political parties that people like to see in their timeline but don’t like or comment on it because they passively agree with it.”

The EU elections are next week. Fake news is not the problem
Information operations are rarely about changing the things people believe, but changing the way they feel. Anger and fear are not things we can correct with better facts. As we head into the EU election, this fact should be at the forefront of our minds. Media monitoring is vital, and the work of fact-checking organisations to identify, correct and call out false information is a necessary and valuable part of this. But it is crucial that we look beyond the accuracy of the news, and zero in on how the media ecosystem as a whole is being manipulated. Inflammatory trending stories, harassment of journalists, feverish online debates – the public discourse behind all of these is being pushed and prodded by those who want to see us angry, divided, and mistrustful of each other.

The secret behind Gina Miller’s anti-Brexit tactical voting crusade
Miller’s Remain United campaign uses a technique called multilevel regression and poststratification (MRP) to analyse polling data and identify which Remain-supporting party stands the best chance of winning seats in the European elections on May 23. Remainers are encouraged to vote for those parties in order to secure a sizeable pro-EU representation from the United Kingdom in the European parliament.

Making an exhibition of yourself

The faces are real this time, though the galleries aren’t.

Put your head into gallery
Georgian artist Tezi Gabunia wants to trigger a dialogue about hyper realistic issues in art. His modus operandi is falsification. In his work “put your head into gallery”, Gabunia wanted to bring the galleries and the art to the people, and not the other way around.

[images: visitors’ heads appearing inside Tezi Gabunia’s miniature galleries]

Whilst the images are striking, I wonder if he’s just pandering to our vanity, though. I mean, look at the queues.

[image: the queues of people waiting their turn]

Put Your Head into Gallery

More fake face fun

More fun with fake faces. Here’s a breakdown from Eric Jang of Snapchat’s bizarre new filter.

Fun with Snapchat’s gender swapping filter
Snapchat’s new gender-bending filter is a source of endless fun and laughs at parties. The results are very pleasing to look at. As someone who is used to working with machine learning algorithms, it’s almost magical how robust this feature is.

I was so duly impressed that I signed up for Snapchat and fiddled around with it this morning to try and figure out what’s going on under the hood and how I might break it.

Eric puts it through its paces to learn a little more about how it’s generated. I hadn’t appreciated that it worked in real time like that.

[images: beanie, turn, blonde]

Update 15/05/2019

Classic FM got in on the act, taking a few portraits of classical composers through the filter, with varying results.

We used Snapchat’s gender-swapping filter on famous composers… and the results are terrifying
5. Beethoven

[image: Beethoven through the filter]

Female Ludwig is a very sulky teenager.

Faith in fakes

Everything went according to plan: the art thieves made off with an incredibly valuable Brueghel. Only it wasn’t.

Italian police reveal ‘€3m painting’ stolen from church was a copy
The town’s mayor, Daniele Montebello, was among the few people privy to the subterfuge, and had to keep up the pretence in the hours after the heist, telling journalists that losing the painting was “a hard blow for the community”.

“Rumours were circulating that someone could steal the work, and so the police decided to put it in a safe place, replacing it with a copy and installing some cameras,” Montebello said on Wednesday night. “I thank the police but also some of the churchgoers, who noticed that the painting on display wasn’t the original but kept up the secret.”

It seems nobody’s updated ArtNet News yet, even though they’re referencing this Guardian article.

Thieves just used a hammer to steal a $3.4 million Pieter Bruegel the Younger painting from a remote Italian church
Using a hammer to break the case, the thieves lifted the picture—worth an estimated $3.4 million, according to press reports—and made off in a Peugeot car. Police believe two people were involved in the heist. They are now investigating CCTV footage from around the town and the province for clues.

Here comes nobody

Yesterday there were millions of us, today there’s nobody here at all.

AI fake face website launched
A software developer has created a website that generates fake faces, using artificial intelligence (AI). Thispersondoesnotexist.com generates a new lifelike image each time the page is refreshed, using technology developed by chipmaker Nvidia. Some visitors to the website say they have been amazed by the convincing nature of some of the fakes, although others are more clearly artificial.

[image: a face generated by Thispersondoesnotexist.com]

They look like us, and now they can write like us too.

AI text writing technology too dangerous to release, creators claim
Researchers at OpenAI say they have created an AI writer which “generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks and performs rudimentary reading comprehension, machine translation, question answering and summarisation — all without task-specific training.”

But they are withholding it from public use “due to our concerns about malicious applications of the technology”.

Of course, it’s not just AI that’s trying to pull the wool over our eyes.

How to catch a catfisher
Last year, I found out someone was using my photos to catfish women. He stole dozens of my online photos – including selfies, family photos, baby photos, photos with my ex – and, pretending to be me, he would then approach women and spew a torrent of abuse at them.

It took me months to track him down, and now I’m about to call him.

Machines pretending to be people, people pretending to be other people. At least we’re truthful with ourselves, right?

Be honest, how much do you edit YOUR selfies?
“It’s time to acknowledge the damaging effects that social media has on people’s self-image,” says Rankin of the project, which is part of a wider initiative to explore the impact of imagery on our mental health.

“Social media has made everyone into their own brand. People are creating a two-dimensional version of themselves at the perfect angle, with the most flattering lighting and with any apparent ‘flaws’ removed. Mix this readily-available technology with the celebrities and influencers flaunting impossible shapes with impossible faces and we’ve got a recipe for disaster.”

[image: from Rankin’s project]

Don’t believe all you read

Imagine being one of the journalists, editors or fact-checkers at Der Spiegel, the German weekly news magazine, when this article was being produced, having to own up to this catalogue of failure.

Claas Relotius reporter forgery scandal
It has now become clear that Claas Relotius, 33 years old, one of DER SPIEGEL’s best writers, winner of multiple awards and a journalistic idol of his generation, is neither a reporter nor a journalist. Rather, he produces beautifully narrated fiction. Truth and lies are mixed together in his articles and some, at least according to him, were even cleanly reported and free of fabrication. Others, he admits, were embellished with fudged quotes and other made-up facts. Still others were entirely fabricated. During his confession on Thursday, Relotius said, verbatim: “It wasn’t about the next big thing. It was the fear of failure.” And: “The pressure not to fail grew as I became more successful.”

Story after story is dissected, and lies revealed. The consequences and implications for journalism worldwide are already being played out.

Trump ambassador uses Der Spiegel fabrication scandal to take aim at journalists
Grenell also got into a debate with a correspondent for the German public broadcaster ZDF, whom he told to “stop defending fake news and fabricated stories.”

“We do,” the correspondent, Andreas Kynast, wrote back. “Do you?”

YouTube’s conspiracy problem rears its ugly head again

The recent wildfire in California has been devastating for the towns and communities involved. This video gives us a glimpse of what some people have had to face. It’s worth pointing out that this wasn’t filmed at night.

Family drive through flames escaping California wildfire

A video on YouTube, shared via the Guardian News channel. But it’s YouTube that’s again in the news, over the blatantly false videos it hosts and the mechanisms for publicising them.

YouTube lets California fire conspiracy theories run wild
The Camp Fire in California has killed at least 79 people, left 699 people unaccounted for, and created more than a thousand migrants in Butte County, California. In these circumstances, reliable information can literally be a matter of life [and] death. But on YouTube, conspiracy theories are thriving.

I don’t want to go into the theories themselves; the specifics aren’t important.

But the point isn’t that these conspiracy theorists are wrong. They obviously are. The point is that vloggers have realized that they can amass hundreds of thousands of views by advancing false narratives, and YouTube has not adequately stopped these conspiracy theories from blossoming on its platform, despite the fact that many people use it for news. A Pew Research survey found that 38% of adults consider YouTube as a source of news, and 53% consider the site important for helping them understand what’s happening in the world.

Combine those statistics with these, from the Guardian, and you can see the problem.

Study shows 60% of Britons believe in conspiracy theories
“Conspiracy theories are, and as far as we can tell always have been, a pretty important part of life in many societies, and most of the time that has gone beneath the radar of the established media,” said Naughton. “Insofar as people thought of conspiracy theories at all, we thought of them as crazy things that crazy people believed, [and that] didn’t seem to have much impact on democracy.”

That dismissive attitude changed after the Brexit vote and the election of Trump in 2016, he said.

And that’s why these accounts of conspiracy theory vloggers manipulating YouTube to get millions of views for their blatantly false videos are so important.

The issue of conspiracy theorists in a society far predates platforms like YouTube. However, platforms such as YouTube provide new fuel and routes through which these conspiracy theories can spread. In other words, conspiracy theories are the disease, but YouTube is a whole new breed of carrier.

How on (the round) earth can we fix this?

Online ‘truth decay’

Fake news is old news, but I came across a new phrase today — well, new to me, anyway.

You thought fake news was bad? Deep fakes are where truth goes to die
Citron, along with her colleague Bobby Chesney, began working on a report outlining the extent of the potential danger. As well as considering the threat to privacy and national security, both scholars became increasingly concerned that the proliferation of deep fakes could catastrophically erode trust between different factions of society in an already polarized political climate.

In particular, they could foresee deep fakes being exploited by purveyors of “fake news”. Anyone with access to this technology – from state-sanctioned propagandists to trolls – would be able to skew information, manipulate beliefs, and in so doing, push ideologically opposed online communities deeper into their own subjective realities.

“The marketplace of ideas already suffers from truth decay as our networked information environment interacts in toxic ways with our cognitive biases,” the report reads. “Deep fakes will exacerbate this problem significantly.”

Maybe I need to stop reading about fake news, it’s not good for my blood pressure. Just a couple more, then I’ll stop.

After murder and violence, here’s how WhatsApp will fight fake news
WhatsApp has announced it is giving 20 different research groups $50,000 to help it understand the ways that rumours and fake news spread on its platform. The groups are based around the world and will be responsible for producing reports on how the messaging app has impacted certain regions.

The range of areas that are being studied highlight the scale of misinformation that WhatsApp faces. One set of researchers from the UK and US are set to see how misinformation can lead to disease outbreaks in elderly people, one will look at how information was shared on WhatsApp in the 2018 Brazilian elections and another is examining how posts can go viral on the messaging service.

Inside the British Army’s secret information warfare machine
This new warfare poses a problem that neither the 77th Brigade, the military, nor any democratic state has come close to answering yet. It is easy to work out how to deceive foreign publics, but far, far harder to know how to protect our own. Whether it is Russia’s involvement in the US elections, over Brexit, during the novichok poisoning or the dozens of other instances that we already know about, the cases are piling up. In information warfare, offence beats defence almost by design. It’s far easier to put out lies than convince everyone that they’re lies. Disinformation is cheap; debunking it is expensive and difficult.

Even worse, this kind of warfare benefits authoritarian states more than liberal democratic ones. For states and militaries, manipulating the internet is trivially cheap and easy to do. The limiting factor isn’t technical, it’s legal. And whatever the overreaches of Western intelligence, they still do operate in legal environments that tend to more greatly constrain where, and how widely, information warfare can be deployed. China and Russia have no such legal hindrances.

Publishers withdraw more than 120 gibberish papers

“I wasn’t aware of the scale of the problem, but I knew it definitely happens. We do get occasional e-mails from good citizens letting us know where SCIgen papers show up,” says Jeremy Stribling, who co-wrote SCIgen when he was at MIT and now works at VMware, a software company in Palo Alto, California.

http://www.nature.com/news/publishers-withdraw-more-than-120-gibberish-papers-1.14763