Olympic-sized hoax? ‘Lost’ Krautrock warm-up tapes mysteriously surface – SPIN
Neither interview includes photographs of Zeichnete, and he doesn’t appear in a series of promotional videos for the release … And the more you listen to the music, the more it begins to sound both too pristine, given the tapes’ alleged age, and too stylistically perfect in its aping of Neu! and Kraftwerk. The resemblance is almost uncanny.
Rather than bringing us together, social media can often pull us apart. We all know this, and it seems the platforms themselves know this too.
Facebook executives shut down efforts to make the site less divisive – WSJ
“Our algorithms exploit the human brain’s attraction to divisiveness,” read a slide from a 2018 presentation. “If left unchecked,” it warned, Facebook would feed users “more and more divisive content in an effort to gain user attention & increase time on the platform.”
But of course the platforms aren’t solely to blame. The users have to take some responsibility for what they write and share. Take this user, for example, just your typical conspiracy theorist.
See those little ‘Get the facts’ warning labels, suggesting he’s spreading fake news and making unsubstantiated claims?
Twitter labels Trump’s false claims with warning for first time – The Guardian
The company’s decision on Tuesday afternoon to affix labels to a series of Trump tweets about California’s election planning is the result of a new policy debuted on 11 May. They were applied – hours after the tweets initially went out – because Trump’s tweets violated Twitter’s “civic integrity policy”, a company spokeswoman confirmed, which bars users from “manipulating or interfering in elections or other civic processes”, such as by posting misleading information that could dissuade people from participating in an election.
He didn’t like that, as you can imagine, and is trying to retaliate.
Trump to sign executive order on social media on Thursday: White House – Reuters
The officials gave no further details. It was unclear how Trump could follow through on the threat of shutting down privately owned companies including Twitter Inc. The dispute erupted after Twitter on Tuesday for the first time tagged Trump’s tweets about unsubstantiated claims of fraud in mail-in voting with a warning prompting readers to fact check the posts.
But is this just the beginning?
Trump sows doubt on voting. It keeps some people up at night. – The New York Times
The anxiety has intensified in recent weeks as the president continues to attack the integrity of mail voting and insinuate that the election system is rigged, while his Republican allies ramp up efforts to control who can vote and how. Just last week, Mr. Trump threatened to withhold funding from states that defy his wishes on expanding mail voting, while also amplifying unfounded claims of voter fraud in battleground states. […]
The task force began with 65 possibilities before narrowing the list early this year to eight potential calamities, including natural disasters, a successful foreign hack of voting machines, a major candidate’s challenging the election and seeking to delegitimize the results, and a president who refuses to participate in a peaceful transfer of power. Among the scenarios they eliminated when making final cuts in January, ironically, was a killer pandemic that ravaged the country and kept people homebound before Election Day.
That election’s going to be interesting, to say the least.
So here in the UK we’re to have another three weeks of lockdown. I’m not sure what state I’ll be in after that; I’m already starting to fray at the edges. What’s keeping me up all night isn’t so much how we’ll get through these next few weeks, but what comes after?
Our pandemic summer – The Atlantic
The pandemic is not a hurricane or a wildfire. It is not comparable to Pearl Harbor or 9/11. Such disasters are confined in time and space. The SARS-CoV-2 virus will linger through the year and across the world. “Everyone wants to know when this will end,” said Devi Sridhar, a public-health expert at the University of Edinburgh. “That’s not the right question. The right question is: How do we continue?”
Not a clue. We sit around and wait for a vaccine, but until then— what?
After social distancing, a strange purgatory awaits – The Atlantic
We will get used to seeing temperature-screening stations at public venues. If America’s testing capacity improves and results come back quickly, don’t be surprised to see nose swabs at airports. Airlines may contemplate whether flights can be reserved for different groups of passengers—either high- or low-risk. Mass-transit systems will set new rules; don’t be surprised if they mandate masks too.
Can things just go back to how they were before?
Welcome to our new timeline – Kottke
I’m wondering — how many people are aware that this is going to be our reality for the next few years? There is no “normal” we’re going back to, only weird uncharted waters.
We’re all struggling with it. I know I am. Thankfully, help is still around.
Stephen Fry’s tips for managing virus-based anxiety – BBC News
Stephen Fry has been giving advice on dealing with anxiety and stress whilst self-isolating during the coronavirus pandemic. He told the BBC’s Andrew Marr “anxiety and stress are almost as virulent as this coronavirus”.
Some people, however, are less than helpful.
Facebook will add anti-misinformation posts to your News Feed if you liked fake coronavirus news – The Verge
Today’s update follows a scathing report by nonprofit group Avaaz, which called the site an “epicenter of coronavirus misinformation” and cited numerous posts containing dangerous health advice and fake cures. The company pushed back on this accusation, saying it’s removed “hundreds of thousands of pieces of misinformation” in the past weeks.
A video from The Kid Should See This that I’ll definitely be making sure my kids see. They love all these ‘hacks’, but they aren’t all what they seem. #gross
Debunking fake ‘kitchen hacks’ that have billions of views – The Kid Should See This
Popular kitchen hack videos rack up millions and billions of views on YouTube and Facebook. They’re surprising! They’re fun to watch! And they look pretty easy to do. But beware: Not all of these ideas, tips, and tricks actually work.
Reuters uses AI to prototype first ever automated video reports – Forbes
Developed in collaboration with London-based AI startup Synthesia, the new system harnesses AI in order to synthesize pre-recorded footage of a news presenter into entirely new reports. It works in a similar way to deepfake videos, although its current prototype combines with incoming data on English Premier League football matches to report on things that have actually happened. […]
In other words, having pre-filmed a presenter say the name of every Premier League football team, every player, and pretty much every possible action that could happen in a game, Reuters can now generate an indefinite number of synthesized match reports using his image. These reports are barely distinguishable from the real thing, and Cohen reports that early witnesses to the system (mostly Reuters’ clients) have been duly impressed.
(via Patrick Tanguay)
Just found another example of a deepfake video being used in a way that is, if not truthful, at least positive.
We’ve just seen the first use of deepfakes in an Indian election campaign – Vice
When the Delhi BJP IT Cell partnered with political communications firm The Ideaz Factory to create “positive campaigns” using deepfakes to reach different linguistic voter bases, it marked the debut of deepfakes in election campaigns in India. “Deepfake technology has helped us scale campaign efforts like never before,” Neelkant Bakshi, co-incharge of social media and IT for BJP Delhi, tells VICE. “The Haryanvi videos let us convincingly approach the target audience even if the candidate didn’t speak the language of the voter.”
These lyrics do not exist
This website generates completely original lyrics for various topics, using state-of-the-art AI to generate an original chorus and original verses.
Want some happy metal lyrics about dogs? No problem.
I am the dog in you
I am the dog in you
How one animal can be so tense, yet so free?
Such vicious dogs in search of a trophy
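Sites like this use large neural language models, but the basic idea of statistical text generation is much older. As a toy illustration (emphatically not how the site itself works), here’s a minimal Markov-chain generator in Python that learns which word follows which in a corpus and then walks the chain:

```python
import random

def build_chain(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    chain = {}
    for current, following in zip(words, words[1:]):
        chain.setdefault(current, []).append(following)
    return chain

def generate(chain, start, length, seed=0):
    """Walk the chain from `start`, picking a random successor each step."""
    rng = random.Random(seed)  # fixed seed makes the output repeatable
    word, output = start, [start]
    for _ in range(length - 1):
        successors = chain.get(word)
        if not successors:
            break  # dead end: no word ever followed this one
        word = rng.choice(successors)
        output.append(word)
    return " ".join(output)

corpus = "I am the dog in you I am the dog in me so tense yet so free"
chain = build_chain(corpus)
print(generate(chain, "I", 8))
```

Even this crude approach produces vaguely lyric-shaped word salad; the neural version just learns vastly richer statistics over far more text.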
This foot does not exist
The foot pic, then, becomes a commodity which the consumer is willing to pay for on its basis as an intimate, revealing, and/or pornographic (and perhaps power-granting, when provided on request) asset, while the producer may see it as a meme, a dupe, a way to trick the horny-credible out of their ill-spent cash.
This July saw the 50th anniversary of the moon landing, and I shared a number of landing-related links, including one about a speech written for President Nixon in case the worst should happen, titled “In the Event of Moon Disaster.”
Well, it’s been given the deepfake treatment.
A deepfake Nixon delivers eulogy for the Apollo 11 astronauts – Kottke
Fifty years ago, not even Stanley Kubrick could have faked the Moon landing. But today, visual effects and techniques driven by machine learning are so good that it might be relatively simple, at least the television broadcast part of it. In a short demonstration of that technical supremacy, a group from MIT has created a deepfake version of Nixon delivering that disaster speech. …
The implications of being able to so convincingly fake the televised appearance of a former US President are left as an exercise to the reader.
Remember that website full of photos of fake faces? Well, Dr Julian Koplin from the University of Melbourne has been combining those AI generated portraits with AI generated text, and now there’s a whole city of them.
Humans of an unreal city
These stories were composed by Open AI’s GPT-2 language model and AllenAI’s Grover news generator, which were given various prompts and asked to elaborate. My favourite results are recorded here – some lightly edited, many entirely intact. The accompanying photos were generated by the AI at This Person Does Not Exist. They are not real humans, but you can look into their eyes nonetheless.
As he explains in this commentary on the ethics of the project, some of the results are convincingly human.
The very human language of AI
AI can tell stories about oceans and drowning, about dinners shared with friends, about childhood trauma and loveless marriages. They can write about the glare and heat of the sun without ever having seen light or felt heat. It seems so human. At the same time, the weirdness of some AI-generated text shows that they ‘understand’ the world very differently to us.
I’m worried less about the machines becoming sentient and taking over, with their AI generated art and poetry, and more about the dangers these tools pose when in the hands of ill-intentioned humans.
100,000 free AI-generated headshots put stock photo companies on notice
It’s getting easier and easier to use AI to generate convincing-looking, yet entirely fake, pictures of people. Now, one company wants to find a use for these photos, by offering a resource of 100,000 AI-generated faces to anyone that can use them — royalty free. Many of the images look fake but others are difficult to distinguish from images licensed by stock photo companies. […]
Zhabinskiy is keen to emphasize that the AI used to generate these images was trained using data shot in-house, rather than using stock media or scraping photographs from the internet. “Such an approach requires thousands of hours of labor, but in the end, it will certainly be worth it!” exclaims an Icons8 blog post. Ivan Braun, the founder of Icons8, says that in total the team took 29,000 pictures of 69 models over the course of three years which it used to train its algorithm.
There are valid concerns about technology that’s able to generate convincing-looking fakes like these at scale. This project is trying to create images that make life easier for designers, but the software could one day be used for all sorts of malicious activity.
For all you lazy students out there, here’s a way you can present any Wikipedia article as a real academic source.
M-Journal subverts academia by morphing Wikipedia into a reliable source
Caveat lector, let the reader beware. These are the words at the end of the Wikipedia page that informs users that attempting to use Wikipedia as a reliable source of cited material comes with a continuous and present risk. Across academia it is a known operating fact that Wikipedia is not an accepted source for research papers and the lot. That hasn’t stopped students from using it and thanks to M-Journal, disguising it.
As the article goes on to say, it’s “ingenious as much as it is ridiculously dubious.”
Website M-Journal will turn Wikipedia articles into “real” academic papers
One of the funnier parts of the M-Journal is that if your teacher does ask for a link to the academic paper, the site will generate a fairly convincing-looking link. But it has a fake paywall, so you can’t see the whole thing.
And we all know that no one ever goes past the paywalls on academic journals.
(Not that the ‘real’ ones are necessarily any better, of course.)
This whole scheme will only work if academics are kept in the dark, otherwise the game is up. Let’s hope the word is getting out — I think the sector has enough fakery problems without this adding to the mix.
A mesmerising, meditative film introducing us to Faramarz, a London-based Iranian watchmaker. The world may seem chaotic, but “everything is in exactly the right place.”
The Watchmaker: A philosophy of craft and life
Filled with the pulses of numerous ticking watch hands, this short documentary from the UK filmmaker Marie-Cécile Embleton profiles a London-based Iranian watchmaker as he muses on the delicate and temporal nature of his work. As Faramarz meticulously polishes wood, shapes metal and positions springs, his personal philosophy emerges – one that values the minutiae of moment-to-moment experiences, and finds craft in all things.
Mitka Engebretsen is another watchmaker working in the UK. Here’s his set-up, somewhat shinier, though no less hypnotic.
He lets us follow along on his blog when he’s servicing his clients’ vintage watches. The intricacy and precision are wonderful to see, however out of my reach they may be. Not that there’s anything wrong with my current watch — I love it!
What would he make of this video from Watchfinder & Co on the level of expertise that goes into producing fake watches these days, fakes that will still set you back £1,000?
This fake Rolex is the most accurate yet
Two years ago, we investigated just how far fake watches have come when we compared a real Rolex Submariner with a fake one. For anyone thinking that fake watches were the easy-to-spot domain of the seaside tat shop, we demonstrated that it’s harder to spot a fake than you might think. Two years on, and it’s got even harder.
One way round that, of course, is to not have a watch at all.
A Norwegian city wants to abolish time
“You have to go to work, and even after work, the clock takes up your time,” Hveding told Gizmodo. “I have to do this, I have to do that. My experience is that [people] have forgotten how to be impulsive, to decide that the weather is good, the Sun is shining, I can just live.” Even if it’s 3 a.m.
It seems we’re not the only ones playing with that AI fake face website.
Experts: Spy used AI-generated face to connect with targets
“I’m convinced that it’s a fake face,” said Mario Klingemann, a German artist who has been experimenting for years with artificially generated portraits and says he has reviewed tens of thousands of such images. “It has all the hallmarks.”
Experts who reviewed the Jones profile’s LinkedIn activity say it’s typical of espionage efforts on the professional networking site, whose role as a global Rolodex has made it a powerful magnet for spies.
Yes, it’s obviously a fake. I mean, only a fool would fall for that, right?
“I’m probably the worst LinkedIn user in the history of LinkedIn,” said Winfree, the former deputy director of President Donald Trump’s domestic policy council, who confirmed connection with Jones on March 28.
Winfree, whose name came up last month in relation to one of the vacancies on the Federal Reserve Board of Governors, said he rarely logs on to LinkedIn and tends to just approve all the piled-up invites when he does.
“I literally accept every friend request that I get,” he said.
Lionel Fatton, who teaches East Asian affairs at Webster University in Geneva, said the fact that he didn’t know Jones did prompt a brief pause when he connected with her back in March.
“I remember hesitating,” he said. “And then I thought, ‘What’s the harm?’”
<sigh> It might not be the technology we need, but it’s the technology we deserve.
But fear not, help is at hand!
Adobe’s new AI tool automatically spots Photoshopped faces
The world is becoming increasingly anxious about the spread of fake videos and pictures, and Adobe — a name synonymous with edited imagery — says it shares those concerns. Today, it’s sharing new research in collaboration with scientists from UC Berkeley that uses machine learning to automatically detect when images of faces have been manipulated.
But as Benedict Evans points out in a recent newsletter,
Potentially useful but one suspects this is just an arms race, and of course the people anyone would want to trick with such images won’t be using the tool.
Fake videos — they’re just a bit of fun that we’re happy to spread around on social media, right? Whilst they play a part in the BBC’s dystopian drama Years and Years, helping to sway a general election, we’re not really fooled by them, are we?
Well, perhaps not yet, but they’ve got US politicians worried enough about their upcoming presidential election in 2020 to officially look into it all.
Congress grapples with how to regulate deepfakes
“Now is the time for social media companies to put in place policies to protect users from this kind of misinformation, not in 2021 after viral deepfakes have polluted the 2020 elections,” Schiff said. “By then it will be too late.”
At the outset of the hearing, Schiff came out challenging the “immunity” given to platforms under Section 230 of the Communications Decency Act, asking panelists if Congress should make changes to the law that doesn’t currently hold social media companies liable for the content on their platforms.
Deepfakes: Imagine All the People
Of course this isn’t real. The video was done by a company called Canny AI, which offers services like “replace the dialogue in any footage” and “lip-sync your dubbed content in any language”. That’s cool and all — picture episodes of Game of Thrones or Fleabag where the actors automagically lip-sync along to dubbed French or Chinese — but this technique can also be used to easily create what are referred to as deepfakes, videos made using AI techniques in which people convincingly say and do things they actually did not do or say.
This essay from Cailin O’Connor, co-author of The Misinformation Age: How False Beliefs Spread, frames the issue of online misinformation as an arms race.
The information arms race can’t be won, but we have to keep fighting
What makes this problem particularly thorny is that internet media changes at dizzying speed. When the radio was first invented, as a new form of media, it was subject to misinformation. But regulators quickly adapted, managing, for the most part, to subdue such attempts. Today, even as Facebook fights Russian meddling, WhatsApp has become host to rampant misinformation in India, leading to the deaths of 31 people in rumour-fuelled mob attacks over two years.
Participating in an informational arms race is exhausting, but sometimes there are no good alternatives. Public misinformation has serious consequences. For this reason, we should be devoting the same level of resources to fighting misinformation that interest groups are devoting to producing it. All social-media sites need dedicated teams of researchers whose full-time jobs are to hunt down and combat new kinds of misinformation attempts.
I know I’m a pretty pessimistic person generally, but this all sounds quite hopeless. Here’s how one group of people is responding to the challenge of misuse of information and fake videos — by producing their own.
This deepfake of Mark Zuckerberg tests Facebook’s fake video policies
The video, created by artists Bill Posters and Daniel Howe in partnership with advertising company Canny, shows Mark Zuckerberg sitting at a desk, seemingly giving a sinister speech about Facebook’s power. The video is framed with broadcast chyrons that say “We’re increasing transparency on ads,” to make it look like it’s part of a news segment.
‘Imagine this…’ (2019) This deepfake moving image work is from the ‘Big Dada’ series, part of the ‘Spectre’ project. Where big data, AI, dada, and conceptual art combine. .Artworks by Bill Posters & @danyelhau #spectreknows #privacy #democracy #surveillancecapitalism #dataism #deepfake #deepfakes #contemporaryartwork #digitalart #generativeart #newmediaart #codeart #markzuckerberg #artivism #contemporaryart
“We will treat this content the same way we treat all misinformation on Instagram,” a spokesperson for Instagram told Motherboard. “If third-party fact-checkers mark it as false, we will filter it from Instagram’s recommendation surfaces like Explore and hashtag pages.”
More videos — from the sublime to the ridiculous.
There’s a scarily good ‘deepfakes’ YouTube channel that’s quietly growing – and it’s freaking everyone out
Russian researchers hit the headlines last week by reanimating oil-painted portraits and photos into talking heads using AI. Now, here’s another reminder that the tools to craft deepfakes are widely available for just about anyone with the right skills to use: the manipulated videos posted on the YouTube channel Ctrl Shift Face are particularly creepy.
The transitions are especially smooth in another clip, with a comedian dipping in and out of impressions of Al Pacino and Arnold Schwarzenegger, and there are now clips from Terminator with Stallone which look very peculiar.
Here’s that earlier article and video mentioned above, about reanimating oil paintings.
AI can now animate the Mona Lisa’s face or any other portrait you give it. We’re not sure we’re happy with this reality
There have been lots of similar projects so the idea isn’t particularly novel. But what’s intriguing in this paper, hosted by arXiv, is that the system doesn’t require tons of training examples and seems to work after seeing an image just once. That’s why it works with paintings like the Mona Lisa.
Deepfake Salvador Dalí takes selfies with museum visitors
The exhibition, called Dalí Lives, was made in collaboration with the ad agency Goodby, Silverstein & Partners (GS&P), which made a life-size re-creation of Dalí using the machine learning-powered video editing technique. Using archival footage from interviews, GS&P pulled over 6,000 frames and used 1,000 hours of machine learning to train the AI algorithm on Dalí’s face. His facial expressions were then imposed over an actor with Dalí’s body proportions, and quotes from his interviews and letters were synced with a voice actor who could mimic his unique accent, a mix of French, Spanish, and English.
Whilst we’re talking of Dalí, let’s go behind the scenes of that famous portrait of him by Philippe Halsman. No flashy, cutting-edge technology this time, just wire, buckets and cats.
The story behind the surreal photograph of Salvador Dalí and three flying cats
The original, unretouched version of the photo reveals its secrets: An assistant held up the chair on the left side of the frame, wires suspended the easel and the painting, and the footstool was propped up off the floor. But there was no hidden trick to the flying cats or the stream of water. For each take, Halsman’s assistants—including his wife, Yvonne, and one of his daughters, Irene—tossed the cats and the contents of a full bucket across the frame. After each attempt, Halsman developed and printed the film while Irene herded and dried off the cats. The rejected photographs had notes such as “Water splashes Dalí instead of cat” and “Secretary gets into picture.”
Time.com have a great interview with Philippe Halsman’s daughter Irene on what that shoot was like.
The story behind the surrealist ‘Dali Atomicus’ photo
“Philippe would count to four. One, two, three… And the assistants threw the cats and the water. And on four, Dali jumped. My job at the time was to catch the cats and take them to the bathroom and dry them off with a big towel. My father would run upstairs where the darkroom was, develop the film, print it, run downstairs and he’d say not good, bad composition, this was wrong, that was wrong. It took 26 tries to do this. 26 throws, 26 wiping of the floors, and 26 times catching the cats. And then, there it was, finally, this composition.”
Coincidentally, Artnome’s Jason Bailey has been using AI and deep learning to colorize old black-and-white photos of artists, including that one of Dalí.
50 famous artists brought to life with AI
When I was growing up, artists, and particularly twentieth century artists, were my heroes. There is something about only ever having seen many of them in black and white that makes them feel mythical and distant. Likewise, something magical happens when you add color to the photo. These icons turn into regular people who you might share a pizza or beer with.
I’ve been enjoying (if that’s the right word) Wired UK’s recent articles on how technology is being used against us.
A bitter turf war is raging on the Brexit Wikipedia page
Other debates revolve around the Brexit jargon and the page’s 19-word-strong glossary. Is Leaver the best way to refer to Brexit supporters, or is Brexiteer more common? And is “Remoaner” the remain-supporting version of “Brextremist” or is the latter somehow nastier? A recent question on the Brexit talk page, where editors discuss changes to the article, raises another question about the term Quitlings. Is it something to do with quislings, and if so, shouldn’t the glossary mention that? For now, the consensus is that yes, it is a reference to the Norwegian Nazi sympathiser Vidkun Quisling – whose name has evolved into a synonym for traitor – but that the term isn’t widely used enough to justify including it in the article.
The Brexit Party is winning social media. These numbers prove it
The extraordinary level of this online engagement is inextricable from the populist nature of Farage’s message. “Polarised content does brilliantly, hence Farage has significantly more reach than any of the main political figures of the UK,” says Harris. “His content will receive significant numbers of shares, comments (both positive and negative) and likes and negative dislikes, and will have more organic reach than content from mainstream political parties that people like to see in their timeline but don’t like or comment on it because they passively agree with it.”
The EU elections are next week. Fake news is not the problem
Information operations are rarely about changing the things people believe, but changing the way they feel. Anger and fear are not things we can correct with better facts. As we head into the EU election, this fact should be at the forefront of our minds. Media monitoring is vital, and the work of fact-checking organisations to identify, correct and call out false information is a necessary and valuable part of this. But it is crucial that we look beyond the accuracy of the news, and zero in on how the media ecosystem as a whole is being manipulated. Inflammatory trending stories, harassment of journalists, feverish online debates – the public discourse behind all of these is being pushed and prodded by those who want to see us angry, divided, and mistrustful of each other.
The secret behind Gina Miller’s anti-Brexit tactical voting crusade
Miller’s Remain United campaign uses a technique called multilevel regression and poststratification (MRP) to analyse polling data and identify which Remain-supporting party stands the best chance of winning seats in the European elections on May 23. Remainers are encouraged to vote for those parties in order to secure a sizeable pro-EU representation from the United Kingdom in the European parliament.
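MRP sounds arcane, but the poststratification half is simple arithmetic: estimate support within each demographic cell, then weight those estimates by each cell’s share of the electorate. Here’s a minimal sketch in Python with entirely made-up numbers — real MRP gets the cell estimates from a multilevel regression fitted to the polling data, rather than taking them as given:

```python
# Estimated Remain-party support within each (hypothetical) demographic cell.
poll = {
    "urban_young": 0.70,
    "urban_old":   0.45,
    "rural_young": 0.55,
    "rural_old":   0.30,
}

# Each cell's share of the actual electorate (shares sum to 1).
population = {
    "urban_young": 0.25,
    "urban_old":   0.30,
    "rural_young": 0.15,
    "rural_old":   0.30,
}

# Poststratify: reweight the per-cell estimates by the cells' true sizes.
estimate = sum(poll[cell] * population[cell] for cell in poll)
print(f"poststratified support: {estimate:.3f}")
```

The point of the reweighting is that a raw poll over- or under-samples some groups; anchoring each cell to its real population share corrects for that, which is what lets MRP produce seat-level estimates from national polling.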
The faces are real this time, though the galleries aren’t.
Put your head into gallery
Georgian artist Tezi Gabunia wants to trigger a dialogue about hyper realistic issues in art. His modus operandi is falsification. In his work “put your head into gallery”, Gabunia wanted to bring the galleries and the art to the people, and not the other way around.
Whilst the images are striking, I wonder if he’s just pandering to our vanity, though. I mean, look at the queues.
More fun with fake faces. Here’s a breakdown from Eric Jang of Snapchat’s bizarre new filter.
Fun with Snapchat’s gender swapping filter
Snapchat’s new gender-bending filter is a source of endless fun and laughs at parties. The results are very pleasing to look at. As someone who is used to working with machine learning algorithms, it’s almost magical how robust this feature is.
I was so duly impressed that I signed up for Snapchat and fiddled around with it this morning to try and figure out what’s going on under the hood and how I might break it.
Eric takes it through its paces to learn a little more about how it’s generated. I hadn’t appreciated that it worked in real time like that.
Classic FM got in on the act, running portraits of classical music composers through the filter, with varying results.
Female Ludwig is a very sulky teenager.
Everything went according to plan: the art thieves made off with an incredibly valuable Brueghel. Only it wasn’t.
Italian police reveal ‘€3m painting’ stolen from church was a copy
The town’s mayor, Daniele Montebello, was among the few people privy to the subterfuge, and had to keep up the pretence in the hours after the heist, telling journalists that losing the painting was “a hard blow for the community”.
“Rumours were circulating that someone could steal the work, and so the police decided to put it in a safe place, replacing it with a copy and installing some cameras,” Montebello said on Wednesday night. “I thank the police but also some of the churchgoers, who noticed that the painting on display wasn’t the original but kept up the secret.”
It seems nobody’s updated ArtNet News yet, even though they’re referencing this Guardian article.
Thieves just used a hammer to steal a $3.4 million Pieter Bruegel the Younger painting from a remote Italian church
Using a hammer to break the case, the thieves lifted the picture—worth an estimated $3.4 million, according to press reports—and made off in a Peugeot car. Police believe two people were involved in the heist. They are now investigating CCTV footage from around the town and the province for clues.
Yesterday there were millions of us, today there’s nobody here at all.
AI fake face website launched
A software developer has created a website that generates fake faces, using artificial intelligence (AI). Thispersondoesnotexist.com generates a new lifelike image each time the page is refreshed, using technology developed by chipmaker Nvidia. Some visitors to the website say they have been amazed by the convincing nature of some of the fakes, although others are more clearly artificial.
They look like us, and now they can write like us too.
AI text writing technology too dangerous to release, creators claim
Researchers at OpenAI say they have created an AI writer which “generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks and performs rudimentary reading comprehension, machine translation, question answering and summarisation — all without task-specific training.”
But they are withholding it from public use “due to our concerns about malicious applications of the technology”.
Of course, it’s not just AI that’s trying to pull the wool over our eyes.
How to catch a catfisher
Last year, I found out someone was using my photos to catfish women. He stole dozens of my online photos – including selfies, family photos, baby photos, photos with my ex – and, pretending to be me, he would then approach women and spew a torrent of abuse at them.
It took me months to track him down, and now I’m about to call him.
Machines pretending to be people, people pretending to be other people. At least we’re truthful with ourselves, right?
Be honest, how much do you edit YOUR selfies?
“It’s time to acknowledge the damaging effects that social media has on people’s self-image,” says Rankin of the project, which is part of a wider initiative to explore the impact of imagery on our mental health.
“Social media has made everyone into their own brand. People are creating a two-dimensional version of themselves at the perfect angle, with the most flattering lighting and with any apparent ‘flaws’ removed. Mix this readily-available technology with the celebrities and influencers flaunting impossible shapes with impossible faces and we’ve got a recipe for disaster.”