Tim’s hippie manifesto

Some less-than-positive reaction from The Register and others to Tim Berners-Lee’s latest campaign to save the web from itself. To describe it as a hippie manifesto sounds a little harsh but, as I said before, I can’t see this making much difference unless Facebook and Google agree to give up power, money, etc.

Web Foundation launches internet hippie manifesto: ‘We’ve lost control of our data, it is being used against us’
It identifies the same problems that everyone and their dog has been writing about for years: there is a digital divide; internet access can be expensive; an entire industry has grown up selling your personal data; governments abuse the internet sometimes; people use the internet to do unpleasant things like bully and harass people; net neutrality’s a thing.

It has some charts and stats. But basically it reads like a High School final project on the problems of the internet. Competent but not consequential. […]

But simply saying companies shouldn’t make money from personal data and governments shouldn’t turn off the internet is not going to achieve a single thing. There needs to be a clear plan of attack, recognition of pain points for companies, and a broad and well-organized campaign to engage and rally people.

Berners-Lee takes flak for ‘hippie manifesto’ that only Google and Facebook could love
Open-source advocate Rafael Laguna, co-founder of Open-Xchange, is suspicious that Google and Facebook – the companies most under fire for privacy and other human rights abuses – were first to voice their support for the Greatest Living Briton’s declaration. “They are the two outstanding creators of the problems proclaimed in Tim’s paper,” Laguna notes. […]

Laguna told us: “As we have seen before with ‘Privacy Shield’, I suspect this move will be used as ‘proof’ of their reputability – but I fail to see how Google and Facebook will genuinely adhere to the requirements laid out in the initiative. The only result I can see is that it gets watered down, that it remains a lip service and, worst case, the whole thing loses credibility.”

Can Tim Berners-Lee fix what he started?

We’re growing increasingly disillusioned with the web, but the guy behind it has a plan — a “Contract for the Web” that he hopes will set out our rights and freedoms on the internet.

Tim Berners-Lee launches campaign to save the web from abuse
“Humanity connected by technology on the web is functioning in a dystopian way. We have online abuse, prejudice, bias, polarisation, fake news, there are lots of ways in which it is broken. This is a contract to make the web one which serves humanity, science, knowledge and democracy,” he said.

For it to work, the big tech companies need to be behind it. No problem, right?

One of the early signatories to the contract, Facebook, has been fined by the Information Commissioner’s Office for its part in the Cambridge Analytica scandal; has faced threats from the EU for taking too long to remove extremist content; and has been sued for allowing advertisers to target housing ads only at white people. The firm, which has appointed the former deputy prime minister, Nick Clegg, to lead its PR operation, did not respond to a request for comment.

Another early signatory, Google, is reportedly developing a censored version of its search engine for the Chinese market. “If you sign up to the principles, you can’t do censorship,” said Berners-Lee. “Will this be enough to make search engines push back? Will it be persuasive enough for the Chinese government to be more open? I can’t predict whether that will happen,” he said. Google did not respond to a request for comment.

Hmm. I can’t see this making much difference unless Facebook and Google agree to – what, make less money?

“I was devastated”: Tim Berners-Lee, the man who created the World Wide Web, has some regrets
“We demonstrated that the Web had failed instead of served humanity, as it was supposed to have done, and failed in many places,” he told me. The increasing centralization of the Web, he says, has “ended up producing—with no deliberate action of the people who designed the platform—a large-scale emergent phenomenon which is anti-human.”


“Tim and Vint made the system so that there could be many players that didn’t have an advantage over each other.” Berners-Lee, too, remembers the quixotism of the era. “The spirit there was very decentralized. The individual was incredibly empowered. It was all based on there being no central authority that you had to go to to ask permission,” he said. “That feeling of individual control, that empowerment, is something we’ve lost.”

That’s it in a nutshell, for me. The web just isn’t the same as it was at the beginning.

The power of the Web wasn’t taken or stolen. We, collectively, by the billions, gave it away with every signed user agreement and intimate moment shared with technology. Facebook, Google, and Amazon now monopolize almost everything that happens online, from what we buy to the news we read to who we like. Along with a handful of powerful government agencies, they are able to monitor, manipulate, and spy in once unimaginable ways.

Tim Wu is a law professor and ‘influential tech thinker’. Here’s his take on what went wrong.

Tim Wu: ‘The internet is like the classic story of the party that went sour’
Looking back at the 00s, the great mistake of the web’s idealists was a near-total failure to create institutions designed to preserve that which was good about the web (its openness, its room for a diversity of voices and its earnest amateurism), and to ward off that which was bad (the trolling, the clickbait, the demands of excessive and intrusive advertising, the security breaches). There was too much faith that everything would take care of itself – that “netizens” were different, that the culture of the web was intrinsically better. Unfortunately, that excessive faith in web culture left a void, one that became filled by the lowest forms of human conduct and the basest norms of commerce. It really was just like the classic story of the party that went sour.

The Guardian certainly likes to report on versions of this story, but only in November and March, it seems.

Tech giants may have to be broken up, says Tim Berners-Lee
Web inventor says Silicon Valley firms have too much clout and ‘optimism has cracked’ [November 2018]

Tim Berners-Lee: we must regulate tech firms to prevent ‘weaponised’ web
The inventor of the world wide web warns over concentration of power among a few companies ‘controlling which ideas are shared’ [March 2018]

Tim Berners-Lee on the future of the web: ‘The system is failing’
The inventor of the world wide web remains an optimist but sees a ‘nasty wind’ blowing amid concerns over advertising, net neutrality and fake news [November 2017]

Tim Berners-Lee: I invented the web. Here are three things we need to change to save it
It has taken all of us to build the web we have, and now it is up to all of us to build the web we want – for everyone [March 2017]

Google+, we hardly knew ye

I admit, I did use this for a while, but I’m as surprised as others to learn that Google+ made it this far. (I still miss Google Reader.)

The death of Google+ is imminent, says Google
Google’s decision follows the Wall Street Journal’s revelation, also published on Oct. 8, that the company exposed hundreds of thousands of Google+ users’ data earlier this year, and opted to keep it a secret:

A software glitch in the social site gave outside developers potential access to private Google+ profile data between 2015 and March 2018, when internal investigators discovered and fixed the issue, according to the documents and people briefed on the incident. A memo reviewed by the Journal prepared by Google’s legal and policy staff and shared with senior executives warned that disclosing the incident would likely trigger “immediate regulatory interest” and invite comparisons to Facebook’s leak of user information to data firm Cambridge Analytica.

That doesn’t make them look good, does it? But then, should we be surprised anymore?

AI to the rescue

In 2016 the RNIB announced a project between the NHS and DeepMind, Google’s artificial intelligence company.

Artificial intelligence to look for early signs of eye conditions humans might miss
With the number of people affected by sight loss in the UK predicted to double by 2050, Moorfields Eye Hospital NHS Foundation Trust and DeepMind Health have joined forces to explore how new technologies can help medical research into eye diseases.

This wasn’t the only collaboration with the NHS that Google was involved in. There was another project, to help staff monitor patients with kidney disease, that had people concerned about the amount of medical information being handed over.

Revealed: Google AI has access to huge haul of NHS patient data
Google says that since there is no separate dataset for people with kidney conditions, it needs access to all of the data in order to run Streams effectively. In a statement, the Royal Free NHS Trust says that it “provides DeepMind with NHS patient data in accordance with strict information governance rules and for the purpose of direct clinical care only.”

Still, some are likely to be concerned by the amount of information being made available to Google. It includes logs of day-to-day hospital activity, such as records of the location and status of patients – as well as who visits them and when. The hospitals will also share the results of certain pathology and radiology tests.

The Google-owned company tried to reassure us that everything was being done appropriately, that all those medical records would be safe with them.

DeepMind hits back at criticism of its NHS data-sharing deal
DeepMind co-founder Mustafa Suleyman has said negative headlines surrounding his company’s data-sharing deal with the NHS are being “driven by a group with a particular view to peddle”. […]

All the data shared with DeepMind will be encrypted and parent company Google will not have access to it. Suleyman said the company was holding itself to “an unprecedented level of oversight”.

That didn’t seem to cut it though.

DeepMind’s data deal with the NHS broke privacy law
“The Royal Free did not have a valid basis for satisfying the common law duty of confidence and therefore the processing of that data breached that duty,” the ICO said in its letter to the Royal Free NHS Trust. “In this light, the processing was not lawful under the Act.” […]

“The Commissioner is not persuaded that it was necessary and proportionate to process 1.6 million partial patient records in order to test the clinical safety of the application. The processing of these records was, in the Commissioner’s view, excessive,” the ICO said.

And now here we are, some years later, and that eye project is a big hit.

Artificial intelligence equal to experts in detecting eye diseases
The breakthrough research, published online by Nature Medicine, describes how machine-learning technology has been successfully trained on thousands of historic de-personalised eye scans to identify features of eye disease and recommend how patients should be referred for care.

Researchers hope the technology could one day transform the way professionals carry out eye tests, allowing them to spot conditions earlier and prioritise patients with the most serious eye diseases before irreversible damage sets in.

That’s from UCL, one of the project’s partners. I like the use of the phrase ‘historic de-personalised eye scans’. And it doesn’t mention Google once.

Other reports also now seem to be pushing the ‘AI will rescue us’ angle, rather than the previous ‘Google will misuse our data’ line.

DeepMind AI matches health experts at spotting eye diseases
DeepMind’s ultimate aim is to develop and implement a system that can assist the UK’s National Health Service with its ever-growing workload. Accurate AI judgements would lead to faster diagnoses and, in theory, treatment that could save patients’ vision.

Artificial intelligence ‘did not miss a single urgent case’
He told the BBC: “I think this will make most eye specialists gasp because we have shown this algorithm is as good as the world’s leading experts in interpreting these scans.” […]

He said: “Every eye doctor has seen patients go blind due to delays in referral; AI should help us to flag those urgent cases and get them treated early.”

And it seems AI can help with the really tricky problems too.

This robot uses AI to find Waldo, thereby ruining Where’s Waldo
To me, this is like the equivalent of cheating on your math homework by looking for the answers at the back of your textbook. Or worse, like getting a hand-me-down copy of Where’s Waldo and when you open the book, you find that your older cousin has already circled the Waldos in red marker. It’s about the journey, not the destination — the process of methodically scanning pages with your eyes is entirely lost! But of course, no one is actually going to use this robot to take the fun out of Where’s Waldo, it’s just a demonstration of what AutoML can do.

There’s Waldo is a robot that finds Waldo

All as bad as each other?

Rhett Jones from Gizmodo strikes a cautionary note about Apple’s positioning following Facebook’s recent data sharing controversies.

Apple isn’t your friend
In its own deliberate fashion, Apple appears to see a market opportunity in the privacy debate that goes beyond polishing its own image. As headlines blared about Facebook’s latest data-sharing turmoil, the Wall Street Journal reported that Apple has been quietly planning to launch a new advertising network for the past year. It’s said to be a re-imagining of its failed iAd network that was shuttered in 2016.

[…]

Generally, more competition is welcome. If Apple is giving Facebook and Google headaches, we say that’s great. But it’s a thorny issue when we’re talking about a few billion-dollar companies exchanging places on the ladder as they strive to be trillion-dollar companies. It’s just not enough for the least bad megacorp to keep the evil ones in check.

Dumbing down the chatbots

A quite different take on Google’s AI demo from the other day. Rather than be impressed at how clever the bots appear, because they sound like us, we should be sad at how inefficient we’ve made them, because they sound like us.

Chatbots are saints
Pichai played a recording of Duplex calling a salon to schedule a haircut. This is an informational transaction that a couple of computers could accomplish in a trivial number of microseconds — bip! bap! done! — but with a human on one end of the messaging bus, it turned into a slow-motion train wreck. Completing the transaction required 17 separate data transmissions over the course of an entire minute — an eternity in the machine world. And the human in this case was operating at pretty much peak efficiency. I won’t even tell you what happened when Duplex called a restaurant to reserve a table. You could almost hear the steam coming out of the computer’s ears.

In our arrogance, we humans like to think of natural language processing as a technique aimed at raising the intelligence of machines to the point where they’re able to converse with us. Pichai’s demo suggests the reverse is true. Natural language processing is actually a technique aimed at dumbing down computers to the point where they’re able to converse with us. Google’s great breakthrough with Duplex came in its realization that by sprinkling a few monosyllabic grunts into computer-generated speech — um, ah, mmm — you could trick a human into feeling kinship with the machine. You ace the Turing test by getting machines to speak baby-talk.
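To see the contrast he’s drawing, the bip!-bap! machine-to-machine version of that salon booking could be a single structured round trip instead of 17 conversational turns. A toy sketch (every field name and the whole “protocol” are invented here purely for illustration):

```python
# Hypothetical machine-to-machine appointment booking: one structured
# request, one structured reply, no small talk and no "ums".

import json

def book_appointment(request: dict, availability: set) -> dict:
    """Toy salon-side handler: confirm the slot if it's free,
    otherwise offer an alternative in the same reply."""
    slot = request["preferred_slot"]
    if slot in availability:
        availability.remove(slot)
        return {"status": "confirmed", "slot": slot}
    # Offer a remaining slot up front rather than negotiating turn by turn.
    alternative = min(availability) if availability else None
    return {"status": "unavailable", "alternative": alternative}

# The entire "conversation" is a single serialised round trip.
request = {"service": "haircut", "preferred_slot": "2018-05-10T12:00"}
availability = {"2018-05-10T12:00", "2018-05-10T13:00"}

wire_payload = json.dumps(request)            # what actually crosses the network
reply = book_appointment(json.loads(wire_payload), availability)
print(reply)  # → {'status': 'confirmed', 'slot': '2018-05-10T12:00'}
```

Two data transmissions, done in microseconds – which is rather the point: all of Duplex’s engineering effort goes into the human on the other end of the line.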

Blogger’s still here?

TechCrunch has news of an update to Blogger. Nothing newsworthy about the update, really. What’s catching our eye is that Blogger still exists at all.

Blogger gets a spring cleaning
It’s surprising that Blogger is still around. I can’t remember the last time I saw a Blogger site in my searches, and it sure doesn’t have a lot of mindshare. Google also has let the platform linger and hasn’t integrated it with any of its newer services. The same thing could be said for Google+, too, of course. Google cuts some services because they have no users and no traction. That could surely be said for Blogger and Google+, but here they are, still getting periodic updates.

I used to have a blog on Blogger, and prompted by this article I’ve just had a very strange stroll down memory lane to visit it, via the Internet Archive’s marvellous Wayback Machine.


I really liked the look of that old blog. Very mid-2000s. Are there no blogs that look like this anymore?

Google’s creeping us out again

But it only wants to help, it’s for our own good.

Google wants to cure our phone addiction. How about that for irony?
This is Google doing what it always does. It is trying to be the solution to every aspect of our lives. It already wants to be our librarian, our encyclopedia, our dictionary, our map, our navigator, our wallet, our postman, our calendar, our newsagent, and now it wants to be our therapist. It wants us to believe it’s on our side.

There is something suspect about deploying more technology to use less technology. And something ironic about a company that fuels our tech addiction telling us that it holds the key to weaning us off it. It doubles as good PR, and pre-empts any future criticism about corporate irresponsibility.

And then there’s this. How many times have we had cause to say, ‘just because we can, doesn’t mean we should’?

Google’s new voice bot sounds, um, maybe too real
“Google Assistant making calls pretending to be human not only without disclosing that it’s a bot, but adding ‘ummm’ and ‘aaah’ to deceive the human on the other end with the room cheering it… horrifying. Silicon Valley is ethically lost, rudderless and has not learned a thing,” tweeted Zeynep Tufekci, a professor at the University of North Carolina at Chapel Hill who studies the social impacts of technology.

“As digital technologies become better at doing human things, the focus has to be on how to protect humans, how to delineate humans and machines, and how to create reliable signals of each—see 2016. This is straight up, deliberate deception. Not okay,” she added.

They know everything about us, and that’s ok?

I really need to stop reading articles about how our personal data is being used and abused by seemingly everyone on the internet. Nothing good can come from going over the same bad news. These from The Guardian are the last ones, I promise.

Why have we given up our privacy to Facebook and other sites so willingly?
If you think you’re a passive user of Facebook, minimising the data you provide to the site or refraining from oversharing details of your life, you have probably underestimated the scope of its reach. Facebook doesn’t just learn from the pictures you post, and the comments you leave: the site learns from which posts you read and which you don’t; it learns from when you stop scrolling down your feed and how long it takes you to restart; it learns from your browsing on other websites that have nothing to do with Facebook itself; and it even learns from the messages you type out then delete before sending (the company published an academic paper on this “self-censorship” back in 2013).

[…]

Lukasz Olejnik, an independent security and privacy researcher, agrees: “Years ago, people and organisations used to shift the blame on the users, even in public. This blaming is unfortunate, because expecting users to be subject-matter experts and versed in the obscure technical aspects is misguided.

“Blaming users is an oversimplification, as most do not understand the true implications when data are shared – they cannot. You can’t expect people to fully appreciate the amount of information extracted from aggregated datasets. That said, you can’t expect users to know what is really happening with their data if it’s not clearly communicated in an informed consent prompt, which should in some cases include also the consequences of hitting ‘I agree’.”

So what kind of data are we talking about? What are we sharing? Everything from where we’ve been, what we’ve ever watched or searched for, to even what we’ve deleted.

Are you ready? This is all the data Facebook and Google have on you
This information has millions of nefarious uses. You say you’re not a terrorist. Then how come you were googling Isis? Work at Google and you’re suspicious of your wife? Perfect, just look up her location and search history for the last 10 years. Manage to gain access to someone’s Google account? Perfect, you have a chronological diary of everything that person has done for the last 10 years.

This is one of the craziest things about the modern age. We would never let the government or a corporation put cameras/microphones in our homes or location trackers on us. But we just went ahead and did it ourselves because – to hell with it! – I want to watch cute dog videos.

And texts and calls too.

Facebook logs SMS texts and calls, users find as they delete accounts
Facebook makes it hard for users to delete their accounts, instead pushing them towards “deactivation”, which leaves all personal data on the company’s servers. When users ask to permanently delete their accounts, the company suggests: “You may want to download a copy of your info from Facebook.” It is this data dump that reveals the extent of Facebook’s data harvesting – surprising even for a company known to gather huge quantities of personal information.

So what can be done?

Beware the smart toaster: 18 tips for surviving the surveillance age
Just over a week ago, the Observer broke a story about how Facebook had failed to protect the personal information of tens of millions of its users. The revelations sparked a #DeleteFacebook movement and some people downloaded their Facebook data before removing themselves from the social network. During this process, many of these users were shocked to see just how much intel about them the internet behemoth had accumulated. If you use Facebook apps on Android, for example – and, even inadvertently, gave it permission – it seems the company has been collecting your call and text data for years.

It’s not me, it’s you! So Facebook protested, in the wake of widespread anger about its data-collection practices. You acquiesced to our opaque privacy policies. You agreed to let us mine and monetise the minutiae of your existence. Why are you so upset?

Most of the tips the article lists fail to really address the issues above, as they are more about how to secure your accounts from hackers, rather than dealing with Facebook and Google intrusions and opaque consent agreements. But a couple are worth highlighting.

12. Sometimes it’s worth just wiping everything and starting over
Your phone, your tweets, your Facebook account: all of these things are temporary. They will pass. Free yourself from an obsession with digital hoarding. If you wipe your phone every year, you learn which apps you need and which are just sitting in the background hoovering up data. If you wipe your Facebook account every year, you learn which friends you actually like and which are just hanging on to your social life like a barnacle.

[…]

18. Finally, remember your privacy is worth protecting
You might not have anything to hide (except your embarrassing Netflix history) but that doesn’t mean you should be blase about your privacy. Increasingly, our inner lives are being reduced to a series of data points; every little thing we do is for sale. As we’re starting to see, this nonstop surveillance changes us. It influences the things we buy and the ideas we buy into. Being more mindful of our online behaviour, then, isn’t just important when it comes to protecting our information, it’s essential to protecting our individuality.

Big bad numbers

TechCrunch has a summary of the latest report from Google on its attempts to clear up its mess. Some of the numbers are incredible.

In 2017, Google removed 3.2B ‘bad ads’ and blocked 320K publishers, 90K sites, 700K mobile apps
Google also removed 130 million ads for malicious activity abuses, such as trying to get around Google’s ad review. And 79 million ads were blocked because clicking on them led to sites with malware, while 400,000 sites containing malware were also removed as part of that process. Google also identified and blocked 66 million “trick to click” ads and 48 million ads that tricked you into downloading software.

Sounds impressive, but that’s not all they’re trying to tackle currently.

The bad ads report publication comes in the wake of Google taking a much more proactive stance tackling harmful content on one of its most popular platforms, YouTube. In February, the company announced that it would be getting more serious about how it evaluated videos posted to the site, and penalising creators through a series of “strikes” if they were found to be running afoul of Google’s policies.

The strikes have been intended to hit creators where it hurts them most: by curtailing the monetisation and discoverability of the videos.

This week, Google started to propose a second line of attack to try to raise the level of conversation around questionable content: it plans to post alternative facts from Wikipedia alongside videos that carry conspiracy theories (although it’s not clear how Google will determine which videos are conspiracies, and which are not).

That sounds quite intractable. It will be interesting to see how that plays out.

A self-spamming university?

Google blocks U. of Illinois at Chicago from emailing its own students
The University of Illinois at Chicago recently found itself living a modern nightmare: Google’s automated cybersecurity regime mistook the university as the culprit in a spam attack on the university’s students and began blocking university email accounts from sending messages to Gmail users.

The blocking went on for more than two weeks, and the affected Gmail users included 13,000 of the university’s own students. University officials describe those two weeks as a Kafkaesque state of limbo.

Gmail’s beginnings and consequences

How Gmail happened: the inside story of its launch 10 years ago
But serious search practically begged for serious storage: It opened up the possibility of keeping all of your email, forever, rather than deleting it frantically to stay under your limit. That led to the eventual decision to give each user 1GB of space, a figure Google settled on after considering capacities that were generous but not preposterous, such as 100MB.

An interesting read about the cautious beginnings of what now seems like such a no-brainer. But compare that passage above with this one from Barclay T Blair, information governance expert, in a post entitled “There is no harm in keeping tiny emails”. He had found an article that he thought…

“There is no harm in keeping tiny emails”
… nicely summed up the attitude I encounter from IT and others in our information governance engagements. Ask an attorney sometime if there really is “no harm in keeping tiny emails around in this age of ever-expanding storage space.” The drug dealers of the IG world have really done an incredible job convincing the addicts that the drug has no downside.

On owning your own data

On owning your own data
The problem, of course, is this wretched business model that has your landlord snooping on you and keeping all that information in the first place. If they didn’t have that information — or if that information was encrypted in a manner that only you could access it — they couldn’t share your information even if they wanted to.

Sorry, what?!

Google Reader

They say the writing’s been on the wall for a while, but still, this is a real shame. I’m one of those “die hards who were still using Google Reader every day (and there’s a lot of them!)” who “will have to figure out a brand new Internet reading routine come July”. And what about all the ifttt.com recipes I’ve been building up? Might have to re-read this post about not paying for the product again.


A less cloudy perspective on clouds

Dave Girouard, former President of Enterprise for Google, on why our objections to the cloud are mad.

Well, he would say that, wouldn’t he? Though, reading this, it’s hard to argue against him.

If you care about the reliability, security, and the protection of your data, then you should entrust it to those who are most capable of managing it. If you believe you can match the capabilities and rigor of Google’s Security Operations team, I wish you well.

An interesting perspective from someone very much the other side of this cloud debate.

(Via Robert Brook)

From the leaders of Google’s data visualization research group

HINT.FM / Fernanda Viegas & Martin Wattenberg
“As technologists we ask, Can visualization help people think collectively? Can visualization move beyond numbers into the realm of words and images? As artists we seek the joy of revelation. Can visualization tell never-before-told stories? Can it uncover truths about color, memory, and sensuality?”