“Hello? Is this thing on?”

I certainly enjoy reading about these voice assistants more than I do using them.

You bought smart speakers over the holidays. Now what are Amazon and Google doing with your data?
Ultimately, the choice to keep a smart speaker around comes down to what you’re getting out of the product. For some people with physical disabilities or intellectual differences, smart speakers can make household tasks easier or provide an engaging presence in daily life. For tech junkies like my friend, the sheer joy of commanding a smart home network might be enough. For Hoffman-Andrews, though, the benefits of a speaker don’t outweigh the costs. He bought a couple of products for testing, but he admits he couldn’t actually bring himself to set them up. Being able to ask a speaker to dim the lights or play a weather forecast just didn’t seem like a good enough tradeoff for giving companies access to his home.

“Is it normal to have cameras and microphones pointed at you and your guests? Currently the answer is mostly no,” he says. “These devices aim to change the answer to yes.”

Can’t go back

2019! As everyone else is greeting the new year with positivity and optimism for the future, I’m taking the contrary position and sharing some rather backward-facing articles.

Jason Koebler at Vice reminisces about his old Tripod homepage (I had one of those!), and wonders whether he should rejuvenate it.

We should replace Facebook with personal websites
There’s a subtext of the #deleteFacebook movement that has nothing to do with the company’s mishandling of personal data. It’s the idea that people who use Facebook are stupid, or shouldn’t have ever shared so much of their lives. But for people who came of age in the early 2000s, sharing our lives online is second nature, and largely came without consequences. There was no indication that something we’d been conditioned to do would be quickly weaponized against us.

Wired’s Jason Kehe takes a step back from his iPhone.

Going dumb: My year with a flip phone
I felt like a wholer person. My mind was reabsorbing previously offloaded information and creating new connections. I was thinking more and better. My focus was improving. I thought I was breaking through.

In the end, I was not.

(He chooses a Kyocera phone, though I think we can all agree this was the best phone of its time.)

Web designer Andy Clarke shares the techniques he would have used back in 1998 to lay out a website — frames, tables and spacer gifs. Remember them?

Designing your site like it’s 1998
The height and width of these “shims” or “spacers” is only 1px but they will stretch to any size without increasing their weight on the page. This makes them perfect for performant website development.
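If you never had the pleasure, the kind of markup Clarke is describing looked roughly like this (a from-memory sketch, not taken from his article; the table dimensions and the `shim.gif` filename are illustrative):

```html
<!-- A 1998-style two-column layout: a table with a transparent
     1x1 "shim" GIF stretched to force each column's width. -->
<table width="600" border="0" cellpadding="0" cellspacing="0">
  <tr>
    <td width="150" valign="top">
      <!-- The 1px image, stretched to hold the sidebar open at 150px -->
      <img src="shim.gif" width="150" height="1" alt="">
      <br>Navigation links here
    </td>
    <td width="450" valign="top">
      <img src="shim.gif" width="450" height="1" alt="">
      <br>Main content here
    </td>
  </tr>
</table>
```

Because the GIF is a single transparent pixel of a few dozen bytes, stretching it to 150×1 or 450×1 costs nothing extra over the wire, which is what made the trick so “performant”.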

Of course, these days we’re certain we know a much better way of doing all this. And that’s his point.

Strange as it might seem looking back, in 1998 we were also certain our techniques and technologies were the best for the job. That’s why it’s dangerous to believe with absolute certainty that the frameworks and tools we increasingly rely on today—tools like Bootstrap, Bower, and Brunch, Grunt, Gulp, Node, Require, React, and Sass—will be any more relevant in the future than elements, frames, layout tables, and spacer images are today.

What will all this look like in the next 20 years?

Facebook’s very relaxed attitude to our data

The Verge breaks down the latest story from the New York Times about Facebook’s data sharing agreements with Microsoft, Amazon, Spotify and others.

Facebook gave Spotify and Netflix access to users’ private messages
I find it helpful to read the allegations in the Times’ story chronologically, starting with the integration deals, continuing with the one-off agreements, and ending with instant personalization. Do so and you read a story of a company that, after some early success growing its user base by making broad data-sharing agreements with one set of companies — OEMs — grew more confident, and proceeded to give away more and more, often with few disclosures to users. By the time “Instant personalization” arrived, it was widely panned, and never met Facebook’s hopes for it. Shortly after it was wound down, Facebook would take action against Cambridge Analytica, and once again began placing meaningful limitations on its API.

Then basically nothing happened for three years!

Whatever is happening, it’s happening … now. It has been only two months since the largest data breach in Facebook’s history. It has been only five days since the last time Facebook announced a significant data leak.

On and on we go. The more we hear about how Facebook treats our data — and us — the more bored and relaxed we seem to be about it all. I can’t see this changing.

Update

From Facebook: Facts About Facebook’s Messaging Partnerships
From Ars Technica: Facebook “partner” arrangements: Are they as bad as they look?

I still think Facebook has transparency and trust issues though…

Just Go+

The planned demise of Google+ isn’t going according to plan, it seems.

Google+ is shutting down sooner than expected
On Monday (Dec. 10), the company revealed that a security flaw could have exposed profile information such as names, email addresses, jobs, and ages of 52.5 million Google+ users without their permission in November. The Alphabet-owned company now says it will close down the main Google+ platform by April 2019, four months earlier than planned.

Well, at least they tried. Anyone remember this, from 2011?

Google takes buzz saw to Buzz, other appendages
“Changing the world takes focus on the future, and honesty about the past,” wrote Google VP for products Bradley Horowitz in a blog post on Friday. “We learned a lot from products like Buzz, and are putting that learning to work every day in our vision for products like Google+.”

By “honesty”, we can only assume that Horowitz means that Buzz – beset with a host of privacy problems from its inception – honestly never caught on.

Listening with your whole body

A fascinating report on the new wearable technology allowing deaf concert goers to experience music in a brand new way.

New wearable tech lets users listen to live music through their skin
Back in September, 200 music fans gathered at the Bunkhouse Saloon in downtown Las Vegas for a private live concert with a unique twist: several of the fans were deaf. The concert served as a beta test for new wearable technology that allows deaf and hearing users alike to experience musical vibrations through their skin for a true “surround body” experience. […]

People at the Vegas concert (both deaf and hearing) reported feeling like their bodies became the instrument and the music was being played through them. One woman likened the experience to “living inside the strings of a piano,” after experiencing the third (Presto agitato) movement of Beethoven’s Moonlight Sonata while wearing the kit.

Reading that reminded me of an incident when I was a university Deputy Registrar, helping to run the graduation ceremonies at York Minster, one of Europe’s largest cathedrals. Before the ceremony was due to start, I was outlining the proceedings to one of our deaf students and her supporter — showing her the stage and the route across the nave and so on — when she suddenly turned to me with a look of extreme anxiety and confusion.

The organ had started to play. She couldn’t hear it, but she could certainly feel it. It was like an earthquake, she said.

That organ is currently being refurbished, so this year’s ceremonies had to make do with a digital one.

The once-a-century refurbishment
York Minster’s Grand Organ is currently undergoing a major, £2m refurbishment, the first on this scale since 1903.

The instrument, which dates back to the early 1830s, is being removed – including nearly all of its 5,403 pipes – and will be taken to Durham for repair and refurbishment by organ specialists Harrison and Harrison.

[Image: whole-body-listening-1]

Over three weeks, a team of eight people from organ specialists Harrison and Harrison dismantled the instrument – including nearly all of its 5,403 pipes – and transported it to their workshop in Durham for cleaning and repair works to be carried out. The pipes range in length from the size of a pencil to 10m long and the instrument overall is one of the largest in the country, weighing approximately 20,000kg.

Office moves?

How many of us spend all our working days with Microsoft Office products? It’s sobering to think that I’ve been staring at monitors full of Outlook e-mails, Word documents and Excel spreadsheets for more than 20 years now. Might that all be changing soon? We’ll see.

The new word processor wars: A fresh crop of productivity apps are trying to reinvent our workday
Nearly 30 years after Microsoft Office came on the scene, it’s in the DNA of just about every productivity app. Even if you use Google’s G Suite or Apple’s iWork, you’re still following the Microsoft model.

But that way of thinking about work has gotten a little dusty, and new apps offering a different approach to getting things done are popping up by the day. There’s a new war on over the way we work, and the old “office suite” is being reinvented around rapid-fire discussion threads, quick sharing and light, simple interfaces where all the work happens inside a single window.

The article lists the alternatives as Quip, Notejoy, Slite, Zenkit, Notion and Agenda for documents and Smartsheet, Airtable, Coda and Trello for spreadsheets.

Their informal, cartoony visuals and emphasis on chatty messaging collaboration make everything feel a little juvenile and jokey.

[Image: office-moves-4]

I wonder if my demographic is supposed to be represented on that Coda homepage by the grey-haired, casual-suit-no-tie coffee-drinker in the bottom right-hand corner. I’ve certainly never taken an ice-cream, a skateboard or a basketball to work, so I guess it must be, fist-bump-at-the-stacked-area-chart notwithstanding.

[Image: office-moves-2]

Down the Amazon storefront rabbit hole

The list of Things I Just Don’t Understand Anymore continues to grow. I’m familiar with shopping. I’m familiar with online shopping. But then again —

A business with no end
Recently, one of my students at Stanford told me a strange story. His parents, who live in Palo Alto, Calif., had been receiving mysterious packages at their house. The packages were all different shapes and sizes but each was addressed to “Returns Department, Valley Fountain LLC.”

I looked into it and found that a company called Valley Fountain LLC was indeed listed at his parents’ address. But it also appeared to be listed at 235 Montgomery Street, Suite 350, in downtown San Francisco.

So were 140 other LLCs, most of which were registered in 2015.

And so begins another incredible journey down the e-commerce internet rabbit hole with Jenny Odell, as she tries to untangle the mess of connections between an evangelical church university, many spurious, scammy Amazon storefronts, and an American weekly news magazine.

Indeed, at some point I began to feel like I was in a dream. Or that I was half-awake, unable to distinguish the virtual from the real, the local from the global, a product from a Photoshop image, the sincere from the insincere.

I’ve highlighted Jenny Odell’s journalism here before, and this piece is just as fascinating. It’s being discussed on the Amazon Seller forums, with legitimate sellers worrying how they can possibly compete with fraudulent dropshipping at such a big scale.

A Business with No End — Much explained about shady Amazon sellers
The vast international illegal operation employs hundreds of fake companies, fake churches, fake bookstores, fake department stores that may or may not exist, fake brands, fake H-1B visas, fake reviews, a fake university in California full of “students” on student visas who write click-bait and fake reviews, and even a fake psychiatric hospital. Oh, and apparently a lot of shady fake Amazon sellers. Not confined to Amazon, the empire also involves multiple click-bait farms and fake review farms, and even Newsweek magazine. All part of a vast hidden empire run by a man named Park.

Technology can’t stand still (unfortunately)

Using Proterozoic geology as his unusual starting point, MIT Media Lab Director Joi Ito takes a look at the past, present and future of the web and cultural technology.

The next Great (Digital) Extinction
As our modern dinosaurs crash down around us, I sometimes wonder what kind of humans will eventually walk out of this epic transformation. Trump and the populism that’s rampaging around the world today, marked by xenophobia, racism, sexism, and rising inequality, is greatly amplified by the forces the GDE has unleashed. For someone like me who saw the power of connection build a vibrant, technologically meshed ecosystem distinguished by peace, love, and understanding, the polarization and hatred empowered by the internet today is like watching your baby turning into the little girl in The Exorcist.

And here’s a look into the technological future with analyst Benedict Evans.

The end of the beginning
The internet began as an open, ‘permissionless’, decentralized network, but then we got (and indeed needed) new centralised networks on top, and so we’ve spent a lot of the past decade talking about search and social. Machine learning and crypto give new and often decentralized, permissionless fundamental layers for looking at meaning, intent and preference, and for attaching value to those.

The End of the Beginning
What’s the state of not just “the world of tech”, but tech in the world? The access story is now coming to an end, observes Evans, but the use story is just beginning: Most of the people are now online, but most of the money is still not. If we think we’re in a period of disruption right now, how will the next big platform shifts — like machine learning — impact huge swathes of retail, manufacturing, marketing, fintech, healthcare, entertainment, and more?

Two contrasting technologies

Here are two technologies or tools that couldn’t be more different.

One started out around 1560 or 1795, has no moving parts, needs no manual and is still being sold in the billions…

A sharp look at the surprisingly complex process of pencil manufacturing by photographer Christopher Payne
The photographer, renowned for his cinematic images that show the architectural grace of manufacturing spaces, shares that he has held a lifelong fascination with design, assembly, and industrial processes. “The pencil is so simple and ubiquitous that we take it for granted,” Payne tells Colossal. “But making one is a surprisingly complex process, and when I saw all the steps involved, many of which are done by hand, I knew it would make for a compelling visual narrative.”

[Image: two-contrasting-technologies-1]

…although for how much longer?

Children struggle to hold pencils due to too much tech, doctors say
His mother, Laura, blames herself: “In retrospect, I see that I gave Patrick technology to play with, to the virtual exclusion of the more traditional toys. When he got to school, they contacted me with their concerns: he was gripping his pencil like cavemen held sticks. He just couldn’t hold it in any other way and so couldn’t learn to write because he couldn’t move the pencil with any accuracy.”

The other, a highly complicated technological marvel that spread across the globe, revolutionising society, only to completely disappear within 30 years…

VHS tapes
People have been able to consume their choice of music at home for more than a century, but it wasn’t until the mid-1970s that video was truly freed from the constraints of the multiplex and the network broadcast schedule—and not until the 1980s that it really became accessible. That heyday didn’t last long. Just three decades separated the first VHS-format VCR from the last Hollywood hit distributed on video tape. But in that time, a lot of memories were created, and a new template for consuming media was forged.

[Image: two-contrasting-technologies-3]

… though fans remain.

From ignored ubiquity to design classic: the art of the blank VHS tape
When the company he worked at acquired a commercial printer with a scan bed on top, Jones began to scan tapes. Looking around on Google, he saw hardly any high-resolution images of these little pieces of everyday ephemera. There were plenty of horror and VHS box art scans, “but no love for the lowly home recording tape box that had been part of so many homes and families.” From this realization, the Vault Of VHS was born, a blog dedicated to the design of retail VHS packaging for both home and pre-recorded tapes.

[Image: two-contrasting-technologies-2]

“The font, art, and tape dirt grey just feel like the gummy carpet of a grimy porn theater.”

There are cameras everywhere

There is such a high level of surveillance in our society. There are cameras everywhere, but they’re our cameras.

The ubiquity of smartphones, as captured by photographers
With so many devices in so many hands now, the visual landscape has changed greatly, making it a rare event to find oneself in a group of people anywhere in the world and not see at least one of them using a phone. Collected here: a look at that smartphone landscape, and some of the stories of the phones’ owners.

[Image: cameras-everywhere-1]

What an odd world we live in.

[Image: cameras-everywhere-2]

The real world isn’t really real unless there’s a screen between us and it.

[Image: cameras-everywhere-4]

But then you come across a photo like this.

[Image: cameras-everywhere-3]

Such a strong image. The difference between surveillance and sousveillance.

Sousveillance
Sousveillance is the recording of an activity by a participant in the activity, typically by way of small wearable or portable personal technologies. The term “sousveillance”, coined by Steve Mann, stems from the contrasting French words sur, meaning “above”, and sous, meaning “below”, i.e. “surveillance” denotes the “eye-in-the-sky” watching from above, whereas “sousveillance” denotes bringing the camera or other means of observation down to human level, either physically (mounting cameras on people rather than on buildings), or hierarchically (ordinary people doing the watching, rather than higher authorities or architectures doing the watching).

Searching for digital sovereignty

Have you used Qwant yet?

Qwant – The search engine that respects your privacy
Based and designed in Europe, Qwant is the first search engine which protects its users’ freedoms and ensures that the digital ecosystem remains healthy. Our keywords: privacy and neutrality.

I must admit I had never heard of this search engine before I read this article from Wired. The French National Assembly and the French Army Ministry have announced that they’ll stop using Google as their default search engine, and use Qwant instead.

France is ditching Google to reclaim its online independence
“We have to set the example,” said Florian Bachelier, one of MPs chairing the Assembly’s cybersecurity and digital sovereignty task-force, which was launched in April 2018 to help protect French companies and state agencies from cyberattacks and from the growing dependency on foreign companies. “Security and digital sovereignty are at stake here, which is anything but an issue only for geeks,” Bachelier added. […]

In France, this all started with Edward Snowden. In 2013, when the American whistleblower revealed that the NSA was spying on foreign leaders and had important capability to access data stocked on private companies’ clouds, it was a wake-up call for French politicians. A senate report that same year fretted that France and the European Union were becoming “digital colonies”, a term that since then has been used by French government officials and analysts to alert about the threat posed by the US and China, on issues of economic, political and technological sovereignty. Recent scandals, including the Cambridge Analytica-Facebook imbroglio, further shook French politicians and public opinion.

A European DuckDuckGo, but without the stupid name? Might be something to look further into.

Another day, another data protection issue

We’re generating data all the time, without realising, and without really knowing where it all goes.

Users told to ditch OneDrive and Office 365 to avoid ‘covert’ data harvesting
Microsoft Office and Windows 10 Enterprise uses a telemetry data collection mechanism that breaches the EU’s General Data Protection Regulation (GDPR), according to a 91-page report commissioned by the Dutch government, and conducted by firm Privacy Company.

It’s not just Microsoft in the firing line, of course.

With GDPR now several months into play, data watchdogs across Europe are beginning to take their first steps in the new regulatory landscape. Microsoft is the latest in a line of major companies accused of breaching GDPR, with Oracle and Equifax among seven firms reported for violations by a data rights group last week.

And that story about Google’s AI company having access to NHS data is still rumbling on.

Google: Our DeepMind health slurp is completely kosher
DeepMind told The Reg: “It is false to say that Google is ‘absorbing’ data. This data is not DeepMind’s or Google’s – it belongs to our partners, whether the NHS or internationally. We process it according to their instructions.”

That claim, echoed by DeepMind Health chief Dominic King, brought a swift correction from legal experts. “It doesn’t belong to DeepMind’s partners, it belongs to the individuals,” said Serena Tierney, partner at lawyers VWV. “Those ‘partners’ may have limited rights, but it doesn’t belong to them.”

I wonder if we’ll be seeing more of these issues, what with one thing and another.

What the potentially useless draft Brexit agreement means for tech
One of the big questions for Brexit is data protection, and the agreement seeks to hold onto the status quo. Scroll through to Article 71 for the text, which says that EU data protection law will continue to cover the UK before and after the transition period, which runs until the end of 2020. That means personal data can continue to flow between the UK and the EU.

“This issue is critical to the tech sector and to every other industry in a modern digitising economy,” says Tech UK CEO Julian David in a blog post. Data’s the oil that greases tech, and all that.

That doesn’t mean that GDPR will continue to apply in the UK post Brexit. Christopher Knight, privacy lawyer at 11KBW, notes that the UK will become a “third state”. That means the UK won’t be required to apply GDPR and other data laws to “wholly internal situations of processing”.

Update: Well, here’s a thing. I’m still getting used to this new Android phone, with its Google news feed thing, and some time after first drafting this post I was browsing through it and came across the article below. How did it know to surface stories about DeepMind? I’m sure I hadn’t searched for it, but came across it in a newsletter. Is Google reading what I type into WordPress?

Inside DeepMind as the lines with Google blur
Last week, the line between the companies blurred significantly when DeepMind announced that it would transfer control of its health unit to a new Google Health division in California. […]

In March 2017, DeepMind also announced it would build a “data audit” system, as part of its public commitment to transparency. The technology would allow NHS partners to track its use of patient data in real time, with no possibility of falsification, DeepMind said. Google did not comment on whether it will finish the project.

Online ‘truth decay’

Fake news is old news, but I came across a new phrase today — well, new to me, anyway.

You thought fake news was bad? Deep fakes are where truth goes to die
Citron, along with her colleague Bobby Chesney, began working on a report outlining the extent of the potential danger. As well as considering the threat to privacy and national security, both scholars became increasingly concerned that the proliferation of deep fakes could catastrophically erode trust between different factions of society in an already polarized political climate.

In particular, they could foresee deep fakes being exploited by purveyors of “fake news”. Anyone with access to this technology – from state-sanctioned propagandists to trolls – would be able to skew information, manipulate beliefs, and in so doing, push ideologically opposed online communities deeper into their own subjective realities.

“The marketplace of ideas already suffers from truth decay as our networked information environment interacts in toxic ways with our cognitive biases,” the report reads. “Deep fakes will exacerbate this problem significantly.”

Maybe I need to stop reading about fake news; it’s not good for my blood pressure. Just a couple more, then I’ll stop.

After murder and violence, here’s how WhatsApp will fight fake news
WhatsApp has announced it is giving 20 different research groups $50,000 to help it understand the ways that rumours and fake news spread on its platform. The groups are based around the world and will be responsible for producing reports on how the messaging app has impacted certain regions.

The range of areas that are being studied highlight the scale of misinformation that WhatsApp faces. One set of researchers from the UK and US are set to see how misinformation can lead to disease outbreaks in elderly people, one will look at how information was shared on WhatsApp in the 2018 Brazilian elections and another is examining how posts can go viral on the messaging service.

Inside the British Army’s secret information warfare machine
This new warfare poses a problem that neither the 77th Brigade, the military, or any democratic state has come close to answering yet. It is easy to work out how to deceive foreign publics, but far, far harder to know how to protect our own. Whether it is Russia’s involvement in the US elections, over Brexit, during the novichok poisoning or the dozens of other instances that we already know about, the cases are piling up. In information warfare, offence beats defence almost by design. It’s far easier to put out lies than convince everyone that they’re lies. Disinformation is cheap; debunking it is expensive and difficult.

Even worse, this kind of warfare benefits authoritarian states more than liberal democratic ones. For states and militaries, manipulating the internet is trivially cheap and easy to do. The limiting factor isn’t technical, it’s legal. And whatever the overreaches of Western intelligence, they still do operate in legal environments that tend to more greatly constrain where, and how widely, information warfare can be deployed. China and Russia have no such legal hindrances.

Tim’s hippie manifesto

Some less-than-positive reaction from The Register and others to Tim Berners-Lee’s latest campaign to save the web from itself. To describe it as a hippie manifesto sounds a little harsh but, as I said before, I can’t see this making much difference unless Facebook and Google agree to give up power, money, etc.

Web Foundation launches internet hippie manifesto: ‘We’ve lost control of our data, it is being used against us’
It identifies the same problems that everyone and their dog has been writing about for years: there is a digital divide; internet access can be expensive; an entire industry has grown up selling your personal data; governments abuse the internet sometimes; people use the internet to do unpleasant things like bully and harass people; net neutrality’s a thing.

It has some charts and stats. But basically it reads like a High School final project on the problems of the internet. Competent but not consequential. […]

But simply saying companies shouldn’t make money from personal data and governments shouldn’t turn off the internet is not going to achieve a single thing. There needs to be a clear plan of attack, recognition of pain points for companies, a broad and well-organized campaign to engage and rally people.

Berners-Lee takes flak for ‘hippie manifesto’ that only Google and Facebook could love
Open-source advocate Rafael Laguna, co-founder of Open-Xchange, is suspicious that Google and Facebook – the companies most under fire for privacy and other human rights abuses – were first to voice their support for the Greatest Living Briton’s declaration. “They are the two outstanding creators of the problems proclaimed in Tim’s paper,” Laguna notes. […]

Laguna told us: “As we have seen before with ‘Privacy Shield’, I suspect this move will be used as ‘proof’ of their reputability – but I fail to see how Google and Facebook will genuinely adhere to the requirements laid out in the initiative. The only result I can see is that it gets watered down, that it remains a lip service and, worst case, the whole thing loses credibility.”

A little robot round-up

I don’t know about you, but I find things to do with AI, robots and automation quite confusing. Will the impact of these technologies really be as widespread as envisaged by the futurists? And what will the consequences and implications really be? Is humanity at stake, even?

Here are a number of articles I’m working through, that will hopefully shed some light on it all. Let’s start with the robot uprising.

Social robots will become family members in the homes of the future
With fewer stay-at-home parents, social robots can serve as personalized practice partners to help with homework and reinforce what children have learned that day in school. Far beyond helping you find recipes and ordering groceries, they can be your personal sous-chef or even help you learn to cook. They can also act as personal health coaches to supplement nutrition and wellness programs recommended by doctors and specialists for an increasingly health-conscious population. As the number of aging-in-place boomers soars, social robots can provide a sense of companionship for retirees while also connecting seniors to the world and to their loved ones, as well as sending doctor-appointment and medication reminders.

Robots! A fantastic catalog of new species
IEEE Spectrum editor Erico Guizzo and colleagues have blown out their original Robots app into a fantastic catalog of 200 of today’s fantastic species of robots. They’re cleverly organized into fun categories like “Robots You Can Hug,” “Robots That Can Dance,” “Space Robots,” and “Factory Workers.” If they keep it updated, it’ll be very helpful for the robot uprising.

We need to have a very serious chat about Pepper’s pointless parliamentary pantomime
Had the Committee summoned a robotic arm, or a burger-flipping frame, they would have wound up with a worse PR stunt but a better idea of the dangers and opportunities of the robot revolution.

[Image: robot-round-up-1]

Robots can look very cute, but it’s the implications of those faceless boxes housing the AIs that will be more important, I think.

Computer says no: why making AIs fair, accountable and transparent is crucial
Most AIs are made by private companies who do not let outsiders see how they work. Moreover, many AIs employ such complex neural networks that even their designers cannot explain how they arrive at answers. The decisions are delivered from a “black box” and must essentially be taken on trust. That may not matter if the AI is recommending the next series of Game of Thrones. But the stakes are higher if the AI is driving a car, diagnosing illness, or holding sway over a person’s job or prison sentence.

Last month, the AI Now Institute at New York University, which researches the social impact of AI, urged public agencies responsible for criminal justice, healthcare, welfare and education, to ban black box AIs because their decisions cannot be explained.

Artificial intelligence has got some explaining to do
Most simply put, Explainable AI (also referred to as XAI) are artificial intelligence systems whose actions humans can understand. Historically, the most common approach to AI is the “black box” line of thinking: human input goes in, AI-made action comes out, and what happens in between can be studied, but never totally or accurately explained. Explainable AI might not be necessary for, say, understanding why Netflix or Amazon recommended that movie or that desk organizer for you (personally interesting, sure, but not necessary). But when it comes to deciphering answers about AI in fields like health care, personal finances, or the justice system, it becomes more important to understand an algorithm’s actions.

The only way is ethics.

Why teach drone pilots about ethics when it’s robots that will kill us?
For the most part, armies are keen to maintain that there will always be humans in charge when lethal decisions are taken. This is only partly window dressing. One automated system is dangerous only to its enemies; two are dangerous to each other, and out of anyone’s control. We have seen what happens on stock markets when automatic trading programs fall into a destructive pattern and cause “flash crashes”. In October 2016 the pound lost 6% of its value, with blame in part put down to algorithmic trading. If two hi-tech armies were in a standoff where hair-trigger algorithms faced each other on both sides, the potential for disaster might seem unlimited.

Nuclear war has been averted on at least one occasion by a heroic Russian officer overriding the judgment of computers that there was an incoming missile attack from the US. But he had 25 minutes to decide. Battlefield time is measured in seconds.

The Pentagon’s plans to program soldiers’ brains
DARPA has dreamed for decades of merging human beings and machines. Some years ago, when the prospect of mind-controlled weapons became a public-relations liability for the agency, officials resorted to characteristic ingenuity. They recast the stated purpose of their neurotechnology research to focus ostensibly on the narrow goal of healing injury and curing illness. The work wasn’t about weaponry or warfare, agency officials claimed. It was about therapy and health care. Who could object?

Let’s hope nothing goes wrong.

Machine learning confronts the elephant in the room
Then the researchers introduced something incongruous into the scene: an image of an elephant in semiprofile. The neural network started getting its pixels crossed. In some trials, the elephant led the neural network to misidentify the chair as a couch. In others, the system overlooked objects, like a row of books, that it had correctly detected in earlier trials. These errors occurred even when the elephant was far from the mistaken objects.

Snafus like those extrapolate in unsettling ways to autonomous driving. A computer can’t drive a car if it might go blind to a pedestrian just because a second earlier it passed a turkey on the side of the road.

So yes, things can go wrong. But AI and automation will all be good for jobs, right?

Artificial intelligence to create 58 million new jobs by 2022, says report
Machines and algorithms in the workplace are expected to create 133 million new roles, but cause 75 million jobs to be displaced by 2022 according to a new report from the World Economic Forum (WEF) called “The Future of Jobs 2018.” This means that the growth of artificial intelligence could create 58 million net new jobs in the next few years.

With this net positive job growth, there is expected to be a major shift in quality, location and permanency for the new roles. And companies are expected to expand the use of contractors doing specialized work and utilize remote staffing.

AI may not be bad news for workers
Some jobs could be made a lot easier by AI. One example is lorry-driving. Some fear that truck drivers will be replaced by autonomous vehicles. But manoeuvring a lorry around busy streets is far harder than driving down the motorway. So the driver could switch into automatic mode (and get some rest) when outside the big cities, and take over the wheel once again when nearing the destination. The obvious analogy is with jetliners, where the pilots handle take-off and landing but turn on the computer to cruise at 35,000 feet. Using AI may prevent tired drivers from causing accidents.

Ok, yes, I can see that. But then it goes on…

And the report argues that AI can produce better decision-making by offering a contrarian opinion so that teams can avoid the danger of groupthink. A program could analyse e-mails and meeting transcripts and issue alerts when potentially false assumptions are being made (rather like the boy in the Hans Christian Andersen tale who notices that the Emperor has no clothes). Or it can warn a team when it is getting distracted from the task in hand.

Really? That’s quite a jump from automated driving. Having a system read everything a company’s employees write to look for poor assumptions? I cannot see that happening. More over-selling.

But what else could AI do?

AI lie detector tests to get trial run at EU airports
Fliers will be asked a series of travel-related questions by a virtual border guard avatar, and artificial intelligence will monitor their faces to assess whether they are lying. The avatar will become “more skeptical” and change its tone of voice if it believes a person has lied, before referring suspect passengers to a human guard and allowing those believed to be honest to pass through, said Keeley Crockett of Manchester Metropolitan University in England, who was involved in the project.

AI anchors: Xinhua debuts digital doppelgangers for their journalists
The AI-powered news anchors, according to the outlet, will improve television reporting and be used to generate videos, especially for breaking news on its digital and social media platforms.

“I’m an English artificial intelligence anchor,” Zhang’s digital doppelganger said in introduction during his first news telecast, blinking his eyes and raising his eyebrows throughout the video. “This is my very first day in Xinhua News Agency … I will work tirelessly to keep you informed, as texts will be typed into my system uninterrupted.”

This is what the world’s first AI newsreader looks and sounds like [via the Guardian]

But let’s not get too carried away here. We’re talking about people’s jobs, their livelihoods.

The automation charade
Since the dawn of market society, owners and bosses have revelled in telling workers they were replaceable. Robots lend this centuries-old dynamic a troubling new twist: employers threaten employees with the specter of machine competition, shirking responsibility for their avaricious disposition through opportunistic appeals to tech determinism. A “jobless future” is inevitable, we are told, an irresistible outgrowth of innovation, the livelihood-devouring price of progress. …

Though automation is presented as a neutral process, the straightforward consequence of technological progress, one needn’t look that closely to see that this is hardly the case. Automation is both a reality and an ideology, and thus also a weapon wielded against poor and working people who have the audacity to demand better treatment, or just the right to subsist.

That article goes on to introduce a new term for this overselling of workplace automation and the casualisation of low-skilled service work: “fauxtomation.”

But maybe we should all loosen up, and stop being so serious.

Love in the time of AI: meet the people falling for scripted robots
“Obviously as the technology gets better and the interactivity increases we’re going to be able to form closer connections to characters in games,” Reed said. “They will operate with greater flexibility and ultimately seem more lifelike and easier to connect to.”

But for Wild Rose and many of the other dating sims enthusiasts I spoke to, making the characters more “human” wasn’t particularly exciting or even desired. Saeran didn’t need to be real for her to care about him.

The HAL 9000 Christmas ornament
Fans of “2001: A Space Odyssey” will want to bring home this special Christmas ornament that celebrates 50 years of the science-fiction masterpiece. Press the button to see the ornament light up as HAL says several memorable phrases.

Can Tim Berners-Lee fix what he started?

We’re growing increasingly disillusioned with the web, but the guy behind it has a plan — a “Contract for the Web” that he hopes will set out our rights and freedoms on the internet.

Tim Berners-Lee launches campaign to save the web from abuse
“Humanity connected by technology on the web is functioning in a dystopian way. We have online abuse, prejudice, bias, polarisation, fake news, there are lots of ways in which it is broken. This is a contract to make the web one which serves humanity, science, knowledge and democracy,” he said.

For it to work, the big tech companies need to be behind it. No problem, right?

One of the early signatories to the contract, Facebook, has been fined by the Information Commissioner’s Office for its part in the Cambridge Analytica scandal; has faced threats from the EU for taking too long to remove extremist content; and has been sued for allowing advertisers to target housing ads only at white people. The firm, which has appointed the former deputy prime minister, Nick Clegg, to lead its PR operation, did not respond to a request for comment.

Another early signatory, Google, is reportedly developing a censored version of its search engine for the Chinese market. “If you sign up to the principles, you can’t do censorship,” said Berners-Lee. “Will this be enough to make search engines push back? Will it be persuasive enough for the Chinese government to be more open? I can’t predict whether that will happen,” he said. Google did not respond to a request for comment.

Hmm. I can’t see this making much difference unless Facebook and Google agree to, what, make less money?

“I was devastated”: Tim Berners-Lee, the man who created the World Wide Web, has some regrets
“We demonstrated that the Web had failed instead of served humanity, as it was supposed to have done, and failed in many places,” he told me. The increasing centralization of the Web, he says, has “ended up producing—with no deliberate action of the people who designed the platform—a large-scale emergent phenomenon which is anti-human.”

“Tim and Vint made the system so that there could be many players that didn’t have an advantage over each other.” Berners-Lee, too, remembers the quixotism of the era. “The spirit there was very decentralized. The individual was incredibly empowered. It was all based on there being no central authority that you had to go to to ask permission,” he said. “That feeling of individual control, that empowerment, is something we’ve lost.”

That’s it in a nutshell, for me. The web just isn’t the same as it was at the beginning.

The power of the Web wasn’t taken or stolen. We, collectively, by the billions, gave it away with every signed user agreement and intimate moment shared with technology. Facebook, Google, and Amazon now monopolize almost everything that happens online, from what we buy to the news we read to who we like. Along with a handful of powerful government agencies, they are able to monitor, manipulate, and spy in once unimaginable ways.

Tim Wu is a law professor and ‘influential tech thinker’. Here’s his take on what went wrong.

Tim Wu: ‘The internet is like the classic story of the party that went sour’
Looking back at the 00s, the great mistake of the web’s idealists was a near-total failure to create institutions designed to preserve that which was good about the web (its openness, its room for a diversity of voices and its earnest amateurism), and to ward off that which was bad (the trolling, the clickbait, the demands of excessive and intrusive advertising, the security breaches). There was too much faith that everything would take care of itself – that “netizens” were different, that the culture of the web was intrinsically better. Unfortunately, that excessive faith in web culture left a void, one that became filled by the lowest forms of human conduct and the basest norms of commerce. It really was just like the classic story of the party that went sour.

The Guardian certainly likes to report on versions of this story, but only in November and March, it seems.

Tech giants may have to be broken up, says Tim Berners-Lee
Web inventor says Silicon Valley firms have too much clout and ‘optimism has cracked’ [November 2018]

Tim Berners-Lee: we must regulate tech firms to prevent ‘weaponised’ web
The inventor of the world wide web warns over concentration of power among a few companies ‘controlling which ideas are shared’ [March 2018]

Tim Berners-Lee on the future of the web: ‘The system is failing’
The inventor of the world wide web remains an optimist but sees a ‘nasty wind’ blowing amid concerns over advertising, net neutrality and fake news [November 2017]

Tim Berners-Lee: I invented the web. Here are three things we need to change to save it
It has taken all of us to build the web we have, and now it is up to all of us to build the web we want – for everyone [March 2017]

Are we doing the right thing?

As a parent of teenagers, I worry about this topic a lot.

What do we actually know about the risks of screen time and digital media?
The lumping of everything digital into a monolith is a framing that makes Oxford Internet Institute psychologist Andrew Przybylski groan. “We don’t talk about food time,” he points out. “We don’t talk about paper time. But we do talk about screen time.” […]

The new series of papers includes a look at childhood screen use and ADHD, the effects of media multitasking on attention, and the link between violent video games and aggression. The separate papers are a good reminder that these are really separate issues; even if screen time ends up being problematic in one area, it doesn’t mean it can’t have a positive effect in another.

Nothing’s ever straightforward, is it? Take its conclusion, for instance.

So, is digital media a concern for developing minds? There’s no simple answer, in part because the uses of media are too varied for the question to really be coherent. And, while some research results seem robust, the catalogue of open questions is dizzying. Answering some of those questions needs not just a leap in research quality, but, argues Przybylski, a reframing of the question away from the way we think about tobacco and toward the way we think about information: “What are the most effective strategies parents can employ to empower young people to be proactive and critical users of technology?”

Others have firmly made up their minds, however.

A dark consensus about screens and kids begins to emerge in Silicon Valley
For longtime tech leaders, watching how the tools they built affect their children has felt like a reckoning on their life and work. Among those is Chris Anderson, the former editor of Wired and now the chief executive of a robotics and drone company. He is also the founder of GeekDad.com. “On the scale between candy and crack cocaine, it’s closer to crack cocaine,” Mr. Anderson said of screens.

Technologists building these products and writers observing the tech revolution were naïve, he said. “We thought we could control it,” Mr. Anderson said. “And this is beyond our power to control. This is going straight to the pleasure centers of the developing brain. This is beyond our capacity as regular parents to understand.”

IRC is 30 years old

I was never really nerdy enough to properly join in with this at the time, but it’s an interesting stroll down memory lane nevertheless.

On its 30th anniversary, IRC evokes memories of the internet’s early days
I used IRC in the early 1990s, when there were all kinds of fun things to do. There was a server with a bot that played Boggle. I was the know-it-all music snob who got kicked out of a chat channel someone set up at Woodstock ’94. I created keyboard macros that spewed out ASCII art. I skipped Mike Tyson’s pay-per-view boxing match in 2006 to watch someone describe it on IRC.

<jon12345> lewis connects again
<jon12345> arg
<jon12345> on the ropes
<CaZtRo> HES GOIN DOWN
<CaZtRo> tyson is DOWN
<DaNNe_> no!
<CaZtRo> DOWN DOWN DOWN
<DaNNe_> why ..

Internet Relay Chat turns 30—and we remember how it changed our lives
There was a moment of silence, and then something odd happened. The channel went blank. The list of users disappeared, and NetCruiser politely played the Windows alert chime through the speakers. At the bottom of the IRC window, a new message now stood alone:

“You have been kicked from channel #descent for the following reason: fuck off newbie”

I guess the Internet of 1995 wasn’t that different from the Internet of 2018.

Is Instagram doing enough to stop bullying?

Instagram are rolling out some new mechanisms to reduce bullying, including comment filters and a new camera effect to promote kindness.

New tools to limit bullying and spread kindness on Instagram
While the majority of photos shared on Instagram are positive and bring people joy, occasionally a photo is shared that is unkind or unwelcome. We are now using machine learning technology to proactively detect bullying in photos and their captions and send them to our Community Operations team to review.

But is it enough? As a parent of teenagers (or for anyone really), this article from The Atlantic makes for depressing reading.

Teens are being bullied ‘constantly’ on Instagram
Teenagers have always been cruel to one another. But Instagram provides a uniquely powerful set of tools to do so. The velocity and size of the distribution mechanism allow rude comments or harassing images to go viral within hours. Like Twitter, Instagram makes it easy to set up new, anonymous profiles, which can be used specifically for trolling. Most importantly, many interactions on the app are hidden from the watchful eyes of parents and teachers, many of whom don’t understand the platform’s intricacies. […]

Sometimes teens, many of whom run several Instagram accounts, will take an old page with a high amount of followers and transform it into a hate page to turn it against someone they don’t like. “One girl took a former meme page that was over 15,000 followers, took screencaps from my Story, and Photoshopped my nose bigger and posted it, tagging me being like, ‘Hey guys, this is my new account,’” Annie said. “I had to send a formal cease and desist. I went to one of those lawyer websites and just filled it out. Then she did the same thing to my friend.” […]

Aside from hate pages, teens say most bullying takes place over direct message, Instagram Stories, or in the comments section of friends’ photos. “Instagram won’t delete a person’s account unless it’s clear bullying on their main feed,” said Hadley, a 14-year-old, “and, like, no one is going to do that. It’s over DM and in comment sections.”