
A little robot round-up

I don’t know about you, but I find things to do with AI, robots and automation quite confusing. Will the impact of these technologies really be as widespread as envisaged by the futurists? And what will the consequences and implications really be? Is humanity at stake, even?

Here are a number of articles I’m working through that will hopefully shed some light on it all. Let’s start with the robot uprising.

Social robots will become family members in the homes of the future
With fewer stay-at-home parents, social robots can serve as personalized practice partners to help with homework and reinforce what children have learned that day in school. Far beyond helping you find recipes and ordering groceries, they can be your personal sous-chef or even help you learn to cook. They can also act as personal health coaches to supplement nutrition and wellness programs recommended by doctors and specialists for an increasingly health-conscious population. As the number of aging-in-place boomers soars, social robots can provide a sense of companionship for retirees while also connecting seniors to the world and to their loved ones, as well as sending doctor-appointment and medication reminders.

Robots! A fantastic catalog of new species
IEEE Spectrum editor Erico Guizzo and colleagues have blown out their original Robots app into a fantastic catalog of 200 of today’s fantastic species of robots. They’re cleverly organized into fun categories like “Robots You Can Hug,” “Robots That Can Dance,” “Space Robots,” and “Factory Workers.” If they keep it updated, it’ll be very helpful for the robot uprising.

We need to have a very serious chat about Pepper’s pointless parliamentary pantomime
Had the Committee summoned a robotic arm, or a burger-flipping frame, they would have wound up with a worse PR stunt but a better idea of the dangers and opportunities of the robot revolution.

[Image: robot-round-up-1]

Robots can look very cute, but it’s the implications of those faceless boxes housing the AIs that will be more important, I think.

Computer says no: why making AIs fair, accountable and transparent is crucial
Most AIs are made by private companies who do not let outsiders see how they work. Moreover, many AIs employ such complex neural networks that even their designers cannot explain how they arrive at answers. The decisions are delivered from a “black box” and must essentially be taken on trust. That may not matter if the AI is recommending the next series of Game of Thrones. But the stakes are higher if the AI is driving a car, diagnosing illness, or holding sway over a person’s job or prison sentence.

Last month, the AI Now Institute at New York University, which researches the social impact of AI, urged public agencies responsible for criminal justice, healthcare, welfare and education, to ban black box AIs because their decisions cannot be explained.

Artificial intelligence has got some explaining to do
Most simply put, Explainable AI (also referred to as XAI) are artificial intelligence systems whose actions humans can understand. Historically, the most common approach to AI is the “black box” line of thinking: human input goes in, AI-made action comes out, and what happens in between can be studied, but never totally or accurately explained. Explainable AI might not be necessary for, say, understanding why Netflix or Amazon recommended that movie or that desk organizer for you (personally interesting, sure, but not necessary). But when it comes to deciphering answers about AI in fields like health care, personal finances, or the justice system, it becomes more important to understand an algorithm’s actions.
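Many explainability techniques boil down to exactly that “study what happens in between” idea: probe the black box from outside by perturbing one input at a time and watching how often the decision changes. A minimal sketch of that probing approach (the loan-style model and feature names here are hypothetical, purely for illustration):

```python
import random

# A hypothetical "black box": we can query it but not inspect its logic.
def black_box(income, age, postcode_risk):
    # Hidden rule the caller can't see: only income and risk matter.
    return 1 if income > 30_000 and postcode_risk < 0.5 else 0

random.seed(0)
applicants = [
    (random.uniform(10_000, 60_000), random.randint(18, 80), random.random())
    for _ in range(500)
]

def sensitivity(feature_index):
    """Shuffle one feature across applicants and count decision flips.
    This is the core idea behind permutation-style explanations."""
    flips = 0
    shuffled = [a[feature_index] for a in applicants]
    random.shuffle(shuffled)
    for app, new_val in zip(applicants, shuffled):
        probe = list(app)
        probe[feature_index] = new_val
        if black_box(*probe) != black_box(*app):
            flips += 1
    return flips / len(applicants)

for name, idx in [("income", 0), ("age", 1), ("postcode_risk", 2)]:
    print(f"{name}: {sensitivity(idx):.2f}")
```

Run this and age comes out with zero sensitivity, exposing that the hidden rule ignores it entirely; no access to the model’s internals required, which is why this style of probing is popular for auditing opaque systems.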

The only way is ethics.

Why teach drone pilots about ethics when it’s robots that will kill us?
For the most part, armies are keen to maintain that there will always be humans in charge when lethal decisions are taken. This is only partly window dressing. One automated system is dangerous only to its enemies; two are dangerous to each other, and out of anyone’s control. We have seen what happens on stock markets when automatic trading programs fall into a destructive pattern and cause “flash crashes”. In October 2016 the pound lost 6% of its value, with blame in part put down to algorithmic trading. If two hi-tech armies were in a standoff where hair-trigger algorithms faced each other on both sides, the potential for disaster might seem unlimited.

Nuclear war has been averted on at least one occasion by a heroic Russian officer overriding the judgment of computers that there was an incoming missile attack from the US. But he had 25 minutes to decide. Battlefield time is measured in seconds.

The Pentagon’s plans to program soldiers’ brains
DARPA has dreamed for decades of merging human beings and machines. Some years ago, when the prospect of mind-controlled weapons became a public-relations liability for the agency, officials resorted to characteristic ingenuity. They recast the stated purpose of their neurotechnology research to focus ostensibly on the narrow goal of healing injury and curing illness. The work wasn’t about weaponry or warfare, agency officials claimed. It was about therapy and health care. Who could object?

Let’s hope nothing goes wrong.

Machine learning confronts the elephant in the room
Then the researchers introduced something incongruous into the scene: an image of an elephant in semiprofile. The neural network started getting its pixels crossed. In some trials, the elephant led the neural network to misidentify the chair as a couch. In others, the system overlooked objects, like a row of books, that it had correctly detected in earlier trials. These errors occurred even when the elephant was far from the mistaken objects.

Snafus like those extrapolate in unsettling ways to autonomous driving. A computer can’t drive a car if it might go blind to a pedestrian just because a second earlier it passed a turkey on the side of the road.

So yes, things can go wrong. But AI and automation will all be good for jobs, right?

Artificial intelligence to create 58 million new jobs by 2022, says report
Machines and algorithms in the workplace are expected to create 133 million new roles, but cause 75 million jobs to be displaced by 2022, according to a new report from the World Economic Forum (WEF) called “The Future of Jobs 2018.” This means that the growth of artificial intelligence could create 58 million net new jobs in the next few years.

With this net positive job growth, there is expected to be a major shift in quality, location and permanency for the new roles. And companies are expected to expand the use of contractors doing specialized work and utilize remote staffing.

[Image: robot-round-up-2]

AI may not be bad news for workers
Some jobs could be made a lot easier by AI. One example is lorry-driving. Some fear that truck drivers will be replaced by autonomous vehicles. But manoeuvring a lorry around busy streets is far harder than driving down the motorway. So the driver could switch into automatic mode (and get some rest) when outside the big cities, and take over the wheel once again when nearing the destination. The obvious analogy is with jetliners, where the pilots handle take-off and landing but turn on the computer to cruise at 35,000 feet. Using AI may prevent tired drivers from causing accidents.

Ok, yes, I can see that. But then it goes on…

And the report argues that AI can produce better decision-making by offering a contrarian opinion so that teams can avoid the danger of groupthink. A program could analyse e-mails and meeting transcripts and issue alerts when potentially false assumptions are being made (rather like the boy in the Hans Christian Andersen tale who notices that the Emperor has no clothes). Or it can warn a team when it is getting distracted from the task in hand.

Really? That’s quite a jump from automated driving. Having a system read everything a company’s employees write to look for poor assumptions? I cannot see that happening. More over-selling.

But what else could AI do?

AI lie detector tests to get trial run at EU airports
Fliers will be asked a series of travel-related questions by a virtual border guard avatar, and artificial intelligence will monitor their faces to assess whether they are lying. The avatar will become “more skeptical” and change its tone of voice if it believes a person has lied, before referring suspect passengers to a human guard and allowing those believed to be honest to pass through, said Keeley Crockett of Manchester Metropolitan University in England, who was involved in the project.

AI anchors: Xinhua debuts digital doppelgangers for their journalists
The AI-powered news anchors, according to the outlet, will improve television reporting and be used to generate videos, especially for breaking news on its digital and social media platforms.

“I’m an English artificial intelligence anchor,” Zhang’s digital doppelganger said in introduction during his first news telecast, blinking his eyes and raising his eyebrows throughout the video. “This is my very first day in Xinhua News Agency … I will work tirelessly to keep you informed, as texts will be typed into my system uninterrupted.”


This is what the world’s first AI newsreader looks and sounds like [via the Guardian]

But let’s not get too carried away here. We’re talking about people’s jobs, their livelihoods.

The automation charade
Since the dawn of market society, owners and bosses have revelled in telling workers they were replaceable. Robots lend this centuries-old dynamic a troubling new twist: employers threaten employees with the specter of machine competition, shirking responsibility for their avaricious disposition through opportunistic appeals to tech determinism. A “jobless future” is inevitable, we are told, an irresistible outgrowth of innovation, the livelihood-devouring price of progress. …

Though automation is presented as a neutral process, the straightforward consequence of technological progress, one needn’t look that closely to see that this is hardly the case. Automation is both a reality and an ideology, and thus also a weapon wielded against poor and working people who have the audacity to demand better treatment, or just the right to subsist.

That article goes on to introduce a new term for this overselling of workplace automation and the casualisation of low-skilled service work: “fauxtomation.”

[Image: robot-round-up-3]

But maybe we should all loosen up, and stop being so serious.

Love in the time of AI: meet the people falling for scripted robots
“Obviously as the technology gets better and the interactivity increases we’re going to be able to form closer connections to characters in games,” Reed said. “They will operate with greater flexibility and ultimately seem more lifelike and easier to connect to.”

But for Wild Rose and many of the other dating sims enthusiasts I spoke to, making the characters more “human” wasn’t particularly exciting or even desired. Saeran didn’t need to be real for her to care about him.

The HAL 9000 Christmas ornament
Fans of “2001: A Space Odyssey” will want to bring home this special Christmas ornament that celebrates 50 years of the science-fiction masterpiece. Press the button to see the ornament light up as HAL says several memorable phrases.

[Image: robot-round-up-5]

AI to the rescue

In 2016 the RNIB announced a project between the NHS and DeepMind, Google’s artificial intelligence company.

Artificial intelligence to look for early signs of eye conditions humans might miss
With the number of people affected by sight loss in the UK predicted to double by 2050, Moorfields Eye Hospital NHS Foundation Trust and DeepMind Health have joined forces to explore how new technologies can help medical research into eye diseases.

This wasn’t the only collaboration with the NHS that Google was involved in. There was another project, to help staff monitor patients with kidney disease, that had people concerned about the amount of medical information being handed over.

Revealed: Google AI has access to huge haul of NHS patient data
Google says that since there is no separate dataset for people with kidney conditions, it needs access to all of the data in order to run Streams effectively. In a statement, the Royal Free NHS Trust says that it “provides DeepMind with NHS patient data in accordance with strict information governance rules and for the purpose of direct clinical care only.”

Still, some are likely to be concerned by the amount of information being made available to Google. It includes logs of day-to-day hospital activity, such as records of the location and status of patients – as well as who visits them and when. The hospitals will also share the results of certain pathology and radiology tests.

The Google-owned company tried to reassure us that everything was being done appropriately, that all those medical records would be safe with them.

DeepMind hits back at criticism of its NHS data-sharing deal
DeepMind co-founder Mustafa Suleyman has said negative headlines surrounding his company’s data-sharing deal with the NHS are being “driven by a group with a particular view to peddle”. […]

All the data shared with DeepMind will be encrypted and parent company Google will not have access to it. Suleyman said the company was holding itself to “an unprecedented level of oversight”.

That didn’t seem to cut it though.

DeepMind’s data deal with the NHS broke privacy law
“The Royal Free did not have a valid basis for satisfying the common law duty of confidence and therefore the processing of that data breached that duty,” the ICO said in its letter to the Royal Free NHS Trust. “In this light, the processing was not lawful under the Act.” […]

“The Commissioner is not persuaded that it was necessary and proportionate to process 1.6 million partial patient records in order to test the clinical safety of the application. The processing of these records was, in the Commissioner’s view, excessive,” the ICO said.

And now here we are, some years later, and that eye project is a big hit.

Artificial intelligence equal to experts in detecting eye diseases
The breakthrough research, published online by Nature Medicine, describes how machine-learning technology has been successfully trained on thousands of historic de-personalised eye scans to identify features of eye disease and recommend how patients should be referred for care.

Researchers hope the technology could one day transform the way professionals carry out eye tests, allowing them to spot conditions earlier and prioritise patients with the most serious eye diseases before irreversible damage sets in.

That’s from UCL, one of the project’s partners. I like the use of the phrase ‘historic de-personalised eye scans’. And it doesn’t mention Google once.

Other reports also now seem to be pushing the ‘AI will rescue us’ angle, rather than the previous ‘Google will misuse our data’ line.

DeepMind AI matches health experts at spotting eye diseases
DeepMind’s ultimate aim is to develop and implement a system that can assist the UK’s National Health Service with its ever-growing workload. Accurate AI judgements would lead to faster diagnoses and, in theory, treatment that could save patients’ vision.

Artificial intelligence ‘did not miss a single urgent case’
He told the BBC: “I think this will make most eye specialists gasp because we have shown this algorithm is as good as the world’s leading experts in interpreting these scans.” […]

He said: “Every eye doctor has seen patients go blind due to delays in referral; AI should help us to flag those urgent cases and get them treated early.”

And it seems AI can help with the really tricky problems too.

This robot uses AI to find Waldo, thereby ruining Where’s Waldo
To me, this is like the equivalent of cheating on your math homework by looking for the answers at the back of your textbook. Or worse, like getting a hand-me-down copy of Where’s Waldo and when you open the book, you find that your older cousin has already circled the Waldos in red marker. It’s about the journey, not the destination — the process of methodically scanning pages with your eyes is entirely lost! But of course, no one is actually going to use this robot to take the fun out of Where’s Waldo, it’s just a demonstration of what AutoML can do.

There’s Waldo is a robot that finds Waldo

The fabulous future of work awaits

Following on from that article about what it might be like to work until we’re 100, here’s another example of over-optimistic, blue-sky, work-based astrology, this time from Liselotte Lyngsø, a futurist from the Copenhagen-based consultancy Future Navigator.

This is what work will look like in 2100
Human potential, according to Lyngsø, is not best cultivated in today’s workplace structure, and many of the changes she predicts revolve around the ongoing effort to maximize the abilities of individuals. To that end, many of today’s workplace structures, such as the 9-to-5 workday, traditional offices, rigid hierarchies, and the very concept of retirement will change dramatically.

“I don’t think we’ll have work hours like we used to. Likewise I think we’ll replace retirement with breaks where we reorient and retrain, where the borders [of work] are blurred,” she says. “It’s also about creating a sustainable lifestyle so you don’t burn out, and you can keep working for longer.”

Oh great, thanks.

Can’t stop, won’t stop

I’ve mentioned before that, when it comes to our time here, we don’t get long. But perhaps our lives — and our working lives, especially — will be longer than we think.

What if we have to work until we’re 100?
Retirement is becoming more and more expensive – and future generations may have to abandon the idea altogether. So what kinds of jobs will we do when we’re old and grey? Will we be well enough to work? And will anyone want to employ us?

You think your work life balance is tough?

You start off expecting to be amused by the ridiculously overburdened bike, but end up saddened by that overburdened mum.

A migrant worker’s daily circus-like balancing act is a surreal reflection of China’s economy
As China shifted from a small-farm economy to an industrial powerhouse over the past generation, there’s been an enormous demographic shift, with some 282 million migrant labourers splitting their time between cities and their rural homes. For Wo Guo Jie, who makes her living in Shanghai collecting styrofoam boxes from markets and reselling them to a seafood wholesale market, this transformation has meant spending as many as three years at a time away from her family farm, where her children sometimes barely recognise her when she returns.

“Integrating” and “rationalising”

Pearson, the company behind Edexcel and BTEC, amongst others, are in the news today.

Pearson to cut 4,000 jobs after second profit warning in three months
“Faced with these challenges, we are today announcing decisive plans to further integrate the business and reduce the cost base, rationalise our product development and focus on fewer, bigger opportunities.”

Interesting language there, and a slight clash between it and the headline. Another article runs along similar lines:

Pearson to cut 10% of workforce as it issues profit warning
The company said Thursday it expects to report adjusted operating profit in 2015 of approximately £720 million and adjusted earnings per share of between 69 pence and 70 pence. It previously forecast EPS to come in at the lower end of a range of 70 pence to 75 pence. In October, the company also cut its forecasts.

Data Manager priorities?

A half day in the Life of a Data Manager
“What I wanted to ask was if you were in my situation, where should I concentrate my precious few spare hours here and there in order to get to grips with SIMs, what more it can do for me/the school, and what more my role as a Data Manager should include in your experience?”

The responses so far seem to boil down to moving away from us analysing the data to encouraging teachers and leaders to analyse the data for themselves, by providing better tools and training. Teach a guy to fish, and all that.