I don’t know about you, but I find things to do with AI, robots and automation quite confusing. Will the impact of these technologies really be as widespread as envisaged by the futurists? And what will the consequences and implications really be? Is humanity at stake, even?
Here are a number of articles I’m working through that will hopefully shed some light on it all. Let’s start with the robot uprising.
Social robots will become family members in the homes of the future
With fewer stay-at-home parents, social robots can serve as personalized practice partners to help with homework and reinforce what children have learned that day in school. Far beyond helping you find recipes and ordering groceries, they can be your personal sous-chef or even help you learn to cook. They can also act as personal health coaches to supplement nutrition and wellness programs recommended by doctors and specialists for an increasingly health-conscious population. As the number of aging-in-place boomers soars, social robots can provide a sense of companionship for retirees while also connecting seniors to the world and to their loved ones, as well as sending doctor-appointment and medication reminders.

Robots! A fantastic catalog of new species
IEEE Spectrum editor Erico Guizzo and colleagues have blown out their original Robots app into a fantastic catalog of 200 of today’s robot species. They’re cleverly organized into fun categories like “Robots You Can Hug,” “Robots That Can Dance,” “Space Robots,” and “Factory Workers.” If they keep it updated, it’ll be very helpful for the robot uprising.

We need to have a very serious chat about Pepper’s pointless parliamentary pantomime
Had the Committee summoned a robotic arm, or a burger-flipping frame, they would have wound up with a worse PR stunt but a better idea of the dangers and opportunities of the robot revolution.
Robots can look very cute, but it’s the implications of those faceless boxes housing the AIs that will be more important, I think.
Computer says no: why making AIs fair, accountable and transparent is crucial
Most AIs are made by private companies who do not let outsiders see how they work. Moreover, many AIs employ such complex neural networks that even their designers cannot explain how they arrive at answers. The decisions are delivered from a “black box” and must essentially be taken on trust. That may not matter if the AI is recommending the next series of Game of Thrones. But the stakes are higher if the AI is driving a car, diagnosing illness, or holding sway over a person’s job or prison sentence.
Last month, the AI Now Institute at New York University, which researches the social impact of AI, urged public agencies responsible for criminal justice, healthcare, welfare and education to ban black box AIs because their decisions cannot be explained.

Artificial intelligence has got some explaining to do
Most simply put, Explainable AI (also referred to as XAI) refers to artificial intelligence systems whose actions humans can understand. Historically, the most common approach to AI is the “black box” line of thinking: human input goes in, AI-made action comes out, and what happens in between can be studied, but never totally or accurately explained. Explainable AI might not be necessary for, say, understanding why Netflix or Amazon recommended that movie or that desk organizer for you (personally interesting, sure, but not necessary). But when it comes to deciphering answers about AI in fields like health care, personal finances, or the justice system, it becomes more important to understand an algorithm’s actions.
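As a rough illustration of what “explainable” can mean in practice, here’s a minimal sketch of one common tactic: fitting a small, human-readable surrogate model to mimic a black box’s answers. Everything here (the data, the models) is synthetic and illustrative, nothing from the article itself.

```python
# A minimal global-surrogate sketch: train an opaque model, then fit a
# shallow decision tree to imitate it, so the rules can be read by a human.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

# Stand-in for the "black box": accurate, but hard to interpret directly.
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Shallow surrogate trained on the black box's *predictions*, not the labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# The surrogate's rules are something a human can actually audit.
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(5)]))
print("fidelity to the black box:", surrogate.score(X, black_box.predict(X)))
```

The catch, of course, is that the surrogate is an approximation: its “fidelity” score tells you how often it agrees with the black box, not whether the black box was right.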
The only way is ethics.
Why teach drone pilots about ethics when it’s robots that will kill us?
For the most part, armies are keen to maintain that there will always be humans in charge when lethal decisions are taken. This is only partly window dressing. One automated system is dangerous only to its enemies; two are dangerous to each other, and out of anyone’s control. We have seen what happens on stock markets when automatic trading programs fall into a destructive pattern and cause “flash crashes”. In October 2016 the pound lost 6% of its value, with blame in part put down to algorithmic trading. If two hi-tech armies were in a standoff where hair-trigger algorithms faced each other on both sides, the potential for disaster might seem unlimited.
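To see why two hair-trigger systems facing each other are scarier than one, here’s a toy feedback loop in Python. It’s a cartoon, not a model of any real market (or battlefield): two identical momentum-chasing agents each amplify the last price move, and the loop runs away within a handful of steps.

```python
# Two identical momentum-chasing agents, each trading 1.1x the last price
# move; their combined order flow feeds straight back into the price.
price = 100.0
history = [price]
for step in range(50):
    last_move = history[-1] - history[-2] if len(history) > 1 else -0.5
    price += 2 * 1.1 * last_move  # two agents, both amplifying the move
    history.append(price)
    if price < 50:
        print(f"'flash crash' after {step + 1} steps: price = {price:.2f}")
        break
```

One such agent alone would just be jumpy; it’s the pair feeding on each other’s output that makes the system unstable. Back to the article: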
Nuclear war has been averted on at least one occasion by a heroic Russian officer overriding the judgment of computers that there was an incoming missile attack from the US. But he had 25 minutes to decide. Battlefield time is measured in seconds.

The Pentagon’s plans to program soldiers’ brains
DARPA has dreamed for decades of merging human beings and machines. Some years ago, when the prospect of mind-controlled weapons became a public-relations liability for the agency, officials resorted to characteristic ingenuity. They recast the stated purpose of their neurotechnology research to focus ostensibly on the narrow goal of healing injury and curing illness. The work wasn’t about weaponry or warfare, agency officials claimed. It was about therapy and health care. Who could object?
Let’s hope nothing goes wrong.
Machine learning confronts the elephant in the room
Then the researchers introduced something incongruous into the scene: an image of an elephant in semiprofile. The neural network started getting its pixels crossed. In some trials, the elephant led the neural network to misidentify the chair as a couch. In others, the system overlooked objects, like a row of books, that it had correctly detected in earlier trials. These errors occurred even when the elephant was far from the mistaken objects.
Snafus like those extrapolate in unsettling ways to autonomous driving. A computer can’t drive a car if it might go blind to a pedestrian just because a second earlier it passed a turkey on the side of the road.
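For the curious, the experiment is easy to approximate at home. Below is a rough sketch (not the researchers’ actual code or data) using torchvision’s pretrained Faster R-CNN; room.jpg and room_with_elephant.jpg are hypothetical placeholders for a scene photo and the same photo with an elephant pasted in.

```python
# Run a pretrained detector on a scene, then on the same scene with an
# incongruous object pasted in, and compare what it finds.
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

model = fasterrcnn_resnet50_fpn(pretrained=True).eval()

def detect(path, threshold=0.8):
    """Return the set of class labels detected above the score threshold."""
    image = convert_image_dtype(read_image(path), torch.float)
    with torch.no_grad():
        output = model([image])[0]
    keep = output["scores"] > threshold
    return set(output["labels"][keep].tolist())

before = detect("room.jpg")               # the original scene
after = detect("room_with_elephant.jpg")  # same scene, elephant added
print("objects the model lost:", before - after)
print("objects the model gained:", after - before)
```

If the paper’s result holds, the two sets won’t just differ by “elephant”: objects nowhere near the pasted-in intruder can vanish or change class.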
So yes, things can go wrong. But AI and automation will all be good for jobs, right?
Artificial intelligence to create 58 million new jobs by 2022, says report
Machines and algorithms in the workplace are expected to create 133 million new roles, but cause 75 million jobs to be displaced by 2022, according to a new report from the World Economic Forum (WEF) called “The Future of Jobs 2018.” This means that the growth of artificial intelligence could create 58 million net new jobs in the next few years.
With this net positive job growth, there is expected to be a major shift in quality, location and permanency for the new roles. And companies are expected to expand the use of contractors doing specialized work and utilize remote staffing.
AI may not be bad news for workers
Some jobs could be made a lot easier by AI. One example is lorry-driving. Some fear that truck drivers will be replaced by autonomous vehicles. But manoeuvring a lorry around busy streets is far harder than driving down the motorway. So the driver could switch into automatic mode (and get some rest) when outside the big cities, and take over the wheel once again when nearing the destination. The obvious analogy is with jetliners, where the pilots handle take-off and landing but turn on the computer to cruise at 35,000 feet. Using AI may prevent tired drivers from causing accidents.
Ok, yes, I can see that. But then it goes on…
And the report argues that AI can produce better decision-making by offering a contrarian opinion so that teams can avoid the danger of groupthink. A program could analyse e-mails and meeting transcripts and issue alerts when potentially false assumptions are being made (rather like the boy in the Hans Christian Andersen tale who notices that the Emperor has no clothes). Or it can warn a team when it is getting distracted from the task in hand.
Really? That’s quite a jump from automated driving. Having a system read everything a company’s employees write to look for poor assumptions? I cannot see that happening. More over-selling.
But what else could AI do?
AI lie detector tests to get trial run at EU airports
Fliers will be asked a series of travel-related questions by a virtual border guard avatar, and artificial intelligence will monitor their faces to assess whether they are lying. The avatar will become “more skeptical” and change its tone of voice if it believes a person has lied, before referring suspect passengers to a human guard and allowing those believed to be honest to pass through, said Keeley Crockett of Manchester Metropolitan University in England, who was involved in the project.

AI anchors: Xinhua debuts digital doppelgangers for their journalists
The AI-powered news anchors, according to the outlet, will improve television reporting and be used to generate videos, especially for breaking news on its digital and social media platforms.
“I’m an English artificial intelligence anchor,” Zhang’s digital doppelganger said in introduction during his first news telecast, blinking his eyes and raising his eyebrows throughout the video. “This is my very first day in Xinhua News Agency … I will work tirelessly to keep you informed, as texts will be typed into my system uninterrupted.”
This is what the world’s first AI newsreader looks and sounds like [via the Guardian]
But let’s not get too carried away here. We’re talking about people’s jobs, their livelihoods.
The automation charade
Since the dawn of market society, owners and bosses have revelled in telling workers they were replaceable. Robots lend this centuries-old dynamic a troubling new twist: employers threaten employees with the specter of machine competition, shirking responsibility for their avaricious disposition through opportunistic appeals to tech determinism. A “jobless future” is inevitable, we are told, an irresistible outgrowth of innovation, the livelihood-devouring price of progress. […]
Though automation is presented as a neutral process, the straightforward consequence of technological progress, one needn’t look that closely to see that this is hardly the case. Automation is both a reality and an ideology, and thus also a weapon wielded against poor and working people who have the audacity to demand better treatment, or just the right to subsist.
That article goes on to introduce a new term for this overselling of workplace automation and the casualisation of low-skilled service work: “fauxtomation.”
But maybe we should all loosen up, and stop being so serious.
Love in the time of AI: meet the people falling for scripted robots
“Obviously as the technology gets better and the interactivity increases we’re going to be able to form closer connections to characters in games,” Reed said. “They will operate with greater flexibility and ultimately seem more lifelike and easier to connect to.”
But for Wild Rose and many of the other dating sims enthusiasts I spoke to, making the characters more “human” wasn’t particularly exciting or even desired. Saeran didn’t need to be real for her to care about him.

The HAL 9000 Christmas ornament
Fans of “2001: A Space Odyssey” will want to bring home this special Christmas ornament that celebrates 50 years of the science-fiction masterpiece. Press the button to see the ornament light up as HAL says several memorable phrases.