A Machine Has No Friends
How soon before an AI robot takes your job?

Text: Jack Cameron Stanton

Let’s say you program a thinking machine to plant sunflowers all over the Earth. Using artificial intelligence, the machine sets out to cover the world, eventually realising its biggest obstacles are humanity and concrete. To solve this problem, the thinking machine invests its intelligence in eradicating humanity and destroying every trace of our architecture, so that it can plant flowers in peace.

Today’s artificial intelligence machines “literally follow the instructions they’re given,” Professor Toby Walsh tells me. Walsh, known as the rockstar of Australia’s digital revolution, is Scientia Professor of Artificial Intelligence at UNSW and appears regularly on stage and on TV to discuss robotics and the future of AI. When I was arranging the interview, I envisioned we would meet in a room adorned with robots, something halfway between machine and human greeting me at the door. Instead, we catch up at a Starbucks in Bondi Junction.

“Sometimes this [literal] behaviour from thinking machines is an undesirable response,” he says. “Even simple things, such as giving a robot the job of ‘never leaving any customer unsatisfied’. So the robot doesn’t serve anyone. No customer can be unsatisfied if they never exist.”

 

We meet during the 9am peak-hour rush. Professor Walsh sits by the window watching commuters pour into Bondi Junction. Dressed in Birkenstocks, a maroon geometric-patterned shirt, shorts, and a puffer jacket, he notices the way I look at him. “One of the joys of being a researcher,” he remarks. Casual attire aside, Walsh garnered international attention at a 2015 AI conference in Buenos Aires by announcing an open letter that implored all governments to ban ‘lethal autonomous weapons’, otherwise known in the media as ‘killer robots’.

The open letter states, “If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs [AK47s] of tomorrow.”

This letter now has over 20,000 signatures, including those of the late Stephen Hawking, Elon Musk, and Noam Chomsky. When I ask Walsh why he felt compelled to advocate against killer robots, he says, “As a scientist you have a responsibility for what you’re working on and how it’s used.” Although it might sound banal, Walsh has backed that comment with some serious skin in the game.

Foster-Miller TALON SWORDS units equipped with various weaponry.
Photo: US Government [public domain], via Wikimedia Commons.

In AI research, the pioneers are also its doomsayers. Take PayPal and Tesla co-founder Elon Musk, for example. In 2014 he warned an MIT audience that “we should be very careful about artificial intelligence – it’s our biggest existential threat.” In 2015 Musk proved his talk wasn’t cheap by donating $10 million to the Future of Life Institute to fund researchers studying how to keep AI safe. But not everybody is so paranoid. The recently disrupted Mark Zuckerberg stated: “Some people fearmonger about how AI is a huge danger, but that seems far-fetched to me.”

So far our fears appear to be more a part of the ongoing myths of science fiction. Walsh himself was fascinated by thinking machines from a very young age, thanks to writers such as Arthur C Clarke and Isaac Asimov. “Hollywood doesn’t help,” he laughs. “We tend to sleepwalk into the future.” Indeed, Asimov’s masterpiece, the Foundation series, responds to humanity’s future-sleepwalking by inventing psychohistory, a branch of mathematics that accurately predicts the future thousands of years ahead of time. In 1980, Asimov claimed, “I don’t know of any science fiction writer who really attempts to be a prophet. Such authors accomplish their tasks not by being correct in their predictions, necessarily, but merely by hammering home – in story after story – the notion that life is going to be different.”

Today, thinking machines are no longer confined to speculation – tech giants are already selling AI products – and they are set to cause gradual yet profound technological change in our not-so-distant future. The biggest threat AI poses is to jobs and employment. Software company Uber is already investing millions of dollars in AI research to eradicate its largest overhead: paying humans to drive its cars. Walsh remarks upon the violence of this change in his book It’s Alive: Artificial Intelligence from the Logic Piano to Killer Robots, in which he asks: “If machines are capable of doing almost any work humans can do, what will the humans do?”

“There’s already a robot factory in Japan that builds robots,” Walsh says, as a way of explaining the concept of ‘technological singularity’, the hypothesis that accelerated machine learning will revolutionise human society. “It’s a dark factory. There are no lights, no need to waste energy on robots that can see in the dark. It’s a seductive idea, that at some point, if a machine is smart enough, it could redesign itself to be better so it can become smarter. That’s snowballing, a tipping point, that would be, perhaps, a rapid acceleration of machines.”

A replica of the Maschinenmensch (“Machine-Man”), on display at the Robot Hall of Fame at the Carnegie Science Center, Pittsburgh, US. Photo: Jiuguang Wang, Pittsburgh, Pennsylvania, United States [CC BY-SA 2.0 (https://creativecommons.org/licenses/by-sa/2.0)], via Wikimedia Commons.

More imminent is the anticipated job crisis brought on by automation, a crisis that is already beginning to be felt. By 2050, Walsh suggests, the changes will be absolute.

In order to be prepared, Walsh suggests, “We must examine how job automation impacts on a person’s sense of meaning and purpose.” Of course we’ve done this before, during the Industrial Revolution, but Walsh is anxious that we may “experience fifty years of pain” starting right now. He nonetheless argues that most of the roles AI will overtake in the coming years are jobs most people don’t want to do anyway. A 2013 Oxford University study by Frey and Osborne claimed 47 per cent of jobs in the United States are “under threat of automation”. Farmers, lawyers, musicians, writers, politicians, vets, software developers and personal trainers need not worry about losing their jobs anytime soon, with less than a 10 per cent likelihood of being overrun by thinking machines. That said, a large demographic of people will be affected by automation. “We have to be really worried about that,” Walsh says. Quoting the Oxford report again, he says, “There’s over a 90 per cent likelihood of fully automated machines replacing jobs such as drivers, restaurant cooks, umpires, guards, quarry workers, and receptionists.” But he stresses that “50 per cent of jobs at risk doesn’t equate to 50 per cent unemployment. Technology creates new jobs as it destroys old ones.”

To survive this revolution, Walsh believes, people must accept these changes as inevitable and “wonder how to reskill themselves to find value in tomorrow’s society”. So pressing is the issue of AI’s future that Walsh has written another book, 2062: The World that AI Made, which is ostensibly a response to Yuval Noah Harari’s bestseller Homo Deus: A Brief History of Tomorrow.

“I started writing 2062 the day I put down Homo Deus,” he says. In that book, Harari writes: “Having secured unprecedented levels of prosperity, health and harmony, and given our past record and current values, humanity’s next targets are likely to be immortality, happiness and divinity… we will now aim to upgrade humans into gods.” This prophecy of cybernetics becoming the next revolution in human society is simultaneously exhilarating and frightening. Walsh is plainly sceptical.

“We’re not becoming gods anytime soon,” he assures me. “Almost no one in science is working on artificial general intelligence. Today’s machines are idiot savants: they can only do one thing well.”

There has been a surge of interest in the field among entrepreneurs and big companies, and Walsh cites some recent examples off the top of his head. Right now the computing giant IBM is betting one billion dollars on its cognitive computing platform ‘Watson’, a supercomputer that interprets natural human speech to provide answers to questions. Preliminary trials in healthcare are investigating the effectiveness of Watson’s “natural language, hypothesis generation and evidence-based learning capabilities” in improving the accuracy of clinical decisions. The official line from IBM is that Watson “informs doctors”, but just as with the rollout of robotic arm-assisted surgery in 2008, machines are proving themselves more precise and safer than humans.

In a venture that couldn’t embody Sydneysider hedonism any better, Sydney Olympic Park has been trialling driverless cars since 2017. Soon people will be driven in these vehicles between venues, with the promise of ‘Hunter Valley wine tours’ earning the trial an extra ten million dollars in investment from the state government.

“Toyota invested one billion dollars in automating. And with all this money flying into AI technology,” Walsh says, “it’s natural for us to see accelerated progress. But apart from the moral imperative to have self-driving cars on our roads, it is not at all obvious that the incumbents, like General Motors, Ford and Toyota, will win this race over newcomers such as Tesla, Apple and Nvidia.”

There have been tragedies involving self-driving cars: the Tesla crash in 2016 and the recent death of an Arizona woman killed by a Volvo running Uber software. But it seems more likely that self-driving cars will save so many lives we are morally obliged to have them. Self-driving cars: 2. Homo sapiens: countless.

Professor Toby Walsh, Scientia Professor of Artificial Intelligence at UNSW.

On 9 May 2018 Google announced its new AI software Google Duplex, which uses a human-sounding voice to speak to people in the real world, booking haircuts or dinner tables without ever admitting it is a machine, even using cute affectations – such as ‘umm’ and ‘mmhm’ – to pretend to be human. Walsh is deeply critical of this deception and has recently proposed a “new law [of robotics] to prevent machines from being mistaken as humans”. It’s called the Turing Red Flag Law and it states that “an autonomous system should be designed so that it is unlikely to be mistaken for anything besides an autonomous system, and should identify itself at the start of an interaction with another agent.”

Not only does this law safeguard against machines impersonating humans, it also protects “vulnerable people, the old and the young, who even before the technology is very sophisticated can be taken in, made to do things we wouldn’t want them to do.”

Scam callers, cyber-criminals, and other tech-savvy crooks will “have a field day with this technology,” Walsh says. A machine can impersonate and deceive far better than any human mime. Say farewell to the Nigerian Prince.

Spike Jonze’s film Her and Alex Garland’s Ex Machina explored the frankly dystopian consequences of people growing emotionally attached to, even falling in love with, thinking machines. To see how modern technology avoids acknowledging its existential status, ask Siri the following questions: ‘Are you a computer?’; ‘Are you human?’; ‘Are you an AI?’. See what she says.

The final pillar of AI that feeds so much technology paranoia today is the idea of ‘super-intelligence’. In his bestseller Superintelligence, futurist Nick Bostrom defines it as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.”

But as Walsh mentioned earlier, this super-intelligence is far from imminent.

 

The Netflix documentary AlphaGo shows us exactly what Walsh means when he says that today’s artificial intelligence can only do one thing well. In March 2016, Google DeepMind’s AI program AlphaGo defeated Lee Sedol, the South Korean professional Go player with eighteen international titles under his belt, over five games of the ancient Chinese board game Go. Sixty million people in China alone tuned in to watch the battle of man versus machine, and it was broadcast across the globe. But AlphaGo is an idiot savant, as Walsh defines it. It has no idea how to plant sunflowers. Yet.

“For humans, of course, that’s a reasonable presumption,” Walsh says of what such creations might be capable of. “If someone can play Go we can assume they’re smart and savvy at other things. But that’s untrue of machines.” It might be the hyper-specificity of AI that ironically offers us sanctuary from the robots taking over, while also being the very thing that threatens to replace nearly half our jobs in the near future.

As we leave Starbucks and wander across the intersection I confess to the laissez-faire Walsh that before we met I was conjuring all types of sci-fi-induced visions of what his workplace must look like. “I imagined you working in a science lab adorned with robots at various points of uncanniness hanging from the ceiling.” He smiles grimly at me and squints a little; the sun is in his eyes. “Oh, but I do have a little robot,” he says. “I built it to play with my daughter.”

 

Professor Toby Walsh’s books, It’s Alive: Artificial Intelligence from the Logic Piano to Killer Robots and 2062: The World that AI Made, are published by Black Inc. He is presenting a STEAM talk at the Powerhouse Museum on Sunday, 2 December 2018.

 
