Is AI an existential threat to humanity?

By Professor Anton van den Hengel, FTSE. Director, Centre for Augmented Reasoning, the University of Adelaide.

This article is an extract from Artificial intelligence: your questions answered, a report published in partnership with the Australian Strategic Policy Institute (ASPI).  

There’s a widely held misconception about AI: that machines will be able to think and act like human beings, if not now, then soon. In fact, we’re a long, long way from that eventuality, if indeed we’ll ever reach it, because general intelligence poses a series of very difficult, unsolved problems.

The technology we have today is machine learning, which trains a computer to do one specific thing at a time. While its performance on that task, for example describing an image, can be amazing, it’s still only a one-problem solution.
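To make that concrete, here is a minimal sketch of what today’s machine learning actually looks like, using the scikit-learn library purely for illustration (the dataset and model are my choices, not anything from the report): a model fitted to one narrowly defined task, recognising handwritten digits, and useful for nothing else.

```python
# A minimal illustration of 'narrow' machine learning:
# a model trained for exactly one task (recognising handwritten digits)
# and useless for anything outside that task.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# 8x8 greyscale images of the digits 0-9
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# Fit a simple classifier to this one problem.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

print(f"Digit-recognition accuracy: {model.score(X_test, y_test):.2%}")
# The same model knows nothing about faces, speech or anything else;
# solving a different problem means new data and a new round of training.
```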

AI isn’t what you might think it is and, beyond some clever effects created in movies, it may never be the threat some imagine. Here’s why.

Take the example of a bee. A bee lives and functions in its environment every day—it collects food, is part of a community of bees and has a specialist role within the hive. Take that bee and put it in another environment, far removed from its home, and it will find food, create a nest and eventually find other bees with which to create a colony.

Your mobile phone has something like 100 pieces of machine learning in it, all marketed as ‘AI’, but leave it in a field far away from you and, after a few hours, the battery runs out. The device dies. It can’t do anything by itself, and it can’t sustain itself.

AI, as the movies portray it, is far into the future, if we ever get to it at all. Still, the idea that AI could somehow become conscious, and somehow turn evil, makes for an exciting movie plot.

What such plots are based on is really a simple idea: that an AI’s goals are misaligned with the goals of humanity. There’s a famous thought experiment used to show how this works. Program a powerful AI-enabled machine to create paper clips in the most efficient manner possible. Without a limit on the number of paper clips needed, the machine will keep building clips from existing materials, then mine resources and take over manufacturing until it has exhausted every possible source of material. In this theoretical story, the machine may well exterminate any life form that gets in the way of making paper clips, or that could itself be used in the paper-clip manufacturing process.
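The flaw in the thought experiment is in the objective itself. The toy sketch below makes that visible: it is not an AI system, just a loop that ‘optimises’ a single number, and the resources and figures are invented purely for illustration. The only difference between the two runs is whether the human need (a limit on the number of clips) is part of the goal.

```python
# A toy illustration of a misaligned objective. This is not an AI system,
# just a loop that maximises one quantity; the 'world' of resources and
# the numbers are invented purely to show why an unbounded goal is the problem.

def run_paperclip_maximiser(resources, clips_needed=None):
    """Convert resources into paper clips until told to stop (or not)."""
    clips = 0
    while resources > 0:
        # The objective says only 'make more clips', so the machine keeps going...
        resources -= 1
        clips += 1
        # ...unless a human-specified limit (the real goal) is part of the objective.
        if clips_needed is not None and clips >= clips_needed:
            break
    return clips, resources

# Unbounded objective: every last unit of resources becomes paper clips.
clips, left = run_paperclip_maximiser(resources=1_000_000)
print(f"No limit set: {clips} clips made, {left} resources left")

# Well-specified objective: the machine stops when the human need is met.
clips, left = run_paperclip_maximiser(resources=1_000_000, clips_needed=500)
print(f"Limit of 500: {clips} clips made, {left} resources left")
```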

The story shows that misaligned goals are at the heart of the dangers of AI. Machines have jobs to perform; in other words, they have goals. It isn’t the machine that’s the problem; it’s the human making the decisions about the machine’s goals. Just because an AI may be ‘smarter’, in that it solves some problems much faster than a human can, it doesn’t follow that it controls anything outside its own programming. The core of the issue, and the possibility of danger, lies with the humans deciding what goals an AI platform is given.
