
Why a technological Singularity is unlikely from a scientific viewpoint

Computer scientist and AI expert Toby Walsh debunks the idea of machines far exceeding human intelligence

The Singularity – it's a nightmare scenario that plays out in the minds of many of us whenever we see some freakishly smart piece of tech. But should we really be concerned about machine intelligence getting out of control, or is it just science fiction?

This is what Toby Walsh, AI researcher at Data61, addressed at the 28th Australasian Joint Conference on Artificial Intelligence in Canberra this week.

“The idea of a technological Singularity is that we are going to have runaway growth in machine intelligence and it’s going to get out of our control. Technological Singularity is about an exponential take off in machine intelligence,” Walsh explained.

“At some point, we’ll design a machine with enough intelligence to redesign itself. At that point, it’ll be able to improve its intelligence and this will be some kind of tipping point where the intelligence of the machine will start to take off, because it doesn’t have our biological limitations.”
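As a rough illustration (a toy model, not something presented in the talk), the Singularity argument assumes each redesign multiplies a machine's intelligence by a roughly constant factor, which compounds into the exponential take-off described above – and that constant factor is exactly what Walsh's objections below call into question.

# Toy model (assumed numbers) of the runaway self-improvement the Singularity posits:
# each generation redesigns itself to be a fixed factor smarter than the last.

def runaway(start: float, factor: float, generations: int) -> float:
    """Intelligence level after repeated self-redesign, each step multiplying it by `factor`."""
    level = start
    for _ in range(generations):
        level *= factor  # the machine improves its own design by a constant factor
    return level

print(runaway(start=1.0, factor=1.5, generations=20))  # roughly 3,325x the starting level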

While it’s natural to be concerned about the implications of rapidly advancing technology, from a scientific point of view, a technological Singularity is unlikely, Walsh said.

Here’s why:

1. ‘Faster-thinking dog’

“You could supposedly make a dog think faster, but he is not going to be able to play chess, no matter how fast you make the dog think,” explained Walsh.

The argument is that even though computers and machines have an advantage over human biology in speed and memory, this is not what makes up complete intelligence. There’s a lot more involved in creating human-like intelligence than speed alone, he said.

“There are other things like experience and years of training that go into improving and refining your judgement, and so on.

"There are lots of other things that go into intelligence, so just having a speed or the size advantage over us is not going to necessarily going to define intelligence.”

2. Anthropocentrism

If there’s one thing the history of science has taught us, it’s that every time we put ourselves on a pedestal or at the centre of all things (anthropocentrism), we learn that we are actually not as special as we think we are, Walsh said.

He gave the examples of Charles Darwin showing that humans share common ancestry with the apes, and of astronomy showing that the Earth revolves around the Sun rather than everything revolving around us.

A technological Singularity tends to suggest that we are special and that human-level intelligence is some ultimate threshold, Walsh said.

He added that human intelligence itself covers a wide spectrum, which in turn sits within a broader spectrum of intelligence across all creatures. So the claim that human intelligence as a whole will be exceeded at some precise point did not seem well thought out to Walsh.

“I actually find this a weak logical step in that argument. It’s not clear to me at all on getting to human intelligence what that point exactly is and why suddenly getting past that point is necessarily the tipping point.”

3. Meta-intelligence

“Suppose there is something called intelligence and there is some way of comparing it. Just because [a machine] can do some task well, doesn’t mean it can improve its ability to do that task well,” Walsh argued, addressing the idea of super-smart machines improving themselves past human intelligence.

A lot of the time with machine learning algorithms, performance tops out at some point and never reaches 100 per cent accuracy, he said, and it is extremely hard to push for improvements past that maximum.

“Yes, we can do some feature engineering, parameter tuning, and so on, but we’ll typically run into some maximal level of performance where it’s very hard to improve [beyond] that. And that machine learning algorithm cannot improve itself.”
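To make that plateau concrete, here is a minimal illustrative sketch – the curve shape and numbers are assumptions, not figures from Walsh's talk – in which each extra round of feature engineering or parameter tuning buys a smaller accuracy gain, so performance saturates well below 100 per cent:

# Toy 'tuning curve' (assumed shape and numbers): accuracy starts at 70 per cent,
# each tuning round closes half of the remaining gap to a 94 per cent ceiling,
# so gains shrink every round and never approach 100 per cent.

def toy_accuracy(tuning_rounds: int) -> float:
    """Hypothetical accuracy after a number of feature-engineering/tuning rounds."""
    ceiling, start, step = 0.94, 0.70, 0.5
    return ceiling - (ceiling - start) * step ** tuning_rounds

for rounds in (0, 1, 2, 5, 10, 20):
    print(f"{rounds:2d} tuning rounds -> accuracy {toy_accuracy(rounds):.3f}")
# 0 -> 0.700, 1 -> 0.820, 2 -> 0.880, ... 20 -> 0.940: a hard plateau below 1.0.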

4. Diminishing returns

With a lot of technology, there are diminishing returns and there is no reason to suppose there will be equal returns over time when it comes to artificial intelligence, Walsh said.

“We are all well aware of many different mathematical functions. It may turn out to be something like log(2x). It may turn out to be what we are all familiar with – there are lots of infinite series that have finite limits. So we may be able to improve intelligence infinitely often, but only ever be able to double the amount of intelligence we get.

“Certainly most people’s experience when trying to write AI programs is there are lots of diminishing returns.”
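As a hedged illustration of an infinite series with a finite limit, of the kind Walsh mentions, suppose – purely as an assumption – that each round of self-improvement adds half as much intelligence as the round before. Then infinitely many improvements only ever double the starting level:

# Assumed geometric improvement: gains of 1/2 + 1/4 + 1/8 + ... sum to 1,
# so an infinite sequence of improvements never more than doubles the start.

intelligence = 1.0  # arbitrary starting level
gain = 0.5          # hypothetical size of the first improvement

for step in range(1, 31):
    intelligence += gain
    gain /= 2       # each improvement is half as large as the previous one
    if step in (1, 2, 5, 10, 30):
        print(f"after {step:2d} improvements: {intelligence:.6f}")
# The level approaches 2.0 but never exceeds it – diminishing returns in action.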

5. Limits of intelligence

Empirical laws may arise out of the complexity of future AI systems and machines, where we run into limits of intelligence, Walsh said.

He gave the example of Dunbar’s number, where the human brain can only maintain so many meaningful social relationships.

“You can have about 250 meaningful social contacts,” Walsh said, referring to Dunbar’s number.

"With that sort of an empirical law that rises out of the complexity of trying to build a social network and maintain it within your brain, maybe we’ll just run into one of these limits.”


6. Exponential growth still won't solve all tasks

If in future we do see exponential growth in machine intelligence, as with Moore's Law, it still won't help with tasks whose difficulty scales beyond that exponential growth, Walsh said.

"There’s lot so of things that are going to still stay out of reach even if we have exponential improvements.

"It doesn't help you solve tasks where the best algorithm grows faster than exponential, and we already have many such tasks. 2^x growth won't help solve problems that scale as 2^2^x..."
