Published in AI

Boffins warn that no one could control an AI supercomputer

20 September 2022

If we are dumb enough to give a supercomputer that functionality we are truly stuffed

Top boffins have warned that if humanity ever gives a supercomputer AI-level functions, we are as doomed as a country whose chancellor signed over most of his country's energy needs to Russia.

According to research published in the Journal of Artificial Intelligence Research, which we get in the hope of scoring some code that will help us win the lottery, controlling a super-intelligence beyond human comprehension would require a simulation of that super-intelligence which we can analyse (and control). But if we're too thick to understand it, it's impossible to create such a simulation.

Rules such as 'cause no harm to humans' can't be set if we don't understand the kind of scenarios that an AI is going to come up with. Once a computer system is working on a level above the scope of our programmers, we can no longer set limits.

"A super-intelligence poses a fundamentally different problem than those typically studied under the banner of 'robot ethics'," wrote the researchers.

"This is because a superintelligence is multi-faceted, and therefore potentially capable of mobilizing a diversity of resources in order to achieve objectives that are potentially incomprehensible to humans, let alone controllable."

Part of the team's reasoning came from the halting problem put forward by Alan Turing in 1936. The problem centers on knowing whether or not a computer program will reach a conclusion and answer (so it halts), or simply loop forever trying to find one.

As Turing proved through some smart maths, while we can know the answer for some specific programs, it's logically impossible to find a general method that tells us the answer for every potential program that could ever be written. That brings us back to AI, which in a super-intelligent state could feasibly hold every possible computer program in its memory at once.

Any program written to stop AI from harming humans and destroying the world, for example, may reach a conclusion (and halt) or not – it's mathematically impossible for us to be absolutely sure either way, which means it's not containable.
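The asymmetry at the heart of this can be shown in a few lines. The best any real checker can do is run a program for a finite budget of steps: if it finishes, we know it halts; if the budget runs out, we learn nothing, because it might halt on step one million and one. The sketch below (our own illustration, not code from the paper, with programs modelled as Python generators that yield once per step) shows why a containment check can only ever say "halts" or "don't know":

```python
def collatz(n):
    """A program whose halting behaviour is genuinely hard to predict:
    nobody has proved this loop terminates for every starting n."""
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        yield n

def loop_forever():
    """A program that never halts."""
    while True:
        yield

def check_halts(program, budget):
    """Bounded halting check: runs at most `budget` steps.
    It can confirm halting, but can never confirm non-halting --
    that is the halting problem in miniature."""
    for _ in range(budget):
        try:
            next(program)
        except StopIteration:
            return "halts"
    return "unknown"

print(check_halts(collatz(27), 1000))      # finishes within budget
print(check_halts(loop_forever(), 1000))   # budget exhausted, no verdict
```

However large you make the budget, `loop_forever` only ever comes back as "unknown", which is exactly the hole the researchers say a containment algorithm falls into.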

All this makes any containment algorithm unusable. The only way to really contain it is to limit the capabilities of the super-intelligence. It could be cut off from parts of the internet or from certain networks. Unfortunately, this would make the whole exercise pointless.

