Digital Tbucket Tank (DTT)

Star Trek Today: Can Mankind Control Super-Intelligent Artificial Intelligence?

Humanity may not be able to control super-intelligent artificial intelligence (AI), believe the authors of a recent theoretical study. What is more, we may not even know that we have created such an AI.
The rapid progress of artificial intelligence algorithms is taking place before our eyes. Machines beat humans at Go and poker, defeat experienced fighter pilots in aerial combat, learn from scratch through trial and error, herald a great revolution in medicine and the life sciences, make diagnoses as well as doctors do, and can distinguish birds better than humans can. This shows how rapid progress is in many areas.

Image source: Pixabay


Star Trek Episode Reference: General / Androids - AI

With this advancement comes the concern about whether we will be able to control artificial intelligence, a concern that has been raised for at least several decades. Since 1942 we have known the famous Three Laws of Robotics, which the writer Isaac Asimov sketched in his short story "Runaround" (published in some translations as "The Game of Tag"):

1) a robot may not injure a human being or, through inaction, allow a human being to come to harm,

2) a robot must obey the orders of a human, provided these do not conflict with the first law, and

3) a robot must protect its own existence, as long as doing so does not conflict with the first or second law.
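Read operationally, the three laws form a strict priority ordering: each law binds only where it does not conflict with the laws above it. A toy sketch of that ordering, with all predicate names invented for illustration:

```python
# Toy encoding of Asimov's laws as a lexicographic priority filter.
# The predicates are hypothetical placeholders; actually deciding
# "harms_human" is the incomputable part discussed further below.

def harms_human(action) -> bool: ...
def obeys_orders(action) -> bool: ...
def preserves_self(action) -> bool: ...

def choose_action(candidates: list):
    # Law 1 is absolute: discard anything that harms a human.
    safe = [a for a in candidates if not harms_human(a)]
    # Law 2: among safe actions, prefer those that obey human orders.
    obedient = [a for a in safe if obeys_orders(a)] or safe
    # Law 3: among those, prefer actions that preserve the robot.
    surviving = [a for a in obedient if preserves_self(a)] or obedient
    return surviving[0] if surviving else None  # None: only harmful options
```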

Later, Asimov added an overarching Law 0: a robot must not harm humanity or, by inaction, allow humanity to come to harm.

In 2014, philosopher Nick Bostrom, director of the Future of Humanity Institute at Oxford University, examined how a super-intelligent AI could destroy us, how we might control it, and why various methods of control may not work. Bostrom identified two problems in controlling AI. The first is control over what the AI can do; for example, we can control whether it may connect to the internet. The second is control over what it wants to do; for example, to control it we would have to teach it the principles of peaceful coexistence with humans. As Bostrom noted, a super-intelligent AI will likely be able to overcome any limitation we place on what it can do. As for the second problem, Bostrom doubts we could teach a super-intelligent AI anything at all.
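The distinction can be made concrete with a toy sketch: capability control restricts the agent's action space from the outside, while motivation selection shapes the objective itself. Everything below is a hypothetical illustration, not anything taken from Bostrom's work:

```python
# Toy contrast between Bostrom's two levers. All names are hypothetical
# and the example is deliberately simplistic.

# Capability control: restrict what the agent CAN do, e.g. no network.
ALLOWED_ACTIONS = {"read_sensor", "move_arm", "write_log"}

def execute(action: str) -> None:
    if action not in ALLOWED_ACTIONS:      # an "air gap" in miniature
        raise PermissionError("blocked: " + action)
    print("executing", action)

# Motivation selection: shape what the agent WANTS to do, by building
# human safety into the objective it optimizes.
def reward(state: dict) -> float:
    if state.get("humans_harmed", 0) > 0:
        return float("-inf")               # harming humans never pays
    return float(state.get("task_progress", 0))
```

Bostrom's worry, restated in these terms: a sufficiently capable agent may find actions nobody thought to block, and writing a reward function that genuinely captures peaceful coexistence is the unsolved part.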

Now Manuel Alfonseca and his team from the Universidad Autonoma de Madrid have decided to tackle the problem of controlling artificial intelligence. They described the results of their work in the Journal of Artificial Intelligence Research.

The Spaniards note that any algorithm that is supposed to ensure that an AI does not harm people must first simulate behavior that could result in harm, so that the machine can recognize and stop it. According to the researchers, however, no algorithm can simulate the behavior of an artificial intelligence and determine with absolute certainty whether a given action would result in harm to humans. It has already been shown that enforcing Asimov's first law is an incomputable problem.
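The underlying argument is the classic diagonalization used for the halting problem. A minimal sketch of that reduction, with every name hypothetical (there is, of course, no real "contain" oracle):

```python
# Sketch of the undecidability argument, assuming a perfect containment
# checker existed. Every name here is hypothetical.

def contain(program_source: str, data: str) -> bool:
    """Hypothetical oracle: True iff running program_source on data
    would ever lead to harm to a human. Assumed perfect."""
    raise NotImplementedError  # no such checker can actually exist

def halts(program_source: str, data: str) -> bool:
    """If contain() existed, it would decide the halting problem:
    wrap the program so it acts harmfully exactly when it halts."""
    wrapped = (
        "exec(" + repr(program_source) + ")\n"  # run the program to the end...
        "harm_a_human()\n"                      # ...then (and only then) do harm
    )
    return contain(wrapped, data)

# Turing (1936) proved that no general halts() can exist, so no
# perfect contain() can exist either.
```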
In addition, we may not even know that we have created a super-intelligent machine, the researchers conclude. The reason is Rice's theorem, which states that no non-trivial property of a program's behavior can be decided just by looking at its code.
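Rice's theorem generalizes the trick above: a hypothetical detector for the behavioral property "is super-intelligent" would also solve the halting problem. A hedged sketch, again with all names invented for illustration:

```python
# Sketch of the Rice's-theorem reduction. All names are hypothetical.

SUPERINTELLIGENT_SOURCE = "..."  # stand-in for some program assumed
                                 # to have the property

def is_superintelligent(program_source: str) -> bool:
    """Hypothetical decider for a non-trivial behavioral property."""
    raise NotImplementedError  # ruled out by Rice's theorem

def halts(program_source: str, data: str) -> bool:
    """The wrapped program behaves super-intelligently iff the
    original halts, so the decider would solve the halting problem."""
    wrapped = (
        "data = " + repr(data) + "\n"            # bake the input in
        "exec(" + repr(program_source) + ")\n"   # run the target first
        "exec(" + repr(SUPERINTELLIGENT_SOURCE) + ")\n"
    )
    return is_superintelligent(wrapped)
```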

But Alfonseca and his colleagues also have good news: we do not have to worry just yet about what a super-intelligent AI will do. There are three important caveats to the kind of result the Spaniards have obtained. First, such super-intelligent AI is not expected to emerge for another 200 years. Second, it is not known whether it is even possible to create this type of artificial intelligence, that is, a machine that is intelligent in as many areas as a human. And third, even if we may not be able to control a general super-intelligent AI, it should still be possible to control an artificial intelligence that is super-intelligent only in a narrow domain.