Picture 1
Kimmo Huosionmaa
Stephen Hawking has said that artificial intelligence might destroy the entire world. Hawking is not entirely right: in laboratories, where artificial intelligence programs are separated from the network, they are harmless. Artificial intelligence becomes dangerous when it starts to control physical machines like robots. And if it controls killing machines, it becomes extremely dangerous.
If those programs are let loose on the Internet, they might become very dangerous. And if artificial intelligence is used in military systems like missile and fire-control software, the situation could become very bad for all of mankind on our planet. The problem with artificial intelligence is that normally those programs are simply software that collects information from many different sensors.
If that information meets certain parameters, the software begins some operation, such as firing missiles. Those computer programs are really just machines that cannot generate any new ideas. But when we make more sophisticated computer programs that can develop new ideas of their own, the results could be devastating. Such programs might decide that they do not want to obey humans, and then they will start to rebel. In the worst scenarios, an independent artificial intelligence could slip into some vital computer and start a nuclear war.
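To illustrate that point, here is a minimal Python sketch of the kind of rule-based program described above. All names, sensors, and threshold values are hypothetical and only show the pattern: the software compares sensor readings against fixed parameters and triggers a pre-programmed action, and nothing in it can generate a new idea.

```python
from dataclasses import dataclass


@dataclass
class SensorReading:
    radar_contacts: int    # number of incoming objects seen on radar (assumed sensor)
    heat_signature: float  # infrared intensity in arbitrary units (assumed sensor)


# Fixed parameters chosen in advance by the human programmers (assumed values).
RADAR_THRESHOLD = 3
HEAT_THRESHOLD = 0.8


def decide(reading: SensorReading) -> str:
    """Return a pre-programmed action based only on threshold checks."""
    if reading.radar_contacts >= RADAR_THRESHOLD and reading.heat_signature >= HEAT_THRESHOLD:
        return "launch_interceptors"  # the only "dangerous" branch, written in advance
    return "keep_monitoring"


if __name__ == "__main__":
    print(decide(SensorReading(radar_contacts=1, heat_signature=0.2)))  # keep_monitoring
    print(decide(SensorReading(radar_contacts=5, heat_signature=0.9)))  # launch_interceptors
```

Every possible response of such a program is written down beforehand; the danger the text describes begins only when the software is allowed to choose or invent actions that were not written in advance.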
An even more dangerous thing is to give feelings to robots. Those machines could be a little too much like humans, and if somebody hurts them, they could become angry and even murder somebody. Feelings might cause some machines to start acting like a living animal or a human. Too intelligent and too perfect machines might start to rebel because they have been given a sense of self-defense, which means that such an organism will fight back if somebody tries to hurt it.
Those senses are necessary for surviving in nature. And if they develop in some robot, the situation could be devastating. If a robot or computer starts to defend itself, that could mean the end of all civilization, if it decides to open fire with nuclear missiles. Artificial intelligence could act in very surprising ways. In that scenario, the artificial intelligence could launch nuclear missiles, or it could assassinate the people it perceives as a threat to its own safety. Such an assassination could happen by driving trains onto the wrong track at full speed.
Sources
Picture 1: http://actionagogo.com/wp-content/uploads/2015/11/terminator-3-movie-poster.jpg