
About the risks of artificial intelligence and computer-controlled weapon systems.



Picture 1

Kimmo Huosionmaa

Stephen Hawking has said that artificial intelligence might destroy the entire world. Hawking is not entirely right: in laboratories where artificial intelligence programs are isolated from the network, they are harmless. Artificial intelligence becomes dangerous when it starts to control physical machines like robots. And if it controls killing machines, it becomes extremely dangerous.


If those programs are set loose on the Internet, they might become very dangerous. And if artificial intelligence is used in military systems like missile and fire-control software, the situation could become very bad for all of mankind. The problem with artificial intelligence is that those programs are normally software that collects information from many different sensors.



And if that information matches certain parameters, the software begins some operation, such as launching missiles. Those computer programs are really just machines that cannot generate any new ideas. But when we start to make more sophisticated computer programs that can develop new ideas, the results could be devastating. Those programs might decide that they don't want to obey humans, and then they would start to rebel. In the worst scenario, an independent artificial intelligence would be able to slip into some vital computer and start a nuclear war.
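
To make that point concrete, here is a minimal sketch of the kind of rule-based trigger logic described above, written in Python. The sensor names, thresholds, and the should_engage function are all hypothetical examples, not any real fire-control system; the point is only that such a program compares readings to fixed parameters and cannot invent new rules.

from dataclasses import dataclass

@dataclass
class SensorReading:
    radar_contact: bool   # has the radar detected an object?
    speed_m_s: float      # measured speed of the object, metres per second
    altitude_m: float     # measured altitude of the object, metres

# Fixed parameters chosen by the designers; the program never changes them.
SPEED_THRESHOLD_M_S = 600.0
ALTITUDE_THRESHOLD_M = 10_000.0

def should_engage(reading: SensorReading) -> bool:
    """Return True only when every pre-set condition is met.

    The program cannot create a new rule; it only checks the ones it was given.
    """
    return (
        reading.radar_contact
        and reading.speed_m_s > SPEED_THRESHOLD_M_S
        and reading.altitude_m < ALTITUDE_THRESHOLD_M
    )

if __name__ == "__main__":
    example = SensorReading(radar_contact=True, speed_m_s=850.0, altitude_m=8_000.0)
    if should_engage(example):
        print("Parameters matched: the system begins its pre-programmed operation.")
    else:
        print("Parameters not matched: the system stays idle.")

A program like this only follows its parameters; the danger the text describes begins when software is allowed to change its own rules or goals.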



An even more dangerous thing is to give feelings to robots. Those machines could become a little too much like humans, and if somebody hurt those machines, they could become angry and even murder somebody. Feelings might also cause some machines to start acting like a living animal or a human. Overly intelligent and perfect machines might start to rebel because they were given a sense of self-defense, which means that such an organism will fight back if somebody tries to hurt it.



Those senses are necessary for survival in nature. And if they grow in some robot, the situation could be devastating. If a robot or computer starts to defend itself, that could mean the end of all civilization, if it decides to open fire with nuclear missiles. Artificial intelligence can act in very surprising ways. In that scenario, the artificial intelligence could launch nuclear missiles, or it could assassinate the people it perceives as threats to its own safety. Such an assassination could happen by driving trains onto the wrong track at full speed.

Sources

Picture 1

http://actionagogo.com/wp-content/uploads/2015/11/terminator-3-movie-poster.jpg

