
The center of AI development should be how it can serve people better.

 




The problem with artificial intelligence development is that AI is treated as an intrinsic value. That means the creation of more and more intelligent machines becomes the prime objective, and artificial intelligence is developed for its own sake.

The prime objective should be how artificial intelligence can serve humans, and how AI might make life easier and safer.

The idea that AI can fully replace humans is pure imagination. There is a lot that we don't know about the brain. We may know how neurons switch their connections and how brains learn new things.

But we don't know what role certain structures play in certain actions. Things like imagination are still completely beyond artificial intelligence. Even if we could model the ability for abstract thinking in theory, it would be hard to build in real life.

Complicated AI requires powerful computers. An AI that runs on a quantum computer could learn things unpredictably fast, because for certain problems quantum computers can be vastly more powerful than binary computers.

Self-learning algorithms that run on quantum platforms can do unpredicted things, and a machine that does things nobody predicted is always dangerous.

When we think about the feelings and consciousness of a computer, we must remember that a machine with feelings is dangerous. If a robot turned conscious, that would make it similar to a living organism.

And all organisms defend themselves when they are under threat. An AI might feel that it is under threat when its server is being shut down. The AI itself is not dangerous, but if it is a system that controls things like weapon systems, it can try to destroy the people who are shutting it down.

Making a real-world computer that has dreams and imagination is very hard. Things like quantum computers have shown that what is easy in theory can turn difficult in real life.

Artificial intelligence can be better than humans in certain limited sectors. AI can play chess better than humans, but humans can do many more things than AI, and making an AI with the same kind of transversal competence as humans is difficult.

There is a possibility that every single one of the human brain's roughly 86 billion neurons has its own individual programming. Making an AI with the same capacity would then require a database table for every neuron, and perhaps a separate microprocessor for each one as well.

But of course, we could create artificial neurons by using small bottles that contain some kind of microchip and quicksilver (mercury). The quicksilver closes the electrical connections of those bottles.

In that system, the quicksilver acts as a liquid switch. To make a connection, a magnet pulls the quicksilver to the connection points of the wires, which makes the system route data to the right wire. This is the model of an artificial neuron.

And the microchip holds the database. That kind of system can emulate a single neuron, but emulating a human brain would require one such bottle for every one of those tens of billions of neurons.
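Below is a minimal Python sketch of that idea, purely as a conceptual illustration: the magnet, the quicksilver switch, and the microchip's database are modeled as hypothetical software objects, not as a real hardware design.

```python
# Conceptual sketch of the "liquid switch" artificial neuron described above.
# The magnet pulls the quicksilver drop to one wire junction, closing that
# connection so the signal is routed to the chosen output wire. The on-board
# microchip's "database" is modeled here as a simple lookup table.
# All names are hypothetical illustrations, not a real hardware interface.

class LiquidSwitchNeuron:
    def __init__(self, routing_table):
        # routing_table: maps an input pattern to the output wire whose
        # junction the magnet should close (the "database" on the microchip).
        self.routing_table = routing_table
        self.closed_junction = None  # junction the quicksilver currently closes

    def apply_magnet(self, junction):
        # Pull the quicksilver drop to the given junction, closing it.
        self.closed_junction = junction

    def route(self, input_pattern):
        # Look up which output wire this input should go to, move the
        # quicksilver there, and return the wire that now carries the signal.
        target_wire = self.routing_table.get(input_pattern)
        if target_wire is None:
            return None  # no stored connection for this input
        self.apply_magnet(target_wire)
        return target_wire


# One unit emulates a single neuron; emulating a whole brain this way would
# need one such unit, with its own routing table, for every neuron.
neuron = LiquidSwitchNeuron({"touch": "wire_A", "sound": "wire_B"})
print(neuron.route("touch"))  # -> wire_A
print(neuron.route("sound"))  # -> wire_B
```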

Humans should be what technology serves, and in the real world humans should be at the center of development. The fact is that the development of artificial intelligence is different from anything else. Artificial intelligence is an open-source thing: almost all programming languages are public, which means everybody can start their own artificial intelligence projects.

Artificial intelligence is a powerful tool. Many people say that AI steals people's jobs. The question is: what kind of jobs will AI take? Are those jobs popular, and are the people who criticize AI willing to do those jobs themselves? The question is always about morals and ethics. What if somebody makes a robot for military purposes?

Ethically that is wrong. But things like nuclear weapons are also inhumane, and nobody is stopping the development of nuclear reactors, even though the plutonium those reactors create can be used for nuclear weapons. Every single nuclear reactor in the world produces plutonium, yet there are no large-scale campaigns about the ethics of nuclear technology. In the same way, fusion technology can be used for weapons research on both plasmoid and fusion explosives.

But somehow artificial intelligence is treated differently. AI can make human lives better, yet the only thing people see in AI is military systems that kill people without mercy. Nuclear weapons are not called merciless killers, even though they are inhumane military technology: radiation poisoning causes extreme pain and finally a slow and certain death. Yet when inhumane weapons are used by human operators, that is somehow more acceptable than a robot that shoots enemies with a machine gun.

Robots are things that can be misused. They can be used as riot police and as military operators. But the humans who serve in those roles also serve governments, and the government decides where it wants to use them.

Yet those same machines can also save humans. They can be used as tools for giving people medical attention, or they can go into a nuclear reactor when it is overheating. Robots can explore jungles and volcanoes without risking human lives. And robots can travel to other planets; those trips take years, but for robots that time doesn't matter.

So I believe that the first thing to walk on the surface of Mars or the icy moons of Jupiter will be a robot controlled by a very independent artificial intelligence. That means no researcher has to spend a large part of their lifetime on the trip. A trip to Jupiter takes about 600 days for a flyby mission.

But if the craft wants to settle into orbit, the journey takes about 2,000 days. That means a one-way trip takes over five years, and the return to Earth takes another five or more years, so the minimum time for that mission is roughly ten to eleven years.

Of course, some time should also be spent in orbit. If robots make that mission, the researchers can stay at home and do their everyday jobs, and no human operators have to spend 10-20 years away from home. That is one example of how AI can help researchers in extremely difficult missions.
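As a rough check of those figures, here is a small Python calculation using the numbers mentioned above (600 days for a flyby, about 2,000 days one way for an orbiter); the exact durations depend on the launch window and trajectory, so these are only ballpark values.

```python
# Rough mission-duration arithmetic using the figures quoted in the text.
DAYS_PER_YEAR = 365.25

flyby_days = 600             # quoted flyby travel time to Jupiter
orbiter_one_way_days = 2000  # quoted one-way travel time for an orbiter

print(f"Flyby: {flyby_days / DAYS_PER_YEAR:.1f} years")                       # ~1.6 years
print(f"Orbiter, one way: {orbiter_one_way_days / DAYS_PER_YEAR:.1f} years")  # ~5.5 years

round_trip_days = 2 * orbiter_one_way_days
print(f"Round trip, no stay time: {round_trip_days / DAYS_PER_YEAR:.1f} years")  # ~11 years
```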


Image: https://www.salon.com/2021/04/30/why-artificial-intelligence-research-might-be-going-down-a-dead-end/


https://likeinterstellartravelingandfuturism.blogspot.com/

