
Neural networks cannot learn anything without information.

We can think of a neural network as a factory. Information is the raw material that this factory uses: sensors deliver raw information to the network, and the factory's product is processed information.

During processing, the neural network interconnects information that it gets from different sources. Learning means that the system creates behavioral models for similar situations. Those ready-made templates, built from experience, make reactions to similar situations easier and faster.

Learning neural networks are systems in which data sources are interconnected with data-handling units. But without information, those networks are useless.

When neural networks categorize the pages they trust, they can do so automatically. Alternatively, the AI can use the user as an assistant: it shows homepages to that person, who evaluates the data and says whether its quality is high enough. Or the user can simply put the homepages on a trusted list.

If a user wants to use AI for physics, that person can put pages like Phys.org and scitechdaily.com into the trusted-page list, and the AI then selects its data from those pages. Judging sources this way requires strong information-analysis skills, so in this model the system has a prime user who selects the sources the AI uses. Other users can then rely on the trusted data in their own work. A minimal sketch of such a list is shown below.
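Here is one possible shape of that trusted-page list, as a small Python sketch. The two domains come from the text above; the function name, the helper logic, and the candidate URLs are invented for illustration.

```python
# A minimal sketch of the "trusted page list" idea. The domains are the
# ones named in the text; everything else here is hypothetical.
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"phys.org", "scitechdaily.com"}  # curated by the prime user

def is_trusted(url: str) -> bool:
    """Return True if the URL's host is on the trusted list."""
    host = urlparse(url).netloc.lower()
    # Accept the domain itself and its subdomains (e.g. www.phys.org).
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

candidates = [
    "https://phys.org/news/example-article.html",
    "https://example-fake-science.net/article",
]
sources = [u for u in candidates if is_trusted(u)]
print(sources)  # only the phys.org link survives the filter
```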

All systems need information for learning. The problem is that when the system learns, it doesn't know whether the information it uses is real or fake. This applies to all neural networks, and the human brain is one of them. Without information, the system is like a blank sheet of paper that cannot do anything.

Learning means that the system builds models from the information it gets from sensors and then generalizes those models to all similar situations. In that sense, machine learning and human learning are similar processes: machines take in information much as the human brain does. But there is one big difference between the two learning models. The AI doesn't understand the meaning of the words.
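A toy sketch of that "store an experience, reuse it in a similar situation" idea: a one-nearest-neighbour rule in plain Python. The sensor readings and the reactions are invented for the example.

```python
# Classify a new sensor reading by the closest remembered experience.
def nearest_label(sample, experiences):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(experiences, key=lambda ex: dist(sample, ex[0]))[1]

# "Experiences": (sensor reading, learned reaction) -- invented values.
experiences = [((0.9, 0.1), "hot -> withdraw"),
               ((0.1, 0.8), "cold -> approach")]

# A new, similar situation reuses the stored template.
print(nearest_label((0.85, 0.2), experiences))  # hot -> withdraw
```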



AI is an impressive tool, and countless observation systems transfer information to AI-based systems. NASA and other organizations send space probes to collect information from distant planets, and AI can interconnect that data with other systems. Here we are talking about AI-based systems in astronomy.

The AI can search for and follow changes in the brightness of stars, which helps it find new exoplanets. But without a stream of images taken over time, the AI cannot compare an object's brightness across a certain period. So without the telescope, the AI is useless.
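The comparison can be sketched very simply: measure a star's brightness against its own baseline over a period and flag the dips. The light-curve values and the dip threshold below are invented; real transit-detection pipelines are far more involved.

```python
# A toy light curve: relative brightness of one star over time (invented).
brightness = [1.00, 1.00, 0.99, 0.97, 0.97, 0.99, 1.00, 1.00,
              1.00, 0.99, 0.97, 0.97, 0.99, 1.00, 1.00, 1.00]

baseline = sum(brightness) / len(brightness)
threshold = 0.015  # a dip deeper than ~1.5% counts as a transit candidate

dips = [i for i, b in enumerate(brightness) if baseline - b > threshold]
print("candidate transit frames:", dips)  # the two repeating dips
```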

When we celebrate ChatGPT and other AI systems and praise how good their answers are, we must realize one thing. Those systems use certain criteria to choose the homepages from which they get information, and the system might not know whether the information on those homepages is trustworthy or faked.

Certain parameters control how the AI selects the homepages it reads. And there is a theoretical possibility that somebody plays a practical joke on a person who uses AI to write a thesis: the joker changes the information on a trusted page for the moment when the AI reads that page.

In that case, the AI makes mistakes because it doesn't know what the text means. If someone changes the text on the trusted homepage to, say, lines from Donald Duck, the AI would put those lines into the thesis. Fixing that error is quite easy: the AI can use two or more homepages, and if there are big differences between them, the system can present those homepages to the user, who decides whether the information is valuable. A sketch of that cross-check follows.
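One simple way to implement the cross-check is to measure how much two sources' texts agree and hand the decision to the user when they diverge. The snippets and the 0.6 similarity cut-off below are invented for illustration.

```python
# Compare the "same" fact from two sources; flag conflicts for the user.
from difflib import SequenceMatcher

source_a = "The exoplanet orbits its star every 3.2 days."
source_b = "Quack! said Donald Duck."  # a vandalised "trusted" page

similarity = SequenceMatcher(None, source_a, source_b).ratio()

if similarity < 0.6:
    print(f"Sources disagree (similarity {similarity:.2f}).")
    print("Showing both pages to the user for a decision.")
else:
    print("Sources agree; data accepted automatically.")
```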

So AI requires a new type of skill in working life. The person who uses AI must be able to evaluate the text, find conflicts, and then evaluate the source.

Science advances very fast, so the AI must have parameters that make it select only the newest possible data, using only the latest updates on trusted homepages. The problem is that our "practical joke" would be precisely that latest update.
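A sketch of that "newest data only" parameter, using invented page records. The code itself shows the weakness named above: if a page was vandalised, the vandalism is exactly the newest update the filter picks.

```python
# Select the most recently updated version of a page (records invented).
from datetime import datetime

updates = [
    {"url": "https://phys.org/a", "updated": datetime(2024, 1, 10)},
    {"url": "https://phys.org/a", "updated": datetime(2024, 3, 2)},
]

newest = max(updates, key=lambda u: u["updated"])
# If the March update is the prank, the recency filter serves the prank.
print("AI would read:", newest["url"], "from", newest["updated"].date())
```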

