
Neural networks cannot learn anything without information.

We can think of a neural network as a factory. Information is the raw material that this factory uses: sensors feed raw information to the network, and the factory's product is processed information.

During information processing, the neural network interconnects information that it gets from different sources. Learning means that this system creates behavioral models for similar situations: ready-made templates, built from experience, that make reactions to similar situations easier and faster.
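One way to read "ready-made templates" is as stored experiences that get reused for the nearest similar situation. The sketch below illustrates that with a simple nearest-neighbour lookup; the features, data, and function names are invented for the example, not any real system's design.

```python
# Illustrative sketch: store experienced situations with their responses,
# then reuse the stored response of the most similar past situation.
# Features and memory contents are invented for this example.

def nearest_template(memory, situation):
    """memory: list of (feature_vector, response) pairs.
    Return the response whose stored situation is closest
    (Euclidean distance) to the new situation."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(memory, key=lambda m: dist(m[0], situation))[1]

# Hypothetical experiences: (temperature, smoke level) -> reaction
memory = [((20.0, 0.0), "no action"),
          ((80.0, 0.9), "raise alarm")]
```

A new situation like `(75.0, 0.8)` falls closest to the stored "raise alarm" template, so the system reacts without relearning from scratch.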

Learning neural networks are systems in which data sources are interconnected with data-handling units. But without information, those networks are useless.

When a neural network categorizes the trusted pages it uses, it can do so automatically. The AI can also use the user as an assistant: it shows homepages to a person who evaluates the data, and the user can say whether the data is of high enough value, or simply add those homepages to a trusted list.

That means that if a user wants to use AI for physics, that person can put pages like "Phys.org" and "scitechdaily.com" on the trusted-page list, and the AI will use those pages to select data. Curating sources like this requires strong skills in analyzing information. So in this model, the system has a prime user who selects the sources the AI uses, and other users can then rely on that trusted data in their own work.
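The prime-user idea above can be sketched as a curated allowlist of domains: the system accepts data only from listed sources and routes everything else to a human. This is a minimal illustration, not any real AI's API; the helper names are invented, and Phys.org and scitechdaily.com are used only because the text mentions them.

```python
# Minimal sketch: a prime user curates an allowlist of trusted domains,
# and the system only accepts pages whose host is on that list.

from urllib.parse import urlparse

# Curated by the "prime user" with the skills to evaluate sources.
TRUSTED_DOMAINS = {"phys.org", "scitechdaily.com"}

def is_trusted(url: str) -> bool:
    """True if the URL's host is a trusted domain or one of its subdomains."""
    host = urlparse(url).netloc.lower()
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

def filter_sources(urls):
    """Split URLs into trusted pages and pages needing human review."""
    trusted = [u for u in urls if is_trusted(u)]
    needs_review = [u for u in urls if not is_trusted(u)]
    return trusted, needs_review
```

Other users then work only with the `trusted` list, while anything unrecognized goes back to the prime user for a decision.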

All systems need information for learning. The problem is that when a system learns, it does not know whether the information it uses is real or fake. This is true of all neural networks, and one of those neural networks is the human brain. Without information, the system is like a blank sheet of paper that cannot do anything.

Learning means that the system builds models using the information it gets from its sensors, and then generalizes those models to all similar situations. In that sense, machine learning and human learning are similar processes: the machine takes in information much as the human brain does. But there is one big difference between those learning models. The AI does not understand the meaning of the words.



AI is an impressive tool, and there are trillions of observation systems transferring information to AI-based systems. NASA and other organizations send space probes to gather information from distant planets, and the AI can interconnect that data with other systems. In this case, we are talking about AI-based systems in astronomy.

The AI can search for and follow changes in the brightness of stars, which helps it find new exoplanets. But without images taken over time, the AI cannot compare an object's brightness across a given period. So without the telescope, the AI is useless.
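The brightness-following idea can be sketched as transit-style dip detection: compare each brightness sample against the star's baseline and flag significant drops. Real exoplanet pipelines are far more elaborate; the threshold, function name, and toy light curve below are invented for illustration.

```python
# Illustrative sketch of transit-style detection: flag samples that dip
# more than a fractional `depth` below the star's baseline brightness.
# Threshold and data are invented for this example.

from statistics import median

def find_dips(brightness, depth=0.01):
    """Return indices where brightness drops more than `depth` (fractional)
    below the baseline median -- a possible planetary transit."""
    baseline = median(brightness)
    return [i for i, b in enumerate(brightness)
            if (baseline - b) / baseline > depth]

# A toy light curve: a steady star with one transit-like dip.
curve = [1.00, 1.00, 0.99, 0.97, 0.97, 0.99, 1.00, 1.00]
```

This also shows why the telescope matters: without the repeated images behind `curve`, there is simply nothing to compare.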

When we celebrate systems like ChatGPT and other AI systems and praise how good their answers are, we must realize one thing. Those systems use certain criteria to choose the homepages from which they get information, and that means the system may not know whether the information on those homepages is trusted or faked.

There are certain parameters that govern how the AI selects the homepages it draws information from. And there is a theoretical possibility that somebody plays a practical joke on a person who uses AI to write a thesis: another person could change the information on a trusted page for the moment when the AI reads it.

In this scenario, the AI makes mistakes because it does not know what the text means. If the text on a trusted homepage were changed to, say, lines from Donald Duck, the AI could put those lines straight into the thesis. Fixing that error is quite easy: the AI can use two or more homepages, and if there are big differences between them, the system can present those homepages to the user, who decides whether the information is valuable.
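The cross-checking fix can be sketched as follows: measure how much the texts from different sources agree, and escalate to the user when they diverge too much. The similarity measure here (word-set Jaccard) and the threshold are simplifying assumptions for illustration, not a real fact-checking method.

```python
# Sketch: compare the "same" text from two or more sources and flag
# big differences for human review. Jaccard similarity over word sets
# is a deliberately simple stand-in for real text comparison.

def jaccard(a: str, b: str) -> float:
    """Word-level Jaccard similarity between two texts (0.0 to 1.0)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not (wa or wb):
        return 1.0
    return len(wa & wb) / len(wa | wb)

def needs_human_review(texts, threshold=0.5):
    """True if any pair of source texts agrees less than the threshold."""
    return any(jaccard(texts[i], texts[j]) < threshold
               for i in range(len(texts))
               for j in range(i + 1, len(texts)))
```

If one "trusted" page suddenly contains Donald Duck lines while the others describe physics, the pairwise similarity collapses and the user gets to make the call.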

So AI requires a new type of skill in working life. The person who uses the AI must be able to evaluate the text, find conflicts, and then assess the source.

Science advances very fast, which means the AI must have parameters so that it selects only the newest possible data, using only the latest updates on trusted homepages. The problem is that our "practical joke" would be that latest update.
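One way to soften that problem is to prefer the newest snapshot of a trusted page but compare it against the previous one, so a vandalized "latest update" gets flagged instead of trusted blindly. The snapshot records, dates, and function name below are invented; any change between the two newest snapshots is simply escalated for review in this sketch.

```python
# Sketch of the "newest data" parameter with a safety check: take the most
# recent snapshot of a trusted page, but flag it as suspicious if it differs
# from the previous snapshot, so a momentary edit (the "practical joke")
# reaches a human instead of the thesis. Data is invented for the example.

from datetime import date

def latest_with_check(snapshots):
    """snapshots: list of (date, text) pairs.
    Return (latest_text, suspicious) where `suspicious` is True
    whenever the newest snapshot differs from the one before it."""
    ordered = sorted(snapshots, key=lambda s: s[0])
    latest = ordered[-1][1]
    if len(ordered) < 2:
        return latest, False
    previous = ordered[-2][1]
    return latest, latest != previous
```

Flagging every change is crude, but it matches the situation in the text: the freshest update is exactly the one a prankster would target, so it deserves a second look.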

