We can think of a neural network as a factory. Information is the raw material that this factory uses: sensors deliver the raw information to the neural network, and this information factory's product is processed information.
During information processing, the neural network interconnects information that it gets from different sources. Learning means that the system creates behavioral models for similar situations. These ready-made templates, built from experience, make reactions to similar situations easier and faster.
Learning neural networks are systems in which data sources are interconnected with data-handling units. But without information, those networks are useless.
When neural networks categorize the trusted pages that they use, they can do it automatically. Alternatively, the AI can use the user as an assistant: it shows homepages to a person, who evaluates the data. The user can then say whether the data is valuable enough, or simply add the home pages to a trusted list.
That means that if a user wants to use AI for physics, that person can put pages like "Phys.org" and "scitechdaily.com" into the trusted-page list, and the AI then uses those pages to select data. That kind of curation requires strong skills in analyzing information. So in this model, the system has a prime user who selects the sources that the AI uses, and other users can then rely on that trusted data in their own work.
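As a rough illustration of how such a trusted-page list might work, here is a minimal sketch: a prime user maintains an allowlist of domains, and the system only accepts documents whose URL belongs to a listed domain. The function names and the example URLs (apart from Phys.org and scitechdaily.com) are hypothetical, not part of any real system.

```python
from urllib.parse import urlparse

# Hypothetical sketch: a prime user maintains a list of trusted domains,
# and the AI only ingests documents whose URL falls inside that list.
TRUSTED_DOMAINS = {"phys.org", "scitechdaily.com"}

def is_trusted(url: str) -> bool:
    """Return True if the URL's host is on the trusted list."""
    host = urlparse(url).hostname or ""
    # Accept the domain itself and its subdomains (e.g. www.phys.org).
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

def filter_sources(urls: list[str]) -> list[str]:
    """Keep only the URLs that the prime user has approved."""
    return [u for u in urls if is_trusted(u)]

if __name__ == "__main__":
    candidates = [
        "https://phys.org/news/example-article.html",  # trusted
        "https://www.scitechdaily.com/some-story/",    # trusted subdomain
        "https://random-blog.example/physics-hoax",    # rejected
    ]
    print(filter_sources(candidates))
```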
All systems need information for learning. The problem is that when a system learns, it doesn't know whether the information it uses is real or fake. This is true of all neural networks, and the human brain is one of them. Yet without information, the system is like a blank sheet of paper that cannot do anything.
Learning means that the system builds models using the information it gets from sensors, and then generalizes those models to all similar situations. In that sense, machine learning and human learning are similar processes: machines take in information much the same way the human brain does. But there is one big difference between those learning models: the AI doesn't understand the meaning of the words.
AI is an impressive tool, and countless observation systems transfer information to AI-based systems. NASA and other organizations send space probes to gather information from distant planets, and the AI can interconnect that data with other systems. Here, we are talking about AI-based systems in astronomy.
The AI can search for and follow changes in the brightness of stars, which helps it find new exoplanets. But without images taken over time, the AI cannot compare an object's brightness across a certain period. So without the telescope, the AI is useless.
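A minimal sketch of that idea, assuming we already have a series of brightness measurements from a telescope: the transit method flags moments when a star's brightness dips noticeably below its typical level, which can indicate a planet passing in front of it. The dip threshold and the sample light curve below are arbitrary illustrations.

```python
from statistics import median

# Hypothetical sketch of the transit method: flag measurements where the
# star's brightness dips clearly below its typical (median) level.
def find_brightness_dips(brightness: list[float], dip_fraction: float = 0.01) -> list[int]:
    """Return the indices where brightness drops more than dip_fraction
    below the star's median brightness."""
    baseline = median(brightness)
    threshold = baseline * (1.0 - dip_fraction)
    return [i for i, b in enumerate(brightness) if b < threshold]

if __name__ == "__main__":
    # Simulated light curve: steady brightness with a small transit dip.
    light_curve = [1.000, 1.001, 0.999, 0.985, 0.984, 0.986, 1.000, 1.002]
    print(find_brightness_dips(light_curve))  # -> [3, 4, 5]
```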
When we celebrate systems like ChatGPT and other AIs and praise how good their answers are, we must realize one thing: those systems use certain criteria for choosing the homepages from which they get information. That means the system might not know whether the information on those home pages is trustworthy or faked.
There are certain parameters that control how the AI selects the homepages from which it gets information. So there is a theoretical possibility that somebody plays a practical joke on a person who uses AI to write a thesis: another person could change the information on a trusted page just for the moment when the AI reads that page.
In this scenario, the AI makes mistakes because it doesn't know what the text means. If someone changed the text on a trusted homepage to something like lines from Donald Duck, the AI would put that material into the thesis. Fixing that error is quite easy: the AI can use two or more home pages, and if there are big differences between them, the system can present those homepages to the user. The user then decides whether the information is valuable.
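A minimal sketch of that cross-checking step, under the assumption that the texts of two sources can be compared with a simple similarity measure; Python's standard-library difflib stands in here for a real text-comparison method, and the similarity cutoff is an arbitrary choice.

```python
from difflib import SequenceMatcher

# Hypothetical sketch: compare the text from two or more sources and
# flag the set for human review if any pair disagrees too much.
def sources_agree(texts: list[str], min_similarity: float = 0.6) -> bool:
    """Return True if every pair of source texts is at least
    min_similarity similar (0.0 = disjoint, 1.0 = identical)."""
    for i in range(len(texts)):
        for j in range(i + 1, len(texts)):
            ratio = SequenceMatcher(None, texts[i], texts[j]).ratio()
            if ratio < min_similarity:
                return False
    return True

if __name__ == "__main__":
    page_a = "The exoplanet orbits its star every 12 days."
    page_b = "The exoplanet orbits its host star every 12 days."
    page_c = "Quack! said Donald Duck."  # a tampered "trusted" page
    if not sources_agree([page_a, page_b, page_c]):
        print("Sources conflict - show the pages to the user for review.")
```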
So AI requires a new type of skill in working life: the person who uses the AI must be able to evaluate the text, find conflicts, and then assess the source.
Science advances very fast, which means the AI must have parameters that make it select only the newest possible data; it should use only the latest updates on trusted homepages. The problem is that our "practical joke" would be the latest update.
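As a hedged sketch of that recency rule: picking the document with the newest timestamp is trivial, and the sketch below also shows why it is fragile, because a tampered page with the freshest timestamp wins. The timestamp field and the sample documents are hypothetical.

```python
from datetime import datetime

# Hypothetical sketch: prefer the most recently updated document from a
# trusted page. Note the weakness: a tampered update is also "the newest".
def newest_document(docs: list[dict]) -> dict:
    """Return the document with the latest 'updated' timestamp."""
    return max(docs, key=lambda d: d["updated"])

if __name__ == "__main__":
    docs = [
        {"text": "Genuine article about exoplanets.",
         "updated": datetime(2024, 5, 1)},
        {"text": "Quack! (tampered content)",
         "updated": datetime(2024, 5, 2)},  # the practical joke wins
    ]
    print(newest_document(docs)["text"])
```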