
Complex math and physics can be used to test machine learning.

Mathematics and physics are exact sciences with strict rules. That means the solutions a system produces must come from applying certain methods and formulas. And that makes testing mathematical software easy: the tester just applies the same formulas in some other mathematical tool, and if the answers are identical, the software operates as planned. This makes it possible to test things like AI quite easily.
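
A minimal sketch of that cross-checking idea in Python. The `system_under_test` function is hypothetical and stands in for whatever software is being tested; the reference formula is computed independently, playing the role of the "other mathematical tool":

```python
import math

def system_under_test(mass_kg: float, velocity_ms: float) -> float:
    # Hypothetical stand-in for the software being tested.
    return 0.5 * mass_kg * velocity_ms ** 2

def reference_formula(mass_kg: float, velocity_ms: float) -> float:
    # Kinetic energy E = 1/2 * m * v^2, computed independently.
    return mass_kg * velocity_ms ** 2 / 2.0

for m, v in [(1.0, 3.0), (80.0, 12.5), (1500.0, 27.8)]:
    got = system_under_test(m, v)
    expected = reference_formula(m, v)
    # If the answers are identical (within floating-point tolerance),
    # the software operates as planned for these cases.
    assert math.isclose(got, expected, rel_tol=1e-9), (m, v, got, expected)

print("All reference checks passed.")
```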

The AI can be given an order to make some calculation. Then the system must search for the right formula from the net. That tests the AI's ability to search for information from web pages. And because the solution must come from certain mathematical or physics formulas, it is easy to check the software's answer.

The problem with deep-learning networks is that they are hard to understand. Testers can test the systems' functionality easily: they just feed some input into the system and then look at the answer. In that kind of testing the systems are "black boxes". The tester sees only the answers that the systems produce.

The "black box" testing where only right answers are enough is theoretically easier to make than grey- or glass (sometimes white) box testing. In the grey box testers test code and functionality. And in the glass box, the tester tests the code without functionality. 



"A new study has found that Fourier analysis, a mathematical technique that has been around for 200 years, can be used to reveal important information about how deep neural networks learn to perform complex physics tasks, such as climate and turbulence modeling. This research highlights the potential of Fourier analysis as a tool for gaining insights into the inner workings of artificial intelligence and could have significant implications for the development of more effective machine learning algorithms". (ScitechDaily/Fourier Transformations Reveal How AI Learns Complex Physics)



*************************

Testing the programs of robots is a key element in guaranteeing their safety. Only error-free control software makes robot cars safe. So in real life, testers must use all levels of testing.

The glass-box stage means that the safety of the code itself is tested and visible errors are removed, so that the code is ready to download into prototypes.

The grey-box stage means that the testers test the code and how it reacts. In that stage, the crews that create, for example, a robot vehicle run tests with miniature cars and follow how the system reacts to surprises.

In black-box testing, the testers check only the results. In that stage, full-scale robot vehicles are tested on closed tracks. The system can also be tested in traffic. But there is a long journey before dealers can sell a commercial version of such a self-driving car to customers. If errors slip through, they can cause terrible news and destroy the project.
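
A very rough sketch of how those levels could be staged in practice. The check functions here are hypothetical stubs standing in for real tooling (static analysis, miniature-car runs, closed-track trials); the point is only that each level must pass before the next one starts.

```python
# Hypothetical staged test plan: each level must pass before the next one runs.
def glass_box_checks() -> bool:
    # e.g. static analysis and unit tests on the control code itself
    return True

def grey_box_checks() -> bool:
    # e.g. miniature-car runs where internal telemetry is inspected alongside behaviour
    return True

def black_box_checks() -> bool:
    # e.g. full-scale closed-track runs judged only by the observed results
    return True

for name, check in [("glass box", glass_box_checks),
                    ("grey box", grey_box_checks),
                    ("black box", black_box_checks)]:
    if not check():
        raise SystemExit(f"{name} stage failed: do not move to the next level")
    print(f"{name} stage passed")
```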


***************************************


The problem with deep learning networks is that there are so many answers and solutions. If we think about a deep learning network in terms of "object-oriented programming", the system is easy to test as a "black box", but the problem is that there are so many objects that each must be tested separately, and individual tests for thousands or millions of objects take a very long time.

When a deep learning network makes a new solution, it forms a new object inside itself. The number of objects accumulates very fast, so there are soon far more objects than at the beginning of the process. That is why something like grey-box testing, where testers test the code and a couple of objects, would be more effective for learning networks than testing every individual object separately. The problem with black-box testing is that it does not allow the system to be tested in its entirety.
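
A sketch of that trade-off, assuming a hypothetical collection of learned "objects" that each expose a `check()` self-test: exhaustive testing touches every object, while the grey-box-style compromise tests the shared code once and spot-checks a random sample.

```python
import random

# Hypothetical learned "objects"; a real network could form millions of them.
class LearnedObject:
    def __init__(self, idx: int):
        self.idx = idx

    def check(self) -> bool:
        # Stand-in for a real per-object test.
        return True

objects = [LearnedObject(i) for i in range(100_000)]

# Exhaustive testing: every object separately (slow when the count keeps growing).
# results = [obj.check() for obj in objects]

# Grey-box style compromise: test the shared code once, then spot-check a sample.
sample = random.sample(objects, k=1_000)
assert all(obj.check() for obj in sample)
print(f"Spot-checked {len(sample)} of {len(objects)} learned objects")
```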

And if we are testing things like robots that should operate on streets and in everyday work, the testers must be sure that the program does what it should. Testing the code at the same time as its inputs and outputs makes it possible to find unseen errors in the code of the AI. When we think about things like autopilot-operated cars, we must understand that safety plays a prime role in those systems. In those systems, testers must use all test levels to make the code safe.

https://scitechdaily.com/fourier-transformations-reveal-how-ai-learns-complex-physics/

https://en.wikipedia.org/wiki/Fourier_analysis
