Mathematics and physics are exact sciences with strict rules. The solutions a system produces must come from applying well-defined methods and formulas, and that makes testing mathematical software straightforward. The tester simply runs the same formulas in some other mathematical tool; if the answers are identical, the software works as intended. This makes it relatively easy to test things like AI.
The AI can be instructed to perform some calculation, and the system must then search the net for the right formula. That exercises the AI's ability to find information from web pages. And because the solution is produced by applying specific mathematical or physical formulas, the software is easy to verify.
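The cross-checking idea above can be sketched in a few lines. This is a minimal, hypothetical example (the function name, the formula choice, and the reference value are mine, not from the original text): a program computes kinetic energy, and the test compares its answer against a value computed independently in another tool.

```python
import math

def kinetic_energy(mass_kg: float, velocity_ms: float) -> float:
    """Compute kinetic energy E = 1/2 * m * v^2, in joules."""
    return 0.5 * mass_kg * velocity_ms ** 2

# Reference value computed independently (e.g., in another math tool):
# for m = 2 kg and v = 3 m/s, E should be 9 J.
reference = 9.0
result = kinetic_energy(2.0, 3.0)

# The test passes only if both tools agree within a small tolerance.
assert math.isclose(result, reference, rel_tol=1e-9)
print(result)  # 9.0
```

Because the formula is exact, agreement between the two tools is a strong signal that the software "operates as planned" in the sense the text describes.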
The problem with deep-learning networks is that they are hard to understand. Testers can easily check a system's functionality: they feed some input into the system and observe the answer. In that kind of testing, the systems are "black boxes": the tester sees only the answers that the system produces.
"Black box" testing, where correct answers alone are enough, is theoretically easier to perform than grey-box or glass-box (sometimes called white-box) testing. In grey-box testing, the testers examine both the code and its functionality. And in glass-box testing, the tester examines the code without its functionality.
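The black-box versus glass-box distinction can be illustrated with a small sketch. The unit under test here is hypothetical (a moving-average filter I chose for illustration): a black-box check looks only at inputs and outputs, while a glass-box check inspects internal state directly.

```python
# Hypothetical unit under test: a simple moving-average filter.
class MovingAverage:
    def __init__(self, window: int):
        self.window = window
        self.samples = []          # internal state a glass-box test may inspect

    def update(self, value: float) -> float:
        self.samples.append(value)
        self.samples = self.samples[-self.window:]
        return sum(self.samples) / len(self.samples)

ma = MovingAverage(window=2)

# Black-box checks: only inputs and outputs matter.
assert ma.update(2.0) == 2.0
assert ma.update(4.0) == 3.0

# Glass-box check: the internal buffer never grows past the window size.
assert len(ma.samples) <= 2
print("all checks passed")
```

Both styles pass here, but only the glass-box assertion would catch a bug that lets the buffer grow without bound while the averages still happen to look correct.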
"A new study has found that Fourier analysis, a mathematical technique that has been around for 200 years, can be used to reveal important information about how deep neural networks learn to perform complex physics tasks, such as climate and turbulence modeling. This research highlights the potential of Fourier analysis as a tool for gaining insights into the inner workings of artificial intelligence and could have significant implications for the development of more effective machine learning algorithms". (ScitechDaily/Fourier Transformations Reveal How AI Learns Complex Physics)
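The quoted study applies Fourier analysis to neural networks; the basic operation itself can be sketched with a naive discrete Fourier transform (a pure-Python illustration I wrote for this post, not the researchers' method): decomposing a sampled sine wave and recovering its dominant frequency.

```python
import cmath
import math

def dft_magnitudes(signal):
    """Naive discrete Fourier transform; returns the magnitude of each bin."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

# A sampled sine wave with 5 cycles over n samples.
n = 64
signal = [math.sin(2 * math.pi * 5 * t / n) for t in range(n)]

mags = dft_magnitudes(signal)
# The strongest bin (ignoring the mirrored upper half) should be k = 5.
peak = max(range(n // 2), key=lambda k: mags[k])
print(peak)  # 5
```

The same decomposition idea, applied to what a network has learned rather than to a signal, is what lets researchers see which "frequencies" of a physics problem the network captures.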
*************************
Testing the software of robots is the key to guaranteeing their safety. Only error-free control software makes robot cars safe, so in real life testers must use all levels of testing.
In the glass-box stage, the safety of the code itself is tested and visible errors are removed, so that the code is ready to load onto prototypes.
In the grey-box stage, the testers examine the code and how it reacts. At this stage, the teams building, for example, a robot vehicle test it using miniature cars and observe how the system reacts to surprises.
In black-box testing, the testers check only results. At this stage, full-scale robot vehicles are tested on closed tracks, and the system can also be tested in traffic. But there is still a long way to go before dealers can sell a commercial version of such a self-driving car to customers. If errors remain, they can produce terrible news and destroy the project.
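The closed-track, results-only stage described above can be sketched as a black-box test of a controller. Everything here is hypothetical and simplified (the function, the thresholds, and the braking rule are illustrative assumptions, not a real vehicle controller): the tester only checks that the output reaction matches the situation.

```python
# Hypothetical braking controller for a robot vehicle (illustrative only).
def brake_command(distance_m: float, speed_ms: float) -> float:
    """Return a brake force in [0, 1]; brake harder when an obstacle is close."""
    if speed_ms <= 0:
        return 0.0
    stopping_margin = distance_m / max(speed_ms, 0.1)  # rough seconds-to-impact
    return min(1.0, max(0.0, 1.0 - stopping_margin / 3.0))

# Black-box result checks, as on a closed test track:
assert brake_command(30.0, 1.0) == 0.0      # obstacle far away: no braking
assert brake_command(0.5, 10.0) > 0.9       # sudden close obstacle: hard braking
print("controller reacted as expected")
```

The point of the staged process is that by the time such result-only checks run on a full-scale vehicle, the code itself has already passed the glass-box and grey-box stages.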
***************************************
The problem with deep-learning networks is that they produce so many answers and solutions. If we think of a deep-learning network in terms of object-oriented programming, the system is easy to test as a "black box", but there are so many objects that each must be tested separately, and individual tests for thousands or millions of objects take a very long time.
When a deep-learning network produces a new solution, it forms a new object inside itself. The number of objects accumulates very quickly, so there are soon far more objects than at the beginning of the process. Grey-box testing, where the testers examine the code together with a couple of objects, would therefore be more effective for learning networks than testing every object individually. The limitation of black-box testing is that it cannot cover the system entirely.
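The scaling argument above can be made concrete with a small sketch. This is an illustration under stated assumptions (the object count, the property check, and the sample size are all invented for the example): instead of testing a million accumulated objects one by one, a grey-box strategy tests the shared code once and spot-checks a random sample of objects.

```python
import random

random.seed(0)  # reproducible sample for the example

objects = list(range(1_000_000))   # stand-ins for accumulated learned solutions

def passes_check(obj_id: int) -> bool:
    # Hypothetical per-object property check (trivially true here).
    return obj_id >= 0

# Spot-check 100 objects instead of all 1,000,000.
sample = random.sample(objects, k=100)
assert all(passes_check(o) for o in sample)
print(f"spot-checked {len(sample)} of {len(objects)} objects")
```

Sampling trades completeness for time, which is exactly the trade-off the text describes: black-box testing of every object is thorough but far too slow once objects accumulate.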
And if we are testing things like robots that must operate on streets every day, the testers must be sure the program does what it should. Testing the code at the same time as its inputs and outputs makes it possible to find unseen errors in the AI's code. With systems like autopilot-operated cars, we must understand that safety plays the prime role, and the testers must use all test levels to make the code safe.
https://scitechdaily.com/fourier-transformations-reveal-how-ai-learns-complex-physics/
https://en.wikipedia.org/wiki/Fourier_analysis