
Digital twins and AI are an interesting combination.

Digital twins are interesting tools. There is a possibility that, in the future, a computer's memory could hold a digital twin of the universe. A simulation of the complete universe is not yet possible, because we don't know all of its parts. There are missing particles, we don't know how to model gravitational interactions at all scales, and things like dark energy and dark matter remain unknown.

To make a complete simulation of the interactions in a system, the computer and its makers need complete information about the modeled system. Even the best and most powerful quantum computers are helpless if the information they use is not complete and accurate enough; without such data, even the best computers cannot make useful simulations.

And about 95% of the universe is unknown to us. That makes it impossible to build a complete and trusted model of the universe. But science advances, and new observations keep expanding our knowledge. AI-based programming tools make this work possible, because with them programmers can handle enormous code bases and data masses.


"The journey to simulate the universe, as exemplified by Michael Wagman’s work, highlights both the historical evolution and the contemporary challenges in this field. While full simulation is out of reach, advancements in computing and algorithms are gradually enhancing our understanding of cosmic phenomena." (ScitechDaily.com/Simulating the Cosmos: Is a Miniature Universe Possible?)


If we want to make a very accurate simulation of the universe, programmers need a lot of information, and the required accuracy of the simulation determines how much data the program needs. Highly accurate simulations of big entireties with billions of interacting actors are always hard.

Things like changes in electromagnetic radiation, along with gravitational effects, are important actors in molecular nebulae. Fast radio bursts (FRBs) affect ionized gases, and sudden arrivals of high-energy particles are black swans in those simulations.

Predictions about hyper-high-energy particles and fast energy bursts are hard to make because those bursts happen suddenly, and their energy levels are hard to measure.

If we want to simulate the movements of galaxy groups, we don't need lots of code. But there are still many unknowns: things like cosmic fluid and hypothetical mass centers in galaxy groups and in the universe require more information.

But if we want to make accurate simulations of the interactions between galaxies and their stars, we need huge amounts of data. Simulations must be accurate to be useful, and increasing the detail in a simulation increases the data mass it needs.
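To see why detail is expensive, consider direct-summation gravity: every star pulls on every other star, so the work per time step grows as O(N²) in the number of bodies. Below is a minimal Python sketch of one such step; the units, softening length, and simple leapfrog-style update are illustrative assumptions, nothing like a production galaxy code:

```python
import numpy as np

def gravity_step(pos, vel, mass, dt, G=1.0, eps=1e-3):
    """One step of a direct-summation N-body integrator (O(N^2) cost)."""
    # Pairwise separation vectors: r[i, j] = pos[j] - pos[i]
    r = pos[None, :, :] - pos[:, None, :]
    # Softened |r|^3; eps avoids division by zero for the i == j terms
    dist3 = (np.sum(r**2, axis=-1) + eps**2) ** 1.5
    # Acceleration on body i: G * sum_j m_j * (pos_j - pos_i) / |r|^3
    acc = G * np.sum(mass[None, :, None] * r / dist3[:, :, None], axis=1)
    vel = vel + acc * dt
    pos = pos + vel * dt
    return pos, vel

# 100 toy "stars" already mean 10,000 pair interactions per step
rng = np.random.default_rng(0)
pos, vel, mass = rng.standard_normal((100, 3)), np.zeros((100, 3)), np.ones(100)
for _ in range(10):
    pos, vel = gravity_step(pos, vel, mass, dt=0.01)
```

Real codes escape the O(N²) wall with tree or particle-mesh approximations, which is exactly the trade-off between detail and data described above.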

The ability of AI tools to handle large code and data masses plays a vital role in these kinds of simulations. The program itself may require billions or trillions of lines of code, and the data mass it must handle is huge: there are thousands of billions of things that programmers must notice and mark in the system before even a galaxy-sized complete and accurate simulation can be made.

Another thing is that we don't know how electromagnetism and the weak and strong nuclear interactions behave near extremely powerful gravitational fields, such as near a black hole's event horizon. A complete model of individual galaxies and stars is interesting because it could help predict things like supernova explosions. Predicting those high-energy reactions makes it possible to turn sensors toward those points in time.


"An innovative AI method developed by University of Konstanz researchers accurately tracks embryonic development stages across species. Initially tested on zebrafish, the method shows promise in studying diverse animal species, enhancing our understanding of evolution." (ScitechDaily.com/AI’s New Frontier: Providing Unprecedented Insights Into Embryonic Development)



Digital human 


AI can track an embryo's development, which means the AI searches for anomalies in the embryo. A digital embryo requires complete genetic information, and the system must know how certain genetic traits affect the embryo's phenotype. There is a possibility that every human will have a digital twin in the future.

That digital twin could be used to simulate how certain medicines act in a person's body. And it's possible that when our knowledge of genetic disorders is connected with human behavior, researchers can make models that predict a person's behavior and actions.

A digital twin can also make it possible to test environments, such as echoes and sound levels in a space. The system makes a digital model of the environment and then simulates how sound waves reflect from the walls, using microphones to map sound levels. The system can also run simulations of things like how warm the room gets.
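As a minimal sketch of the wall-reflection idea, the classic image-source method mirrors the sound source across each wall and measures the mirrored path to the listener. The rectangular room, the positions, and the restriction to first-order reflections are illustrative assumptions:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C

def first_order_echoes(source, listener, room_dims):
    """Arrival delays (seconds) of the six first-order wall echoes
    in a rectangular room, computed with the image-source method."""
    source = np.asarray(source, dtype=float)
    listener = np.asarray(listener, dtype=float)
    delays = []
    for axis in range(3):                      # x, y, z
        for wall in (0.0, room_dims[axis]):    # the two opposite walls
            image = source.copy()
            image[axis] = 2.0 * wall - image[axis]   # mirror across the wall
            path = np.linalg.norm(image - listener)  # reflected path length
            delays.append(path / SPEED_OF_SOUND)
    return sorted(delays)

# Example: a 5 m x 4 m x 3 m room
print(first_order_echoes(source=(1, 1, 1.5), listener=(4, 3, 1.5),
                         room_dims=(5, 4, 3)))
```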


Electronic components create heat, which means the system must adjust the room temperature during the day. And if the system knows the surface materials and other properties, it can create simulations that help make homes and workspaces more comfortable.
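One simple way to simulate that drift is a lumped heat-balance model: the electronics dump power into the room while the walls leak heat to the outside. All the parameter values in this sketch (wall conductance, heat capacity, power draw) are illustrative assumptions:

```python
def simulate_room_temp(hours, p_watts, t_out=18.0, t_start=21.0,
                       ua=50.0, heat_capacity=2.0e5):
    """Lumped model: dT/dt = (P_in - UA * (T - T_out)) / C.

    ua            -- wall conductance in W/K (assumed)
    heat_capacity -- thermal mass of room air and furniture in J/K (assumed)
    """
    dt = 60.0                     # one-minute time step, in seconds
    temp, trace = t_start, []
    for _ in range(int(hours * 3600 / dt)):
        net_power = p_watts - ua * (temp - t_out)   # watts in minus watts out
        temp += net_power * dt / heat_capacity
        trace.append(temp)
    return trace

# 500 W of electronics running for 8 hours: final room temperature
print(round(simulate_room_temp(8, 500.0)[-1], 1))
```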


"MIT’s StableRep system uses synthetic images from text-to-image models for machine learning, surpassing traditional real-image methods. It offers a deeper understanding of concepts and cost-effective training but faces challenges like potential biases and the need for initial real data training." (ScitechDaily.com/From Pixels to Paradigms: MIT’s Synthetic Leap in AI Training)



There is a possibility that, in the future, AI will have a digital twin.


The AI can use simulated reality to test how certain things work. The AI can drive cars in virtual cities, which makes it possible to simulate real-life situations. In augmented reality, the AI can use camera drives to simulate situations where people suddenly walk in front of the car. Those humans can exist only in digital memory, but they can also take holographic form. The system can use holograms, activated by operators, to test robot vehicles.

Developers can install those hologram projectors on drones, and the vehicle must avoid impact with them. In those systems, the LRAD tracks the points where the holograms are, and if the car cuts that line, the system has failed.

In cheaper setups, quadcopters can carry photovoltaic cells. Two quadcopters hover in opposite positions on either side of the hologram or a balloon doll, and one carries a laser aimed at the other's photovoltaic cell. If the car interrupts that laser ray, it has crossed the line. Operators can use the same kind of setup in portable systems that should raise an alarm if somebody enters an area.
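A minimal sketch of the beam-break logic: one quadcopter's laser illuminates the other's photovoltaic cell, and a drop in cell output means something cut the line. The sensor driver and threshold here are hypothetical placeholders, not a real hardware API:

```python
import time

BEAM_THRESHOLD = 0.5   # assumed: normalized cell output below this = beam cut

def read_cell_voltage() -> float:
    """Hypothetical driver: returns the photovoltaic cell's normalized
    output (1.0 = laser fully on the cell, 0.0 = dark)."""
    raise NotImplementedError("replace with a real ADC / telemetry read")

def watch_tripwire(poll_hz=200):
    """Alarm the moment the laser line between the quadcopters is cut."""
    while True:
        if read_cell_voltage() < BEAM_THRESHOLD:
            print("LINE CROSSED: the test vehicle failed to avoid the target")
            return
        time.sleep(1.0 / poll_hz)
```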

"A new paper argues that AI’s intelligence, as seen in systems like ChatGPT, is fundamentally different from human intelligence due to its lack of embodiment and understanding. This difference highlights that AI does not share human concerns or connections with the world." (ScitechDaily.com/The Limits of AI: Why ChatGPT Isn’t Truly “Intelligent”)




But the fact is this: The AI is not intelligent. 


The R&D process for AI can involve two systems: the system that the R&D team uses for development, and simulators connected to it that test how the code works in real-life conditions. The developers make changes in the code with the first system, and the AI can drive a virtual car simply by interacting with the simulator through a regular game set.

The simulator runs on a computer, and the AI watches the screen through a web camera while being connected to the game set's steering wheel and pedals. That denies the AI any ability to manipulate the simulator's environment directly, which makes it possible to test automatic driving systems in a space that behaves like reality. The same kind of setup can be used to control other systems, like fighter airplanes.
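A minimal sketch of that closed loop in Python: OpenCV's capture calls are a real API, but the driving model and the wheel interface are hypothetical stand-ins for whatever the R&D team actually uses:

```python
import cv2  # OpenCV: pip install opencv-python

def drive_policy(frame):
    """Hypothetical stand-in for the AI driving model: maps one camera
    frame to (steering, throttle) commands."""
    raise NotImplementedError

def send_to_wheel(steering, throttle):
    """Hypothetical interface to the game set's wheel and pedals."""
    raise NotImplementedError

def control_loop(camera_index=0):
    """Watch the simulator screen through a webcam and steer the game set."""
    cam = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, frame = cam.read()       # grab what the webcam sees
            if not ok:
                break
            steering, throttle = drive_policy(frame)
            send_to_wheel(steering, throttle)
    finally:
        cam.release()
```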

The AI is a language model. At the center of most AI solutions is a language model that translates commands into a form the computer understands. The language model can, for example, ask a CAD program to make 3D models and then send them to 3D printers.

It's easy to make an application that lets AI follow spoken commands. The system requires a speech-to-text application that fills the spoken words into the AI's prompt field, and the AI can answer the user through a text-to-speech application.
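A minimal sketch of that round trip using the SpeechRecognition and pyttsx3 Python packages; `query_llm` is a hypothetical placeholder for whatever language model sits in the middle:

```python
import speech_recognition as sr   # pip install SpeechRecognition
import pyttsx3                    # pip install pyttsx3

def query_llm(prompt: str) -> str:
    """Hypothetical placeholder for the language model behind the assistant."""
    raise NotImplementedError

def voice_round_trip():
    recognizer = sr.Recognizer()
    engine = pyttsx3.init()
    with sr.Microphone() as mic:
        audio = recognizer.listen(mic)            # capture the spoken command
    text = recognizer.recognize_google(audio)     # speech -> text
    answer = query_llm(text)                      # text fills the AI's prompt
    engine.say(answer)                            # text -> speech
    engine.runAndWait()
```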

When we ask the AI something, it collects a certain number of sources and then connects parts of those sources into a new entirety. The situations where we see that the AI is not intelligent include certain special queries. In those cases, the AI can make mistakes that turn the answer into nonsense. If we ask about things like Ramsey numbers or Ramsey theory in mathematics, or some other rare topic, the system may answer with things like the seating places of noble families in their clubs. In those cases, we must always mention the topic of the query.

These kinds of things are good examples of why AI will require human users for a long time. Human operators must recognize the errors that the AI makes. Using AI is not as easy as we think: the AI requires precise and well-articulated commands, or it can give answers about the wrong topics.


https://scitechdaily.com/simulating-the-cosmos-is-a-miniature-universe-possible/


https://scitechdaily.com/from-pixels-to-paradigms-mits-synthetic-leap-in-ai-training/


https://scitechdaily.com/ais-new-frontier-providing-unprecedented-insights-into-embryonic-development/


https://scitechdaily.com/the-limits-of-ai-why-chatgpt-isnt-truly-intelligent/


https://en.wikipedia.org/wiki/Ramsey_theory
