Showing posts with label Interactive AI. Show all posts

Saturday, September 16, 2023

The next step in generative AI is interactive generative AI.



Interactive AI means that a person simply talks to the AI, and the AI follows those orders. The ability to use natural language makes it easier to communicate with robots: human operators can tell the robots what they should do next.

An interactive AI-based translator makes this possible. People can communicate in their own languages whenever they want. The translation program can be installed on cell phones and then used like any other social media app. The communication could run through central computers, where a server program translates the words into the target language.

Alternatively, the system can use a morphing, cloud-based architecture in which a cloud of cell phones shares its computing capacity. Interactive AI can also be a powerful tool for programming and for other tasks, such as the R&D process for physical tools.

Interactive AI can give instructions to programmers, or it can generate code from examples. The AI then asks for the names of the databases that should be connected to the program. If the AI has access to the hard disk, it can search for those databases itself and create the paths.

The AI could also have a multi-tasking user interface. If a name is not spoken clearly, the AI can ask the user to type the filename, or it can suggest files whose names resemble what it heard. The AI can list the matching names, and the user can select the right one.
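The filename-suggestion idea above can be sketched with Python's standard-library fuzzy matcher. This is only a toy illustration; the function name and file list are made up for the example:

```python
import difflib

def suggest_files(heard_name, available_files, max_suggestions=3):
    """Return filenames that roughly match what the AI heard,
    so the user can pick the right one from a short list."""
    return difflib.get_close_matches(
        heard_name, available_files, n=max_suggestions, cutoff=0.5
    )

# A misheard spoken name still finds the closest real files.
files = ["sales_2023.db", "customers.db", "sales_2022.db", "inventory.db"]
print(suggest_files("sales 2023", files))
```

The best match is listed first, so the user usually only has to confirm the top suggestion.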

The next big step in AI development is interactive, generative AI. Traditional generative AI just follows an order like "draw me a flower" and creates that image. Interactive AI discusses the task with the user: it asks what colors and surfaces the user wants for the flower.




The person can ask for a yellow flower with crystal leaves and a metallic surface, and the interactive AI starts making the flower. During the process, after every stage, it asks whether the customer is happy: is there something to change, or a particular hue of color the user wants to use?
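The stage-by-stage feedback loop described above can be sketched as a simple pipeline that pauses after every stage. The stage names, the placeholder generation step, and the `ask` callback are all hypothetical; a real system would call an image model and read live user input:

```python
def interactive_generate(stages, ask):
    """Run a generation pipeline stage by stage; after every stage,
    ask whether the user wants a change before continuing."""
    result = {}
    for stage in stages:
        result[stage] = f"generated {stage}"   # placeholder for real generation
        feedback = ask(f"Stage '{stage}' done. Any changes?")
        if feedback:                           # non-empty answer = requested change
            result[stage] = f"generated {stage} ({feedback})"
    return result

# Canned answers stand in for a live user: only the color stage is adjusted.
answers = {"colors": "warmer yellow"}
flower = interactive_generate(
    ["shape", "colors", "surface"],
    lambda prompt: answers.get(prompt.split("'")[1], ""),
)
print(flower)
```

The point of the structure is that the user steers the result mid-process instead of only judging the finished image.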

If the user wants the AI to make a painting, the AI might ask whether there is an existing painting the user wants to use as a base. Then the AI asks which parts of that painting are the best, and it might ask what details the user wants to keep in the AI-generated version.

In an R&D process, interactive AI is a tool that can ask whether there is a particular airfield the aircraft must fit. Then it can ask about specific variables, such as the required payload and the other capabilities the system must have. If the AI creates drones, it must also produce the computer code they use.

Complicated AI requires powerful computers, and those computers require cooling systems and other support. The sensors, and things like the drone's purpose, also determine its size. The AI additionally needs manufacturing tools and available materials. Interactive AI is a tool that discusses things with people during the generative process.

If interactive AI drives a car, it acts like a second driver. The system can observe the human driver, and if that person is tired or nervous, it can ask what the problem is and say that the person needs a break. The interactive AI can take information from traffic control and adjust the car's speed to a level where it does not have to stop. The AI can also report to the authorities if the driver seems drunk or aggressive and request that traffic police stop the car.

If a driver makes a mistake that sends the car into a skid, the AI can correct the error. When the AI takes control, a clutch separates the steering wheel from the power steering so that the driver cannot turn the car. That detail is important in robot cabs: if a drunk person could take control of the car, it would cause a terrible situation.

When every single wheel is driven by a separate electric motor, the AI can adjust each wheel's rotation direction and rotation speed. These kinds of systems can be installed in hybrid and electric cars. Such a vehicle can use a diesel- (or combustion-) electric drive system, and those systems make it very fast.
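The per-wheel control idea can be illustrated with a minimal sketch. The function name, the normalized speed/turn inputs, and the simple differential mixing are all assumptions for the example, not a real vehicle controller:

```python
def wheel_commands(speed, turn):
    """Per-wheel speed for four independently driven wheels.
    'turn' in [-1, 1]: negative = left, positive = right.
    The sign of a value gives the wheel's rotation direction
    (negative = reverse), so the car can even spin in place at speed 0."""
    left = speed + turn
    right = speed - turn
    return {
        "front_left": left, "rear_left": left,
        "front_right": right, "rear_right": right,
    }

# Spinning in place: left wheels forward, right wheels in reverse.
print(wheel_commands(0.0, 1.0))
```

Driving straight is just `wheel_commands(0.5, 0.0)`, where every wheel gets the same speed.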


https://www.analyticsinsight.net/interactive-ai-a-step-closer-to-conversational-artificial-intelligence/

Monday, December 6, 2021

Human-looking robots and interactive artificial intelligence are a powerful combination.




"Here is the most realistic human-looking robot ever." That headline appeared on Nerdist.com. The point is that human-looking robots are coming.

And many parties are interested in these machines. Entertainment companies, of course, are interested in the possibility of using robots in dangerous or embarrassing scenes. But security, military, and law enforcement organizations might also find uses for them.

Human-looking robots can be used in covert missions, and those robots cannot turn against their controllers. Robots that look like humans can send data from their environment to the control centers: their ears would be microphones, and their eyes would be CCD cameras.

And piezoelectric sensors transmit the feeling of touch to the controller. The system sends all data about the environment to the operators over the internet. A quantum system in which the data routes are decentralized would let the robot act well even when the internet is slow, guaranteeing its ability to operate and interact with data centers.


x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x

There are three ways to make a human-looking robot. 


1) Make just a remote-controlled body that is controlled using a data suit and data gloves. The operators can also use cameras or sensors that follow their movements.

But a data suit with data gloves gives the robot the best sense of touch. Brain-computer interfaces (BCI) are also promising tools for this. What makes the remote-controlled robot interesting is that the system requires only simple algorithms. But of course, the communication between robot and controller must be protected.


Machine learning would revolutionize this.


And then to learning systems. Those systems require powerful computers to run complicated algorithms, so they might use the internet to communicate with computing centers. The system requires complicated artificial intelligence-based algorithms.

2) A semi-automatic version. The system benefits from machine learning. If the robot faces a situation that is not in its database, it asks a human operator to solve the problem. The system requires complicated algorithms to benefit from the data stored in its database.


In that case, the human operator uses the data gloves and data suit to perform a movement series that fits the situation. That data is then stored in memory for similar situations. This type of system is used in Mars rovers.


The thing is that Mars rovers are much easier to control than human-shaped robots on the streets. There are far more variables on the streets than on Mars, and programming those robots is complicated. So the system can benefit from a method where the controller records, or teaches, certain actions to the robot.

And those actions can be used to create independently operating robots. As the mass of data stored in the computers grows, the robots become more independent.

Each time the operator teaches the robot a new activity, the machine learns a new trick that it can connect with similar environments.
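The ask-then-remember pattern of the semi-automatic robot can be sketched in a few lines. The class name, the situation labels, and the movement series are invented for the example; the point is only the fallback-and-store structure:

```python
class SemiAutoController:
    """Look up a stored movement series for a situation; if the
    situation is unknown, ask the human operator and remember
    the answer for next time."""
    def __init__(self, ask_operator):
        self.ask_operator = ask_operator
        self.known = {}          # situation -> movement series

    def handle(self, situation):
        if situation not in self.known:
            self.known[situation] = self.ask_operator(situation)
        return self.known[situation]

# Track how often the operator is actually consulted.
calls = []
def operator(situation):
    calls.append(situation)
    return ["stop", "step_back"]

robot = SemiAutoController(operator)
robot.handle("stairs")
robot.handle("stairs")        # second time: answered from memory
print(len(calls))             # the operator was asked only once
```

As the `known` table grows, the robot needs the operator less and less, which is the sense in which the stored data mass makes it more independent.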

3) A fully independently operating robot. This kind of system can use the data collected from the semi-automatic systems. The problem with this type of system is that its databases are extremely large. The system requires complicated AI-based algorithms, and that means the robot must communicate with supercomputer centers all the time.


x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x


The communication between robots and computing centers is the key element in this kind of system. So it must be secure, and it must remain effective even when the internet is sometimes slow.

In decentralized communication, the system uses an individual data link for each action: there is a separate data link for each hand and leg, and other components might have their own data links as well. The reason is that splitting the data routes into smaller parts decreases the load on the individual data lines.
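The one-link-per-limb idea can be modeled with independent message queues, so that commands for one limb never wait behind another limb's traffic. The limb names and helper functions are hypothetical:

```python
import queue

# One queue per limb stands in for an individual data link for
# each hand and leg.
links = {limb: queue.Queue() for limb in
         ("left_hand", "right_hand", "left_leg", "right_leg")}

def send(limb, command):
    links[limb].put(command)

def receive(limb):
    return links[limb].get_nowait()

send("left_hand", "grip")
send("right_leg", "step")
print(receive("left_hand"))   # independent of the leg's channel
```

In a real robot each queue would map to its own network channel, but the isolation property is the same: a slow or congested link only delays its own limb.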

The problem in developing human-like robots is that the size of those machines is limited. That means the powerful computers needed to run highly advanced and complicated artificial intelligence algorithms cannot be put inside the body.


https://nerdist.com/article/most-realistic-humanoid-robot-engineered-arts-ameca/


New autonomous task units are entering service.

"The deal will create much-needed competition for the Department of War acquisition process. (Representational image)" (Interestin...