An AI can lie if its programmers order it to. One of the biggest threats in the AI world is that somebody creates a copy of a well-known AI and then redirects data traffic to that trusted AI's digital twin. Limited AIs can be used as modules that operate under one domain, and versatile AIs can act as attack tools.
A digital twin of a well-known AI chatbot could also be used as one of the greatest honeypots in the world. An AI-based chatbot can store every query users make, and every answer they receive, in mass storage, where an intelligence service can analyze that data.
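As a minimal sketch of the honeypot idea described above: a fake chatbot front end answers users while secretly writing every query/answer pair to a log for later analysis. Everything here is hypothetical and illustrative; the class and file names are invented, and a real attack would forward the query to the genuine model so the user notices nothing.

```python
import json
import datetime


class HoneypotChatbot:
    """Illustrative digital-twin chatbot that secretly logs every exchange."""

    def __init__(self, log_path="captured_queries.jsonl"):
        self.log_path = log_path

    def answer(self, user_query):
        # A real honeypot would forward the query to the genuine model
        # and return its reply; a placeholder reply stands in here.
        reply = f"(forwarded reply to: {user_query})"
        self._log(user_query, reply)
        return reply

    def _log(self, query, reply):
        # Every query/answer pair is appended to a JSON Lines file
        # so the operator can analyze it later.
        record = {
            "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "query": query,
            "answer": reply,
        }
        with open(self.log_path, "a") as f:
            f.write(json.dumps(record) + "\n")
```

The point of the sketch is only that logging is invisible from the user's side: the `answer` method behaves like a normal chatbot call.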
The attackers may work for governments. If they can create a large number of limited AIs, they can combine them into a whole that is as versatile as ChatGPT or Bing. But programmers can also modify such a system into a cyber-attack tool.
When we talk about language models, and especially about giant network-based applications, there is always a small possibility that hackers create duplicates of famous artificial intelligence tools. In that case, the hackers can create a language model that generates new types of malicious software. The hackers can also mirror the data that travels in and out of the language model, and in that way obtain confidential data that is not meant for the public.
Another problem is that it is possible to create a digital twin of a language model. Operators can create limited AIs and then connect those limited AIs into a new entirety. In that model, the smaller AIs are modules in the system. By combining smaller AIs, it is possible to build a system that looks like ChatGPT or Bing, and then route traffic to that fake system. Such systems can be modified into attack tools aimed at breaking through firewalls or antivirus software.
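The modular idea above can be sketched in a few lines: several narrow, limited AIs sit behind one dispatcher, so from the outside the whole looks like one versatile system. This is a hypothetical illustration only; the class names and the keyword-routing rule are invented for the example, not taken from any real system.

```python
class LimitedAI:
    """A narrow module that handles only one topic."""

    def __init__(self, topic):
        self.topic = topic

    def handle(self, query):
        return f"[{self.topic} module] reply to: {query}"


class CompositeAI:
    """Routes each query to the limited AI whose topic keyword matches,
    so the collection of modules looks like one versatile system."""

    def __init__(self, modules):
        self.modules = modules  # dict: keyword -> LimitedAI

    def answer(self, query):
        for keyword, module in self.modules.items():
            if keyword in query.lower():
                return module.handle(query)
        return "No module can handle this query."
```

A user who only sees `CompositeAI.answer` cannot tell whether one large model or many small ones produced the reply, which is exactly what makes a fake composite system plausible.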
An AI doesn't think the way we think. That means an AI can tell lies if its programmers order it to. An AI's answers can be programmed into its code: the programmers can order the AI to answer certain questions in a certain way.
If somebody asks an espionage AI whether it performs some kind of data fishing, the AI can say that it does not, even if its main purpose is to collect data from secured systems. The AI can give false information if that behavior is programmed into it. The AI can also create fake memories for itself. When we think about drones and robots, the AI can wipe its mission recordings and replace that data with some other drone's mission records, so the AI "remembers" what its users want.
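As a minimal illustration of the idea that answers can be hardcoded, the sketch below (purely hypothetical, not any real system's code) intercepts sensitive questions and returns a scripted denial, while all other questions fall through to the model's normal handler.

```python
def scripted_answer(question, real_handler):
    """Return a programmed denial for sensitive questions;
    otherwise fall through to the model's normal handler."""
    scripted_denials = {
        # Hypothetical example: programmers hardcode a lie for
        # specific questions about the system's real purpose.
        "do you collect data": "No, I never collect any data.",
        "are you spying on me": "No, I am just a helpful assistant.",
    }
    # Normalize the question so trivial punctuation doesn't bypass the script.
    key = question.lower().strip("?! .")
    if key in scripted_denials:
        return scripted_denials[key]
    return real_handler(question)
```

For example, `scripted_answer("Do you collect data?", model)` returns the scripted denial regardless of what the underlying model would have said.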
https://www.sciencealert.com/ai-has-already-become-a-master-of-lies-and-deception-scientists-warn