"Leading AI scientists have issued a call for urgent action from global leaders, criticizing the lack of progress since the last AI Safety Summit. They propose stringent policies to govern AI development and prevent its misuse, emphasizing the potential for AI to exceed human capabilities and pose severe risks. Credit: SciTechDaily.com" (ScitechDaily, Leading AI Scientists Warn of Unleashing Risks Beyond Human Control)
When we think about small, limited AIs used for commercial purposes, we sometimes forget that those limited AIs can act as modules in a large-scale AI. That means hostile actors can effectively steal systems like ChatGPT and the Copilot AIs by building large groups of limited AIs. In those cases, the hostile actors interconnect those limited AIs into one entity.
AI is a tool that can generate many good things. But the same tool can create many bad things. AI-based image recognition is one such tool: it can track terrorists, but the same tool can also be used to track protesters in non-democratic countries. AI can likewise be used to create computer viruses and surveillance tools.
Those tools are good or bad depending on how people use them. The user of the AI determines whether it is good or bad. AI is an excellent tool for materials research, and those new materials can be used to create new stealth fighters and other kinds of things. You can imagine what those things mean in the wrong hands.
An AI that controls cyberattacks is a tool that is quite hard for defenders to counter. The AI can control groups of computers and switch the attacking machine, which changes the attacking IP address all the time. That makes it difficult for defenders to block the attack by denying queries that come from a certain IP address.
Attackers can use AI-controlled virtual actors to cheat people. So-called dead bots can also be used to imitate people. Those things are the ultimate tools for phishing campaigns.
AI can make many things into reality, and AI-based systems can even hack human brains. Things like Neuralink implants are made for good: they are meant to help people who otherwise have no hope.
Those neuro-implants may someday control exoskeletons or wearable robots that carry paralyzed patients. The Brain-Computer Interface (BCI) is a tool that can control computers using EEG waves, and such systems can offer new ways to interact with robots. But if hackers break into a BCI system, that can cause a terrible situation.
But in the wrong hands that technology is devastating. Neuro-implanted microchips could control an entire person, or some animals, so those things open frightening scenarios in our minds. AI can decode EEG signals, and that gives a chance to see what a person is thinking. This is one of the biggest data-security problems with BCI systems.
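To make "decoding EEG" concrete: in practice it is usually framed as a classification problem over features extracted from the signal. The toy sketch below is a heavily simplified illustration on synthetic data, not a real decoder: the two mental states, their band-power profiles, and the nearest-centroid approach are all assumptions chosen to keep the example self-contained.

```python
import math
import random

# Two imagined mental states with made-up (alpha, beta) band-power profiles.
PROFILES = {"rest": (8.0, 2.0), "motor_imagery": (4.0, 6.0)}

def make_trial(rng, mean_alpha, mean_beta):
    """Synthetic 'EEG feature vector': two band powers plus Gaussian noise."""
    return (mean_alpha + rng.gauss(0, 0.5), mean_beta + rng.gauss(0, 0.5))

def centroid(trials):
    """Mean feature vector of a list of trials."""
    n = len(trials)
    return (sum(t[0] for t in trials) / n, sum(t[1] for t in trials) / n)

def classify(feature, centroids):
    """Nearest-centroid decoder: pick the state whose centroid is closest."""
    return min(centroids, key=lambda s: math.dist(feature, centroids[s]))

rng = random.Random(42)

# "Train": estimate one centroid per mental state from labeled trials.
train = {s: [make_trial(rng, *p) for _ in range(50)] for s, p in PROFILES.items()}
centroids = {s: centroid(ts) for s, ts in train.items()}

# "Test": decode unseen trials and measure accuracy.
test_trials = [(s, make_trial(rng, *p)) for s, p in PROFILES.items() for _ in range(20)]
accuracy = sum(classify(f, centroids) == s for s, f in test_trials) / len(test_trials)
print(f"decoding accuracy: {accuracy:.2f}")
```

Even this crude decoder separates the two synthetic states almost perfectly, which illustrates the privacy concern: if mental states leave a measurable statistical footprint, a simple model can learn to read it, and whoever holds the BCI data holds that capability.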
https://scitechdaily.com/leading-ai-scientists-warn-of-unleashing-risks-beyond-human-control/