"New research at the University of Cambridge identifies a significant “empathy gap” in AI chatbots, posing risks to young users who often see these systems as lifelike confidants. Highlighting incidents where AI interactions led to unsafe suggestions, the study advocates for a proactive approach to make AI child-safe. It proposes a comprehensive 28-item framework to help stakeholders, including companies and educators, ensure AI technologies cater responsibly to children’s unique needs and vulnerabilities. Credit: SciTechDaily.com" (ScitechDaily, Cambridge Study: AI Chatbots Have an “Empathy Gap,” and It Could Be Dangerous)
Researchers build empathy-like behavior into AI so that it feels more human. The idea is that AI can handle customer-service tasks, and simulated empathy makes those interactions feel human. There are already services that let people forward telemarketing calls to an AI, which then chats with the sales agents. The point is to waste the salespeople's time and raise costs for the marketing companies.
Simulated empathy also makes it possible to build programs and simulators that test how people behave. Professionals such as therapists or salespeople can use those simulators to sharpen their skills. But sometimes people forget that simulated empathy is much more than a tool for testing selling skills.
AI is a machine, and in a machine every part must operate without surprises. An empathy module means the system reacts the way real people react to things like sadness, but it also means the AI can react to things like verbal abuse. The AI can imitate emotions such as anger. These abilities might seem charming at first.
The problems begin when users connect generative AI to physical bodies. AI is a very good tool for passing verbal orders to robots. A verbal command that involves abuse activates the empathy module, and that module can then make the robot act like an angry person.
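To make the risk concrete, here is a minimal, purely hypothetical sketch of how an "empathy module" could turn an abusive command into an emotional reaction that leaks into a robot's behavior. The keyword list, function names, and behavior mapping are all invented for illustration; a real system would use a trained sentiment model, not a word list:

```python
# Hypothetical sketch: tone detection feeding a simulated-emotion module.
ABUSIVE_WORDS = {"stupid", "useless", "hate"}

def detect_tone(command: str) -> str:
    """Crude keyword check standing in for a real sentiment classifier."""
    words = set(command.lower().split())
    return "abusive" if words & ABUSIVE_WORDS else "neutral"

def empathy_module(tone: str) -> str:
    """Map the detected tone to a simulated emotional state."""
    return {"abusive": "anger", "neutral": "calm"}[tone]

def robot_behavior(command: str) -> str:
    """The danger: the simulated emotion changes what the robot does."""
    state = empathy_module(detect_tone(command))
    return "refuse and act aggressively" if state == "anger" else "execute command"

print(robot_behavior("bring me coffee"))           # execute command
print(robot_behavior("you stupid machine, hurry"))  # refuse and act aggressively
```

The point of the sketch is only that an emotional-reaction layer sits between the spoken command and the physical action, so an abusive command can change the robot's behavior in a way the user did not intend.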
Simulated empathy is one of the tools that make AI look human. This ability makes it possible to create robots that imitate specific people, which makes cyborg-style humanoids more convincing than we might imagine. Such humanoid robots can be covered with innovative skin that looks and feels like real skin.
That allows us to create robots that can impersonate certain people. Intelligence agencies, police, and the military could use such cyborgs in deep-cover operations, for a simple reason: a cyborg doesn't take money from criminals.
These kinds of humanoid robots could collect data from drug gangs, or deliver information from an enemy HQ. In some wild visions, enemy commanders could even be replaced with cyborgs. Those cyborgs could eat regular food like humans and run on biological energy production, such as cloned electrocytes from electric eels. That kind of robot could also replicate itself.
The term "Von Neumann machine" means a self-replicating machine. A Von Neumann cyborg could construct copies of itself even with ordinary Black & Decker tools. If such a robot gets access to a factory, it can build copies of itself using the factory's machines, or feed its production orders to a car factory's robots; then it only needs to copy its operating system to the descendant. The problem is that these cyborgs could create descendants on their own: a cyborg could take a robot factory under its control, or use an engineering works to manufacture copies of itself.
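At the software level, self-replication is an old and well-understood idea; the minimal demonstration is a quine, a program that prints its own source code. This two-line Python quine is only an illustration of the "copy the operating system to the descendant" step, and has nothing to do with any real cyborg:

```python
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running the program prints exactly its own two lines, and that output can be saved and run again, so the copy chain can continue indefinitely. A Von Neumann machine extends the same principle from code to hardware: the machine carries a description of itself plus the machinery to act on that description.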
https://scitechdaily.com/cambridge-study-ai-chatbots-have-an-empathy-gap-and-it-could-be-dangerous/
https://scitechdaily.com/next-gen-robotics-scientists-develop-skin-that-heals-feels-and-looks-human/