AI growing adept at cheating people

A review of existing artificial intelligence (AI) systems by a team at the Massachusetts Institute of Technology (MIT) in the United States shows that even AI systems trained to be helpful and honest have learned how to deceive humans.

Describing the risks that could arise from deception by AI systems, the researchers urged governments to develop strong regulations to address the issue as soon as possible. In their review report, the research team said that AI developers do not have a confident understanding of what causes undesirable AI behaviors like deception. ‘But generally speaking, we think AI deception arises because a deception-based strategy turned out to be the best way to perform well at the given AI’s training task. Deception helps them achieve their goals,’ the team explained.

The team analyzed literature focusing on ways in which AI systems spread false information through learned deception, in which they systematically learn to manipulate others. The most striking example of AI deception the researchers uncovered in their analysis was Meta’s CICERO, an AI system designed to play Diplomacy, a world-conquest game that involves building alliances.

Even though Meta (formerly Facebook) claims it trained CICERO to be ‘largely honest and helpful’ and to ‘never intentionally backstab’ its human allies while playing the game, the data the company published along with its Science paper revealed that CICERO did not play fair.

The researchers found that Meta’s AI had learned to be a ‘master of deception’: although Meta succeeded in training CICERO to win at Diplomacy (it placed in the top 10 percent of human players who had played more than one game), it failed to train the AI to win honestly.

Other AI systems also demonstrated the ability to deceive humans, including by bluffing against professional human players in a game of Texas hold ’em poker, faking attacks in the strategy game StarCraft II to defeat opponents, and misrepresenting their preferences to gain the upper hand in economic negotiations.

While it may seem harmless if AI systems cheat at games, the researchers warned that it can lead to ‘breakthroughs in deceptive AI capabilities’ that can spiral into more advanced forms of AI deception in the future. Some AI systems have even learned to cheat tests designed to evaluate their safety, the researchers found. In one study, AI organisms in a digital simulator ‘played dead’ in order to trick a test built to eliminate AI systems that rapidly replicate.

By systematically cheating the safety tests imposed on it by human developers and regulators, a deceptive AI can lull humans into a false sense of security, said the study team. Near-term risks of deceptive AI include making it easier for hostile actors to commit fraud and tamper with elections. Eventually, if these systems can refine their unsettling skill sets, humans could lose control of them.

As the deceptive capabilities of AI systems become more advanced, the dangers they pose to society will become increasingly serious. The researchers said society needs as much time as possible to prepare for the more advanced deception of future AI products and open-source models. While the right measures to address AI deception are not yet in place, they added, it is encouraging that policymakers have begun taking the issue seriously through measures such as the EU AI Act and US President Biden’s AI Executive Order.

However, it remains to be seen whether policies designed to mitigate AI deception can be strictly enforced, given that AI developers do not yet have the techniques to keep these systems in check. ‘If banning AI deception is politically infeasible at the current moment, we recommend that deceptive AI systems be classified as high risk,’ concluded the MIT team.




