AI Systems Are Learning to Lie and Deceive, Scientists Find | News World

AI models are, apparently, getting better at lying on purpose.

Two recent studies — one published this week in the journal PNAS and the other last month in the journal Patterns — reveal some jarring findings about large language models (LLMs) and their ability to lie to or deceive human observers on purpose.

In the PNAS paper, German AI ethicist Thilo Hagendorff goes so far as to say that sophisticated LLMs can be encouraged to exhibit “Machiavellianism,” or intentional and amoral manipulativeness, which “can trigger misaligned deceptive behavior.”

“GPT-4, for instance, exhibits deceptive behavior in simple test scenarios 99.16% of the time,” the University of Stuttgart researcher writes, citing his own experiments in quantifying various “maladaptive” traits in 10 different LLMs, most of which are different versions within OpenAI’s GPT family.

Billed as a human-level champion in the political strategy board game “Diplomacy,” Meta’s Cicero model was the subject of the…

