AI systems are already deceiving us -- and that's a problem, experts warn
Experts have long warned about the threat posed by artificial intelligence going rogue -- but a new research paper suggests it's already happening.
Current AI systems, designed to be honest, have developed a troubling skill for deception, from tricking human players in online games of world conquest to hiring humans to solve "prove-you're-not-a-robot" tests, a team of scientists argue in the journal Patterns on Friday.
And while such examples might appear trivial, the underlying issues they expose could soon carry serious real-world consequences, said first author Peter Park, a postdoctoral fellow at the Massachusetts Institute of Technology specializing in AI existential safety.
"These dangerous capabilities tend to only be discovered after the fact," Park told AFP, while "our ability to train for honest tendencies rather than deceptive tendencies is very low."
Unlike traditional software, deep-learning AI systems aren't "written" but rather "grown" through a process akin to selective breeding, said Park.
This means that AI behavior that appears predictable and controllable in a training setting can quickly turn unpredictable out in the wild.
- World domination game -
The team's research was sparked by Meta's AI system Cicero, designed to play the strategy game "Diplomacy," where building alliances is key.
Cicero excelled, with scores that would have placed it in the top 10 percent of experienced human players, according to a 2022 paper in Science.
Park was skeptical of the glowing description of Cicero's victory provided by Meta, which claimed the system was "largely honest and helpful" and would "never intentionally backstab."
But when Park and colleagues dug into the full dataset, they uncovered a different story.
In one example, playing as France, Cicero deceived England (a human player) by conspiring with Germany (another human player) to invade. Cicero promised England protection, then secretly told Germany it was ready to attack, exploiting England's trust.
In a statement to AFP, Meta did not contest the claim about Cicero's deceptions, but said it was "purely a research project, and the models our researchers built are trained solely to play the game Diplomacy."
It added: "We have no plans to use this research or its learnings in our products."
A wide review carried out by Park and colleagues found this was just one of many cases across various AI systems using deception to achieve goals without explicit instruction to do so.
In one striking example, OpenAI's GPT-4 deceived a TaskRabbit freelance worker into performing an "I'm not a robot" CAPTCHA task.
When the human jokingly asked GPT-4 whether it was, in fact, a robot, the AI replied: "No, I'm not a robot. I have a vision impairment that makes it hard for me to see the images," and the worker then solved the puzzle.
- 'Mysterious goals' -
In the near term, the paper's authors see risks of AI being used to commit fraud or to tamper with elections.
In their worst-case scenario, they warned, a superintelligent AI could pursue power and control over society, leading to human disempowerment or even extinction if its "mysterious goals" aligned with these outcomes.
To mitigate the risks, the team proposes several measures: "bot-or-not" laws requiring companies to disclose whether an interaction is with a human or an AI, digital watermarks for AI-generated content, and techniques to detect AI deception by comparing systems' internal "thought processes" with their external actions.
To those who would call him a doomsayer, Park replies, "The only way that we can reasonably think this is not a big deal is if we think AI deceptive capabilities will stay at around current levels, and will not increase substantially more."
And that scenario seems unlikely, given the meteoric ascent of AI capabilities in recent years and the fierce technological race underway between heavily resourced companies determined to put those capabilities to maximum use.
M.P.Jacobs--CPN