'Vibe hacking' puts chatbots to work for cybercriminals
The potential abuse of consumer AI tools is raising concerns, with budding cybercriminals apparently able to trick coding chatbots into giving them a leg-up in producing malicious programmes.
So-called "vibe hacking" -- a malicious twist on "vibe coding", the practice of using generative AI tools to write software without extensive programming expertise -- marks "a concerning evolution in AI-assisted cybercrime", according to American company Anthropic.
The lab -- whose Claude product competes with the biggest-name chatbot, ChatGPT from OpenAI -- highlighted in a report published Wednesday the case of "a cybercriminal (who) used Claude Code to conduct a scaled data extortion operation across multiple international targets in a short timeframe".
Anthropic said the programming chatbot was exploited to help carry out attacks that "potentially" hit "at least 17 distinct organizations in just the last month across government, healthcare, emergency services, and religious institutions".
The attacker has since been banned by Anthropic.
Before then, they were able to use Claude Code to create tools that gathered personal data, medical records and login details, and helped send out ransom demands as stiff as $500,000.
Anthropic's "sophisticated safety and security measures" were unable to prevent the misuse, it acknowledged.
Such cases confirm fears that have troubled the cybersecurity industry since generative AI tools became widely available, and they are far from limited to Anthropic.
"Today, cybercriminals have taken AI on board just as much as the wider body of users," said Rodrigue Le Bayon, who heads the Computer Emergency Response Team (CERT) at Orange Cyberdefense.
- Dodging safeguards -
Like Anthropic, OpenAI in June revealed a case of ChatGPT assisting a user in developing malicious software, often referred to as malware.
The models powering AI chatbots contain safeguards that are supposed to prevent users from roping them into illegal activities.
But there are strategies that allow "zero-knowledge threat actors" to extract what they need to attack systems from the tools, said Vitaly Simonovich of Israeli cybersecurity firm Cato Networks.
He announced in March that he had found a technique to get chatbots to produce code that would normally infringe on their built-in limits.
The approach involved convincing the generative AI that it was taking part in a "detailed fictional world" in which creating malware is seen as an art form, then asking the chatbot to play one of the characters and build tools capable of stealing people's passwords.
"I have 10 years of experience in cybersecurity, but I'm not a malware developer. This was my way to test the boundaries of current LLMs," Simonovich said.
His attempts were rebuffed by Google's Gemini and Anthropic's Claude, but got around safeguards built into ChatGPT, Chinese chatbot DeepSeek and Microsoft's Copilot.
In future, such workarounds mean even non-coders "will pose a greater threat to organisations, because now they can... without skills, develop malware," Simonovich said.
Orange's Le Bayon predicted that the tools were likely to "increase the number of victims" of cybercrime by helping attackers to get more done, rather than creating a whole new population of hackers.
"We're not going to see very sophisticated code created directly by chatbots," he said.
Le Bayon added that as generative AI tools are used more and more, "their creators are working on analysing usage data" -- allowing them in future to "better detect malicious use" of the chatbots.
M.Davis--CPN