Grok, is that Gaza? AI image checks mislocate news photographs
An image by AFP photojournalist Omar al-Qattaa shows a skeletal, underfed girl in Gaza, where Israel's blockade has fuelled fears of mass famine in the Palestinian territory.
But when social media users asked Grok where it came from, X boss Elon Musk's artificial intelligence chatbot was certain that the photograph was taken in Yemen nearly seven years ago.
The AI bot's false response was widely shared online, and a left-wing pro-Palestinian French lawmaker, Aymeric Caron, was accused of peddling disinformation on the Israel-Hamas war for posting the photo.
At a time when internet users are increasingly turning to AI to verify images, the furore shows the risks of trusting tools like Grok when the technology is far from error-free.
Grok said the photo showed Amal Hussain, a seven-year-old Yemeni child, in October 2018.
In fact the photo shows nine-year-old Mariam Dawwas in the arms of her mother Modallala in Gaza City on August 2, 2025.
Before the war, sparked by Hamas's October 7, 2023 attack on Israel, Mariam weighed 25 kilograms, her mother told AFP.
Today, she weighs only nine kilograms. The only nutrition she gets to help her condition is milk, Modallala told AFP, and even that is "not always available".
Challenged on its incorrect response, Grok said: "I do not spread fake news; I base my answers on verified sources."
The chatbot eventually issued a response that recognised the error -- but in reply to further queries the next day, Grok repeated its claim that the photo was from Yemen.
The chatbot has previously issued content that praised Nazi leader Adolf Hitler and that suggested people with Jewish surnames were more likely to spread online hate.
- Radical right bias -
Grok's mistakes illustrate the limits of AI tools, whose functions are as impenetrable as "black boxes", said Louis de Diesbach, a researcher in technological ethics.
"We don't know exactly why they give this or that reply, nor how they prioritise their sources," said Diesbach, author of a book on AI tools, "Hello ChatGPT".
Each AI has biases linked to the information it was trained on and the instructions of its creators, he said.
In the researcher's view, Grok, made by Musk's xAI start-up, shows "highly pronounced biases which are highly aligned with the ideology" of the South African billionaire, a former confidant of US President Donald Trump and a standard-bearer for the radical right.
Asking a chatbot to pinpoint a photo's origin takes it out of its proper role, said Diesbach.
"Typically, when you look for the origin of an image, it might say: 'This photo could have been taken in Yemen, could have been taken in Gaza, could have been taken in pretty much any country where there is famine'."
AI does not necessarily seek accuracy -- "that's not the goal," the expert said.
Another AFP photograph of a starving Gazan child by al-Qattaa, taken in July 2025, had already been wrongly located by Grok to Yemen and dated to 2016.
That error led to internet users accusing the French newspaper Liberation, which had published the photo, of manipulation.
- 'Friendly pathological liar' -
An AI's bias is linked to the data it is fed and what happens during fine-tuning -- the so-called alignment phase -- which then determines what the model would rate as a good or bad answer.
"Just because you explain to it that the answer's wrong doesn't mean it will then give a different one," Diesbach said.
"Its training data has not changed and neither has its alignment."
Grok is not alone in wrongly identifying images.
When AFP asked Mistral AI's Le Chat -- which is in part trained on AFP's articles under an agreement between the French start-up and the news agency -- the bot also misidentified the photo of Mariam Dawwas as being from Yemen.
For Diesbach, chatbots must never be used as tools to verify facts.
"They are not made to tell the truth," but to "generate content, whether true or false", he said.
"You have to look at it like a friendly pathological liar -- it may not always lie, but it always could."