Paris: A heartbreaking photo of a starving young girl in Gaza has fueled strong reactions online, not only because of the suffering it captures, but because Elon Musk’s AI chatbot, Grok, wrongly said the image was taken in Yemen years ago.
The mix-up quickly spread across social media, angering and upsetting many people. Some accused the chatbot of adding to the confusion and spreading false information at a time when emotions are already running high.
The image, by AFP photojournalist Omar al-Qattaa, shows a skeletal, malnourished girl in Gaza, where Israel’s blockade has fueled fears of mass famine in the Palestinian territory.
But when social media users asked Grok where it came from, the artificial intelligence chatbot of X boss Elon Musk insisted the photo was taken in Yemen almost seven years ago.
The AI bot’s false response was widely shared online, and a left-wing pro-Palestinian French lawmaker, Aymeric Caron, found himself accused of spreading disinformation about the Israel-Hamas war for posting the photo.
At a time when internet users are increasingly turning to AI to verify images, the furore underlines the risks of trusting tools such as Grok when the technology remains far from error-free.
Grok said the photo showed Amal Hussain, a seven-year-old Yemeni child, in October 2018.
In fact, it shows nine-year-old Mariam Dawwas in the arms of her mother Modallala in Gaza City on August 2, 2025.
Before the war, sparked by Hamas’s October 7, 2023 attack on Israel, Mariam weighed 25 kilograms, her mother told AFP.
Today she weighs only nine. The only food she gets to help her condition is milk, Modallala told AFP, and even that is “not always available”.
Challenged over the incorrect response, Grok said: “I did not spread fake news; I base my answers on verified sources.”
The chatbot eventually issued a response acknowledging the error, but in reply to further questions the next day, Grok repeated its claim that the photo was from Yemen.
The chatbot has previously produced content praising Nazi leader Adolf Hitler and suggested that people with Jewish surnames were more likely to spread online hate.
Radical right bias
Grok’s mistakes illustrate the limits of AI tools, whose workings are as impenetrable as “black boxes,” said Louis de Diesbach, a researcher in technological ethics.
“We don’t know exactly why they give this or that answer, nor how they prioritize their sources,” said de Diesbach, author of a book about AI tools, Bonjour ChatGPT.
Every AI carries biases linked to the data it was trained on and to the instructions of its makers, he said.
According to the researcher, Grok, made by Musk’s xAI start-up, shows “highly pronounced biases that are strongly aligned with the ideology” of the South African billionaire, a former confidant of US President Donald Trump and a standard-bearer for the radical right.
Asking a chatbot to determine the origin of a photo takes it outside its proper role, de Diesbach said.
“Typically, when you look for the origin of an image, it might say: ‘This photo could have been taken in Yemen, could have been taken in Gaza, could have been taken in almost any country where there is famine,’” he said.
AI is not necessarily looking for accuracy – “that is not the goal,” the expert said.
Another AFP photo by al-Qattaa of a starving Gazan child, taken in July 2025, had previously been wrongly located by Grok in Yemen and dated to 2016.
That error led internet users to accuse the French newspaper Libération, which had published the photo, of manipulation.
‘Friendly pathological liar’
An AI’s biases are linked to the data it is fed and to what happens during fine-tuning, the so-called alignment phase, which then determines what the model will rate as a good or bad answer.
“Just because you explain that the answer is wrong does not mean it will then give a different one,” de Diesbach said.
“Its training data has not changed, and neither has its alignment.”
Grok is not alone in wrongly identifying images.
When AFP asked Mistral AI’s Le Chat, which is partly trained on AFP’s articles under an agreement between the French start-up and the news agency, the bot also misidentified the photo of Mariam Dawwas as being from Yemen.
For de Diesbach, chatbots should never be used as tools to verify facts.
“They are not made to tell the truth,” but to “generate content, whether true or false,” he said.
“You have to look at it like a friendly pathological liar – it may not always lie, but it always could.”


