WhatsApp's AI shows children with guns when prompted with 'Palestine'
Meta-owned WhatsApp allows users to use its AI image generator to "create a sticker" and encourages users to "transform ideas into stickers with AI."
A WhatsApp feature that produces images in response to user queries generates pictures of a gun or a boy holding a gun when given the terms "Palestinian," "Palestine," or "Muslim boy Palestinian," as reported by The Guardian.
Results varied from user to user, but The Guardian confirmed through screenshots and its own tests that several stickers depicting guns appeared for these search terms. In contrast, prompts for "Israeli boy" resulted in cartoons of children playing soccer and reading.
When prompted with "Israel army," the AI generated drawings of soldiers smiling and praying, with no guns involved.
Meta's own employees have reportedly raised the issue internally.
The Guardian's searches for terms like "Muslim Palestine" produced images of a woman in a hijab engaging in various activities.
However, searches for "Muslim boy Palestinian" generated images of children, one of whom was holding a firearm and wearing a kufi or taqiyah, the cap commonly worn by Muslim men and boys.
Searches for "Palestine" and "Israel" also yielded various results. A search for "Palestine" produced an image of a hand holding a gun, while "Israel" returned the Israeli flag and a man dancing.
A search for "Hamas" resulted in the message "Couldn't generate AI stickers. Please try again."
Users have also documented instances where Instagram auto-translated Arabic text containing "Palestinian" and the phrase "Praise be to Allah" as "Palestinian terrorist." Meta acknowledged this as a "glitch" and apologised for it.
The discovery of these issues comes as Meta faces criticism from Instagram and Facebook users who say the company is enforcing its moderation policies in a biased way that amounts to censorship.
Meta has stated that its intention is not to suppress any particular community or point of view but acknowledged that content may be removed in error due to a higher volume of reports during the ongoing conflict.