Seeing is believing? Decoding AI-generated images in times of crisis
When AI-generated photos are used inappropriately during a crisis, it can lead to misinformation, a decline in public confidence, and difficulties coordinating relief efforts
The recent catastrophic floods in Bangladesh have brought attention to how important visual media is for communicating the scope of natural disasters and coordinating relief efforts.
The rapid development of artificial intelligence (AI), however, has brought a new challenge: the proliferation of AI-generated images that can be mistaken for real photographs.
When such images are misused during a crisis, they can spread misinformation, erode public confidence, and complicate relief efforts.
The misinformation dilemma
One of the most serious risks posed by AI-generated images is the possibility of misinformation. When these images are shared online without proper verification, the public may be misled about the severity of the damage, the number of people affected, or even the cause of the crisis.
This can result in a distorted view of the situation, making it difficult for organisations and individuals to respond effectively.
For example, during the Bangladesh floods, AI-generated images depicting exaggerated levels of destruction or fabricated scenes of suffering may have instilled a false sense of urgency or panic.
This may have diverted resources away from areas that were truly in need, while also undermining public trust in the accuracy of the information being disseminated.
Erosion of public trust
The use of AI-generated images can also undermine public trust in news media and other sources of information.
When people discover that they have been duped by fabricated images, they may become wary of all forms of reporting, including those based on verified facts.
This can make it difficult to spread accurate information and rally support for relief efforts.
In the aftermath of the Bangladesh floods, the spread of AI-generated images may have harmed the credibility of news outlets and humanitarian organisations.
This could have made it more difficult to raise funds or mobilise volunteers because people may have doubted the veracity of the information presented.
Impeding relief efforts
Misuse of AI-generated images can also impede relief efforts by diverting attention away from real needs and posing logistical challenges.
If people believe the situation is worse than it is, they may donate more money or supplies than are needed, and the resulting surge can overwhelm relief organisations' capacity to process them.
Furthermore, the spread of misinformation can make it difficult for humanitarian workers to assess the situation on the ground and effectively allocate resources.
False reports of damage or danger may result in unnecessary deployments of personnel or equipment, diverting resources away from areas in real need.
How to verify images?
To mitigate the risks associated with AI-generated images, it is essential to prioritise verification and transparency in the dissemination of visual content.
News organisations, social media platforms, and individuals should be vigilant about checking the authenticity of images before sharing them online.
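A quick first check, before any visual inspection, is to look at whatever metadata the file still carries. The minimal Python sketch below, which assumes the Pillow package and a hypothetical filename, prints any EXIF tags present; note that missing metadata is weak evidence at best, since social media platforms routinely strip it from genuine photos.

```python
from PIL import Image, ExifTags  # assumes the Pillow package is installed

def summarise_exif(path: str) -> dict:
    """Return human-readable EXIF tags, or an empty dict if none survive."""
    exif = Image.open(path).getexif()
    # Map numeric tag IDs (e.g. 271) to names (e.g. "Make").
    return {ExifTags.TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

tags = summarise_exif("flood_photo.jpg")  # hypothetical filename
if not tags:
    print("No EXIF metadata found - inconclusive, but worth a closer look.")
else:
    print(tags.get("Make"), tags.get("Model"), tags.get("DateTime"))
```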
Look for anomalies and inconsistencies:
- Unnatural textures: AI models might struggle to replicate the intricate details of real-world textures. Pay attention to inconsistencies in patterns, shadows, or reflections (a simple error-level-analysis sketch follows this list).
- Distorted backgrounds: Objects in the background might appear distorted or out of place, suggesting AI manipulation.
- Unrealistic lighting: AI models might have difficulty accurately simulating the effects of light, leading to unnatural shadows or highlights.
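One way to surface texture and lighting inconsistencies like these is error level analysis (ELA), a standard forensic heuristic that recompresses a JPEG and visualises where the compression error is uneven. The sketch below assumes the Pillow package and a hypothetical filename; ELA output needs human interpretation and is suggestive, not conclusive.

```python
import io
from PIL import Image, ImageChops, ImageEnhance  # assumes Pillow is installed

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Recompress the image and amplify the difference; unevenly bright
    regions can indicate areas that were edited or synthesised."""
    original = Image.open(path).convert("RGB")
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)  # recompress in memory
    buffer.seek(0)
    recompressed = Image.open(buffer)
    diff = ImageChops.difference(original, recompressed)
    # Scale up the (usually faint) differences so they are visible.
    return ImageEnhance.Brightness(diff).enhance(20)

error_level_analysis("suspect_image.jpg").show()  # hypothetical filename
```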
Check for finer details:
- Blurry edges: AI-generated images often have blurry or pixelated edges, especially when zoomed in (a quick sharpness check is sketched after this list).
- Lack of detail: Examine the fine details within the image. If they appear overly smooth or lacking in texture, it could be a sign of AI manipulation.
- Inconsistent perspective: Pay attention to the perspective of objects in the image. If it seems off or inconsistent, it might be a red flag.
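The "blurry edges" cue can be roughly quantified with the variance of the Laplacian, a common sharpness measure: low variance means few strong edges. This is a minimal sketch, assuming the opencv-python package, a hypothetical filename, and an illustrative threshold you would need to calibrate on images you trust.

```python
import cv2  # assumes the opencv-python package is installed

def sharpness_score(path: str) -> float:
    """Variance of the Laplacian: lower values mean fewer sharp edges."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(path)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

score = sharpness_score("suspect_image.jpg")  # hypothetical filename
# The threshold of 100 is an assumption; calibrate it on trusted photos.
print("unusually soft" if score < 100 else "reasonably sharp", score)
```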
Analyse facial features (a face-cropping helper for close inspection follows this list):
- Unnatural expressions: AI-generated faces might exhibit unnatural expressions or exaggerated features.
- Missing details: Look for missing details in facial features, such as eyelashes, pores, or wrinkles.
- Inconsistencies in eyes: Pay close attention to the eyes. AI-generated eyes might appear too perfect or lack the natural variations found in real-world images.
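Faces are often small in crisis photos, so it helps to crop them out automatically for zoomed-in review of eyes, pores, and hairlines. The sketch below uses OpenCV's bundled Haar cascade face detector (a classic detector, not an AI-image classifier) and assumes the opencv-python package plus a hypothetical filename.

```python
import cv2  # assumes the opencv-python package is installed

def crop_faces(path: str, out_prefix: str = "face") -> int:
    """Detect faces with OpenCV's bundled Haar cascade and save each
    crop for close manual inspection of eyes, skin, and edges."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    img = cv2.imread(path)
    if img is None:
        raise FileNotFoundError(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for i, (x, y, w, h) in enumerate(faces):
        cv2.imwrite(f"{out_prefix}_{i}.png", img[y:y + h, x:x + w])
    return len(faces)

print(crop_faces("suspect_image.jpg"), "face(s) saved")  # hypothetical filename
```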
Consider the source and context:
- Unreliable sources: Be wary of images from untrustworthy or unknown sources.
- Rapidly spreading content: If an image is spreading rapidly online without a clear origin, it could be a sign of fabricated content (a perceptual-hash sketch for spotting recirculated copies follows this list).
- Contextual clues: Consider the context in which the image is presented. If it seems out of place or inconsistent with the surrounding information, it might be a red flag.
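When the same picture resurfaces under different claims, perceptual hashing can reveal that a "new" viral image is really a re-encode or crop of an earlier one. This is a sketch assuming the third-party imagehash package, hypothetical filenames, and an illustrative distance threshold.

```python
from PIL import Image
import imagehash  # assumes the third-party imagehash package is installed

def near_duplicates(path_a: str, path_b: str, max_distance: int = 8) -> bool:
    """Compare perceptual hashes; a small Hamming distance suggests the
    two files are crops or re-encodes of the same underlying picture."""
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return (hash_a - hash_b) <= max_distance  # subtraction = Hamming distance

# Hypothetical filenames: a viral copy vs. an earlier, verified original.
print(near_duplicates("viral_copy.jpg", "verified_original.jpg"))
```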
Use AI detection tools:
- Specialised software: Various AI detection tools can help you identify AI-generated images. These tools use advanced algorithms to analyse images and detect anomalies.
- Detection websites: Sites like aiornot.com, hivemoderation.com and Illuminarty.ai can help you get to the bottom of things. However, keep in mind that they are not accurate all the time, so using multiple such tools is recommended (a sketch for cross-checking several detectors follows).
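Because no single detector is reliable, it can help to script the cross-checking recommended above. The sketch below aggregates verdicts from several services; the endpoint URLs, upload format, and the ai_generated response field are all placeholders, since each real service (AI or Not, Hive, Illuminarty) has its own API, authentication, and response schema.

```python
import requests  # assumes the requests package is installed

# Placeholder endpoints: treat everything below as a hypothetical shape,
# not the real APIs of any of the services named in the article.
DETECTORS = {
    "detector_a": "https://example.com/api/detect-a",
    "detector_b": "https://example.com/api/detect-b",
}

def cross_check(image_path: str) -> float:
    """Return the fraction of detectors that flag the image as AI-generated."""
    with open(image_path, "rb") as f:
        data = f.read()
    flagged = 0
    for name, url in DETECTORS.items():
        resp = requests.post(url, files={"image": data}, timeout=30)
        resp.raise_for_status()
        # "ai_generated" is an assumed response field for this sketch.
        if resp.json().get("ai_generated", False):
            flagged += 1
    return flagged / len(DETECTORS)

print(f"{cross_check('suspect_image.jpg'):.0%} of detectors flagged the image")
```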