Doing independent research, looking for clues in the image, and trusting expert fact-checkers are good ways to identify whether an image could be AI-generated.

On the 10th annual International Fact-Checking Day, it is a good time to review how to identify AI-generated disinformation.

This type of content is appearing everywhere, from the US-Iran war to the lead-up to the Hungarian elections, and even on individual feeds.

A recent study, published in the journal PNAS Nexus, asked 27,000 people from 27 EU countries to rank eight human- and AI-generated news headlines on how real they appeared.

Nearly half of the AI-generated headlines were considered “mostly” or “completely real,” compared with 44 per cent of those written by humans. Respondents were also more likely to share and trust an AI-generated news story than a human-written one if they knew it dealt with a real news event.

However, respondents said they were less likely to share a news story, written by a human or AI, if they knew it was fake.

The findings indicate that people are unable to distinguish between human- and AI-generated content, the researchers said.

Here are some tips for spotting AI-generated content.

Look for visual cues
The first AI-generated videos online had some obvious tells: humans with too many fingers, audio out of sync with lip movements, or distorted objects.

There are fewer signs like this now because the technology has evolved, but it’s still worth looking for them.

Users can watch for inconsistencies, such as a car that is in a video one moment and gone the next.

Some AI images might also be considered overly polished or have an unnatural sheen, according to the Global Investigative Journalism Network (GIJN).

The GIJN recommends that people ask themselves, while looking at a potentially AI-generated image, whether the person looks too polished for the context (i.e., do they look magazine-ready despite being in a conflict zone?). It also suggests examining the quality of the skin for missing texture.

Do your research

If an image or video seems suspicious, there are ways to check whether it is authentic.

One way to do this is through a reverse image search: take a screenshot of the video, upload it to the Google search bar and press the camera icon to search by image.

Once the image is uploaded, Google will return visual matches for the photo, making it possible to quickly identify when it was first posted.

This can be done on other search engines or with specialised tools such as TinEye.
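Under the hood, reverse image search engines generally rely on image fingerprinting so they can match near-duplicates (recompressed, resized or lightly edited copies) rather than only exact files. One common family of fingerprints is perceptual hashes; the pure-Python sketch below illustrates a simplified difference hash (dHash) on a toy grayscale grid. This is an illustrative assumption about how such matching can work, not the actual implementation of Google or TinEye.

```python
# Simplified difference hash (dHash): a perceptual fingerprint that stays
# stable under small edits, which is how near-duplicate images can be
# matched. Toy grayscale grids stand in for real decoded image pixels.

def dhash(pixels):
    """pixels: 2D list of grayscale values. Emits one bit per horizontal
    neighbour pair: 1 where a pixel is brighter than the pixel to its right."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append('1' if left > right else '0')
    return ''.join(bits)

def hamming(a, b):
    """Number of differing bits; a small distance suggests the same image."""
    return sum(x != y for x, y in zip(a, b))

original = [[10, 20, 30], [90, 80, 70]]      # toy 2x3 "image"
recompressed = [[12, 22, 29], [88, 79, 71]]  # slightly altered copy
different = [[200, 10, 150], [5, 240, 30]]   # unrelated image

h0, h1, h2 = dhash(original), dhash(recompressed), dhash(different)
print(hamming(h0, h1))  # → 0: near-duplicate survives the edits
print(hamming(h0, h2))  # → 2: unrelated image is further away
```

Because the hash compares only relative brightness between neighbours, recompression noise that nudges pixel values without flipping those comparisons leaves the fingerprint unchanged.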

Users could also use technical solutions to trace a piece of content’s watermarks or metadata to figure out whether the information is trustworthy or not, according to the European Commission.

For example, images generated with Google’s Gemini AI carry an invisible digital watermark called SynthID, which Google’s tools can detect.
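A basic metadata check can be done by hand. As a sketch of the idea only (not a SynthID detector — SynthID is embedded in the pixels, not the metadata), the stdlib-only snippet below reads a PNG file's tEXt metadata chunks, where some tools record the producing software. Metadata is trivially stripped or forged, so treat anything found this way as a hint, never as proof either way.

```python
# Minimal PNG tEXt-chunk reader (standard library only). A PNG is an
# 8-byte signature followed by chunks: 4-byte length, 4-byte type,
# data, 4-byte CRC. tEXt chunk data is "keyword\0text".
# Caveat: real files may use iTXt/zTXt chunks or EXIF instead, which
# this sketch does not parse.

import struct

def png_text_chunks(data: bytes) -> dict:
    """Return {keyword: text} from a PNG's tEXt chunks."""
    assert data[:8] == b'\x89PNG\r\n\x1a\n', "not a PNG file"
    chunks = {}
    pos = 8
    while pos + 8 <= len(data):
        length, ctype = struct.unpack('>I4s', data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b'tEXt' and b'\x00' in body:
            key, _, text = body.partition(b'\x00')
            chunks[key.decode('latin-1')] = text.decode('latin-1')
        pos += 12 + length  # skip length + type + data + CRC
    return chunks

# Example usage on a downloaded file:
# meta = png_text_chunks(open('image.png', 'rb').read())
# print(meta.get('Software'))  # generator name, if one was recorded
```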

Listen to the experts
Users can also check whether media organisations, public figures or misinformation experts have already debunked the image or video they have seen circulating.

There are fact-checking organisations in Europe, such as the European Fact-Checking Standards Network (EFCSN), European Digital Media Observatory (EDMO), and EUvsDisinfo, run by the European Union’s External Action Service (EEAS), that publish trends, research and debunks into various forms of AI-generated disinformation.

These sources may have more advanced techniques for identifying AI-generated content, or access to information about the image that is not available to the general public.

Users can also check to see whether the information they are seeing is part of the Database of Known Fakes, a database of fact-checks done by journalists, researchers and professional fact-checkers.

Make use of technology
There are some AI detection tools available, but their accuracy at flagging AI content is up for debate.

Some of these tools include Winston AI, which detects AI-generated images; TruthScan, which offers an initial assessment of whether an image is AI-generated; and Originality AI, which detects whether text was generated by AI.

Other AI creation tools have added visible watermarks to content they generate.

These are often easy to remove or crop out, meaning the absence of a watermark is not proof that an image is genuine.

Slow down

Stop, take a breath, and do not immediately share something that might not be real.

Bad actors are often counting on the fact that people let their emotions and existing viewpoints guide their reactions to content.

Looking at the comments may also provide clues about whether an image is real, because other users might have noticed details that point to AI generation.

It’s not always possible to determine whether an image is AI-generated, so remain alert to the possibility that it might not be real.

Credit: Euronews