In collaboration with De Volkskrant, we investigated the rise of AI-generated images in Google search results. Artificial intelligence is increasingly crowding in among genuine photographs: for topics like "beauty", only 69% of the images are actually real. By analyzing 4,248 search queries with advanced detection models, we mapped how large this problem really is.
The scale of the problem
We systematically analyzed image search results across multiple languages, annotating thousands of images to determine whether they were authentic photographs or AI-generated content. The results were striking: certain categories showed over 20% of search results classified as either "Fake" or "Not sure/Unknown."
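The aggregation step described above can be sketched as follows. This is a minimal illustration, not the research pipeline itself: the records, category names, and helper function are hypothetical, but the three labels ("Real", "Fake", "Not sure/Unknown") follow the ones used in the study.

```python
from collections import Counter, defaultdict

# Hypothetical annotation records: (category, label) pairs, where label is
# one of "Real", "Fake", or "Not sure/Unknown" -- the labels from the study.
annotations = [
    ("beauty", "Real"), ("beauty", "Fake"), ("beauty", "Not sure/Unknown"),
    ("baby animals", "Fake"), ("baby animals", "Fake"), ("baby animals", "Real"),
]

def suspect_share(records):
    """Per category, the fraction of images labeled Fake or Not sure/Unknown."""
    counts = defaultdict(Counter)
    for category, label in records:
        counts[category][label] += 1
    return {
        category: (c["Fake"] + c["Not sure/Unknown"]) / sum(c.values())
        for category, c in counts.items()
    }

print(suspect_share(annotations))
```

A category crosses the ">20% suspect" mark mentioned above when its share of "Fake" plus "Not sure/Unknown" labels exceeds 0.2.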
Some categories are more affected than others. Baby animals are heavily dominated by synthetic content, possibly because AI-generated baby animals with exaggerated features tend to go viral and are widely shared.
Language matters
While all languages showed similar patterns, Dutch and Spanish searches exhibited slightly higher percentages of AI-generated images, hovering around 20%. This suggests that non-English language markets may be more susceptible to synthetic content flooding, possibly because there's less authentic content available to compete with AI-generated alternatives.
Why this matters beyond baby animals
This finding is a canary in the coal mine. AI-generated media has the potential to significantly impact politics, especially when it comes to misinformation. As AI tools become more sophisticated, the line between real and fabricated imagery grows increasingly difficult to distinguish.
During elections or political campaigns, AI-generated content, like videos showing a politician saying or doing something they never actually did, can spread rapidly on social media. By the time fact-checkers catch up, the damage is often already done.
The implications go further than you might think. If we can no longer distinguish what is authentic, this undermines the quality of information on which our democracy relies. As a society, we are still recovering from the impact of social media, and we are already being confronted with even more complex challenges.
What can be done
Detection at scale
Automated detection models, like the ones used in this research, make it possible to flag likely synthetic images across thousands of search results. But detection is a moving target: as image generators improve, detectors need continuous retraining, and uncertain cases still require human review.
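One common pattern for large-scale triage is to turn a detector's score into the study's three labels, routing only the ambiguous middle band to human reviewers. A minimal sketch, assuming a detector that outputs a fake-probability per image; the threshold values here are hypothetical, not taken from the research:

```python
def triage(scores, fake_threshold=0.8, unsure_threshold=0.5):
    """Map per-image fake-probability scores to three review labels.

    Scores at or above fake_threshold are labeled "Fake", scores in the
    middle band "Not sure/Unknown" (candidates for human review), and the
    rest "Real". Thresholds are illustrative placeholders.
    """
    labels = []
    for score in scores:
        if score >= fake_threshold:
            labels.append("Fake")
        elif score >= unsure_threshold:
            labels.append("Not sure/Unknown")
        else:
            labels.append("Real")
    return labels

# Example: three images with detector scores 0.92, 0.61, and 0.10.
print(triage([0.92, 0.61, 0.10]))
```

The design choice here is deliberate: a two-threshold scheme acknowledges that no detector is certain, and keeps a human in the loop for exactly the cases where the model is least reliable.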
Platform responsibility
Search engines and social platforms need better content provenance systems: watermarking, metadata standards, and transparent labeling.
Media literacy
Critical evaluation of visual media becomes a fundamental skill, not just for journalists, but for everyone navigating the modern information landscape.