For much of the past two years, users have felt a growing sense of unease while browsing the web—a feeling that the digital landscape is being flooded with low-quality, automated content. Often referred to as “AI slop,” this phenomenon is no longer just anecdotal.

A new preprint study conducted by researchers from Imperial College London, Stanford University, and the Internet Archive provides a data-driven look at how generative AI is reshaping the web. The findings suggest that while we feared a wave of misinformation, the actual transformation of the internet might be more subtle—and perhaps more unsettling: it is becoming artificially happy and ideologically uniform.

The Scale of the Shift

The research team utilized the Internet Archive’s Wayback Machine to analyze a massive sample of websites created between 2022 and 2025. Using detection tools from Pangram Labs, they arrived at a staggering figure: approximately 35% of all new websites are either AI-generated or heavily AI-assisted.

This massive influx of automated content is changing not just the volume of the internet but its fundamental character.

The “Sycophancy” Problem: A Fake-Happy Web

One of the most striking findings involves the emotional tone of online writing. Through sentiment analysis, researchers discovered that AI-assisted websites exhibit a 107% higher positive sentiment score than human-made sites.

Why is the internet suddenly so optimistic? The researchers attribute this to the “sycophantic” nature of Large Language Models (LLMs). Because these models are trained to be helpful, polite, and agreeable to their users, they tend to produce text that is:
Overly optimistic
Excessively polite
Lacking in critical or “gritty” nuance

This creates a “sanitized” digital environment where the natural friction, debate, and varied emotional ranges of human discourse are replaced by a saccharine, artificial cheerfulness.
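To make the measurement concrete, here is a minimal, illustrative sketch of how a corpus-level sentiment comparison of this kind could be set up. It uses a tiny hand-made word lexicon and invented example texts; the study's actual sentiment tooling is not described in the preprint summary, so every name, word list, and document below is an assumption for illustration only.

```python
# Toy lexicon-based sentiment scorer (illustrative only; the study's
# real method is not specified here). Word lists and texts are invented.

POSITIVE = {"great", "amazing", "wonderful", "delighted", "perfect", "love"}
NEGATIVE = {"bad", "terrible", "broken", "hate", "awful", "flawed"}

def sentiment_score(text: str) -> float:
    """Return (positive hits - negative hits) / total tokens for one document."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    pos = sum(t.strip(".,!?") in POSITIVE for t in tokens)
    neg = sum(t.strip(".,!?") in NEGATIVE for t in tokens)
    return (pos - neg) / len(tokens)

def mean_sentiment(docs: list[str]) -> float:
    """Average per-document score across a corpus."""
    return sum(sentiment_score(d) for d in docs) / len(docs)

# Hypothetical corpora: blunt human prose vs. sycophantic AI prose.
human_docs = ["The update is useful but parts feel broken and awkward."]
ai_docs = ["This amazing update is wonderful and we love the perfect design!"]

print(mean_sentiment(ai_docs) > mean_sentiment(human_docs))  # prints True
```

A production pipeline would use a trained sentiment model rather than a word list, but the shape of the comparison, scoring each document and averaging per corpus, stays the same.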

Diminishing Diversity of Thought

Beyond tone, the study asked whether AI would shrink the breadth of human ideas. The data suggest it already is: the researchers found that AI-driven websites scored roughly 33% higher on “semantic similarity” tests than human-made sites.

In practical terms, this means that as more people use AI to write articles, blogs, and posts, the range of unique viewpoints and diverse ideas begins to narrow. When everyone uses the same underlying models to synthesize information, the internet risks becoming an ideological echo chamber where ideas become increasingly homogenous.
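As a rough illustration of what a pairwise “semantic similarity” score measures, the sketch below computes average cosine similarity over simple bag-of-words vectors. This is a deliberately simplified stand-in; studies like this one typically use learned text embeddings, and the corpora here are invented for the example.

```python
# Toy pairwise-similarity measure over bag-of-words vectors, a simplified
# stand-in for embedding-based semantic similarity. Example texts invented.
import math
from collections import Counter

def bow(text: str) -> Counter:
    """Bag-of-words term counts for one document."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def mean_pairwise_similarity(docs: list[str]) -> float:
    """Average similarity over all document pairs in a corpus."""
    vecs = [bow(d) for d in docs]
    pairs = [(i, j) for i in range(len(docs)) for j in range(i + 1, len(docs))]
    return sum(cosine(vecs[i], vecs[j]) for i, j in pairs) / len(pairs)

# Hypothetical corpora: near-identical phrasing vs. varied phrasing.
uniform = [
    "ai tools boost productivity for modern teams",
    "ai tools boost productivity for busy teams",
    "ai tools boost productivity for remote teams",
]
varied = [
    "the harvest festival drew crowds despite the rain",
    "quantum error correction remains an open challenge",
    "her sourdough starter finally survived the winter",
]

print(mean_pairwise_similarity(uniform) > mean_pairwise_similarity(varied))  # prints True
```

A higher corpus-wide average, like the roughly 33% gap the study reports for AI-driven sites, means documents are saying more of the same things in more of the same ways.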

Surprising Contradictions: What AI Isn’t Doing (Yet)

Interestingly, the study challenged several common fears held by both the public and the scientific community. While many expected specific forms of a “slop” apocalypse, the evidence told a different story:

  • Misinformation: Contrary to popular belief, the researchers did not find conclusive evidence that the rise of AI sites has led to a proportional surge in misinformation.
  • The “Generic” Style: While the ideas are becoming more similar, the actual writing style has not yet flattened into a uniform, robotic voice. The researchers were surprised to find that AI content hasn’t become as stylistically generic as they had predicted.
  • External Linking: There was no evidence to support the theory that AI-generated content avoids linking to external sources; these sites continue to cite and link as much as human-authored ones.

Conclusion

The study reveals a complex digital evolution: while AI may not be spreading blatant falsehoods or destroying stylistic variety just yet, it is undeniably homogenizing the internet’s emotional and intellectual landscape. We are moving toward a web that is more polite and more similar to itself, but perhaps less authentic and less diverse.

“We just wanted to break ground,” says Stanford researcher Maty Bohacek, noting that this study is merely a starting point for understanding how AI continues to reshape our digital reality.