
Neck-deep in "Copypollution"

Copypollution is an insidious new form of destruction that comes from the least expected place: the things your AI lies about.

By Evan J. Cholfin

[Image: Copypollution proliferates with a rabbit in the court]

Since very few people seem to be talking about it, I’d like to coin a new term: “copypollution”.

With the proliferation of “AI” (large language model, or LLM) usage around the world, a byproduct of all that language mimicry is pure, hot, useless garbage.

You’ve seen it. Whether it’s a string of misused words and typos, confidently bogus answers, or entirely AI-invented background research submitted in a court case, copypollution has already begun to proliferate. Clearly not everyone can spot it before they share their results.

It’s even been shown that when LLMs are trained on AI-generated content, even more copypollution comes out the other end, a degradation researchers have dubbed “model collapse.”

At best, it makes for an amusing anecdote at a party; at worst, it can be quite dangerous.

A more insidious version could be used to put out information that could turn the tide of an election. To change laws. To change history. To destroy communication itself—the very fabric of our society.

This is a relatively new form of misinformation. And at the rate AI usage is growing, copypollution could become indistinguishable from real information. It could even outnumber accurate information to the point where truth is a needle in a haystack, and most of what we find is just whatever copypollution the machines are spitting out that day.

We need transparent safeguards against this inconspicuous threat. I for one don’t want to see our world flooded with copypollution. For the safety of truth itself. And a cleaner internet.
