I spent four hours fact-checking a viral AI-generated news story last week; most people will never bother
I work in digital media. Last week a story circulated that looked like a credible news article: it had a believable byline, referenced real events, and cited plausible but unverifiable sources. It was AI-generated. I know because I spent four hours tracing it, checking the byline, the publication, the citations, and the image metadata. Most of that is invisible work that requires specific skills and a lot of time.
The average person who shared it spent maybe fifteen seconds with it before clicking the share button.
This is not a hypothetical future problem. It is happening now, and the volume is only going up. What I want to understand is whether there is any realistic systemic solution (platform-level, regulatory, or technical), or whether we are just going to have to accept that a meaningful portion of what circulates online is AI-generated and unverifiable. Because individual media literacy is not going to scale.