I monitored an election last year where AI-generated content was used as a deliberate campaign tool, and it was more effective than I expected
I work for an organisation that monitors elections in emerging democracies. In an election I observed last year, one campaign used AI-generated content at scale: fabricated quotes attributed to opposition candidates, synthetic audio of public figures saying things they never said, and AI-generated images placed in contexts designed to mislead.
None of it was technically sophisticated. All of it was detectable with moderate effort. But the effort required to detect it was higher than the effort required to spread it, and the spread happened faster than the corrections.
What I want to discuss is not whether this is bad; it obviously is. What I want to understand is whether the international community, platform companies, and civil society organisations have developed any response that actually works at the speed this content travels. Because from what I observed, the answer is currently no.