One thing I suspect wasn’t predicted by science fiction was the near-instantaneous abuse of Large Language Models like ChatGPT to generate awful facsimiles of science-fiction stories—to the point that in February 2023 Neil Clarke had to suspend submissions to Clarkesworld Magazine while he devised new methods of evaluation.
The initial closure was covered by Ars Technica and ZDNet, among others. In a follow-up post in May, Clarke said that despite mitigation efforts, “…we’ve experienced an increasing number of double-workload days. Based on those trends and what we’ve learned from source-tracing submissions, it’s likely that we will experience triple or quadruple volume days within a year.”
Clarke also noted that existing AI detection tools are “stunningly unreliable, primitive, significantly overpriced, and easily outwitted by even the most basic of approaches.”
It’s hard to see today’s “AI” as anything but an escalation of the spam arms race that began on May 3, 1978, when the first unsolicited mass email went out over ARPANET.
Yes, similar problems have occurred at Stack Overflow. And Amazon has been hit with a spate of fake AI-generated travel books, enough that The New York Times covered the problem.
There are several tools that detect AI-generated content (Quillbot, for example). They apply a variety of heuristics in their analysis. One of the most interesting is looking for repeated errors: humans generally make lazy or random errors, but they rarely repeat the same ones.
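To make the repeated-error idea concrete, here is a minimal toy sketch in Python. It is not how Quillbot or any real detector works; the KNOWN_ERRORS list is invented for illustration and stands in for a proper spell-checker or grammar model.

```python
from collections import Counter
import re

# Invented mini-list of common misspellings; a real detector would use a
# full spell-checker or language model rather than a hard-coded set.
KNOWN_ERRORS = {"teh", "recieve", "definately", "seperate", "occured"}

def repeated_error_score(text: str) -> float:
    """Fraction of distinct detected errors that occur more than once.

    The intuition from the heuristic above: humans make lazy or random
    errors but rarely repeat the exact same one, so a high score is
    (weak) evidence the text was generated rather than written.
    """
    tokens = re.findall(r"[a-z']+", text.lower())
    error_counts = Counter(t for t in tokens if t in KNOWN_ERRORS)
    if not error_counts:
        return 0.0
    repeated = sum(1 for count in error_counts.values() if count > 1)
    return repeated / len(error_counts)

print(repeated_error_score(
    "I definately recieve teh letters, and I definately reply."))  # ~0.33
```

A real tool would combine many such signals (perplexity, burstiness, stylometry) and, as Clarke notes, even the combination is easily outwitted.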
Making errors and exploiting serendipity is one of the great leaps that human minds can make. Others include imagination (exploring alternate futures, asking “what if” questions), introspection, and what Seymour Papert called ‘syntonicity’ (a powerful, subjective form of empathy). Several books in my collection cover these (so far) uniquely human talents. However, none of them are in the SF genre; I’d like to find one that is.
I’ve written a lot about this, but it’s well outside the TASAT realm. I do post on Dr. Brin’s “Contrary Brin” blog about the FORTH language, which is by far the most syntonic experience one can have as a grown-up (“grup,” to use the Star Trek term).
In the simplest terms, syntonicity is what children do when they pretend to be teapots, airplanes, animals, and trees. They don’t just mimic these objects; they ‘project’ them into their own being. Papert grasped this and wrote the LOGO language to demonstrate it fully. I have a history with LOGO too, but that’s for another day.
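For readers who never met LOGO: Python’s standard-library turtle module descends directly from LOGO’s turtle graphics, so a rough sketch of the body-syntonic idea needs nothing extra installed. This is not LOGO itself, just an illustration of the same move: the child can first “play turtle,” walking the square and turning in place, before the turtle on screen does the same.

```python
import turtle

# "Playing turtle": each command maps onto an action a child can act out
# with their own body (step forward, quarter turn on the spot).
t = turtle.Turtle()
for _ in range(4):
    t.forward(100)  # walk forward 100 steps
    t.right(90)     # turn right 90 degrees
turtle.done()       # keep the window open until it is closed
```

The point is not the drawing; it is that the program is understood by projecting one’s own body knowledge into it.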
Volunteer human Wikipedia editors are now burdened with detecting and cleaning up AI slop. 404 Media cites articles that are either entirely fabricated or cite completely unrelated sources: