Now AI Wants to Poison People so That's Fun

Jul 18, 2023

As if the nightmare that is AI-written non-fiction wasn’t bad enough already, we’ve now reached a point where it’s gone from frustrating (AI-written neo-pagan tomes filled with aggressive misinformation about myth and history) to potentially deadly, with AI-produced guides to foraging wild plants and mushrooms. Yeah, that’s really not something you want the equivalent of a random word generator spewing out. Not unless you want people to actually die.

Writers, particularly small press and independent writers in niche fields, have been trying to address the problem of AI-generated books for some time now. Not only do these books drown out actual human authors through sheer volume and the effect that has on the algorithm, but the information in them is essentially crowd-sourced and then further scrambled in the process of “creating” something “new.” The market is being flooded with heaps of misinformation that make the worst, most badly researched fluff of the past look plausible by comparison. It’s bad enough when it comes to topics like religion and history—we’ve all seen where misinformation on those subjects leads—but with something like foraging, where correctly identifying a plant is a matter of life or death, it’s suddenly a much more acute problem.

When it comes to identifying books as AI-generated, it can be hard to be absolutely certain. It’s always possible you’ve somehow come across an author with no internet presence whatsoever, not even social media. One who just so happens to be both a prolific and terrible writer, whose work is about as accurate as that of a stoned college student hoping to scrape by just by turning up and handing something in. Whose books have a bunch of suspiciously positive five-star reviews that don’t quite match up to the book’s actual content, and then a handful of one- or two-star ratings written by actual people wanting to know what the hell even is this? Possible, but extremely unlikely.

Still, it’s hard to prove a negative, especially when you take into account the use of pen names or the fact that AI books are now being published, without consent, with real authors’ names and identities attached. The situation is a mess, and all we’ve got are best guesses, but some of those guesses are pretty damning when you put all of the evidence together.

It was members of online foraging communities on Facebook, Reddit, and other social media platforms who sounded the alarm about these almost certainly AI-generated, definitely terrible guides to finding and processing wild foods.

The books in question first started appearing on Amazon around 2022, all with very similar titles and covers, each promising to be some variant on the ultimate foraging “bible” or the only guide to wild foods you’d ever need. Despite these grandiose claims, the information in the books is inadequate at best, strange or misleading at worst.

One such book includes entries on broccoli, brussels sprouts, oranges, and lemons—all domesticated plants you can only “forage” by raiding someone else’s garden, and definitely not something you’ll find growing wild. Pictures, essential to any guide on wild, edible plants, are either missing, in black and white, or lacking in sufficient detail to be of any use, and the reviews are split between the five-star meaningless kind, and highly critical one or two-star demands to know why the book’s content doesn’t match up to the claims on the cover.

Most damning is the fact that the alleged authors of these books appear to have no online presence whatsoever; even a reverse image search of their author photographs turned up nothing, suggesting that these “photographs” are likely AI-generated as well.

For an experienced forager, books like this are easy to dismiss, but of course, that’s also not the target audience. Books like this are aimed at beginners, and the problem there is that you don’t know what you don’t know. While some are more obviously useless than others (the forage-your-own broccoli book and the one with a list of recipes but no actual instructions on how to make them come to mind), others are going to be harder to spot for people just starting out, and that’s where things get actively dangerous.

A lot of wild plants are deadly, and some, like water hemlock, bear a close resemblance to harmless edible ones. You really have to know what you’re doing if you’re going to take up foraging, and any misinformation put out there, especially with claims of expertise behind it, has the potential to kill someone.

Foraging isn’t the only potentially deadly hobby likely to be affected by these AI-written guides. It’s common in a lot of craft circles for expert hobbyists and professionals to create self-published handbooks to bring in a little extra cash, and while an AI guide to crochet is likely to be merely an annoyance, AI books on electronics or metalwork have the potential to be extremely dangerous. All it takes is missing one step (not grounding something properly, failing to fully dry out the metal clay before firing) and someone can end up dead or seriously injured. And as with those new to foraging, you don’t know what you don’t know.

We’re used to being able to assume that anything written in a book is probably, if not 100% accurate, then at least safe. A proper publishing house is supposed to make sure of that before going to press—if only because no one wants a lawsuit. This hasn’t been a safe assumption for around a decade now thanks to the Amazon-fueled boom in self-publishing, but the proliferation of AI-produced works has exacerbated the problem to the extreme, and if something isn’t done to mitigate it things are only going to get worse.

This doesn’t mean we can’t trust anything we read, or that we can only rely on large, mainstream publishers for accurate information. We just need to be a lot more cautious when selecting our sources, and one of the key ways to do that, perhaps ironically, is through social media. Independent authors and experts have, often to their chagrin, had to maintain an active social media presence for some time now in order to establish their expertise and build a following for their work.

Before buying anything off Amazon or a similar marketplace, it’s important to thoroughly research the author: not only to ensure they’re a real person (and actually the author behind the book), which matters more than ever thanks to AI, but also to be sure they know what they’re talking about, that they’re respected by their peers, and that they’re not pushing pseudo-science or disinformation. Especially when it’s something that comes with a high potential of killing or injuring yourself if you do it wrong.

As for the deluge of AI books we’re seeing on Amazon right now, that’s only going to stop if Amazon decides to do something about it. Which is unlikely unless it stops being profitable for them or legal action forces their hand. Something needs to be done about AI-generated work, preferably before it poisons someone or leads to third-degree burns; the only question is how we’re going to make sure it happens.

(featured image: Fox)


Siobhan Ball (she/her) is a contributing writer covering news, queer stuff, politics and Star Wars. A former historian and archivist, she made her first forays into journalism by writing a number of queer history articles c. 2016 and things spiralled from there. When she's not working she's still writing, with several novels and a book on Irish myth on the go, as well as developing her skills as a jeweller.