Writing in the Age of AI Spotting

Chels

At this point, pretty much everyone working in a creative field is expected to have an opinion on AI, and the general consensus is negative. As a writer, I fear the inevitable narrowing of the job market as AI becomes more commonplace (why hire a writer when it’s so much less profitable?). As a tutor, I see the way AI affects young people’s ability to problem-solve. But as someone who initially believed the AI boom of the last few years would lead to absolute chaos, I’ve learned a lot about how useful artificial intelligence can be in the sciences.

My current stance, for what it’s worth, stems from a post I read a while back.

If you would believe a trained animal could do it, then it’s a reliable use of AI. AI, like trained pigeons, rats, or dogs, is great at pattern recognition, which means it’s good at detecting patterns and shifts in the climate, and can spot signs of disease pretty reliably. If I read in the news that a pigeon had been trained to detect a form of cancer, I’d consider that a great scientific achievement. If I read that a pigeon had been trained to create marketing strategies, I’m not so sure I’d believe it. So for me, the pigeon test is how I gauge how I feel about AI use in a given setting. (This theory unfortunately doesn’t apply to art, because while I much prefer art created by humans to art created by computer programs, I get very excited when I see paintings made by dogs or rats or elephants on Instagram.)

With the AI boom has come a rise in AI criticism, and in talking about AI. It’s getting increasingly hard for the untrained eye to spot AI, and that will likely continue. There are tells, but the problem is that AI writing was trained on existing writing, so these tells come from real-life writing conventions. The other problem is that there’s a lot of human error involved. Early in the growth of AI, I read stories of neurodivergent workers and candidates having their communications flagged by employers as potentially AI-generated, often because of a ‘robotic tone’ or a lack of human error (no spelling mistakes and perfect grammar). That was a worry for me, because I know my own communication can read as very stiff and to the point. The problem with this kind of calling out is that it’s pretty hard to prove something was AI-written, because even the ‘tells’ are based on existing patterns and trends. AI writes the way it does because it was trained on writing by people who write that way.

The Well-Known Tells


There are two tells that people point out most often: the em-dash and the Oxford comma. Despite being an English graduate and a writer, it took me until now to fully understand the difference between the em-dash and the en-dash. After all, there’s only one dash key on a keyboard, and that key actually gives you a hyphen. The en-dash is slightly longer, and it’s the one I tend to use in all cases, mostly because I don’t quite know how to turn an en-dash into an em-dash in my writing software. That’s beside the point, though. The em-dash is the longest of the three, the one used to break up clauses in a sentence; it essentially functions like a comma.

The novel – written by a human – was full of em-dashes


Of course, it’s not the presence of em-dashes that alerts people to potentially AI-written work; it’s the overuse of and reliance on them. The same goes for the Oxford comma: the comma before the ‘and’ or ‘or’ in front of the final item in a list. There’s no hard and fast rule on whether you should or shouldn’t use one; the rules only come from publishers and editors with their own personal preferences. But works written using generative AI pretty consistently include it. This could be for a few reasons. Maybe the people who initially trained the programs implemented it as a hard rule, or maybe the works used to train the programs all made use of the Oxford comma. Either way, it’s often there.

The problem, then, is that these are two pretty common grammar conventions. It’s easy to say don’t use the Oxford comma to avoid your work being mistaken for AI, but that comes with other risks. Publishers may prefer works that use it, some even going so far as to treat it as the grammatically correct choice. And AI is ever evolving, with pattern recognition as its main skill, so it would surely phase out the Oxford comma in line with future trends anyway.

The Tells I’ve Found


My guilty pleasure is watching analysis videos on YouTube about, honestly, any topic, usually old TV shows or online history. I queue up a set of video essays to listen to as I craft, and more often than I’d like to admit, an AI-scripted video slips through the cracks. Between the text-to-speech narration and a slew of comments, it’s usually easy to spot whether or not a video has been AI-generated. It hasn’t been a net negative, though, because in listening to these videos, I’ve found some tells that I think are a little more reliable for spotting AI-generated content.

There are definite patterns and structures to AI scripts. The most common one is negation, specifically the phrase ‘it’s not just x, it’s y’. This ties into the other frequent technique of bizarre exaggeration. With AI, everything feels a bit like an advertisement. Something can’t just be good or great, it’s the best. A character can’t just be a villain, they’re the most evil character in the whole show. Everything becomes so black and white, so final. My guess would be that this combats the expectation, set by early AI writing, of a robotic tone. By exaggerating, and by coming to extreme opinions, the work can’t be robotic (though it’s still frequently stiff and clunky). It’s almost like the defence mechanism against being spotted has become a way of spotting AI scripts. Camouflage gone wrong.

The wording of AI-generated scripts also makes them feel like one constant conclusion. A ten-minute video may reach five or six conclusions, each carrying on until a new conclusion is made. That adds to the sense of finality created by the exaggerations. Often, the conclusions don’t even come from looking at data or summarising previous thoughts; they’re just concluding for the sake of concluding. Then there are the bizarre idioms that don’t quite make sense, which often just rephrase a statement into a more ‘relatable’ context. I don’t want to give away the content and channels that I’ve seen, but one said that a character was a terrible boss, then added ‘it’s like that one friend that’s the boss at work and treats his employees badly’. Like, yeah? You said that already.

It’s definitely hard, as a writer or any kind of creative, to navigate this new world of intense scrutiny, not only of the quality of the work we produce but of the specifics. Have I used too many commas? Is my communication too formal? Do I sound like an advertisement? It can really hinder the creative process, and self-scrutiny can overtake creativity. Ultimately, I think that even setting aside the AI ‘tells’, you do get a feel for what’s written by AI and what’s written by humans. The errors are different, the clunkiness is different. Even if we strive for perfection, things inevitably slip through the cracks every so often, and I think different things slip through with human error than with AI oversight.

All I know for sure is Jane Austen never had to deal with this problem.
