I’m Criticising Generative AI Again

Chels

Back in September, I made a post about my thoughts on generative AI and how difficult it’s becoming to navigate writing as a career and a hobby with the rise of AI. I didn’t plan on talking about it again so soon (I really am no expert in AI), but we’ve arrived at the first set of university assessment deadlines for the year, so I’m back to proofreading, and generative AI is, dare I say, ruining the experience for university students.

It’s a bold claim, but I’m sticking with it. This isn’t about students using AI to write assessments (I don’t know any of those; I imagine they’d also use an AI proofreader), but about genuinely capable writers being accused of using AI as a result of putting effort into their assessments. The thing is, secondary school teachers have seen their students’ in-class work; they know the quality each student is capable of producing and recognise the particular language and grammar each one uses, so identifying whether homework is AI-produced is relatively easy for them. Once you get to university, however, this isn’t the case. There are rarely opportunities to see students’ written work prior to marking assessments, so there’s no benchmark for what students can and can’t produce. You have no evidence of how articulate a student is in writing, or of the structure they use to make their points, or really anything. You might get an idea of how well a student is paying attention in classes from their verbal responses, but for a lot of people, verbal and written communication skills vary widely.

This all means that we have to rely on the so-called AI ‘tells’ to identify AI-written work – excess em-dashes, the Oxford comma, a conclusion at every turn. This is all well and good, but academic writing often requires these conventions. All essays are working towards a conclusion – all through school I was taught that each essay paragraph should have a mini-conclusion and should build towards my overall conclusion. Often, the Oxford comma is necessary for a sentence to make sense, regardless of personal preference. Of course, it’s no one’s fault that AI models have picked up on features of competent writing and incorporated them to excess, but it’s extremely frustrating that skilled writers are now avoiding completely normal and necessary grammar conventions just to avoid being accused of using AI. What are we supposed to do? Let em-dashes die out? Retroactively edit classic and modern literature to avoid the association with AI? That’s dramatic, but some of the AI-spotting discourse makes me feel that way.

The problem is that, as a species, humans struggle with nuance. We often think in black and white, so in online spaces, ‘AI relies on and overuses the Oxford comma’ becomes ‘this is probably AI; look at the commas’. It’s beyond frustrating to see this misinformation spread across social media; it’s almost becoming its own subcategory of media illiteracy. Not that the inability to tell generative AI apart from human-made media should inherently be looked down upon – as these models learn, their ability to recreate text and images improves, and it’s extremely easy to misidentify AI media.

The most frustrating thing for me, though, is the way the AI boom is shaping the attitudes of both students and teachers. I’ve seen a rise in posts where students joke about leaving in errors or adding misspellings to ‘prove’ their work wasn’t AI-generated. Beyond that, students are avoiding phrases and grammar conventions that could be flagged as AI, at the expense of their work’s readability. It’s very disheartening to see students less inclined to improve their work, because improving too much risks their essays being flagged as AI-generated, whether by detection software or by their teachers. Worse, to me, is the discourse from teachers who are more inclined to give higher grades to clunkier, messier work than to high-quality work, because ‘at least the messy work was written by a human’. On the surface, yes, students who don’t rely on generative AI to write their essays should be praised, but again, it’s the lack of nuance – at every stage of education there are students who produce genuinely high-quality writing, and I think the constant AI suspicion could hold them back from striving to improve, or even from putting effort in at all. After all, if work of lower quality than yours achieved better grades purely because it was more obviously not AI, you’d surely stop putting in as much effort. I know I would feel discouraged, if I were in that position. It’s pretty much impossible to prove you wrote your own work, unless you filmed yourself writing it – and even then, AI videos are becoming more impressive, so if people are already suspicious of your work, the footage would hardly convince them.

Last time I talked about AI, I said that it was frustrating to be a writer in the current technological climate – AI has destroyed a lot of the jobs we’d previously been able to secure, and AI accusations over well-established grammar conventions are more frequent than ever.
Now, though, I’ve realised how much more difficult it is for students. Every few weeks, a voice in my head tries to convince me to go back to university and do a PhD, but when I see posts and discussions from current university students, I think AI has made the university experience so much harder. After all, how do you balance trying to get the best grades you can with trying to avoid your work being so polished that you’re accused of academic misconduct? Looking at the culture now, I’m incredibly thankful that I completed my undergraduate degree before ChatGPT was released, and before this technological climate emerged.
