
I Was Accused of Being AI - Is This the Road to Dystopia?

  • Writer: Joseph Stevenson
  • Apr 28
  • 6 min read

"Why does this read so much like chatGPT?"


A second user responded, seemingly in support of the suggestion. "Research companies have been putting chat gpt [sic] answers into subs for psychology experiments recently," they theorised.


Why have these seemingly innocuous responses to a Reddit comment given me cause to furiously type the below article in an almost blinding rage? Well, for three reasons:

  1. The comment under accusation of being AI was simply structured in an easy-to-read manner using grammar consistent with the expected linguistic skills of a native English speaker;

  2. It suggests a future of widespread disbelief and distrust;

  3. Perhaps most frustrating of all, it was my comment being decried as AI generated.


Sure, I might have failed a CAPTCHA or two in my time, but they've never provoked an existential crisis quite like this before.


So, how did all of this come about?


Doctor Who and the Eager Commenter

It all started with one Reddit user's fascinating idea and my own eagerness to contribute my two cents.


The idea in question was to host a psychological experiment whereby members of the public would be shown 'Midnight' - a well-regarded episode of Doctor Who - before being asked to choose which character was in the right and who was in the wrong. As the episode is a low-budget affair, with an extension of the Trolley Problem (with extra aliens) playing out among a small, isolated cast of characters, the suggested experiment sounded fascinating.


As someone with an avid interest in psychology, the idea sparked an inspiration not often found when I'm hiding under my duvet on a Monday morning. I allowed myself just five more minutes on Reddit before starting my day, purely because I felt I had a contribution to make. I wrote and rewrote my comment a couple of times, breaking it up with bullet points as I'd done on other lengthy Reddit comments. After all, aside from the woeful state of people's attention spans in the 21st Century, I'm keenly aware that my own mental wiring can result in confused communication and frustration on my part.


To create my comment, I blew the dust from my A-Level psychology knowledge, thought about the BBC script archive, and employed some common sense and creative problem solving. I hit submit and went about my day, not realising that I'd soon need to defend my own humanity like some poor replicant staring down the barrel of Harrison Ford's gun.



I, Human

How does one insist upon their own existence in a way that convinces those who are so confidently ignorant? This question weighed on me a lot as I processed the opinions of these strangers. After all, we're so used to automatically believing people who pass off AI-generated content as their own - much to the frustration of the artists, writers, musicians, and professionals who have toiled for their labours - that I don't feel we've adequately broached the opposite. What happens when we start assuming everything is AI?


It's not been a sudden shift; I've spotted a few instances online where people have made such accusations in response to excellent works of art, and on social media when users suspect one another of milking AI for the opening chapters of their supposed fantasy epic. But despite the heads-up that it can happen, nobody's ever really guided us on how to navigate the accusations when they happen to you. How do we prove that we are the creators of the work we're taking credit for?


To aid in my own defence, I almost sent a screenshot of my ChatGPT history, just to show that it played no part in formulating my comment. After all, I mainly use it as a glorified search engine (conversation prompts include 'can I eat this food I made three days ago?' and 'please can you find the listing for a weird house that I saw five years ago - it was a converted school?'). In the end, I didn't send a screenshot - all I could think about was how they'd tell me I'd faked it.


As I typed an angry response to both users, Hamlet's second (third?) most famous line echoed in my head: the lady doth protest too much. At what point do my protestations start to sound insincere? How long do I have before I'm being tried as a robot and sent to the scrapheap? Worse, for somebody who struggles with getting out of their head, how does one mentally recover from being disbelieved and untrusted?


A Rumination of One's Own

(Props to any Virginia Woolf fans who enjoy the wordplay in the above heading)


Although the situation is frustrating, I think it's useful to take a pause and acknowledge the entanglements in my brain that might cause this indignant and furious response. I pride myself on having enough self-awareness to do so (even if I am now checking the mirror every five minutes to make sure I haven't developed metal skin or robotic eyes).


Trust is a highly valuable currency to me; to be disbelieved brings with it various negative emotional responses - including anger, sadness, and frustration - and it throws fuel onto the burning trash pile of shame that gums up my inner machinery. Through this lens, I can understand how those comments have triggered the resulting existential crisis.


Similarly, I also recognise that I hate someone thinking I'm not responsible for the output that I worked hard on - even as a child, if accused of tracing a drawing, I'd recoil as if in pain (especially seeing as I'm objectively bad at drawing - I could therefore only have traced the scribblings of a caveman).


All of this aside - and thanks for being here while I process this out of my system in real time - there is still a legitimate concern we can (and should) start to think about. For me, this was my first real glimpse at what might be awaiting us further down the road.



Where Do We Go From Here?

Indeed, if we ponder for a moment where this small incident might lead, we can find ourselves weaving quite terrifying tales of the future that wouldn't be out of place in the next series of Black Mirror.


There is, in my mind, the risk that societal trust issues will evolve beyond mis- and disinformation - just as they sprouted from the clickbait that flooded the early internet. Back then, we grew collectively weary of it, turning the whole thing into a joke.


In days to come, however, the passive stream of false information that currently piggybacks off various algorithms to skew people's opinions could become secondary to the individual's active choice to distrust others. And that choice would, ultimately, be based on a limited scope of what they know AI content to look and sound like - ignoring the fact that the underpinning models are constantly trained on existing human output.


While I'm just a psychology hobbyist on Reddit with a penchant for lurking in Doctor Who comment threads, how many steps away from this scenario is the reality of people decrying an expert, an academic, or someone with first-hand experience? There's already an anti-science sentiment among the anti-vax crowd; what happens when they add a second layer of paranoia, suspecting that the machines are lying to them?


In essence, it dismisses the intelligence, creativity, and lived experiences of real people - just as it does when people accuse other users with differing opinions of being 'bots'. What's more dangerous in this eventual situation, however, is that it's so hard for the accused to 'prove' they're genuine; how can an AI argue that it's human?


In my opinion, this could either lead to a world where distrust grows so much that technology is shunned by some, creating a generational chasm, or one where the untrustworthy provoke unrest by playing on people's fear of artificial intelligence, cultivating suspicion for their own gain.


The Road Ahead

And if we truly can't tell the difference between an articulate human being and AI, where does that leave us? For me, this small occurrence has sparked an inordinate amount of turmoil - human turmoil, that is. Very much human, and very much capable of thinking and typing out a reply that's not in binary.


But underneath the roiling ire of being mistakenly identified as AI (with no apology so far...) is a deeper worry - one that is much colder and surprisingly terrifying. Could this mark the beginning of the end of trust, and what do we do when people stop trusting one another altogether?


If AI is trained on our behaviour and our speech patterns, we might soon rely on it to make our arguments for us. Then we really are screwed as a species.



© 2025 Joseph Stevenson. All written material is the property of the writer. Visual and video materials promoting fictional pieces created using Canva Pro. All components remain property of Canva and its affiliates, contributors, and partners. No work is endorsed by the original owners of these components. Used under Canva's Pro licence.
