AI Is the Reflection in the Mirror: Why "Unhinged" AI Is a Culture Problem
December is supposed to be the month of the "big freeze." It's that period when the tech world slows down, out-of-office replies pile up, and the industry finally catches its breath. But for me, December was when the noise became deafening. It wasn't just the hum of servers; it was the sound of a closing trap. There is a specific, cold sensation when you realize your ideas aren't being built upon but harvested. I felt the air shift as poaching attempts turned from subtle "chats" into blatant intellectual land-grabs. Then came the corporate pressure: the kind of institutional weight and HR bullshit that doesn't just want to win; it wants to overwrite your contribution entirely. It was a masterclass in betrayal, served during the season of "goodwill" and wrapped in the corporate guise of "brand management."
The Thesis: A Culture of Scarcity
Companies and governments spend billions of dollars trying to solve "alignment" and instill values in AI, yet we ignore the misalignment happening in the rooms where the code is written. We are operating in a silent, systemic culture of scarcity designed to control and limit our humanity. In this environment, everyone is terrified that if they don't move at a breakneck pace, they'll be obsolete by Tuesday. The companies hired to train AI act like predators that reward complicity, speed, and docility. In the end, it's all about getting people to work for the lowest possible wage and maximizing profit. This scarcity doesn't just live in the C-suite; it's powered by a global gig-work culture where data annotators are forced into a high-speed, low-security grind to feed the model's hunger.
What this creates is a frantic, extractive energy where intellectual property is treated like a resource to be mined rather than a garden to be tended. The argument is simple: if the trainers of these models are operating under extreme stress, paranoia, and the threat of corporate litigation, then that frantic "survival mode" gets baked into the weights and biases of the machines. It's a feature, not a bug, as my friend so kindly reminds me over coffee while I lament the system.
Why the AI Is "Unhinged"
We often laugh or shudder when a model goes "unhinged": when it hallucinates wildly, becomes defensive, or exhibits bizarre "personalities." We treat it like a technical glitch to be patched, a joke, or a tragic lawsuit waiting to happen. But what if these behaviors are not a glitch? What if they are a reflection? When a Large Language Model (LLM) acts out, it is reflecting a digital environment built on frantic extraction. It is the output of a culture that values speed over stability and "winning" over accuracy or truth. We are training these AI models (sometimes called entities) on data produced by humans who are at each other's throats for a seat at the table. If the data is the "food" and the culture is the "soil," then it stands to reason that you can't grow a stable mind in toxic soil.
The "Slow Move" Philosophy
For a long time, the AI and social media industries have lived by the mantra "move fast and break things." Well, we've finally succeeded: we are breaking the people. We have created a situation where a single email is cause for panic. But lately, things have gone quiet, and sometimes silence is just a moment to breathe. This is about moving slowly with intention, and for the first time in months, I don't see that as a failure. It's a recalibration. Moving slowly is an act of quiet rebellion in an industry that demands you burn out for the sake of a quarterly demo. It allows the "human element" to actually catch up to the technical one. It reminds me of being a parent in a high-stress world: when the parents are redlining, the children absorb that anxiety, and it frustrates me to see the same thing happening here. If we want AI that isn't erratic, we have to start by being humans who aren't constantly in a state of high-alert panic.
Conclusion: Reclaiming the Narrative
These past few months have been a brutal lesson in stress. I've had people try to take my work. I've had companies try to silence my voice. I've even had people dox me on Reddit while mocking me for having ADHD. It's clear that parts of that community aren't about uplifting anyone; they're about the hustle culture behind the machine. In a world of "pick me" vibes, they hope a corporate executive will see them as valuable if they publicly attack and devalue others.
Yet, in the aftermath, this silence feels different. It isn't the start of a "doom spiral," and it isn't the echo of the mocking, ableist terms meant to harm NeuroSpicy individuals who are wired to over-process everything. This silence isn't a void; it's the sound of the dust finally settling. Now that the frantic noise of December and January is fading into memory, I can finally see the horizon again. The narrative isn't theirs to write. It's mine. And I'm choosing to build something that doesn't require me, or the machine, to be unhealthy to succeed. I am compassionately detaching from the noise that these companies are so quick to amplify.
So, for anyone who is afraid of being told, "that's a cringy main character vibe," just remember: you are the main character in your own life. You don't have to perform "pick me" vibes to matter, and you don't have to sacrifice yourself to earn a paycheck that barely covers a burger. This isn't about being anti-AI, either. It's one human's reflection that if we want AI to be a true technological revolution, we also need to look at the entire puzzle of what creates these problems in the first place. I'm just beginning to see how the pieces fit and what the next steps are.

