
Homogenized: How AI Is Flattening Us
- Amal Altwaijri
- Sep 10
- 4 min read
The promise of AI was seduction by abundance: everyone could become a writer, an artist, a filmmaker, a composer. No more gatekeepers, no more lonely stabs in the dark. Just prompts, outputs, endless supply.
What we got instead was the aesthetic equivalent of airport sushi—plentiful, technically competent, and utterly indistinguishable.
This flattening is not entirely the machine’s fault. The models are mirrors, and we fed them what we had: millions of images, essays, and songs that already leaned toward the middle. They return to us what was most common, most repeated, most liked. In short, they hand us back ourselves—only averaged, compressed, and bleached of anomaly.
We have, in other words, automated the median.
If culture once staggered forward by way of accidents—wrong notes, failed experiments, eccentric visions—it now glides frictionlessly along the grooves of statistical probability. And we, apparently exhausted by choice, accept it. Efficiency has become our highest virtue.
The danger isn’t that AI will replace human creators, but that humans will stop bothering to compete with it.
The internet once promised an infinite library of human voices. Every blog post, tweet, or grainy YouTube monologue arrived bearing the fingerprint of its maker—half-formed, eccentric, unmistakably personal. Now the hum feels more uniform. What was once noisy, chaotic, even abrasive, has been sanded down into a smooth, frictionless feed. The culprit, of course, is our new collaborator: artificial intelligence.
AI-generated content has the curious quality of being both impressive and forgettable. Its prose is grammatically unassailable, its images gleam with cinematic polish, its music hovers somewhere between an indie soundtrack and a glossy commercial. But after a while, everything begins to sound and look the same, as though culture itself has been pushed through a compressor. The machines have learned our patterns, and in returning them to us, they hand back the average.
Sameness, of course, is not new.
Television sitcoms in the nineties were nearly indistinguishable in their pacing, their laugh tracks, their casts of six. Pop songs of any era follow formulas so precise that producers trade in spreadsheets. Earlier still, industrialization had already prepared the ground. Ernest Gellner argued, in his theory of modernization, that industrial societies tend toward homogenization: standardized education, bureaucratic communication, linguistic uniformity, shared media. All of it designed to bind atomized individuals into collective order.
AI now extends that same logic under the banner of efficiency—flattening stylistic diversity just as industrial modernity once flattened local dialects, folk traditions, and eccentric ways of life.
And the effect is not only cultural but cognitive.
A recent study from MIT’s Media Lab found that students writing essays with ChatGPT showed reduced brain activity in regions tied to creativity and working memory compared to those writing with a search engine or unaided. Their essays converged on similar phrases and structures, suggesting that when we outsource invention to the machine, we do not just get uniform outputs—we begin to think more uniformly ourselves.
Still, flattening has a countereffect. Against a backdrop of endlessly competent, AI-polished work, the lopsided human voice suddenly feels urgent again. A typo, a tangent, an image that doesn’t resolve—these imperfections read less like mistakes than like proof of life. The rough edges of creation, once liabilities, become signatures. The strange, the stubbornly personal, the untrainable will stand out all the more.
The Japanese tradition of wabi-sabi long ago recognized what we are rediscovering: beauty lies in imperfection, impermanence, incompleteness. A cracked teacup mended with gold, a weathered beam left uneven, a painting with empty space—these are not flaws but revelations of time, touch, and contingency.
And we should be clear: the authors are not the supposed “creators” tapping prompts into the interface, but the coders who built the system. They train models on oceans of images, lines of code, terabytes of prose. They decide what counts as noise and what counts as signal. They tune probability distributions, weighting some outputs over others. And in doing so, they decide—quietly, almost invisibly—what the rest of us will call “style.”
This is artistry at the infrastructural level. Just as Renaissance painters obsessed over pigments, today’s engineers fuss over loss functions and architectures. The difference is scale. A single tweak in a training set can ripple out into millions of outputs across the globe. What looks like “generic AI art” is in fact the aesthetic fingerprint of the engineers who trained the model.
It is tempting to romanticize this: to say that code is poetry, that the neural net is a new canvas. But this shift in authorship also raises the stakes. The coder as artist is not working in a studio but on an industrial press. Every line of code is a brushstroke with global reach and power.
Just as surely as Hemingway smuggled in his clipped stoicism or Warhol his pop detachment, the engineers of AI models smuggle in a worldview: cautious, polished, vaguely corporate.
Perhaps the task, then, is not to resist the machine but to be mindful of its tendencies—to notice where it smooths, where it averages, where it erases. In that awareness lies our chance to keep seeking the rough edges, the strange inflections, the human fingerprints that remind us creation is more than competence.

