Oh! To compete!
With an autocomplete!
In a recent RCT[1]
They found that good ol' GPT
Could beat!
A human like me.
Oh! Dare I ask!
At what special human task!
Poetry, and humans guessed
worse than chance, at the test
of who's human, and confessed
to like AI above the rest:
Obsolete!
A human like me.
Oh! What a shame!
Who's the scapegoat I shall blame!
The masses, their dumb asses?
Postmodern poets who don't rhyme?
The study, it's too cruddy?
Sam Altman tryna make a dime?
Or, let's go back in time?
Copernicus! You did frick us!
WE ARE NOT THE CENTER.
Darwin-Wallace! Apes, you call us?!
WE ARE NOT THE CENTER.
Turing-Gödel! Turning hurdles!
Understanding is compression.[2]
Intelligence is search.[3]
Existential-o depression
from each scientific lurch.
WE ARE NOT THE CENTER.
THERE IS NOTHING SPECIAL ABOUT BEING:
A human like me.
.
.
.
O... kay....?
Pretty erudite, for a Luddite
I was playing. Science is a'ight, a'ight?
I like not dying of cholera or smallpox
or burying half our kids in a small box
(as they did before germ theory[4])
It's erm, eerie,
how the past wasn't health-compliant.
I'd have to be an ungrateful brat
to piss on the shoulders of giants.
And yet, the ape-part of me
the part that lives on a Disc World
under a dome speckled with Gods,
the part that hates poetry that doesn't rhyme,
the part that lives
in the center
That part cries out!
It's got to shout!
Oh! Myth is dead.
It's ok to feel bereft.
Oh! The ineffable!
It's been fully eff'd!
Once there was a time when Art was the pinnacle
of human glory. Then a billion dollars and a nickel
later, a next-token-predictor can write poetry the fickle
humans prefer over human poetry. It tickles
my ironic nerves that we now take superior pride in,
what,
Counting the R's in "strawberry"?
Playing a pixel puzzle game?[5]
Drawing hands that aren't scary?
That's our claim to fame?
(bish, i can't draw hands either 😭)
And sure, this current boom may fail
We've hit diminishing returns on scale[6]
Backprop, big data and a VC whale
May not be enough this time.
Or, let's go forth in time?
Not enough this time.
But it's only a matter of time.
We will not be the center.
We never were.
And that's fine.
They raised us to love being special
(then complain that "narcissism" is up[7])
But growing up means realizing it's okay
to not be special
to not be the center of Attention and/or Creation
to not have poetry that scans good
Fuck it, I'm having fun
Once we automate it all away
And there's nothing we need to do
And there's nothing we need from each other
We'll be forced to answer,
scared
sacred
What do we want to do?
What do we want to be for each other?
Whose center
do you want to be?
"A human likes me?"
.
.
.
Wait, was that my so-called wisdom?
A saccharine cliché?
"The real fully-automated space communism
was all the friends we made along the way"?
What about AI existential risk?
What was that about the world being a disk?
What about AI being used to rope ya
into a totalitarian dystopia
and oop, yeah
well
Were you expecting anything actually useful from a poem?
Art? Being useful? lol
This is why humans prefer AI poetry
Shit, I can't even have an uplifting moment
Without undercutting it with ironic cynicism
(Hey! Maybe that's what'll stop rogue AI)
(By being trained on internet data)
(If it becomes self-aware it'll just get depressed)
Fuck it, I'm having fun
Let's end this with a true human poem, that I know a corporate chatbot will never make: a poem, of copyrighted characters and sexual content.
Behold, Art:
There once was a mouse named Mickey
Who whipped out his mousey dickey
And started humping
Mr. Xi Jinping
While reading the Tiananmen Square Wiki (don't worry, it's consensual)
Oh! What a purge!
Of many a mental urge!
And my ape, who wants to sing the mythical
And my brain, who wants to eff the ineffable
And my heart, who's torn about being ethical
And the machine, maybe at the pinnacle
We'll all merge?...
A human-like me.
Well, "recent". From a 2024 paper: "participants performed below chance levels in identifying AI-generated poems [and] were more likely to judge AI-generated poems as human-authored than actual human-authored poems." The human poets ranged from Shakespeare to Whitman to Plath. The AI poet was ChatGPT 3.5, with no prompt engineering, feedback, or iteration. ↩︎
"Understanding is Compression" is an idea that's been around for centuries, if not exactly in those words. Ockham's Razor says that given two theories that explain the same thing, we should pick the simpler one. Einstein said "A theory is more impressive the greater the simplicity of its premises, the more different things it relates, and the more expanded its area of application."
And now, this idea is finding good use in AI! Neural networks trained with regularization (which rewards simplicity) and auto-encoders (which compress a large input → small embedding → decompress back to the original input) both lead to AI that's more robust & generalizes better.
Hat tip to these papers: Understanding as Compression by Daniel A. Wilkenfeld, a delightful accessible read. Information compression, intelligence, computing, and mathematics by J Gerard Wolff, founder of the SP (Simplicity-Power) Theory of Intelligence. Understanding is Compression by Li, Huang, Wang, Hu, Wyeth, Bu, Yu, Gao, Liu & Li, which shows that LLMs can compress text (and even images/audio) better than standard compression algorithms! ↩︎
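(A toy sketch of the compression idea, my own illustration rather than anything from the papers above: text governed by a simple rule compresses far smaller than text with no rule, because the compressor has, in a sense, "understood" the rule.)

```python
import random
import zlib

random.seed(0)  # fixed seed so the "noisy" string is reproducible

# Two strings of equal length (256 chars):
patterned = "abab" * 64  # governed by a tiny rule ("repeat 'abab'")
noisy = "".join(random.choice("abcdefgh") for _ in range(256))  # no rule

# zlib finds the pattern and stores it compactly; the noise stays big.
print(len(zlib.compress(patterned.encode())))  # tiny
print(len(zlib.compress(noisy.encode())))      # much bigger
```

The patterned string shrinks to a handful of bytes, while the rule-less one barely compresses: "simple theory" = small description.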
"Problem-solving is (heuristic) search" is also an old idea:
Turing & Champernowne designed the first chess AI in 1948, which searched every possible move & counter-move, then selected the best move using a "heuristic" (rule-of-thumb) based on the value of the pieces, their safety & mobility, etc.
Simon & Newell created the General Problem Solver in 1957, which could take any formal game/system, like mazes or Sudokus or geometry, and search through possible moves ("operators") to get to the solution. Because of the exponential explosion in possible states, their AI tried to narrow down its search using a means-ends heuristic: first try moves that directly get you closer to your goal, backtrack only when that fails.
Skipping decades ahead, DeepMind released AlphaZero in 2017, which beat human grandmasters at chess, go, and shogi ("Japanese Chess"). The core of AlphaZero is the Monte Carlo Tree Search, which, once again, is "just" search through possible moves & counter-moves. But this time, instead of being hard-coded like in past chess AIs, the heuristic is a neural network that's learnt from scratch! ↩︎
For thousands of years, across all human cultures: the percent of kids who died before age 15 was around 50%. This only changed around 1850, when the work of John Snow & Louis Pasteur (& many others) proved that diseases spread through "germs" that were invisible to the naked eye. Only after this scientific discovery did public health infrastructure actually work. Global child mortality plummeted from 50% then to "only" 4% now, and "only" 0.3% in the best countries. To quote a wise philosopher: "YEAH SCIENCE, BITCH!" ↩︎
The Abstraction & Reasoning Corpus (ARC) is one of the few famous tests for AI that is (currently) unbeaten. You know those IQ tests where you look at some shapes, figure out the pattern, then select the shape that continues the pattern? ARC is like those, but 1) in pixel-game format, and 2) designed to be "easy for Humans, hard for AI".
The reason they're hard for current AI is that — unlike AI excelling at math word problems or reciting facts from textbooks — there is no common pattern in the ARC puzzles, and no way to memorize the answers, because the non-profit behind ARC uses a private test that's never been put online. (But they have a public version humans can play with, and AIs can train on.) ↩︎
Toby Ord — co-founder of Giving What We Can, and author of The Precipice, a book on existential risk that gave AI the highest probability of all things that could cause human extinction this century — has more recently been collecting stats that show, contrary to AI bulls, that modern LLM improvements are coming at unsustainably exponential costs:
Inference Scaling and the Log-x Chart shows how AI companies lie with statistics by making their x-axis logarithmic, and not the y-axis.
Are the Costs of AI Agents Also Rising Exponentially? shows that, unlike humans — whose performance is linear: we can do 10× as much work in 10× as much time — LLM agents hit diminishing returns fast. ↩︎
While factchecking my poem, I found a 2024 meta-analysis that showed young people's "narcissism" has actually been declining since 2008, possibly due to the financial crisis! See Figure 5. I guess the misperception that "narcissism is rising" is due to narcissistic traits being more visible & rewarded in a modern attention economy, not due to actual generational changes.
Also, reminder that:
1) The "narcissism scale" is a flawed measure that mixes up healthy & unhealthy traits.
2) Don't confuse "younger people score higher on narcissism" (age effect) with "each generation is getting more narcissistic" (cohort effect). That's a fallacy, like confusing "younger people are shorter in height" and "each generation is getting shorter in height".
3) People with narcissistic personality disorder aren't cartoon villains, they're actual people. ↩︎