by A Lily Bit
October 12, 2025
from ALilyBit Website
I've been part of what people call "the Deep State" for years. Now I'm exposing it and the people that created and run it. Bit by bit...

Sam Altman wants seven trillion dollars for artificial
"intelligence" that may or may not some day be able to figure out
how many legs a horse has. Or was it to cure cancer? I don't
remember...
Anyway, staying true to this quest, OpenAI
released
Sora, a video generator that burns through $700,000 daily
to produce clips where people's fingers occasionally phase through
solid objects, where gravity works sideways, where horses sprout
extra legs mid-gallop - and where your brain rots away with every
swipe.
They're calling it intelligence, the "democratization" of video creation. It's a random number generator with a marketing department that caters to the dumbest people in society.
The entire
AI industry rests on a fundamental lie:
that pattern
matching equals thinking, that statistical correlation equals
understanding, that predicting the next most probable token equals
intelligence...
Sora doesn't know what a coffee cup is - it's seen
millions of images labeled "coffee cup" and learned to produce pixel
arrangements that statistically resemble those patterns.
When it
generates a video of someone drinking coffee, it's not modeling the
physics of liquids or the anatomy of swallowing. It's performing
multidimensional regression on pixel distributions.
The cup phases
through the hand because the machine has no concept of "solid" or
"liquid" or "hand" - only statistical correlations between RGB
values.
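To see how little machinery is involved, here is a toy sketch in Python - invented scores, not anything from OpenAI's actual code - of what "predicting the next most probable token" boils down to: convert scores into probabilities, pick the winner.

```python
import math

# Toy numbers, invented for illustration: raw scores ("logits") a
# model might assign to candidate next tokens after some prompt.
logits = {"cup": 4.1, "mug": 3.7, "hand": 1.2, "sideways": 0.3}

# Softmax: turn arbitrary scores into probabilities that sum to 1.
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{tok:>9}: {p:.3f}")

# The "prediction" is whatever scored highest. Nothing here models
# cups, hands, or solidity - it's arithmetic over learned scores.
print("next token:", max(probs, key=probs.get))
```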
AGI - artificial general intelligence - was supposed to mean machines that could think, reason, and understand like humans. Solve novel problems based on their own newly created thoughts.
Generate
genuine insights.
Exhibit creativity that isn't just recombination
of training data.
Instead, OpenAI and its competitors are redefining AGI to mean "passes enough benchmarks that we can declare victory."
When they announce AGI next year or the year after, it won't be
because they've created thinking machines. It'll be because they've
successfully moved the goalposts to where their pattern matchers are
standing.
You know what this is?
It's autocorrect all over again, just with
better UI; autocomplete for pixels.
The same fundamental process
that suggests "u up?" after "hey" at 2 a.m., scaled up to
unconscionable computational requirements and dressed in
revolutionary rhetoric.
OpenAI's GPT models - the supposed foundation of artificial general
intelligence - are glorified Markov chains on steroids. They don't
understand language; they predict token probabilities.
When ChatGPT
writes about democracy or love or quantum physics, it has no concept
of governance, emotion, or particles.
It's performing statistical
pattern matching on text stolen from the internet, predicting what
word most probably follows the previous words based on billions of
examples.
The machine that writes poetry about heartbreak has never
experienced anything, understands nothing, knows nothing.
It's a
Chinese Room with better PR.
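If "Markov chains on steroids" sounds like hyperbole, here is a minimal bigram Markov chain in Python - a deliberate toy, nothing like a production transformer in scale - whose generation loop has the same basic shape: predict the next word from the previous one, by frequency, and repeat.

```python
import random
from collections import defaultdict

# A bigram Markov chain: record which word follows which, then
# generate by repeatedly sampling an observed successor.
corpus = (
    "the cat sat on the mat the cat drank the coffee "
    "the dog sat on the rug the dog drank the water"
).split()

successors = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev].append(nxt)  # duplicates encode frequency

def generate(start: str, length: int = 10) -> str:
    word, out = start, [start]
    for _ in range(length):
        options = successors.get(word)
        if not options:  # dead end: word never seen mid-sentence
            break
        word = random.choice(options)  # sample by observed frequency
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the dog sat on the mat the cat ..."
```

The modern models replace this frequency table with billions of learned weights, but the autoregressive loop - sample a probable next token, append it, sample again - is unchanged.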
The AGI announcement, when it comes, will be pure theater... They'll
show demos of their model passing medical exams (by pattern-matching
against millions of medical texts), writing code (by recombining
Stack Overflow posts), having "creative" conversations (by
interpolating between Reddit comments).
Tech journalists, either too
ignorant or too invested to call out the con, will breathlessly
declare the arrival of artificial general intelligence. The stock
price will soar. The seven trillion in funding will materialize.
And it will all be a lie...
Sora is a monument to misdirected genius, a cathedral built to worship the wrong god.
Every brilliant mind working on making fake
videos look slightly less fake could have been working on actual
research.
Every dollar spent teaching machines to mimic human
creativity could have funded actual human creators.
Every kilowatt-hour burned generating "cinematic" garbage that "democratizes filmmaking" could have created movies that are actually fun to watch.
Altman already admitted what critics have long suspected:
the
humanitarian rhetoric is window dressing for wealth extraction...
When pressed about the gap between the utopian marketing and the dystopian products, Altman's response was revealing: they need to "demonstrate capability" while "building revenue streams."
Translation:
the
cancer-curing, poverty-ending, progress-accelerating AI was always
a
lie to justify the real goal - monopolizing the infrastructure of
digital expression, capturing your attention, and torturing your
brain.
AI's training process reveals the scam's magnitude.
Sora consumed
millions of hours of video, billions of images, unknowable
quantities of stolen human creative output - all reduced to
numerical weights in a neural network.
The machine didn't learn what
a sunset is; it learned statistical correlations between pixel
gradients typically labeled "sunset."
It can generate infinite
variations of things that look like sunsets without ever
understanding that the sun is a star, that Earth rotates, that light
refracts through atmosphere.
It's pattern matching without
comprehension, correlation without causation, syntax without
semantics.
Consider what Sora actually does.
It ingests decades of human visual
culture - every film, every video, every fragment of recorded
humanity it can access - and reduces it to statistical patterns.
Then it reconstitutes these patterns into uncanny simulacra that
look almost but not quite right, like memories of dreams of movies
you half-remember.
The output invariably features that distinctive
AI shimmer:
surfaces that seem to breathe, faces that melt if you
stare too long, physics that operates on dream logic rather than
natural law.
This would be merely embarrassing if it weren't
so expensive.
The computational requirements for generating a single
minute of Sora footage could power a small town for a day.
The
training process consumed enough electricity to supply thousands of
homes for a year.
And for what?
So influencers can generate
backgrounds for TikToks?
So marketers can create synthetic
testimonials?
So we can further blur the already fuzzy line between
authentic human expression and algorithmic approximation?
They will, however, assure you that this is an "essential step" toward
their sacred AGI objective. That's another lie...
Real AGI would
require something these systems fundamentally lack:
A model of
reality.
Understanding causation, not just correlation.
Grasping
concepts, not just patterns.
The ability to reason from first
principles rather than interpolate from examples.
No amount of
scaling current approaches will achieve this because the
architecture is wrong at its foundation.
You can't build
intelligence from statistics any more than you can build
consciousness from clockwork.
The computational requirements expose the inefficiency of
brute-forcing intelligence through statistics.
Human children learn
object permanence from dozens of examples; Sora needs millions of
videos and still generates coffee cups that spontaneously become tea
kettles mid-frame.
A three-year-old understands that people have two
arms; Sora, after consuming more visual data than any human could
process in a lifetime, still generates people with three arms
growing from their torsos.
Altman knows AGI isn't coming.
In private conversations leaked by
former employees, he's admitted that current approaches have
fundamental limitations. But publicly, he maintains the fiction
because the entire house of cards - the valuations, the government
approval, the investments, the seven-trillion-dollar ask - depends
on investors "believing" AGI is imminent.
The tell is in how they keep redefining success.
First, AGI meant human-level intelligence across all domains.
Then it became "human-level at most economically valuable tasks."
Now it's "passes certain benchmarks better than average humans."
By the time they declare victory, AGI will mean "generates outputs that fool people who've been consuming AI slop for so long they can't recognize actual thought"...
The entire large language model revolution is
built on this foundation of sand.
GPT-5, Claude, Grok,
Gemini
(remember Gemini?) - they're all variations of the same con:
machines that simulate understanding through statistical
correlation, that mimic intelligence through pattern matching, that
fake consciousness through probability calculations.
That's why they
all sound the same.
They generate text that appears meaningful
because they've learned what meaningful text typically looks like,
not because they understand meaning...
We're watching the systematic redefinition of intelligence to match
what machines can fake rather than what intelligence actually is.
It's like declaring that player pianos have mastered musical
performance - technically impressive, completely soulless, and
missing the entire point of what music is.
Consider what OpenAI calls "emergent capabilities" - the supposedly surprising abilities that appear as models grow larger. But these aren't the emergence of intelligence; they're just statistical inevitabilities.
Feed enough text about chess into a pattern
matcher, and it will eventually reproduce chess-like moves.
Not
because it understands strategy or planning (or what chess even is),
but because it's seen enough examples to predict probable next
moves.
It's not playing chess.
It's just performing regression
analysis on chess notation.
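How cheap "chess-like" comes is easy to show. A toy sketch with a few invented opening fragments: build a frequency table over moves and sample a "probable" continuation. Because there is no board and no rules anywhere in it, nothing stops the output from being illegal in the actual position.

```python
import random
from collections import Counter, defaultdict

# Invented fragments of opening moves - a toy "training set."
games = [
    ["e4", "e5", "Nf3", "Nc6", "Bb5"],
    ["e4", "e5", "Nf3", "Nf6", "Nxe5"],
    ["e4", "c5", "Nf3", "d6", "d4"],
]

# Count which move follows which across the data. No board, no
# rules, no notion of whose turn it is - only co-occurrence.
follows = defaultdict(Counter)
for game in games:
    for prev, nxt in zip(game, game[1:]):
        follows[prev][nxt] += 1

# "Predict" a continuation after Nf3 by sampling the frequency table.
prev = "Nf3"
moves, weights = zip(*follows[prev].items())
print("after", prev, "->", random.choices(moves, weights=weights, k=1)[0])
```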
The industry knows this.
Internal documents from multiple AI
companies reveal that engineers refer to their models as "stochastic
parrots" - machines that randomly recombine learned patterns without
understanding.
Yet publicly, they maintain the fiction of
intelligence, of reasoning, of understanding. They have to. The
entire $150 billion valuation depends on investors believing these
machines think rather than merely compute probabilities.
That's why
these people sound so unbelievably stupid on X...
I'm not against artificial intelligence.
The idea that machines
could one day amplify human insight, accelerate discovery, or help
us understand ourselves more deeply is beautiful.
What I reject is the fraud:
- the marketing theater that sells regression as revelation
- the executives who dress up cats in cars being pulled over by cops as beacons of human progress
- the AI bros who pretend that slop is a form of art...
There's nothing wrong with building tools that help us think faster or a little clearer; what's wrong is pretending those tools can think, feel, or create...
The tragedy is that we've turned one of
humanity's most promising ideas into a con built on hype, stolen
labor, and misdirection.
Again, AGI will be called "AGI" because they've successfully
convinced enough people that sophisticated pattern matching equals
thinking.
They'll point to benchmarks conquered, tests passed,
conversations that sound human.
They won't mention that it's all
statistical mimicry, that the machine has no more understanding than
a calculator has of mathematics.
The energy waste becomes even more obscene when you understand
what's actually happening.
We're burning gigawatts of electricity
not for thinking machines but for industrial-scale garbage creators.
The carbon footprint of training GPT-4 - equivalent to the lifetime
emissions of 500 cars - was spent teaching a machine to predict that
"cat" often follows "the" and "sat on the."
The millions of gallons
of water cooling Sora's training infrastructure were used to teach
it that skin-colored blob patterns often appear above shirt-colored
blob patterns.
This is what seven trillion dollars will buy:
more sophisticated
pattern matching.
Larger statistical models.
Higher-resolution
probability calculations.
Not intelligence, not understanding, not
consciousness - just increasingly expensive ways to generate
statistically probable outputs that look meaningful to humans who've
forgotten what meaning is.
And then Sam is happy, Donald is happy,
Jensen is happy...
Simply because they have successfully managed to scale America's data centers better than anyone else, turning the country into the greatest digital garbage creator on the planet. Thank you for your attention to this matter.
The "hallucination" problem that AI companies
treat as a minor bug is actually a logical flaw.
These systems don't
hallucinate (that would ironically require being capable of actual
original thought) - they operate exactly as designed, generating
statistically probable outputs regardless of truth or accuracy.
When ChatGPT invents scientific papers that don't exist, it's not making
an error - it's doing what it always does:
producing text that
resembles text it's seen before and sounds plausible.
The machine
doesn't know what "true" or "false" means.
It only knows probable
token sequences.
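That claim takes a few lines to demonstrate. A toy sketch with invented frequencies: when the wrong completion is the common one in the training text, the "most probable token" is confidently false, and no step of the sampling consults reality.

```python
import random

# Invented frequencies: suppose the training text completes
# "the capital of Australia is" with these counts. "Sydney" is
# wrong but common; "Canberra" is right but rarer.
next_token_counts = {"Sydney": 7, "Canberra": 3, "Melbourne": 2}

total = sum(next_token_counts.values())
for tok, n in next_token_counts.items():
    print(f"P({tok!r}) = {n / total:.2f}")

# Sampling by probability - the only operation the model performs -
# returns a wrong answer most of the time, with full confidence.
tokens, weights = zip(*next_token_counts.items())
samples = random.choices(tokens, weights=weights, k=1000)
wrong = sum(t != "Canberra" for t in samples) / len(samples)
print(f"wrong-answer rate over 1000 samples: {wrong:.0%}")
```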
We're restructuring entire industries around these probability
calculators.
Replacing human judgment with statistical averaging.
Substituting pattern recombination for genuine creativity.
Trading actual intelligence for its simulation.
The more often "AI" appears in a company's papers, the higher its stock valuation goes.
Apple's stock
price plummeted as they remained silent while everyone else was
swept up in the AI craze. Apple's innovative image was tarnished.
When they introduced Apple Intelligence, a collection of largely
useless and barely functional tools, investors were pleased again.
The companies selling these systems are aware that they lack intelligence - refer to their technical papers (Apple even openly acknowledged and criticized this in a paper that reads as if Steve Jobs's long-forgotten last breath of reason swept across Apple Park) - but they also understand that the market doesn't care as long as the outputs appear convincing enough.
The prompt engineering priesthood that's emerged reveals the
absurdity.
If these systems were intelligent,
you wouldn't need
elaborate incantations to make them produce usable outputs.
You
wouldn't need "jailbreaks" and "system prompts" and carefully
crafted instructions.
You'd just communicate normally, as you would
with anything that actually understands language.
Instead, we have
an entire industry of prompt engineers - people whose job is to
trick probability calculators into producing specific patterns
through careful manipulation of input text - and then sell it to you
in spreadsheets for $60.
OpenAI's own researchers, in papers that barely make headlines,
acknowledge these limitations.
They write about "capability mirage"
- the tendency for large models to appear more capable than they are
because they've memorized vast amounts of training data.
About
"spurious correlation" - the machine's tendency to learn accidental
patterns rather than meaningful relationships.
About "distribution
shift" - how models fail catastrophically when encountering inputs
that differ from their training data.
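Distribution shift, at least, is trivial to reproduce yourself. A minimal numpy sketch - a toy curve fit, not any published experiment: fit a flexible model where the training data lives, then ask it about anywhere else.

```python
import numpy as np

# Fit a flexible model where the training data lives ([0, 3]),
# then query it inside and outside that range.
rng = np.random.default_rng(0)
x_train = rng.uniform(0, 3, 200)
y_train = np.sin(x_train) + rng.normal(0, 0.05, x_train.size)

coeffs = np.polyfit(x_train, y_train, deg=7)  # degree-7 polynomial

for x in (1.5, 3.5, 6.0):  # in-range, slightly out, far out
    pred, true = np.polyval(coeffs, x), np.sin(x)
    print(f"x={x:4.1f}  prediction={pred:12.3f}  truth={true:6.3f}")
```

Inside the training range the fit looks uncanny; a step outside it, the model returns garbage with exactly the same confidence.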
But these admissions get buried under avalanches of hype about AGI,
about consciousness, about machines that think.
The reality - that
we've built very expensive random number generators - doesn't sell
enterprise licenses or justify billion-dollar valuations.
The Sora videos that OpenAI showcases are carefully curated from
thousands of generations, selected for the ones where the horses
have the right number of legs, where the coffee cups don't phase
through hands, where the physics accidentally looks correct.
For
every video they show, hundreds were discarded because they revealed
what the system actually is:
absolutely and utterly useless bullshit.
This is the seven-trillion-dollar scam: selling
pattern matching as intelligence, correlation as comprehension,
probability as understanding.
The machines don't think - they perform statistical operations.
They don't create - they recombine.
They don't understand - they calculate correlations.
The environmental catastrophe, the resource waste, the misdirection
of human talent - it's all in service of building more sophisticated
probability calculators.
Machines that can generate text that sounds
meaningful without meaning anything.
Videos that look real without
representing reality.
Answers that seem correct without any
comprehension of the questions.
The population, meanwhile, grows steadily dumber from consuming
AI-generated content, losing the ability to distinguish between
genuine thought and statistical approximation.
We're being trained
to accept lower-quality everything - lower-quality writing,
lower-quality images, lower-quality thinking.
By the time OpenAI
declares AGI achieved, we'll be so accustomed to synthetic
approximations that we won't remember what real intelligence looks
like.
That's the end-stage brain rot at the core of the AI revolution:
we've convinced ourselves that sufficiently sophisticated pattern
matching equals intelligence, that large enough statistical models
equal understanding, that probable outputs equal thoughts.
We've
mistaken the map for the territory, the simulation for reality, the
pattern for the meaning.
And we're about to spend seven trillion dollars
on this...!
Thanks, Sam. Very cool...