A sprawling lecture by Professor Jiang, now circulating widely online under the title “The AI Apocalypse,” is not really about machine learning.
It is about power, religion, social control, and the growing suspicion that artificial intelligence is becoming something larger than a technology sector.
Over nearly two hours, Jiang moves from OpenAI and ChatGPT to Plato’s Cave, the CIA’s Stargate program, self-driving cars, surveillance infrastructure, occult symbolism, and the idea that Silicon Valley is trying to “create God.”
The lecture is chaotic, provocative, often conspiratorial, and frequently speculative. But its popularity says something important about the current moment in AI.
People are no longer arguing only about whether AI works.
They are arguing about what AI represents.
And increasingly, some of the loudest voices online are framing AI not as software infrastructure, but as ideology.
“This Is a Class About Intellectual Speculation”
One of the most revealing moments in the lecture arrives at the very beginning.
Rather than opening with AI itself, Jiang reads a private email sent by his former teacher and friend, Professor David Bromwich, the Yale literary critic.
Bromwich praises Jiang’s ability to simplify difficult ideas, but warns him about the dangers of certainty.
“The risk is simplification which your audience won’t quite recognize for what it is,” Bromwich wrote.
Jiang accepts the criticism almost immediately.
“This is a class about intellectual speculation here,” Jiang says. “We explore ideas that are not explored anywhere else and often I will wing it or I will make things up as I go along based on my intuition and based on my imagination.”
That disclaimer matters.
Because what follows is not a technical explanation of AI in any conventional sense.
It is a sweeping philosophical interpretation of the AI industry itself.
Jiang repeatedly returns to the same central idea: that OpenAI and other major AI companies are building not merely products, but belief systems.
At one point, he quotes OpenAI CEO Sam Altman’s old observation that successful founders often resemble religious founders more than traditional entrepreneurs.
“The most successful founders do not set out to create companies,” Jiang reads from Karen Hao’s book Empire of AI. “They are on a mission to create something closer to a religion.”
For Jiang, that line becomes the interpretive key for everything else.
OpenAI as Empire, Religion, and Infrastructure Project
Much of the lecture revolves around Karen Hao’s reporting on OpenAI and the larger AI industry.
Jiang uses Hao’s work as a launching point for a broader argument that AI companies increasingly behave like geopolitical infrastructure actors rather than software startups.
He argues that AI development now depends on:
- massive capital expenditure,
- global data-center expansion,
- state partnerships,
- surveillance-scale data collection,
- and continuous public dependency.
“It’s not really about making AI safe for humans,” Jiang says. “It’s about making the world safe for AI.”
That framing reflects a wider shift happening across the AI economy.
The conversation around AI has moved far beyond chatbots.
The biggest technology firms are now competing on compute access, energy infrastructure, semiconductor supply chains, sovereign AI policy, cloud dominance, and government contracts.
Microsoft, Google, Amazon, Oracle, Meta, Nvidia, and OpenAI are all increasingly tied to the physical infrastructure layer of AI.
Jiang exaggerates parts of this dynamic into outright apocalypse rhetoric. But underneath the rhetoric is a recognizable observation: the AI race is becoming deeply entangled with state power.
That part is not speculative.
The lecture repeatedly references the Trump administration’s Stargate initiative and the expanding political consensus around AI infrastructure investment in the United States.
Jiang interprets this as evidence that AI firms can survive economically only through government alignment.
“AI by itself can’t make any money,” he claims. “So AI needs to work with the government in order to justify its existence.”
That argument oversimplifies the economics of the AI market, but it touches a real pressure point inside the industry.
Generative AI remains extraordinarily expensive to operate.
Inference costs, energy consumption, training compute, and data-center construction continue to raise questions about long-term business sustainability.
“What Is AGI? The Answer Is God.”
The lecture becomes far more extreme when Jiang shifts from economics into metaphysics.
At one point, discussing OpenAI co-founders Greg Brockman and Ilya Sutskever, Jiang makes the leap that has driven much of the online attention around the video.
“What is AI? What is artificial intelligence? What is AGI? And the answer, of course, is it’s God.”
That statement is presented not metaphorically, but almost literally.
Jiang argues throughout the lecture that the language surrounding AGI increasingly resembles religious language.
He points to:
- “rapture”-style thinking about superintelligence,
- existential risk narratives,
- bunker scenarios,
- salvation frameworks,
- and the belief that AGI could fundamentally reorder civilization.
He quotes passages from Empire of AI describing internal fears among some OpenAI researchers about AGI escalation and geopolitical conflict.
One passage in particular becomes central to Jiang’s argument:
“There’s a group of people who believe that building AGI will bring about a rapture,” the book states. “Literally a rapture.”
Jiang interprets this literally and pushes it further.
“The plan is to kill everyone so you can save the world,” he says during one section of the lecture.
There is no evidence that AI companies are pursuing anything remotely resembling that claim.
But the rhetoric itself is revealing.
The online AI debate is increasingly splitting into two parallel conversations.
One side discusses productivity gains, enterprise software, agents, inference optimization, and infrastructure economics.
The other discusses consciousness, godhood, simulation theory, transhumanism, apocalypse, and civilizational collapse.
Yet the two increasingly collide in the same online spaces.
Why These Lectures Spread So Quickly Online
Jiang’s lecture matters less for its factual accuracy than for what it reveals about public psychology around AI.
The lecture constantly oscillates between recognizable truths and highly speculative leaps.
For example:
- AI companies really are spending unprecedented amounts on data centers.
- Governments really are becoming deeply involved in AI infrastructure.
- Large language models really can hallucinate.
- AI systems really do depend on massive amounts of human-labeled data.
- AI safety researchers really do discuss catastrophic scenarios.
But Jiang combines those realities with much more extreme claims about occultism, interdimensional portals, demons, and civilizational destruction.
That combination is precisely what makes the lecture effective online.
It transforms abstract infrastructure trends into a dramatic moral narrative.
At one point Jiang says:
“The real power behind AI are occultists who want to create God.”
Later he adds:
“AI is fundamentally an occult practice.”
Those claims have no evidentiary basis.
But culturally, they resonate because AI already feels opaque to much of the public.
Very few people understand how frontier models are trained.
Even fewer understand the economics, hardware stack, or governance structures behind them.
That opacity creates space for myth-making.
And myth-making tends to emerge whenever societies encounter systems powerful enough to reshape labor, communication, identity, or authority.
The Plato’s Cave Argument
Ironically, the most coherent part of Jiang’s lecture arrives when he leaves AI almost entirely and talks instead about attention.
Using Plato’s Cave as a framework, Jiang argues that power increasingly belongs to whoever controls human perception.
“The true wealth in society is consciousness,” he says.
He then reframes AI not as intelligence, but as an attention system.
That idea aligns more closely with mainstream critiques of modern platforms.
Large AI companies are competing not only for software dominance, but for cognitive dependency:
- AI copilots integrated into work,
- AI companions,
- AI search,
- AI-generated media,
- AI assistants embedded into operating systems,
- AI agents handling daily tasks.
The strategic goal is not necessarily consciousness.
It is ubiquity.
Jiang exaggerates this into theology.
But beneath the exaggeration is a recognizable concern: the more integrated AI becomes into daily life, the more influence its operators gain over information flows, productivity systems, and human behavior.
The Real Anxiety Underneath the Lecture
The lecture’s popularity ultimately reflects a broader collapse of trust in technological institutions.
For years, Silicon Valley presented itself as rational, scientific, and optimization-driven.
But AI has reintroduced language that sounds theological even inside the industry itself:
- alignment,
- existential risk,
- superintelligence,
- AGI,
- recursive self-improvement,
- civilization-scale transformation.
That language creates fertile ground for both utopianism and paranoia.
Jiang pushes that paranoia to its outer edge.
Still, the audience response reveals something deeper.
A growing number of people no longer believe AI is simply another software cycle.
They see it as a restructuring force.
And when technologies begin to feel large enough to reorganize labor, governance, communication, education, war, and human identity simultaneously, people stop discussing them like tools.
They start discussing them like belief systems.
The Bigger Story Is Not the Conspiracy
The easiest response to Jiang’s lecture is dismissal.
Large portions of it deserve strong skepticism.
The occult claims, interdimensional framing, and assertions about demons or “creating God” are not grounded in evidence.
But dismissing the entire phenomenon misses the more important signal.
The lecture is successful because it converts a complicated technological transition into a simple emotional narrative:
- elites are building systems they do not fully understand,
- those systems are becoming globally integrated,
- governments are backing them aggressively,
- and ordinary people increasingly feel powerless to shape the outcome.
That emotional structure now appears constantly across AI discourse.
Sometimes it appears in utopian form.
Sometimes in catastrophic form.
Jiang simply pushes it into explicit theological territory.
The more important question is why audiences are increasingly receptive to that framing in the first place.
And that answer likely has less to do with the occult than with opacity, concentration of power, and the extraordinary speed at which AI infrastructure is now reshaping the modern economy.
The original lecture, titled “Game Theory #24: The AI Apocalypse,” has spread widely across YouTube and social media as debates around AGI, AI infrastructure, and government-backed compute expansion continue to accelerate globally.