By: Thom Fain

You wanna know what I think about sometimes? Remember when Axl Rose spent $13 million recording Chinese Democracy over fourteen years, and everyone assumed Geffen Records would never recoup the investment, and they were totally right, but it didn’t matter because the album had become a kind of financial abstraction, a line item that existed primarily to justify other line items in other parts of the company’s accounting? That’s sort of what’s happening with artificial intelligence right now, except instead of $13 million it’s $330 billion, and instead of Geffen Records it’s Oracle, Microsoft, and a company called CoreWeave that you’ve probably never heard of but which is somehow central to whether your 401(k) becomes worthless.

I realize this comparison is imperfect. Axl Rose, for all his faults, at least recorded something. OpenAI is losing $11.5 billion per quarter—that’s billion with a “b,” which I mention because my brain still cannot process numbers that large—in order to provide a service that costs them money every single time someone uses it. This is what economists call “negative unit economics,” which is a fancy way of saying “the more successful we become, the faster we go bankrupt.” It’s like if McDonald’s lost three dollars every time someone bought a Big Mac, but everyone agreed this was fine because maybe in five to ten years they’d figure out how to make hamburgers for free using some unspecified future technology that may or may not be possible under the known laws of physics.
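The arithmetic of negative unit economics is brutally simple, and worth seeing on the page. Here's a toy sketch: every number below is invented for illustration, and bears no relation to OpenAI's (or McDonald's) actual cost structure.

```python
# Toy model of negative unit economics: when each unit sells below cost,
# losses grow linearly with success. All figures are made up.
def quarterly_loss(requests: int, revenue_per_request: float, cost_per_request: float) -> float:
    """Total loss for a quarter: volume times the per-unit shortfall."""
    return requests * (cost_per_request - revenue_per_request)

# Growing 100x in usage means losing 100x as much money.
for requests in (1_000_000, 10_000_000, 100_000_000):
    loss = quarterly_loss(requests, revenue_per_request=0.01, cost_per_request=0.04)
    print(f"{requests:>11,} requests -> ${loss:,.0f} lost")
```

The only exits are raising prices (customers leave), cutting unit costs (the unspecified future technology), or raising more capital forever (the current plan).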

Here’s the thing, though: I don’t think the interesting question is whether the AI bubble will pop. Of course it will pop. Every bubble pops. That’s definitionally what makes it a bubble. The interesting question is what happens when roughly 26% of the S&P 500’s market capitalization is built on seven companies that are essentially loaning money to each other in increasingly creative ways in order to collectively pretend that a company burning $46 billion a year is worth $500 billion.

Let me back up.

The Part Where I Explain the Circular Economy Thing (But Make It About Metallica)

There’s this diagram from Morgan Stanley that’s been making the rounds—it looks like one of those conspiracy theory cork boards with red string connecting different entities, except instead of connecting JFK’s assassination to the Illuminati, it’s connecting OpenAI to Nvidia to Microsoft to Oracle to AMD, with arrows labeled things like “vendor financing/favorable terms” and “repurchase agreement” and “$300 billion.” The arrows go in circles. Money flows from Company A to Company B, which uses that money to buy things from Company C, which invests in Company A. It’s like an M.C. Escher drawing, except instead of staircases it’s debt, and instead of being impossible it’s just highly improbable.
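You can sketch the circularity in a dozen lines. The companies below are real, but the edges and dollar amounts are hypothetical stand-ins for the diagram's arrows, not actual contract values.

```python
# Hypothetical money flows loosely in the spirit of the Morgan Stanley
# diagram. Edges and amounts ($B) are illustrative, not real figures.
flows = [
    ("Microsoft", "OpenAI", 10),   # equity investment
    ("OpenAI",    "Oracle", 30),   # compute contracts
    ("Oracle",    "Nvidia", 20),   # GPU purchases
    ("Nvidia",    "OpenAI", 5),    # investing back into the end customer
]

# Net position per company: money in minus money out.
net: dict[str, int] = {}
for src, dst, amount in flows:
    net[src] = net.get(src, 0) - amount
    net[dst] = net.get(dst, 0) + amount

print(net)
# Every dollar in the loop is booked as somebody's revenue, but summed
# around the circle the whole thing nets to zero.
assert sum(net.values()) == 0
```

Each company in the loop reports its inflow as growth; the circle as a whole creates nothing except the appearance of demand.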

This is not, strictly speaking, illegal. But it reminds me of when Metallica sued Napster in 2000, and Lars Ulrich testified before the Senate, and everyone sort of understood that what Napster was doing felt wrong but couldn’t quite articulate why because the technology was new and the business models were unclear and maybe file-sharing was actually good for discovery and would lead to more concert ticket sales? Except twenty-five years later we know exactly how that ended—Napster died, streaming became a race to the bottom, and musicians now make most of their money from touring and merchandise while Spotify’s CEO builds a robot army or whatever.

The AI bubble is like that, but inverted. Instead of technology destroying an old business model, we’re using old business models (vendor financing, off-balance-sheet special purpose vehicles, asset-backed securities) to finance technology that might not have a business model at all.

Consider: OpenAI has signed $300 billion in contracts with Oracle. That’s billion. With a “b.” Oracle is going to build data centers for OpenAI to use. Except OpenAI can’t actually pay for these data centers with operating cash flow, because OpenAI’s operating cash flow is deeply, catastrophically negative. So where does the money come from?

This is where it gets interesting.

The 2008 Part (Which Is Really the 2026 Part)

There’s an editorial making the rounds—I won’t say who wrote it because that would be admitting I read editorials, which would undermine my carefully cultivated persona of someone who only consumes information through old Creem magazine articles and VH1 Behind the Music episodes—that argues the entire AI bubble is actually a 2008 problem wearing a 2025 Halloween costume.

The argument goes like this: After the financial crisis, central banks bailed out the financial system by buying long-term debt, which lowered interest rates and filled bank reserves with cash. The banks were supposed to lend this money to small businesses and homebuyers, thereby creating jobs and stimulating the real economy. But the real economy is boring and risky and involves things like “due diligence” and “expecting to be paid back.” So instead, banks lent to other financial institutions and threw it into capital markets, where it could generate fast returns backstopped by—and this is the important part—taxpayers and central banks.

Meanwhile, the rich people who caused the 2008 crisis didn’t lose their fortunes. They weren’t taxed. They borrowed easily. They took all this cheap money and put it into private equity funds and hedge funds and eventually into the remaining big tech companies that survived the dot-com crash—your Amazons, your Metas, your Microsofts. And what did these companies do with this nearly unlimited, nearly free capital?

They built the Metaverse. They promised self-driving cars. They shot cars into space. They investigated eternal life. They tried to link human brains to computers. And yes, they pivoted to artificial intelligence.

None of this was driven by market demand in the traditional sense. It was driven by having so much capital that you could make decade-long bets on technologies that might never generate profits, because so long as the stock price kept going up, who cares about profits? Profits are for people who have to worry about running out of money.

Here’s what I keep thinking about: This is the exact same structure as the housing bubble, except instead of subprime mortgages being packaged into collateralized debt obligations, we have data center leases being packaged into asset-backed securities that are then sold to pension funds and insurance companies. And instead of the underlying asset being houses that people need to live in (which have some residual value even in a crash), the underlying assets are GPU chips and data centers in middle-of-nowhere Louisiana that have literally zero alternative use if the AI boom ends.

Oh, and also? The single largest customer for all of this infrastructure—OpenAI—loses billions of dollars per quarter and has no path to profitability that any credible analyst can identify.

So that’s fun.

The Bailout Question (Or: Why You Should Probably Oppose Them But Probably Won’t)

The editorial I mentioned earlier argues that whether the AI bubble ends in a manageable correction or a catastrophic collapse depends on you—the voter, the constituent, the person whose outrage gets performatively amplified on social media until politicians feel obligated to pretend to care about it. It argues that if regular people make enough noise about not bailing out tech billionaires who spent the last fifteen years laying off workers while buying mega-yachts and funding vanity space projects, then maybe, possibly, perhaps this time will be different.

I want to believe this. I really do. There’s something appealing about imagining a groundswell of populist fury preventing another round of socializing losses while privatizing gains. The political conditions are certainly different from those of 2008. Trump got elected partly by channeling rage at coastal elites. Bernie Sanders nearly won the Democratic primary twice. There’s a genuine left-right populist coalition that agrees on almost nothing except that maybe we shouldn’t give taxpayer money to billionaires who crashed the economy.

But here’s the thing: I think this underestimates how bailouts actually happen.

In September 2008, everyone agreed: No more bailouts. We’d already done Bear Stearns. Enough was enough. Lehman Brothers was going to be allowed to fail as a signal that the era of “too big to fail” was over. And then Lehman failed on September 15, and by September 17 the entire global financial system was approximately 48 hours from total collapse, and by September 20 we were all Keynesians again and within weeks Congress had passed a $700 billion bailout package.

The thing about bailouts is they don’t happen during normal times when people can have reasoned debates about moral hazard and long-term incentive structures. They happen during moments of utter panic when everyone’s retirement account has lost 40% of its value in three weeks and the ATMs might stop working and your congressman is getting calls from terrified constituents asking why their money market fund “broke the buck,” which is a thing that’s not supposed to be able to happen but which is definitely happening.

And here’s the other thing: The AI bailout won’t look like a bailout.

How the Bailout Will Actually Work (A Hypothetical That Is Probably Also a Prediction)

It’s April 2026. OpenAI announces massive Q1 losses. The stock market, which has spent eighteen months ignoring warning signs because ignoring warning signs is what stock markets do during bubble periods, suddenly decides to acknowledge reality all at once. Nvidia falls 40% in a day. Oracle’s debt gets downgraded to junk. A data center ABS tranche held by CalPERS is suddenly marked down by 60%, threatening the pensions of two million California public employees.

The Treasury Secretary—let’s say it’s Scott Bessent, or maybe it’s someone else, doesn’t really matter—goes on television and says the following things:

  1. “We are not bailing out billionaires.”
  2. “We are protecting American workers whose pensions are at risk.”
  3. “China is winning the AI race, and we cannot allow America to fall behind in this critical technology.”
  4. “The Federal Reserve is establishing a temporary liquidity facility to ensure orderly markets.”

What actually happens: The Fed creates a special purpose vehicle that buys data center ABS at 70 cents on the dollar, preventing pension funds from realizing catastrophic losses. Oracle gets emergency loans under the Defense Production Act because their data centers are deemed “critical national infrastructure.” OpenAI is restructured with government-backed financing, with the Treasury taking a stake under the guise of “ensuring American AI leadership.” Nvidia’s stock eventually stabilizes at 40% of its peak, which is presented as shareholders “taking losses” even though the alternative was 90% losses.
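The mechanics of "70 cents on the dollar" are worth a back-of-the-envelope pass. The figures below are invented to match the hypothetical above, not a prediction of actual facility terms.

```python
# Back-of-the-envelope for the hypothetical Fed facility. All numbers
# are illustrative. Prices are in cents on the dollar; values in $B.
face_value_b = 100        # data center ABS held by pension funds
panic_price = 40          # what a fire sale would fetch
facility_price = 70       # what the Fed SPV agrees to pay

loss_without_facility = face_value_b * (100 - panic_price) // 100      # 60
loss_with_facility = face_value_b * (100 - facility_price) // 100      # 30
risk_moved_to_fed = face_value_b * (facility_price - panic_price) // 100  # 30

print(loss_without_facility, loss_with_facility, risk_moved_to_fed)
```

The pension funds' loss is halved, and the difference doesn't vanish: it moves onto the Fed's balance sheet, where it only crystallizes if the ABS never recovers, which is precisely the scenario nobody at the press conference will discuss.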

The total cost is $400 billion, but it’s all done through Fed facilities and off-budget Treasury vehicles, so it doesn’t technically count as a bailout. Republicans call it “protecting American innovation.” Democrats call it “saving working families’ retirements.” Everyone agrees China is bad.

And you know what? The constituent pressure will have mattered, but not in the way the editorial suggests. It will have made the bailout slightly smaller and slightly more punitive to shareholders. Equity holders will be mostly wiped out instead of partially wiped out. Some executives might face clawbacks. There might be congressional hearings where Sam Altman gets yelled at for six hours.

But the fundamental dynamic—profits are privatized, losses are socialized—will remain intact, because in the moment of crisis, the alternative is letting the entire financial system implode, and no elected official in a democracy can allow that to happen, at least not on purpose.

Wait, So Should I Short These Stocks or Not?

This is the part where I should probably include a disclaimer about how I’m not a financial advisor and this isn’t investment advice, but you already knew that because if you’re taking financial advice from someone whose main qualification is having published 13 professional wrestling magazines before doing a cup of coffee as a Market Analyst in Akihabara… you have larger problems than whether to short Oracle.

But okay, fine: Yes, probably short these stocks. Or buy puts. Or just avoid exposure. The math doesn’t work. Too much of the market is being held up by companies with ties to SPVs and AI financing. OpenAI burning $46 billion annually cannot be reconciled with any plausible revenue scenario. When a company loses money on every transaction, “making it up in volume” is not a business strategy, it’s a joke from the dot-com era that we apparently need to relearn.

The smart money is already leaving. Warren Buffett is sitting on $381 billion in cash—that’s more than the GDP of Finland—after selling stocks for twelve consecutive quarters. Michael Burry, who correctly predicted the housing crisis and got played by Christian Bale in a movie about it, put 80% of his fund into shorting Nvidia and Palantir before closing up shop and heading to Substack so that he can write more openly about what’s happening than he could as a hedge fund manager in ’08. Peter Thiel’s fund sold its entire Nvidia stake. Masayoshi Son, who has been wrong about approximately everything for the past decade, sold $5.8 billion in Nvidia, which is either a contrarian indicator or a sign that even the optimists are giving up.

When is this happening? Probably sometime between April 2026 and September 2026, based on when OpenAI has to report quarterly results and when the hyperscalers announce their capital expenditure plans. The Shiller P/E ratio is over 40, a level previously reached only at the very peak of the dot-com bubble; the 1929 and 2007 tops never even got there, and all three of those valuation extremes were followed by severe crashes.
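For the uninitiated, the Shiller P/E (or CAPE) is just price divided by the ten-year average of inflation-adjusted earnings, which smooths out the business cycle. A minimal sketch, with made-up numbers:

```python
# Shiller P/E (CAPE): current price over the mean of ten years of
# inflation-adjusted earnings. The inputs below are illustrative.
def cape(price: float, real_earnings_10y: list[float]) -> float:
    """Cyclically adjusted price-to-earnings ratio."""
    return price / (sum(real_earnings_10y) / len(real_earnings_10y))

# A hypothetical index at 6000 with average real earnings of 150:
print(cape(6000, [150.0] * 10))  # 40.0
```

The point of the ten-year average is that a single blowout earnings year can't flatter the denominator; a reading over 40 means prices are extreme even relative to smoothed, inflation-adjusted profits.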

As for me?

I might have some QQQ put options while underemployed and writing about macroeconomics, but obviously I’ll continue investing in things I know will definitely increase in value, like vinyl records.

Discogs, after all, has a copy of Chinese Democracy for around $115 shipped to Japan.