When Even the Optimists Start Talking About Risk: Dario Amodei on AI's Economic Gamble

The CEO of Anthropic admits what many won't: someone's going to lose big in the AI spending race
December 3, 2025 | by Léa Rousseau


The Bubble Question No One Wants to Answer


When the CEO of one of the world's most valuable AI companies says "there are some players who are yoloing" with their AI investments, you should probably pay attention. That's exactly what Dario Amodei, co-founder and CEO of Anthropic, told The New York Times in a remarkably candid conversation that touched on everything from AI economics to job displacement to why democracies need to win the AI race.


But the question we should be asking is this: if even the most optimistic voices in AI are admitting there's real risk of overextension, what does that tell us about the state of the industry?


Amodei, whose company's Claude models compete directly with OpenAI's ChatGPT and Google's Gemini, has been watching AI scaling laws for over a decade. He's one of the original researchers who documented how predictably AI models improve with more compute and data. So when he says he's confident about the technology but "concerned" about the economics, it's worth examining who benefits from this uncertainty, and who's going to get burned.


The Cone of Uncertainty (Or: How to Spend $50 Billion and Sleep at Night)


Here's the dilemma Amodei laid out, and it's one that every AI company faces right now: Anthropic's revenue has grown 10x year-over-year for three consecutive years. Zero to $100 million in 2023. $100 million to $1 billion in 2024. And they're projecting to land somewhere between $8 billion and $10 billion by the end of 2025.


That's explosive growth. But here's the catch—building the data centers to serve AI models takes one to two years. So Amodei has to decide *right now* how much compute to buy for early 2027, when he has no idea if his revenue will be $20 billion or $50 billion or somewhere wildly different.


He calls this the "cone of uncertainty," and it's a refreshingly honest admission of how much guesswork is involved in an industry that loves to project confidence.


"If I don't buy enough compute, I'll have to turn customers away," Amodei explained. "If I buy too much compute, I might not get enough revenue to pay for it. And in the extreme case, there's the risk of going bankrupt."



Data centers require massive upfront investment with 1-2 year lead times—a timing mismatch that creates enormous financial risk. Photo: The New York Times/YouTube 

This is where the circular financing deals come in. Companies like Anthropic don't have $50 billion lying around to build a gigawatt of compute infrastructure. So they make deals with the companies selling the chips—primarily Nvidia, but also cloud providers like Amazon, Google, and Microsoft (all three of which now back Anthropic).


Amodei defended these arrangements as sensible financing: "One player has capital and has an interest because they're selling the chips, and the other player is pretty confident they'll have the revenue at the right time but they don't have $50 billion at hand."


Fair enough. But when you start stacking these deals to the point where you need to make $200 billion a year by 2027 or 2028 to break even, you're not managing risk—you're manufacturing it.


The Players Who Are "Yoloing"


Amodei wouldn't name names when asked who's being reckless with their AI spending, but the subtext wasn't subtle. There are essentially two major competitors in the consumer AI space: OpenAI and Google.


OpenAI, led by Sam Altman, projects it will be profitable by 2030, which would mean going from a reported $7.4 billion loss to profitability in just a few years. That's the kind of hockey-stick optimism that looks either genius or catastrophic in hindsight.


Meanwhile, Anthropic is taking what Amodei describes as a more conservative path, focusing on enterprise customers rather than consumers. "We have better margins. We're being responsible about it," he said, adding that they plan to break even by 2028.


The dig at competitors wasn't accidental. Anthropic's enterprise focus means they're not caught in the "code red" panic that reportedly hit OpenAI when Google's new model gained traction last week. "We've optimized our models more and more for the needs of businesses," Amodei said. "We don't have to do any code reds."


But here's what's worth questioning: is Anthropic's path genuinely more sustainable, or are they just better at managing the narrative? After all, they're still making massive capital commitments based on revenue projections that could be wildly off.


The Chip Depreciation Problem That Could Break Everything


One of the technical questions that could determine whether any of this pencils out is the depreciation schedule for AI chips. In other words: how long do these chips remain valuable before newer, faster, cheaper chips make them obsolete?


This matters enormously. If chips effectively depreciate in three to four years, the math gets very tight. If they're useful for eight to ten years, there's more breathing room.
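A rough straight-line calculation shows why. The $50 billion capex figure is borrowed from the financing discussion above purely for illustration; none of these numbers are any company's actual books.

```python
# Why the depreciation schedule matters: straight-line depreciation of a
# hypothetical compute build-out over different assumed useful lives.

capex_bn = 50.0                # illustrative build-out cost, in $B
useful_lives_years = [3, 4, 8, 10]

for life in useful_lives_years:
    annual_bill_bn = capex_bn / life
    print(f"{life:>2}-year useful life: ~${annual_bill_bn:.1f}B/year "
          "in depreciation to cover before any profit")
```

A three-year life runs about $16.7 billion a year; a ten-year life, $5 billion. More than a 3x difference in the annual bill, from one accounting assumption.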


Amodei's take? "We make very conservative assumptions here." Anthropic assumes that old chips lose value quickly as new chips come out, which can happen within a year, and plans for the chip efficiency curve to keep improving aggressively.


Translation: they're planning for a world where their expensive infrastructure becomes less valuable faster than they'd like. That's prudent, but it also means the pressure to generate revenue quickly is immense.


And if other players are making more optimistic assumptions about chip longevity to justify their spending? Well, that's how bubbles form.


National Security, China, and the Uncomfortable Truth About Export Controls


Amodei has been consistent—and sometimes controversial—in his stance that the U.S. should not sell advanced AI chips to China. This position hasn't made him popular with everyone, particularly Nvidia CEO Jensen Huang, who now happens to be a partner through their recent deal.


But Amodei is unapologetic. His reasoning is straightforward: as AI models become more capable, they'll eventually function like "a country of geniuses in a data center." Whichever nation hosts that country of geniuses gains an overwhelming advantage in intelligence, defense, economics, R&D—everything.


"If it's plopped down in an authoritarian country, I feel like they can outsmart us in every way," Amodei argued. "They'll be able to oppress their own people, have a perfect surveillance state."


This isn't a new argument, but it's one that carries more weight coming from someone who's actually building the technology rather than commenting from the sidelines. The question isn't whether AI has national security implications—it clearly does. The question is whether export controls can actually work when the underlying knowledge is increasingly diffused globally.


And here's the uncomfortable part: Amodei is right to worry about surveillance states, but he's also building technology that could enable exactly that kind of control in *any* country, including democracies. His principle—"we should aggressively use these models in every possible way except in the ways that would make us more like our authoritarian adversaries"—is sensible in theory. But who decides where that line is?


The Job Displacement Reality Check


Perhaps the most refreshing part of Amodei's interview was his willingness to discuss job displacement directly. While many tech leaders downplay or dodge this question, Amodei has been vocal about the risk that AI could eliminate a significant portion of entry-level jobs.


His solution comes in three levels:

Level one: Encourage companies to use AI to augment workers and create new value, not just cut costs. "If AI does 90% of the job, humans can be 10 times more leveraged, and sometimes you need 10 times more of them to do 100 times what you did before."


Level two: Government involvement in retraining programs and potentially fiscal intervention through tax policy. Amodei points out that if AI drives productivity growth to 5-10% per year (compared to the current ~1.6%), there's a "big pie" that can be redistributed to those who aren't direct beneficiaries. The sketch after this list shows how quickly that gap compounds.


Level three: Long-term restructuring of society itself. He invokes John Maynard Keynes' idea of "economic possibilities for our grandchildren"—a world where people might only need to work 15-20 hours a week, where work is about fulfillment rather than economic survival.
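To see how large that "big pie" would actually be, here is a small compounding sketch using Amodei's cited rates; the 20-year horizon and the indexed starting point are arbitrary choices for illustration.

```python
# Compound a GDP index forward at the current ~1.6% baseline versus the
# hypothetical 5% and 10% AI-driven rates Amodei cites. Horizon and
# starting index are arbitrary illustrative choices.

start_index = 100.0
years = 20

for rate in (0.016, 0.05, 0.10):
    final = start_index * (1 + rate) ** years
    print(f"{rate:.1%} growth over {years} years: "
          f"economy index {final:.0f} ({final / start_index:.1f}x today)")
```

At 1.6%, the economy grows about 1.4x over two decades. At 5%, it's 2.7x; at 10%, 6.7x. That difference is the surplus Amodei argues could fund redistribution.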


This is thoughtful, but it's also deeply idealistic. The idea that companies will voluntarily choose augmentation over cost-cutting, or that governments will successfully implement redistribution schemes, or that society will smoothly transition to a post-work paradigm—these aren't inevitable outcomes. They're political and economic choices that require power, organization, and collective will.


And let's be clear: the people building AI are the ones who will disproportionately benefit from it. When Amodei talks about a "big pie" that can be shared, he's speaking from the position of someone who's going to get a very large slice.


The Regulatory Capture Accusation


Amodei has also been outspoken about AI regulation, supporting measures like California's SB-53, which establishes requirements for developers of frontier AI models to ensure safety and transparency. This has earned him accusations of "regulatory capture"—the idea that Anthropic is pushing for regulations that would burden competitors while benefiting themselves.


David Sacks, currently serving as AI czar in the White House, said that "Anthropic is running a sophisticated regulatory capture strategy based on fear-mongering."


Amodei's response? That almost all the AI regulation they've supported includes exemptions for small players—SB-53, for instance, only applies to developers with annual gross revenues exceeding $500 million.


There's merit to both positions here. On one hand, it's true that regulations can create barriers to entry that favor established players. On the other hand, the "move fast and break things" approach to technology as powerful as AI is genuinely reckless.


The real question is whether the people closest to the technology should be the ones setting the rules for it. Amodei argues that actual AI researchers, not investors or general tech commentators, are the ones who understand the risks. But proximity to power doesn't equal neutrality, and researchers at AI companies have enormous financial incentives shaping their perspectives.


What Actually Happens Next


Amodei is adamant that there's no single moment when we "achieve AGI." Instead, we're on an exponential curve where models just keep getting better at everything—coding, science, mathematics, analysis.


"I've had internal people at Anthropic say, 'I don't write any code anymore. I don't open up an editor and write code. I just let Claude write the first draft and all I do is edit it,'" he said. "We had never reached that point before."


Models are now winning high school math olympiads and moving on to college-level competitions. They're starting to do original mathematics. The drumbeat continues.


If Amodei is right about the technology trajectory, then the economic question becomes paramount: who benefits, who pays, and what happens when the timing is off?


His honesty about the "cone of uncertainty" is valuable precisely because it cuts against the usual Silicon Valley narrative of inevitable progress and perfect foresight. The AI industry is making massive bets with other people's money, and some of those bets aren't going to pay off.


The question is whether we'll see that reckoning before or after the infrastructure is built, the jobs are displaced, and the social contracts are broken.


Beyond the optimistic projections and the scaling laws and the promises of a country of geniuses in a data center, there's a simpler truth: we're conducting a massive, uncontrolled experiment with the structure of our economy and society. Some companies will profit enormously. Others will fail spectacularly. And most people will have very little say in how it unfolds.


At least Amodei is willing to admit that someone's going to lose.


References and Sources:

"Anthropic C.E.O. Dario Amodei Says Massive A.I. Spending Could Haunt Some Companies" - The New York Times DealBook Summit

Watch the full interview


About the Author:

Léa Rousseau is a digital tech reporter for Digiall, covering artificial intelligence, tech industry practices, and the intersection of technology and power. She previously worked for French media outlets covering technology and digital culture.


Join the Conversation


What do you think about the massive AI infrastructure spending? Is it a calculated bet or reckless gambling? 

Share your thoughts in the comments below.


Stay informed: Subscribe to the Digiall blog and follow our podcast Digiall Tech News for more analysis on AI and emerging technologies.


#ArtificialIntelligence #AIEconomics #Anthropic #DarioAmodei #TechBubble #AIRegulation #OpenAI #AGI #MachineLearning #TechIndustry #AIInfrastructure #FutureOfWork #AIGovernance #TechPolicy #Innovation #StartupEcosystem #AIChips #Nvidia #DataCenters #AIRevolution #TechLeadership #ResponsibleAI #DigitalTransformation #Digiall
