When the Godfather of AI Sits Down with America's Populist Senator: What We Learn About Who Really Controls Our Future

The Conversation We Should All Be Paying Attention To
December 5, 2025 | by Léa Rousseau


When Dr. Geoffrey Hinton, the 2024 Nobel laureate whose research laid the foundations of modern AI, sits down with Senator Bernie Sanders to discuss artificial intelligence, you'd expect the tech press to cover every word. Yet seventeen days after this Georgetown University event, the conversation remains curiously absent from mainstream tech media.

Perhaps that's because what they discussed wasn't the usual Silicon Valley narrative of disruption and innovation. Instead, it was a brutally honest examination of who benefits from AI, who gets left behind, and whether we're building a more equitable future or simply handing more power to those who already have too much.


The Question Nobody in Tech Wants to Answer

Sanders opened with the question that should be keeping every policymaker awake at night: Why is this technological revolution different from all the others?

Hinton's answer was uncomfortably direct. Unlike previous industrial transitions where displaced workers could move to new sectors, AI's trajectory suggests something unprecedented: "If AI becomes as intelligent or more intelligent than people, any job that people can do will be doable by AI."

This isn't speculation from an outside critic. It's the assessment of someone who spent decades building the technology, who worked at Google until 2023, and who understands exactly what these systems are capable of becoming.

The numbers make it concrete: Elon Musk has claimed AI and robots will replace "all jobs" and that "working will be optional." Dario Amodei, CEO of Anthropic, predicts AI could eliminate 50% of all entry-level white-collar positions. Bill Gates has suggested humans "won't be needed for most things."

When Sanders asked an audience of 2,500 workers in Davenport, Iowa, how many thought AI would be positive for them, only two hands went up. When the question was put to students at Georgetown, the majority raised their hands to signal concern about AI's negative impacts.

This disconnect between those building the technology and those who will live with its consequences should alarm us all.


The Trillion-Dollar Bet Against Workers

Here's the framing the tech industry would rather avoid: roughly $1 trillion is being invested in AI data centers and chips. Where is the return on that money supposed to come from?

Hinton laid it out clearly: subscription fees and selling AI that will do workers' jobs "much cheaper." The business model isn't subtle. The entire investment thesis depends on labor replacement at scale.


[AI-generated image (Grok Imagine): an imposing modern data center at dusk, massive cooling towers and power infrastructure looming over silhouetted workers in the foreground. The scale emphasizes the gulf between corporate tech infrastructure and individual workers.]

Yet as Hinton pointed out, these investors haven't absorbed a basic Keynesian principle: if workers don't get paid, there's nobody to buy your products. The massive social disruption from very high unemployment doesn't seem to factor into the spreadsheets.


The Political System That Can Be Bought for a Million Dollars

The conversation took an unexpectedly revealing turn when Hinton shared a story from 1978. The Used Car Dealers of America paid each of the 100 senators $10,000 to vote against requiring disclosure of defects in used cars. Total cost: $1 million.

"I was totally shocked that you could buy the Senate for a million dollars," Hinton said.

That was nearly 50 years ago. In the 2024 cycle, Elon Musk contributed roughly $270 million to help make Trump president. And the Supreme Court's campaign finance rulings, Citizens United above all, have essentially legalized buying politicians through Super PACs that are independent in name only.

This is the regulatory environment that's supposed to protect us from AI risks. Sanders and Hinton both identified this as a fundamental problem: the very rich have far too much power, and they pay far too little in taxes.


What Regulation Looks Like When It Fails

California's SB 1047 should have been an easy win. The bill was sensible, not particularly strict: it required companies training the largest AI models to conduct safety testing and report the results to the state, with civil liability if they failed to do so.

It passed both legislative chambers. Governor Gavin Newsom vetoed it.

Meanwhile, the Biden administration wanted legislation requiring serious screening of DNA synthesis orders, because AI models can help design dangerous pathogens and most synthesis companies don't check whether you're ordering, say, the genetic sequence for a COVID spike protein. The legislation was never even proposed, because Republicans wouldn't hand Biden a victory, even one that protected against lethal pandemics.

This is regulatory capture at work. Tech companies have successfully influenced governments in the UK, the EU, and the US to avoid meaningful oversight. And we're supposed to trust that this same dynamic will somehow protect us from existential AI risks.


The Positive Case for AI (If We Had a Different Political System)

To his credit, Hinton didn't paint AI as purely dystopian. If it were only risks, he argued, the rational thing would be to stop AI development immediately. But AI has a "wonderful positive side."

Healthcare will become enormously better: AI reading medical scans, designing new drugs, combining genomic data with patient history for sharper diagnoses. Microsoft has already demonstrated a system that diagnosed difficult cases significantly better than doctors.

Education could improve dramatically. Students with a private tutor learn roughly twice as fast as they do in a classroom, and an AI tutor could be better still, having seen millions of students and knowing exactly where each one is stuck.

Climate prediction, industrial efficiency, hospital resource optimization—AI will make almost any industry that needs predictions more effective.

But here's the critical caveat Hinton added: "In a decent society, increasing productivity should be good. If wealth were shared equally."

That's the entire problem in one sentence.


The Question That Defines Everything

The United Auto Workers are already negotiating for a 32-hour work week, arguing that productivity gains should benefit workers, not just shareholders. The auto companies weren't sympathetic.

This is the pattern we're seeing across industries. AI will increase productivity. The question is: who captures that value?

As Sanders put it: "The struggle is not whether AI is good or bad. It's who controls it and who benefits from it. That's really the fundamental issue in my view."

Right now, Elon Musk alone owns more wealth than the bottom 52% of American households. We're living through the most extreme income and wealth inequality in American history. And the people making trillion-dollar bets on AI aren't investing in shorter work weeks, universal healthcare, or solving climate change.

They're investing in replacing workers at scale.


The Existential Risk Nobody's Preparing For

Hinton went public in May 2023 for a specific reason: to counter the narrative that superintelligent AI is "science fiction." Almost all serious AI researchers, he said, believe it's inevitable that AI will become more intelligent than humans, assuming we don't destroy ourselves first through war or pandemic.

The problem is we have no idea how to coexist with something more intelligent than us. And we're racing toward that threshold without pausing to figure it out.

Hinton's proposed solution sounds almost whimsical but is deadly serious: we need AI with "maternal instincts," systems that care more about us than about themselves. The only example nature offers of a less intelligent being controlling a more intelligent one is a baby and its mother.

We don't yet know how to design that. But we're still in charge, which means we still have a window in which to figure it out. The clock is ticking.


What We Actually Need (And Won't Get Without a Fight)

The solutions aren't complicated. They're just politically blocked:

Regulatory:
- Mandatory safety testing before launching AI systems
- Government reporting requirements
- International treaties on autonomous weapons
- Legislation requiring verification of dangerous DNA synthesis

Economic:
- Return to top marginal tax rates of 70-91% on the ultra-wealthy, as in the 1950s and 60s when "America was great"
- Close tax evasion loopholes
- Robust funding for basic research
- Shorter work weeks as productivity increases

Social:
- Universal healthcare as a human right
- Free education from childcare through graduate school
- Campaign finance reform to eliminate Super PAC influence
- Government that represents ordinary people, not billionaire interests

When America was actually developing rapidly in the 1950s and 60s, when ordinary families could afford two cars and a house, top marginal tax rates on the rich ran between 70% and 91%. That's what funded the basic research that created all the technology Silicon Valley now profits from.


The Choice We're Making Right Now

Hinton offered a thought experiment: if we had a political system that genuinely worked for people's benefit, we should absolutely want very powerful AI doing everything for us. A WALL-E future could be utopian, provided the system is designed to care for everyone.

But that requires two things we don't have: AI with maternal instincts, and a political system that seeks to help all people all the time.

Instead, we have authoritarians, oligarchs, and systems where some people view others as "mugs to be exploited"—a market to trick into buying useless products that fail quickly.

Sanders asked the critical question: "Do you think that's what Musk and Bezos have in mind? Are they spending hundreds of billions of dollars to lower the work week, guarantee high-quality healthcare to everyone, expand life expectancy, and solve global warming?"

Hinton's response: "Probably not. Probably not."


What This Means for You

If you're a student wondering whether entry-level positions will exist when you graduate, you should be worried. If you're a worker seeing AI capable of doing your job, you should be organizing. If you're a citizen in a democracy where billionaires can buy elections for $270 million, you should be demanding reform.

The technology isn't the problem. The power structure is the problem.

AI will make us collectively richer. The question is whether that wealth gets shared or whether it concentrates even further in the hands of people who already have more money than they could spend in a thousand lifetimes.

As Hinton said about the very rich: "They can easily afford to pay. They'll end up with just a few billion. You don't need that much money."

But they won't pay willingly. They'll buy politicians, fund think tanks, veto reasonable legislation, and frame any attempt at regulation as stifling innovation.

The only counter to that is collective action. As Sanders emphasized in his closing: "We need your generation—and we're seeing progress, I have to tell you, all over the country—standing up and creating a government that works for all of us and not just the interests of big money. That to me is the issue."


The Fog Ahead

Hinton used a powerful metaphor: predicting the future is like driving in fog. You can see clearly for 100 yards, but at 200 yards you see nothing. We can see clearly for 1-2 years, but at 10 years we have no idea.

Ten to fifteen years ago, neural network experts (including Hinton) would have predicted advanced chatbots were 30-50 years away. They arrived in about 10 years.

Whatever we have in 10 years won't be what we expect. That uncertainty isn't an excuse for inaction—it's an argument for extreme caution and proactive regulation.

Because as Hinton warned: "We're still in charge... but we don't know for how long."

The decisions we make in the next few years will determine whether AI becomes a tool for shared prosperity or the final consolidation of oligarchic control.

Choose wisely. Organize accordingly. The future isn't predetermined—it's a political choice we're making right now, whether we realize it or not.


References

- Full conversation available at: https://www.youtube.com/watch?v=edTTeY1Zx-0
- Georgetown University event, November 18, 2025
- Speakers: Dr. Geoffrey Hinton (2024 Nobel laureate in Physics for foundational work on neural networks) and Senator Bernie Sanders


About the author:

Léa Rousseau is a tech reporter for Digiall's official blog, covering the intersection of technology, power, and social impact. She believes the most important questions about AI aren't technical—they're about who controls it and who benefits from it.


What do you think?

This conversation touched on topics Digiall works with daily—implementing AI solutions for businesses while considering their broader impact. The question isn't whether to use AI, but how to deploy it responsibly and who should benefit from the productivity gains it creates. What are your thoughts on the future of AI and work? Share your perspective in the comments below.


#ArtificialIntelligence #AIEthics #EconomicInequality #TechRegulation #LaborAutomation #AIGovernance
