AI isn’t winning hearts, and the reason may have less to do with what companies are saying about it—and more to do with what people are being asked to become in order for it to work.
That’s the core through-line of a recent Decoder episode from The Verge, hosted by Nilay Patel, which argues that the backlash to AI is not primarily a communications failure. Instead, it’s a mismatch between two ways of understanding the world: one rooted in “software brain,” and another rooted in lived experience. In this framing, software brain is the habit of treating reality as something that can be captured in algorithms, databases, and repeatable loops—then controlled through structured instructions. It’s a powerful worldview, one that helped build the modern internet economy. But it has limits, especially when it collides with human institutions, human ambiguity, and human privacy.
The episode doesn’t claim AI has no value. It claims something more specific: that many deployments of AI are experienced by ordinary people not as empowerment, but as flattening—an attempt to make human life legible to systems that optimize, predict, and automate. And once you see AI through that lens, the growing negativity in public opinion starts to look less mysterious.
To understand why, it helps to start with the data points the episode highlights. Polling cited in the show suggests that concern about AI is rising across large segments of the public, including among people who use it. The episode points to results where Gen Z appears especially negative—more skeptical, angrier, and less hopeful than older cohorts. It also notes that usage is widespread: many respondents report using tools like ChatGPT or Copilot in the last month. That combination—high exposure and low enthusiasm—matters. If people were only hearing about AI secondhand, you could blame messaging. But when people encounter AI directly every day, the story changes. You can’t simply “market” your way out of what users repeatedly experience in search results, feeds, and assistant features.
Nilay Patel’s argument is that tech leaders often misdiagnose this gap. They assume the problem is persuasion. The episode contrasts that assumption with the lived reality of users. It includes remarks attributed to Sam Altman suggesting that better marketing would improve AI’s popularity, even comparing AI to an unpopular political candidate. The episode pushes back hard: it insists AI doesn’t have a marketing problem because it already has massive distribution and visibility. People aren’t avoiding AI because they haven’t been told what it can do. They’re reacting to what it does to their attention, their expectations, and their sense of control.
So what is “software brain,” exactly? The episode offers a simple definition: seeing the world as a series of databases that can be controlled with the structured language of code. Once you adopt that worldview, it becomes natural to believe that if you can control the data, you can control the outcome. This is how many successful companies operate. Zillow is a database of houses. Uber is a database of cars and riders. YouTube is a database of videos. Even a news site like The Verge can be understood as a database of stories. The pattern is familiar: collect information, structure it, query it, and then automate decisions based on it.
But the episode argues that this approach breaks down when the “database” stops matching reality. Anyone who has run a real system knows this: at some point, you don’t fix the world—you tweak the model. You adjust the database to better fit what you wish were true. That’s not inherently evil; it’s just how modeling works. The problem, the episode suggests, is that the AI industry has lost sight of the boundary between representation and reality. Because AI thrives on data, it encourages ever more conformity: more of life should be captured, categorized, and fed into systems so the systems can act.
The episode illustrates this with an example from government. It describes how Elon Musk and DOGE allegedly moved quickly to take control of databases in the public sector, only to run into the reality that governance isn’t software. The databases weren’t reality; they were partial representations. When the representation fails, the attempt to “control” the system can produce chaos rather than clarity. The takeaway is not that data is useless. It’s that data is not the world, and institutions aren’t obedient machines.
This is where the episode’s most distinctive move comes in: it compares software brain to lawyer brain.
Lawyers, like engineers, rely on structured language. They work with statutes, citations, precedent, and formal procedures. Both professions use structured systems to guide complicated outcomes. Both can feel deterministic from the outside: law looks like a set of rules, and code looks like a set of instructions. But the episode emphasizes a crucial difference. Law is not deterministic. Ambiguity is not a bug; it’s part of the design. The legal system is built to handle contested facts, competing interpretations, and shifting context. That ambiguity is why lawyers exist, and why people often dislike them: because there’s always another argument, another gray area, another plausible reading.
In other words, the episode argues that the “computer-like certainty” people assume about formal systems is often an illusion. The formality makes it feel predictable, but the outcome depends on interpretation, power, and context. That’s why the episode says society and courts aren’t computers—even if they look like they might be. The show even references proposals for automated AI arbitration systems, including an argument that people might accept worse outcomes from automation if they feel heard. Whether such systems are workable is left open, but the underlying point is clear: software brain wants the world to behave like a computer, while human systems are fundamentally interpretive.
If that sounds abstract, the episode brings it back to everyday life by describing how AI is being used in business and beyond. It argues that any process that resembles code talking to a database in a repetitive way is “up for grabs.” That’s why enterprise AI is attractive: businesses already run on software loops—collect data, analyze it, take action, repeat. Businesses also control their data, and they can demand integration across internal systems. In that environment, software brain feels productive. AI can automate tasks, generate reports, and accelerate decision cycles.
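To make that loop concrete, here is a minimal sketch of the kind of “code talking to a database” pattern the episode has in mind. Everything in it is a hypothetical stand-in (the record fields, the analysis rule, the function names), not any real product’s pipeline; the shape of the loop (collect, analyze, act, repeat) is the point.

```python
# A minimal sketch of the enterprise loop the episode describes: collect data,
# analyze it, take action, repeat. All names and rules here are hypothetical
# stand-ins, not any real product's API.

records = [
    {"id": 1, "ticket": "refund request", "handled": False},
    {"id": 2, "ticket": "shipping delay", "handled": False},
]

def analyze(record):
    # Stand-in for the analysis step; a model or rule engine would sit here.
    return "escalate" if "refund" in record["ticket"] else "auto-reply"

def run_once(db):
    # Collect: pull the unhandled rows out of the "database".
    pending = [r for r in db if not r["handled"]]
    # Analyze and act: decide on each row, then write the decision back.
    for record in pending:
        record["action"] = analyze(record)
        record["handled"] = True
    return db

print(run_once(records))
# In production this would run on a schedule, which is the "repeat" part
# that makes the process attractive to automate.
```

Inside a company, where the data is already structured and owned, a loop like this genuinely saves work. The episode’s point is that most of human life does not arrive in rows like these.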
But the episode draws a line: not everything is a business, and not everything is a loop. The entire human experience cannot be captured in a database. That limit is presented as the reason people hate AI. Not because AI is inherently malicious, but because it flattens people—reducing complex lives into measurable inputs and optimizing outputs.
This is where the episode’s title becomes more than a slogan. “The people do not yearn for automation” is not a claim that people never want convenience. It’s a claim that people don’t want to be reorganized around automation, surveillance, and legibility. The episode uses smart home automation as an analogy. Many households have automated lights and climate controls, but major tech companies have struggled to make regular people care deeply about smart home automation as a concept. The show suggests that the reason is emotional and social, not technical. People don’t automatically want their lives to be instrumented and managed by systems they don’t fully control.
AI, in this framing, doesn’t fix that. Most people aren’t collecting comprehensive data about their lives. Even when they are, it’s scattered across incompatible systems: email in Gmail, messages in iMessage, schedules in Outlook, workouts in Peloton. Those systems don’t talk to each other, and there may be little incentive to connect them. Asking people to integrate everything can feel invasive. It can also feel like a threat to autonomy, because the more your life becomes data, the more power accrues to whoever controls the data pipeline.
The episode argues that this is not just a privacy concern; it’s a psychological one. Merely contemplating how much of one’s life could be captured in databases can make people unhappy. No one wants constant surveillance, especially not surveillance that increases corporate power. And yet, the AI industry’s default path often assumes that more integration is better: more access, more context, more memory, more continuous background processing.
The show reinforces this with a quote attributed to Ezra Klein describing Silicon Valley’s push toward making people “legible” to AI. The idea is that AI assistants become more valuable when they can access files, email, calendar, and messages—building persistent memory of preferences and patterns. The episode acknowledges the cybersecurity risks, but it emphasizes the logic: the more of your life you open to AI, the more valuable the AI becomes. That logic is coherent from a product standpoint. It is also coherent from a software brain standpoint: if you can model the user as a dataset, you can optimize the interaction.
But for ordinary people, the tradeoff can feel wrong. It can feel like the price of usefulness is surrendering boundaries. It can feel like the system is not helping you do things; it’s reshaping you into something the system can digest.
This is why the episode’s critique lands differently than typical “AI will replace jobs” narratives. Job displacement is one fear, and the episode references executives who warn about employment crises. But the show suggests that the deeper emotional driver is not only economic. It’s existential in a quieter way: people are being asked to accept a new relationship between themselves and technology, one where their lives are flattened into inputs.
That flattening shows up in subtle product design choices. The episode mentions meeting tools with AI note takers, design tools that connect to corporate email systems, and the broader trend of integrating AI into workflows. Each feature can be framed as helpful. But collectively, they create a world where more activity is captured, summarized, and routed through systems that can be queried and acted upon. Even when the intent is benign, the effect can be to reduce human complexity into machine-readable traces.
The episode’s most pointed line is essentially that asking people to adapt to computers is a failure mode. Computers should adapt to people. Asking people to become more legible to software—turning themselves into a database—is that failure mode taken to its logical conclusion.
