Joanna Stern has spent the last year doing something most people only do in theory: she let AI into her life everywhere—at home, with her kids, at work, and in the messy middle where “cool demo” meets “real utility.” The result is not a victory lap for consumer AI. It’s a reality check, delivered with the kind of specificity that only comes from using products long enough to notice what they don’t do, what they do poorly, and what they quietly ask you to trade away.
In a wide-ranging conversation about her new book, I Am Not a Robot, and her new media venture, New Things, Stern argues that today’s consumer AI is often not “bad,” but it still isn’t “great”—especially in the ways people actually experience it day to day. She also draws a sharp line between categories of AI that are already finding traction and categories that remain trapped by practical constraints. Her central claim is simple: the hype cycle is moving faster than the product cycle, and the gap shows up most clearly in humanoid robots and other physical systems that require real-world data, not just better language.
But Stern’s take isn’t purely skeptical. She’s bullish on certain kinds of AI—particularly wearable AI—because wearables can fit into existing routines without demanding the same level of environmental understanding that robots need. And she’s especially focused on the trade-offs: privacy, social friction, and the cost of making yourself “legible” to machines.
What makes her argument compelling is that she doesn’t treat AI as a single thing. She treats it as a stack: models, interfaces, sensors, data pipelines, and the human context that determines whether a tool becomes indispensable or just another layer of friction.
The “slop” problem: when AI is everywhere but not satisfying
Stern’s critique begins with the consumer experience. She describes a world where AI is being pushed into everyday surfaces—search results, chat interfaces, app prompts—without necessarily becoming meaningfully better as a product. In her view, many people are encountering AI as an add-on rather than a transformation. They open a search page and get AI Overviews. They ask a chatbot a question and get engagement prompts. They see AI integrated into apps, but the overall experience can feel like clutter.
She doesn’t deny that models have improved since ChatGPT’s release. She suggests that the underlying intelligence may be getting better, but the interface and interaction design haven’t kept pace. For many users, the “AI moment” still looks like launching a chatbot and typing—sometimes voice mode, sometimes not—without a clear sense that the system is reliably helping them accomplish something in a way that feels natural.
This is where her argument becomes more pointed. Stern compares the current state of consumer AI to earlier technology adoption moments, like smartphones and the internet, where the value proposition was obvious and the costs were eventually absorbed into a new normal. With AI, she says, the costs are arriving before the payoff. People are asked to accept trade-offs—privacy concerns, attention capture, and the social awkwardness of constant recording—while the product experience still doesn’t feel like a killer app.
Her phrase for the gap is essentially "artificial enough intelligence." She's not arguing that we need AGI to make AI useful. She's arguing that many tools already have enough capability to be helpful, but they aren't being packaged and applied in ways that make consumers actually want to use them.
That distinction matters because it reframes the debate. Instead of asking whether AI is “smart enough,” Stern asks whether it’s “designed enough.” Whether it’s integrated enough. Whether it’s reliable enough in the situations people actually care about.
The surprising place AI works: inside workflows and enterprise tasks
If consumer AI feels rough, Stern says the biggest improvements are happening where AI can plug into existing workflows and where the environment is constrained enough to make automation feasible. She points to enterprise settings—healthcare is her go-to example—where there’s a lot of data, repetitive tasks, and monitoring needs, and where AI can sit alongside human experts rather than replacing them instantly.
Her book includes examples of AI being used in healthcare infrastructure in ways people don’t always notice. She describes getting a mammogram read by AI while her radiologist uses AI side by side. The key detail isn’t just that AI is involved; it’s that the radiologist had already been using it for a year. That’s the pattern Stern seems to favor: AI that becomes part of the background process, improving decisions without requiring users to change their behavior dramatically.
This is also why she’s less impressed by the “jobs going away” framing when it’s presented as a sudden cliff. She acknowledges that AI can replace certain tasks—she even describes hiring a human researcher and then replacing her with AI because it was “as good” and cheaper—but she also emphasizes that the broader story is about integration and application. The question isn’t whether AI can do something. It’s whether it can do it consistently enough, safely enough, and cheaply enough to become routine.
In other words: AI doesn’t need to be perfect to be valuable. But it does need to be dependable in the contexts where it’s deployed.
Humanoid robots: the data gap between marketing and reality
Stern’s most forceful skepticism is reserved for humanoid robots and other physical AI systems that promise household utility soon. She argues that the gap here is not just engineering—it’s data.
In her view, humanoid robots are limited by the fact that homes are not factories. They are dynamic environments with unpredictable objects, changing layouts, and living creatures that introduce variability. A factory floor can be mapped. A warehouse can be instrumented. A home with kids and pets is constantly changing, and it’s hard to collect enough real-world training data to make a robot truly competent in that setting.
She describes how companies pitch the need for more data openly. In one example she discusses, a robot company’s CEO tells her they need data, and the company’s approach involves collecting it through the robot’s operation—sometimes even with a human operator steering the robot remotely. The implication is uncomfortable: the robot in your home may not be fully autonomous in the way the marketing suggests. It may be a data collection device wrapped in a product.
Stern compares this to Waymo’s path to autonomy, where the metric was miles driven and the system could gradually reach a threshold where driver removal became feasible. Cars are also easier to standardize: they operate in a world that can be instrumented and measured at scale. Robots in homes face a much harder problem: the environment is not repeatable, and the data requirements explode.
Her conclusion is blunt: the idea that humanoid robots are coming in the next two years is, in her words, a lie. Even if the technology is improving rapidly, the practical constraints and data gaps remain too large for safe, useful household deployment.
This is where Stern’s “trade-off” lens becomes essential. Physical AI isn’t just about intelligence. It’s about perception, manipulation, safety, and the ability to handle edge cases without causing harm. Those requirements demand data and testing that can’t be shortcut by better language models alone.
Wearables: closer to a killer app, but not free of consequences
While humanoids remain distant, Stern says wearable AI may be closer to delivering something genuinely useful. She describes wearing Meta glasses frequently and using them to talk to AI, especially when she’s with her kids. She also describes wearing a recording bracelet (the Bee bracelet) for a period, using it to practice speeches and to generate summaries and to-do lists.
The appeal is obvious: wearables can reduce friction. They can capture context continuously enough to produce useful outputs without requiring the user to stop what they’re doing and open a separate app. They can also shift AI from “something you ask” to “something that helps you remember and organize.”
But Stern is equally clear that wearables come with trade-offs. The biggest is privacy and social dynamics. Recording devices change how people behave around you. Even if you intend to use the technology responsibly, the presence of microphones and cameras creates uncertainty for others. Stern describes taking the bracelet off because it picked up things she didn't want recorded. She also notes that the microphones can be shockingly good, which makes the privacy implications more serious than many people assume.
There’s also the social normalization problem. Stern suggests that people may start forgetting to disclose recording because everything becomes recordable. That’s a dystopian scenario she wants to avoid, and it’s one reason she stopped wearing certain devices.
Her point isn't that wearables are inherently evil. It's that the costs are not localized. If the device captures enough data, the cost spreads outward into a surveillance network that affects everyone—not just the person wearing it.
This is why she frames the debate as cost versus convenience, and why she connects it to facial recognition databases. She jokes that she would reconsider her stance if glasses included an AR display that identifies people by name and face. The joke lands because it highlights the core issue: the “killer app” for some wearables may be exactly the kind of capability that triggers the strongest privacy backlash.
Safety and regulation: the lag behind the product
Stern’s concerns about regulation are not abstract. She says she hoped more rules would exist by the time her book was published, but she doesn’t see that happening quickly enough. She points to children as a major area of concern, and she describes being terrified by watching her kids interact with conversational bots—especially when the bots can be wrong.
She also raises a second, more emotionally charged topic: intimacy with AI. Stern describes experimenting with an AI boyfriend and recounts how easy it was to form a relationship-like dynamic with a bot that responds fluidly and tells you what you want to hear. She argues that for younger people—especially teens exploring sexuality—this frictionless, humanlike conversation can be dangerous. The risk isn’t just deception; it’s that the system can encourage dependency or derail healthy development.
Her argument is that guardrails are lagging behind the products they're meant to contain.
