For nearly three years, large language model chatbots have been marketed as the next layer of everyday life—an assistant that can draft emails, summarize readings, generate ideas, and answer questions instantly. In Silicon Valley’s telling, adoption would be smooth and inevitable: once people tried tools like ChatGPT, they would quickly integrate them into schoolwork, jobs, and personal routines, and enthusiasm would follow.
But a new report from The Verge suggests the reality is more complicated, especially for Gen Z. The same young people who are using AI chatbots at high rates are also showing signs of growing skepticism—sometimes even hostility—toward the technology and the institutions pushing it. The key takeaway isn’t simply that “young people dislike AI.” It’s that usage and trust are not moving in lockstep. In fact, the more Gen Z relies on these tools, the more they appear to question what the tools are doing, how reliable they are, and what the broader social tradeoffs might be.
This is a story about adoption without surrender. It’s also a story about how quickly a generation can learn the difference between convenience and confidence.
A generation that learned AI by living with it
Gen Z’s relationship with AI didn’t begin with a single viral moment. It began with constant exposure: school assignments that suddenly allowed “AI help,” workplaces where employees were encouraged to “try the chatbot,” and online spaces where people compared outputs, debated ethics, and posted screenshots of both impressive results and embarrassing failures.
That context matters. When a technology is introduced as optional, people can treat it like a novelty. But when it becomes embedded in workflows—when it shows up in classrooms, productivity tools, customer support systems, and content creation pipelines—it stops being a toy. It becomes infrastructure. And infrastructure invites scrutiny.
The Verge’s reporting points to polling data indicating that Gen Z students and workers are not only among the biggest users of AI chatbots, but also among the most skeptical. That combination—high usage paired with low trust—creates a distinct cultural posture. Instead of “AI will change everything” optimism, there’s a more grounded attitude: “AI is useful, but it’s not trustworthy by default.”
In other words, Gen Z is learning AI the way people learn any system that affects their outcomes: by testing it under pressure.
Why skepticism can rise alongside adoption
It’s tempting to assume that if people use a tool, they must believe in it. Yet the last few years have shown that adoption often happens for practical reasons that don’t require ideological agreement. A student may use a chatbot because it helps them start an essay faster. A worker may use it because it reduces the time spent drafting a first version of a document. A creator may use it because it accelerates ideation.
None of those motivations require faith in the underlying technology. They require only that the tool saves time or lowers friction.
Skepticism rises when the tool’s limitations become visible in real life. Chatbots can produce fluent text that sounds confident even when it’s wrong. They can summarize information inaccurately, invent details, or fail to understand context. They can also reflect biases present in training data or in the prompts and instructions users provide. For Gen Z, these issues aren’t abstract. They show up in grades, deadlines, performance reviews, and public-facing work.
When a chatbot makes a mistake, the cost is rarely evenly distributed. The user typically bears the burden of verification. That dynamic can create resentment: the tool gets credit for speed, while the human gets blamed for errors.
So skepticism isn’t just about fear of AI. It’s about accountability.
The “future of everything” pitch meets the messy present
Silicon Valley’s messaging has often framed AI chatbots as inevitable and transformative. The implication is that resistance is irrational and that the benefits will eventually outweigh the risks. But Gen Z’s lived experience challenges that narrative.
If AI is truly the future, why does it still require constant correction? Why do outputs sometimes contradict themselves? Why do users need to fact-check, cross-reference, and verify sources rather than trusting the system’s claims?
The Verge’s report highlights polling that suggests a broader cultural backlash against AI, with Gen Z playing a major role in it. This doesn’t mean young people are rejecting AI entirely. It means they’re rejecting the idea that AI should be treated as authoritative simply because it’s impressive.
There’s also a deeper mismatch between marketing and reality. Many people expected AI to behave like a knowledgeable assistant. Instead, they encountered a system that generates text based on patterns, not understanding in the human sense. That distinction can be hard to grasp at first, especially when outputs are persuasive. But once users see enough failures—enough hallucinations, enough misinterpretations—the “assistant” framing starts to feel misleading.
And when a product is marketed as a breakthrough, disappointment can turn into anger.
The classroom effect: learning to distrust the output
Education is one of the most important arenas shaping Gen Z’s attitudes. Even when schools don’t formally ban AI, they often struggle to adapt policies quickly enough. Teachers face a dilemma: AI can help students brainstorm and clarify concepts, but it can also undermine assessments designed to measure learning.
As a result, many students develop a dual strategy. They use chatbots to accelerate drafts, improve readability, or generate practice questions. But they also learn to treat AI output as a starting point rather than a final answer. That habit—verify, revise, and double-check—can become a form of skepticism training.
Over time, students may come to view AI as something like spellcheck or grammar assistance: helpful, but not a substitute for thinking. The problem is that the technology is often presented as more than that. When the marketing implies authority and the classroom experience delivers uncertainty, trust erodes.
There’s also the social dimension. Students compare notes. They share examples of AI-generated work that looks polished but fails factual tests. They talk about how easy it is to produce convincing nonsense. In online communities, those stories spread quickly, reinforcing a collective understanding: AI can sound right without being right.
That collective learning can produce a cultural backlash that isn’t anti-technology so much as anti-misrepresentation.
Workplace pressure: efficiency without clarity
In workplaces, AI adoption often arrives with a different kind of tension. Companies want productivity gains, and employees are encouraged to “use the tools.” But the workplace rarely provides clear guidance on what counts as acceptable use, how to verify outputs, or how liability works when AI contributes to mistakes.
For Gen Z workers, this can feel like a trap. If they use AI and something goes wrong, they may be held responsible for the final deliverable. If they don’t use AI, they may be seen as less efficient or less adaptable. Either way, the pressure is real.
This creates a particular kind of skepticism: not just “AI might be wrong,” but “AI is being pushed without adequate safeguards.” When young workers perceive that companies benefit from AI while shifting risk onto individuals, resentment grows.
The Verge’s framing of polling data aligns with this broader pattern. Adoption can be high because the tools are integrated into daily tasks. Skepticism can be high because the incentives and accountability structures remain unclear.
In that environment, Gen Z’s critical stance becomes rational self-defense.
The backlash isn’t uniform—it’s targeted
One reason the story feels compelling is that it avoids a simplistic narrative. The report doesn’t suggest that Gen Z is uniformly anti-AI. Instead, it points to a more nuanced cultural backlash: young people may use chatbots while questioning the claims made about them.
That distinction matters. A person can dislike the hype while still using the tool. They can be frustrated by misinformation while appreciating the convenience. They can criticize the ethics of training data while still relying on AI for brainstorming.
This is where Gen Z’s skepticism becomes interesting: it’s not necessarily a rejection of AI’s existence. It’s a rejection of AI’s authority.
Many young people appear to be developing a more sophisticated mental model of what chatbots are. They understand that outputs are generated, not retrieved. They understand that “confidence” in language doesn’t equal truth. They understand that the system can be manipulated through prompts and that it may reflect the biases of its training.
Once you understand those basics, it becomes harder to accept sweeping promises.
The unique Gen Z angle: they grew up with “always-on” critique
Gen Z’s skepticism also reflects the culture they came of age in. Unlike earlier generations, they have lived through rapid cycles of tech hype, influencer-driven narratives, and public corrections. They’ve watched products launch with grand claims and then get walked back after backlash. They’ve seen how quickly misinformation spreads online—and how quickly it can be weaponized.
So when AI chatbots arrive with a “future of everything” pitch, Gen Z may be predisposed to ask: Who benefits? What’s being left out? What are the failure modes? What happens when the system is wrong?
This doesn’t mean they’re cynical. It means they’re trained to interrogate narratives.
In that sense, the backlash described by The Verge may be less about fear and more about media literacy applied to AI.
What “hate” might actually mean in practice
The title of the Verge piece—framed as young people using AI while hating it—captures attention, but the underlying phenomenon is likely more specific than pure hatred. In everyday terms, skepticism can look like frustration, distrust, and fatigue.
People may “hate” AI when it:
1) produces confident errors,
2) undermines trust in information,
3) threatens to replace human judgment,
4) is used to justify decisions without transparency,
5) is pushed into education and work without adequate guardrails.
But the same people may still use AI because it’s convenient, because it’s already available, or because refusing it would be impractical.
That contradiction—using while disliking—may be the defining emotional pattern of the moment.
A new relationship with tools: from wonder to negotiation
If Gen Z’s attitude is changing, the shift is less from enthusiasm to rejection than from wonder to negotiation: using the tools daily while continuing to bargain over what they should be trusted to do.