Graduation season has always been a strange mix of celebration and forecasting. Families cheer, schools reflect, and students—often for the first time—try to imagine what their lives will look like after the cap and gown come off. In 2026, that forecasting is happening in a world where artificial intelligence is no longer a distant concept or a niche research topic. It’s embedded in search, customer support, creative tools, hiring workflows, fraud detection systems, and the everyday software people use to write, design, code, and organize.
And yet, according to recent reporting and conversations across campuses, the presence of AI in students’ minds doesn’t automatically translate into excitement. Awareness is high. Confidence is mixed. Enthusiasm is uneven. The result is a graduation conversation that feels less like a victory lap and more like a calibration exercise: students are trying to understand what’s real, what’s hype, what’s controllable, and what they can actually prepare for before the future arrives.
This shift matters because commencement speeches are built on momentum. They’re supposed to lift graduates toward possibility. They’re meant to be memorable, inspiring, and—ideally—useful. But when the future being described is dominated by a technology that many students experience as both powerful and unpredictable, the usual rhetorical approach can backfire. If the speech leans too hard into inevitability, it can sound like surrender. If it avoids the topic entirely, it can feel out of touch. And if it oversells, it risks turning a once-in-a-lifetime moment into a reminder that the world moves faster than promises.
What students appear to want instead is not a single grand narrative about AI, but clarity about how to live and work inside an AI-shaped economy without losing agency.
AI is everywhere—so why doesn’t excitement follow?
One reason excitement doesn’t automatically follow awareness is that AI is already “in the room,” but not always in a way that feels personal. Students may recognize that AI is changing industries, but the changes can be diffuse. A student might see AI-generated content online, watch automation creep into customer service, or notice that certain tasks are easier than they used to be. But those observations don’t necessarily answer the questions that matter most right now: Will my job exist? Will my skills still be valuable? How do I prove I can do the work? What does “good” look like when tools can generate drafts, summaries, and even code?
In other words, AI can be visible without being legible. It’s present in outcomes, but not always explained in a way that helps individuals map their own path forward.
There’s also a psychological factor. When a technology is framed as transformative, people often expect a corresponding emotional response—wonder, optimism, or at least curiosity. But students are living through a period where transformation is paired with uncertainty. They’ve watched headlines swing between utopian productivity and dystopian disruption. They’ve seen rapid product rollouts followed by policy debates, lawsuits, and sudden changes in what’s allowed or expected. That volatility makes it harder to feel stable enthusiasm. Instead, students tend to respond with caution and pragmatism.
The “readiness” conversation is replacing the “hype” conversation
Across campus discussions, the dominant theme is readiness. Students aren’t ignoring AI; they’re trying to translate it into actionable preparation. That means asking questions like:
What skills should I build that won’t be easily replaced?
How do I work effectively with AI tools rather than competing against them?
How do I evaluate whether AI outputs are correct, biased, or useful?
What does ethical use mean in my field?
How do I communicate my value when parts of the workflow can be automated?
This is a different kind of conversation than the one that dominated earlier waves of tech optimism. It’s less about “the future will be amazing” and more about “the future will require judgment.” Students are increasingly focused on the human capabilities that remain difficult to automate: problem framing, domain understanding, critical thinking, collaboration, and the ability to make decisions under uncertainty.
That emphasis on judgment is also why some students react skeptically to broad claims about AI replacing entire roles. Even when automation is real, the lived experience of students is that work is changing in messy ways. Tasks get reorganized. Responsibilities shift. New tools appear. Some jobs shrink; others expand. Many roles become hybrids—part human, part machine-assisted. The result is that students don’t just want to know whether AI will “take jobs.” They want to know how their specific career track will evolve.
The abstractness problem: “AI-shaped futures” can feel far away
Another reason excitement is hard to generate is that “AI-shaped futures” can feel abstract. Graduation is a milestone, but it’s also a deadline. Students are making near-term decisions about internships, entry-level roles, graduate school, and first apartments. When AI is discussed in sweeping terms—global economic transformation, existential risk, superintelligence—it can feel like a distant weather system rather than something that affects tomorrow’s resume.
Even students who are curious about AI may struggle to connect it to their immediate goals. A student studying literature might wonder how AI changes publishing without turning their passion into a commodity. A student in healthcare might ask how AI affects clinical decision-making while still emphasizing patient trust and safety. A student in business might want to know how AI changes marketing strategy without reducing creativity to prompt engineering.
When AI is presented as a monolith, it becomes harder to see where individuals fit. When it’s presented as a set of tools and tradeoffs, students can start to imagine themselves inside the system.
This is where commencement messaging often stumbles. Speeches are designed to be coherent and uplifting. But AI is not coherent in the way a traditional “future of work” narrative is. It’s uneven across industries, inconsistent across companies, and dependent on regulation, data quality, and organizational culture. The future isn’t one road; it’s a network of branching paths.
So students respond by seeking specificity. They want examples, not slogans. They want to hear how people in real roles are adapting—not just how the technology works in theory.
The credibility gap: students have learned to distrust certainty
Students are also navigating a credibility gap. They’ve grown up in an era of constant prediction. Every year seems to bring a new “this will change everything” claim. Some predictions come true; others fade. Meanwhile, the pace of change creates a sense that today’s certainty could be tomorrow’s embarrassment.
That doesn’t mean students are cynical. It means they’re careful. When someone tells them “AI will do X” without acknowledging uncertainty, they may interpret it as marketing. When someone tells them “AI will never do X,” they may interpret it as denial. The middle ground—“here’s what we know, here’s what we don’t, and here’s how to stay adaptable”—is more likely to land.
Commencement speakers, especially high-profile ones, often have a challenge: they’re expected to deliver confidence. But confidence without nuance can feel like a mismatch for a generation that has learned to live with partial information.
A unique take on the “don’t mention AI” idea
The headline premise—maybe don’t mention AI—sounds counterintuitive in a year when AI is arguably the defining technology conversation. But the underlying logic is less about avoidance and more about timing and tone. If a speech is delivered in a way that treats AI as the central character of everyone’s future, it can overwhelm the message. Graduates may leave feeling like their lives are being narrated by forces they can’t control.
The alternative is not silence; it’s framing. If AI is mentioned, it should be treated as context rather than destiny. It should be used to illustrate a broader point about learning, ethics, adaptability, and responsibility—without implying that the technology itself is the main event.
In practice, that means shifting from “AI will change everything” to “the world will keep changing, and you can choose how you respond.” It also means recognizing that students are already thinking about AI. They don’t need a lecture; they need a perspective that helps them act.
That perspective can be delivered through themes that resonate regardless of whether AI is named: the importance of lifelong learning, the value of skepticism, the power of curiosity, and the responsibility that comes with influence. AI becomes one example among many of how tools can amplify both good and harm.
What students are actually asking for in the workplace
Beyond speeches, students are looking for guidance on how to operate in organizations where AI is being adopted unevenly. Many workplaces are experimenting with AI tools, but adoption is rarely uniform. Some teams use AI for drafting and summarization. Others use it for analytics. Some integrate it into customer-facing products. Others restrict it due to compliance concerns. Students are trying to understand what “responsible use” looks like in real settings.
They’re also asking how to demonstrate competence when AI can produce outputs quickly. In many fields, speed is no longer the differentiator. The differentiator becomes the ability to ask the right questions, verify results, and apply domain knowledge to decide what to do next.
That’s why students often emphasize skills like:
Critical evaluation: knowing how to test outputs, spot errors, and detect bias.
Communication: translating complex ideas into clear decisions for humans.
Ethics and governance: understanding privacy, consent, and accountability.
Systems thinking: seeing how AI fits into workflows, incentives, and risk management.
Collaboration: working with cross-functional teams that include legal, security, product, and operations.
These are not glamorous skills, but they are the skills that make AI useful rather than dangerous. They also align with what employers say they want—though employers sometimes struggle to articulate it clearly.
The “future” is not just technical; it’s institutional
One of the most overlooked aspects of AI’s impact is that technology doesn’t operate in a vacuum. Institutions shape outcomes. Policies determine what data can be used. Procurement rules determine which tools are allowed. Legal frameworks influence liability. Company culture influences whether employees feel safe experimenting with new tools at all, or quietly work around official policy.
