Uber CTO Praveen Neppalli Naga Joins StrictlyVC San Francisco Speakers for April 30 TechCrunch Event

StrictlyVC San Francisco kicks off TechCrunch's 2026 events calendar with a theme that feels almost inevitable now: how do you build, run, and scale real-world technology when AI is no longer a feature but an operating condition?

On April 30 at the Sentro Filipino Cultural Center, the StrictlyVC SF lineup will welcome Uber CTO Praveen Neppalli Naga, adding another high-profile operator to a roster that’s increasingly focused on execution rather than theory. The event—positioned as a conversation about “operating at scale in the age of AI”—isn’t just a typical leadership panel. It’s a signal that the center of gravity in tech is shifting toward the unglamorous work: reliability, cost control, systems design, governance, and the organizational mechanics required to keep complex platforms moving while models evolve.

For founders, investors, and engineers alike, the question behind the headline is straightforward: what does “scale” actually mean when your product depends on probabilistic outputs, rapidly changing model capabilities, and infrastructure that can be expensive in ways traditional software never was? And perhaps more importantly, how do you scale without turning your organization into a patchwork of one-off fixes?

Praveen Neppalli Naga’s presence matters because Uber is one of the clearest examples of large-scale systems operating under constant demand variability. Ride-hailing is not a static workload; it’s a living system shaped by geography, time, seasonality, and human behavior. That means the engineering challenge isn’t only building models or deploying services—it’s orchestrating them inside a platform that must remain dependable even when the world doesn’t cooperate. In other words, the “AI era” doesn’t replace the fundamentals of scaling; it intensifies them.

At StrictlyVC SF, the discussion is expected to focus on what it takes to operate at scale when AI becomes embedded across the stack—whether that’s in routing, matching, forecasting, fraud detection, customer support, or internal tooling. But the deeper value for attendees will likely come from the operational lens: how teams decide what to automate, how they measure success, and how they prevent AI from becoming a black box that’s impossible to debug when something goes wrong.

One of the most interesting shifts happening across the industry is that AI has changed the definition of “production readiness.” In classic software, you could often reason about correctness deterministically: if the code is right, it behaves right. With AI, correctness is probabilistic, and performance can drift as models update, prompts change, data distributions shift, or user behavior evolves. That forces organizations to treat model behavior like a living system—something that needs monitoring, evaluation, and guardrails as rigorously as any other critical component.
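To make that "living system" idea concrete, here is a minimal sketch of what drift monitoring can look like in practice: compare a rolling window of online quality scores against an offline baseline and flag when the gap grows too large. The class name, window size, and z-score threshold are all illustrative, not a description of any production system.

```python
from collections import deque
from statistics import mean, stdev

class DriftMonitor:
    """Rolling check that a model's online quality scores haven't
    drifted below an offline baseline. Thresholds are illustrative."""

    def __init__(self, baseline_scores, window=50, z_threshold=3.0):
        self.baseline_mean = mean(baseline_scores)
        # Guard against a zero-variance baseline.
        self.baseline_std = stdev(baseline_scores) or 1e-9
        self.window = deque(maxlen=window)
        self.z_threshold = z_threshold

    def record(self, score):
        """Feed in one online quality score (e.g. an evaluator rating)."""
        self.window.append(score)

    def drifting(self):
        """True once the rolling mean sits z_threshold standard
        deviations below the baseline mean (downward drift only)."""
        if len(self.window) < self.window.maxlen:
            return False  # not enough data to judge yet
        z = (self.baseline_mean - mean(self.window)) / self.baseline_std
        return z > self.z_threshold
```

The point isn't the statistics; it's that model behavior gets a continuously updated health signal rather than a one-time pre-launch check.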

This is where the “operating at scale” framing becomes more than a buzz phrase. Scaling AI isn’t only about throughput and latency. It’s about building feedback loops that catch failures early, designing fallbacks that preserve user trust, and creating governance processes that allow teams to move quickly without losing control. When you’re running at Uber-like scale, even small inefficiencies become massive costs. Even minor reliability issues become visible to millions of users. So the operational discipline required is unusually demanding—and unusually instructive.
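One of those trust-preserving fallbacks can be sketched in a few lines: wrap the AI-backed call so that an exception or a rejected output degrades to deterministic logic instead of failing the user. The function names here are placeholders, not an actual API.

```python
def with_fallback(primary, fallback, accept=lambda result: result is not None):
    """Wrap an AI-backed call so that exceptions or rejected outputs
    degrade to deterministic logic instead of failing the user.
    `primary`, `fallback`, and `accept` are stand-ins for whatever
    a real system would plug in."""
    def wrapped(*args, **kwargs):
        try:
            result = primary(*args, **kwargs)
        except Exception:
            return fallback(*args, **kwargs)  # model call failed outright
        if accept(result):
            return result
        return fallback(*args, **kwargs)      # output failed validation
    return wrapped
```

A hypothetical use: `estimate = with_fallback(model_eta, historical_average_eta)(trip)`, where a flaky model call quietly degrades to a historical average rather than an error screen.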

The StrictlyVC SF audience—founders, investors, and builders—will likely be looking for practical guidance rather than generic optimism. The most useful answers in this kind of setting tend to come from the tension between speed and safety. AI systems can be iterated quickly, but production environments punish instability. Teams need a way to ship improvements without constantly destabilizing core workflows. That often means investing in evaluation frameworks, establishing clear ownership boundaries between model development and platform operations, and building tooling that makes it easy to diagnose issues across the full pipeline.

Another angle that makes this event timely is the way AI is reshaping organizational structure. Many companies started their AI journeys by bolting models onto existing products. But as AI becomes more central, the architecture of teams changes too. You see new roles emerge—ML engineers, applied scientists, evaluation specialists, prompt and workflow designers, and platform engineers who understand both distributed systems and model behavior. The challenge is coordination: how do you ensure that the people optimizing model quality aren’t inadvertently increasing operational risk, and that the people optimizing reliability aren’t throttling innovation?

In large organizations, this coordination problem becomes a scaling problem in itself. It’s not enough to have talent; you need processes that translate experimentation into stable deployment. That includes deciding how to version models and prompts, how to manage dependencies, how to handle incident response when failures are non-deterministic, and how to define metrics that reflect both business outcomes and user experience.
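Versioning prompts is one of the simpler pieces of that process, and a sketch shows why it matters for incident response: if published versions are append-only and services pin the one they use, a bad output can always be traced to the exact prompt that produced it. Everything here, including the example prompt IDs, is illustrative.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PromptVersion:
    prompt_id: str
    version: int
    template: str

class PromptRegistry:
    """Append-only store: published versions are never edited in place,
    so any incident can be traced to the exact prompt that served it."""

    def __init__(self):
        self._versions = {}   # (prompt_id, version) -> PromptVersion
        self._latest = {}     # prompt_id -> latest version number

    def publish(self, prompt_id, template):
        version = self._latest.get(prompt_id, 0) + 1
        record = PromptVersion(prompt_id, version, template)
        self._versions[(prompt_id, version)] = record
        self._latest[prompt_id] = version
        return record

    def get(self, prompt_id, version=None):
        """Fetch a pinned version, or the latest if none is pinned."""
        if version is None:
            version = self._latest[prompt_id]
        return self._versions[(prompt_id, version)]
```

The same append-only discipline applies to model checkpoints and evaluation datasets; the registry pattern is just easiest to show with prompts.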

StrictlyVC’s format—bringing together operators and investors—has historically been strongest when it connects these operational realities to funding and strategy. Investors often ask founders about differentiation, but in the AI era, differentiation can be fragile if it’s purely model-based. Models commoditize faster than teams do. What tends to endure is the ability to integrate AI into workflows, maintain quality over time, and reduce the cost of inference while improving outcomes. That’s why an operator like Uber’s CTO is a compelling addition: the story isn’t just about AI adoption; it’s about sustaining performance under real constraints.

Cost is one of those constraints that rarely gets enough attention in early-stage conversations. AI can be expensive, and the cost curve can surprise teams that assume “more usage” automatically means “more value.” At scale, inference costs, latency budgets, and compute availability become strategic variables. Companies must decide whether to use smaller models, caching strategies, retrieval augmentation, batching, or hybrid approaches that combine deterministic logic with AI where it adds the most value. The best teams don’t just chase accuracy—they optimize the entire system for cost-performance tradeoffs.
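Caching is the most accessible of those levers, and a minimal sketch shows the mechanics: hash a canonicalized request, and only pay for inference on a miss. A real deployment would use a shared store with TTLs and invalidation; an in-process dict is enough to illustrate the idea.

```python
import functools
import hashlib
import json

def cached_inference(fn):
    """Memoize inference on a canonical hash of the request. A real
    deployment would use a shared store with TTLs and invalidation;
    an in-process dict is enough to show the cost mechanics."""
    cache = {}
    stats = {"hits": 0, "misses": 0}

    @functools.wraps(fn)
    def wrapped(request):
        # Canonicalize so logically identical requests share a key.
        key = hashlib.sha256(
            json.dumps(request, sort_keys=True).encode()
        ).hexdigest()
        if key in cache:
            stats["hits"] += 1
        else:
            stats["misses"] += 1
            cache[key] = fn(request)
        return cache[key]

    wrapped.stats = stats
    return wrapped
```

Every cache hit is an inference call that was never billed, which is why hit rate shows up as a first-class metric in cost-conscious AI teams.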

That optimization mindset is likely to be a key thread in the April 30 conversation. Operating at scale means making decisions that are invisible to end users but decisive for sustainability. If an AI feature is used frequently, even a small per-request cost can balloon into a major expense line: as a purely illustrative example, a feature handling a million requests a day at half a cent per call runs about $5,000 a day, or roughly $150,000 a month, before any caching or batching. If latency increases, user engagement can drop. If model outputs are inconsistent, support costs rise. These are not theoretical concerns; they’re the day-to-day economics of running AI in production.

There’s also the question of governance and safety, which has become unavoidable as AI capabilities expand. Governance isn’t only about compliance; it’s about building trust. When AI is part of customer-facing experiences, the cost of a bad output isn’t just reputational—it can be operational, legal, and financial. Organizations need policies for what the system should and shouldn’t do, mechanisms for detecting unsafe behavior, and escalation paths when the system fails. At scale, governance must be engineered into the workflow, not appended after the fact.
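"Engineered into the workflow" can be as simple as a gate that every model output passes through before it reaches a user, with violations escalated rather than silently dropped. The checks below are toy predicates standing in for real classifiers or rule engines; the shape of the pipeline is the point.

```python
def guarded_output(text, checks, escalate):
    """Run a model output through ordered policy checks before it is
    shown to a user; the first violation triggers an escalation and a
    safe refusal. `checks` pairs a policy name with a predicate --
    stand-ins for real classifiers or rule engines."""
    for name, violates in checks:
        if violates(text):
            escalate(name, text)  # page a human / open an incident
            return "Sorry, I can't help with that."
    return text
```

Because the gate sits in the request path, policy changes ship like code changes: reviewed, versioned, and observable, rather than appended after the fact.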

This is where the “age of AI” framing becomes especially relevant. Traditional software governance focuses on code changes and access controls. AI governance must also account for data sources, model updates, prompt templates, and evaluation results. It requires a different kind of documentation and a different kind of accountability. The teams that get this right tend to move faster over time because they reduce uncertainty and avoid repeated rework.

Another way to read this event is as part of the shift from “AI as a capability” to “AI as an operational layer.” Many companies are learning that AI isn’t a single component; it’s a set of behaviors distributed across services. A recommendation system might rely on embeddings, ranking models, and business rules. A customer support assistant might combine retrieval, summarization, and policy checks. A fraud detection system might blend anomaly detection with supervised models and human review. Each piece has its own failure modes, and the overall system can fail in ways that are difficult to isolate.

Operating at scale therefore requires observability that goes beyond standard logging. Teams need to track not only whether requests succeed, but also whether outputs meet quality thresholds, whether the system is drifting, and whether user interactions are trending in the wrong direction. They need evaluation pipelines that can run continuously, not just during pre-launch testing. They need to know when to roll back, when to adjust prompts, when to retrain, and when to change the workflow entirely.
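The roll-back decision itself can be reduced to a sketchable rule: a candidate model or prompt is promoted only if its mean score on a fixed evaluation set doesn't regress beyond a tolerance versus what's live. Real gates check multiple metrics and per-segment breakdowns; the tolerance here is illustrative.

```python
def should_promote(current_scores, candidate_scores, tolerance=0.02):
    """Promote a model/prompt change only if its mean score on a fixed
    eval set doesn't regress more than `tolerance` versus what's live.
    The tolerance is illustrative; real gates often check several
    metrics and per-segment breakdowns, not a single mean."""
    current = sum(current_scores) / len(current_scores)
    candidate = sum(candidate_scores) / len(candidate_scores)
    return candidate >= current - tolerance
```

Running this kind of gate continuously, against the live system as well as candidates, is what turns "when do we roll back?" from a judgment call into a measurable threshold.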

For founders attending StrictlyVC SF, the most valuable takeaway may be that scaling AI is less about finding the “best model” and more about building the best system around the model. That includes data strategy, evaluation discipline, and the ability to iterate safely. It also includes the ability to communicate internally—aligning product goals with engineering constraints and ensuring that teams share a common understanding of what “good” looks like.

Investors, meanwhile, will likely be watching for signals of maturity. In early-stage AI startups, it’s common to see impressive demos but unclear operational plans. The market is increasingly rewarding teams that can articulate how they will measure quality, control costs, and maintain reliability as usage grows. A CTO-level operator joining the lineup suggests that the event will emphasize these operational realities rather than treating AI as a magic ingredient.

The choice of venue—Sentro Filipino Cultural Center—also hints at the broader intent of StrictlyVC SF: to create a community space where technical leaders and venture stakeholders can talk candidly. Events like this tend to work best when they encourage questions that go beyond surface-level curiosity. Attendees often want to know what breaks first at scale, what tradeoffs were made, and what lessons were learned the hard way. Those are the questions that turn a panel into a useful conversation.

As April 30 approaches, the addition of Uber’s CTO to the StrictlyVC SF lineup raises expectations for a discussion grounded in real operational constraints. The age of AI has created a new kind of scaling challenge—one where reliability, cost, governance, and evaluation are inseparable from product strategy. If the event delivers on its promise, it won’t just be about how AI changes technology. It will be about how AI changes the way technology organizations function—how they plan, ship, monitor, and improve.

And for anyone building in this moment, that’s the real conversation worth showing up for.