The AI boom has a familiar rhythm: first comes the flash of breakthrough models, then the scramble to build the infrastructure that can actually run them at scale. For a while, the market narrative seemed to split neatly into two camps—new AI-first companies and the “old IT” incumbents that were supposed to be left behind. But recent reporting and industry signals point to a different reality. The pendulum is swinging back. Established vendors in servers, semiconductors, and enterprise software are making a more deliberate bid for AI relevance, not by trying to out-invent the newest accelerators, but by repositioning themselves around the parts of AI deployment that are hardest to get right: reliability, integration, cost control, security, and operations.
This shift matters because the center of gravity in AI is moving. The early phase of adoption—hackathons, prototypes, and proof-of-concept deployments—can often be done with whatever compute is available. The next phase is different. When AI becomes embedded in customer-facing products, internal workflows, and mission-critical decision systems, the requirements change. Organizations need predictable performance, governance, observability, and repeatable deployment pipelines. They also need to manage heterogeneous workloads: training jobs, inference services, batch processing, retrieval-augmented generation (RAG), and long-running agent systems. In that world, “AI relevance” is less about owning the most famous model and more about owning the stack that keeps AI running.
Old IT’s renewed push is therefore not just a marketing campaign. It reflects a practical recognition: the bottleneck is increasingly operational. Even when the best chips exist, they must be fed by the right servers, connected by the right networking, scheduled by the right software, and monitored by the right tooling. And those layers—often boring to demo, but essential to scale—are where incumbents have deep experience.
Servers: from generic compute to AI-ready platforms
One of the clearest areas of renewed investment is servers designed specifically for AI workloads. The term “AI server” can sound like a rebrand of standard hardware, but the differences are real. AI workloads are sensitive to memory bandwidth, interconnect latency, storage throughput, and power efficiency. They also tend to stress systems in ways traditional enterprise applications don’t: large-scale parallelism, frequent data movement between CPU and accelerator, and heavy reliance on high-speed networking for distributed training.
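A rough calculation shows why interconnect and memory bandwidth figure so heavily in server design: the time just to move a set of model weights scales directly with link speed, and that movement happens over and over in distributed training and multi-node inference. The payload size and bandwidth figures below are illustrative round numbers, not measurements of any particular system.

```python
# Illustrative data-movement arithmetic: time to transfer a set of model
# weights at different link bandwidths. All figures are hypothetical round
# numbers, not benchmarks of any specific server or interconnect.

def transfer_seconds(payload_gb: float, bandwidth_gb_per_s: float) -> float:
    """Seconds to move `payload_gb` gigabytes at a sustained bandwidth."""
    return payload_gb / bandwidth_gb_per_s

weights_gb = 140.0  # e.g. a large model's weights in half precision (illustrative)
for label, bw in [("commodity link", 25.0), ("high-bandwidth fabric", 400.0)]:
    print(f"{label:>22}: {transfer_seconds(weights_gb, bw):6.2f} s to move {weights_gb:.0f} GB")
```

The gap compounds every time the same data has to move again, which is why the plumbing around the accelerator shapes the platform as much as the accelerator itself.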
Incumbent server vendors are responding by offering platforms that are easier to deploy for AI teams. That includes better support for multi-GPU configurations, improved thermal design for sustained loads, and more robust firmware and management tooling. But the deeper value proposition is integration: validated configurations that reduce the time between “we bought hardware” and “we can train or serve reliably.”
In practice, many organizations discover that the hardest part of scaling AI isn’t acquiring GPUs—it’s building a stable environment where drivers, firmware, networking, and orchestration tools work together. Old IT vendors are leaning into this by packaging reference architectures and offering tighter lifecycle management. Instead of selling components that customers must assemble and troubleshoot, they’re increasingly selling “known-good” stacks.
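To make the "known-good stack" idea concrete, here is a minimal sketch of the kind of validation script a platform team might run on each node before admitting it to a cluster. The manifest values are hypothetical, and the check shown covers only the driver; a real reference architecture would pin firmware, CUDA runtime, framework, and networking versions the same way.

```python
"""Minimal sketch of a "known-good stack" check against a hypothetical
approved manifest. Version strings are illustrative, not recommendations."""

import subprocess

# Hypothetical validated combination a platform team might publish.
KNOWN_GOOD = {
    "driver_version": "550.54.15",  # illustrative value
}

def installed_driver_version() -> str | None:
    """Query the GPU driver version via nvidia-smi, if it is present."""
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip().splitlines()[0]
    except (FileNotFoundError, subprocess.CalledProcessError, IndexError):
        return None

def check_stack() -> list[str]:
    """Return a list of mismatches between the live node and the manifest."""
    problems = []
    driver = installed_driver_version()
    if driver != KNOWN_GOOD["driver_version"]:
        problems.append(f"driver {driver!r} != approved {KNOWN_GOOD['driver_version']!r}")
    # Firmware, CUDA runtime, and framework checks would follow the same pattern.
    return problems

if __name__ == "__main__":
    issues = check_stack()
    print("node matches known-good stack" if not issues else "\n".join(issues))
```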
There’s also a cost angle. AI infrastructure spending is under scrutiny as budgets tighten and ROI expectations rise. Companies want to avoid overprovisioning and reduce downtime. Server vendors that can deliver higher utilization—through better scheduling support, faster provisioning, and more efficient resource management—can become strategic partners even if they aren’t the headline-grabbing accelerator suppliers.
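The utilization point can be made concrete with simple arithmetic: the effective cost of an accelerator-hour rises in inverse proportion to how much of that hour does useful work. The hourly rate and utilization figures below are hypothetical.

```python
# Back-of-the-envelope illustration of why utilization dominates AI
# infrastructure cost. The hourly rate and utilization figures are hypothetical.

def effective_cost_per_useful_hour(hourly_rate: float, utilization: float) -> float:
    """Cost per hour of useful work: idle time is paid for but produces nothing."""
    if not 0 < utilization <= 1:
        raise ValueError("utilization must be in (0, 1]")
    return hourly_rate / utilization

rate = 4.00  # hypothetical cost of one accelerator-hour, in dollars
for util in (0.25, 0.50, 0.85):
    print(f"{util:.0%} utilization -> ${effective_cost_per_useful_hour(rate, util):.2f} per useful hour")
# 25% utilization -> $16.00, 50% -> $8.00, 85% -> about $4.71
```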
General-purpose chips: the quiet bet on flexibility
Another major theme is the renewed emphasis on more general-purpose chips. This doesn’t mean the market is abandoning specialized AI accelerators. Specialized chips still offer compelling performance per watt for certain training and inference patterns. But the industry has learned that “one chip to rule them all” is rarely true. Workloads vary. Models evolve. Software stacks change. And the economics of AI depend on matching the right compute to the right job.
General-purpose chips—especially CPUs—remain central because they handle orchestration, preprocessing, data pipelines, and parts of inference that don’t map cleanly to accelerators. They also provide a compatibility layer when teams need to run multiple frameworks, integrate with existing systems, or support legacy applications alongside AI services.
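A toy sketch of that division of labor: CPU-bound preprocessing and batching feed an accelerator-backed model endpoint. The featurization and the `score_batch_on_accelerator` function are hypothetical stand-ins for whatever pipeline and serving API an organization actually uses.

```python
# Sketch of CPU-side preprocessing and batching feeding an accelerator-backed
# endpoint. The featurization and backend call are hypothetical placeholders.

from typing import Iterable, Iterator

def preprocess(record: dict) -> list[float]:
    """CPU-side feature preparation: cleaning, normalization, encoding."""
    text = record.get("text", "").strip().lower()
    # Toy featurization so the example runs end to end.
    return [len(text), text.count(" ") + 1 if text else 0]

def batches(items: Iterable[dict], size: int) -> Iterator[list[list[float]]]:
    """Group preprocessed records so the accelerator sees full batches."""
    batch: list[list[float]] = []
    for item in items:
        batch.append(preprocess(item))
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch

def score_batch_on_accelerator(batch: list[list[float]]) -> list[float]:
    """Placeholder for a real inference call (local GPU, or a remote endpoint)."""
    return [sum(features) for features in batch]

records = [{"text": "refund request"}, {"text": "password reset"}, {"text": "invoice copy"}]
for batch in batches(records, size=2):
    print(score_batch_on_accelerator(batch))
```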
What’s changing is how these general-purpose chips are being positioned. Rather than being treated as mere controllers for accelerators, they’re increasingly marketed as capable participants in AI workloads. That includes improvements in memory subsystems, vector capabilities, and support for modern instruction sets that accelerate parts of inference and data handling. It also includes better platform-level design so that CPU-to-accelerator communication doesn’t become a hidden tax.
The unique twist in the current moment is that general-purpose chips are becoming a hedge against volatility. AI teams face constant churn: new model architectures, shifting optimization strategies, and evolving compiler toolchains. If an organization’s entire AI strategy depends on a single accelerator ecosystem, it can become locked into a narrow set of software assumptions. General-purpose compute offers a path to maintain flexibility—especially for inference workloads that may not justify the highest-end accelerators or for hybrid deployments where some tasks run on accelerators and others run on CPUs.
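What that hybrid posture can look like in practice is easy to sketch: a simple router sends small, latency-tolerant requests to CPU capacity and reserves accelerators for the work that justifies them. The thresholds and the two backend functions below are hypothetical placeholders; a real system would route on measured latency, throughput, and queue depth.

```python
# Sketch of hybrid routing between CPU and accelerator backends. Thresholds
# and backend functions are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class Request:
    prompt_tokens: int
    max_latency_ms: int

def run_on_cpu(req: Request) -> str:
    return f"cpu handled {req.prompt_tokens} tokens"

def run_on_accelerator(req: Request) -> str:
    return f"accelerator handled {req.prompt_tokens} tokens"

def route(req: Request) -> str:
    """Send small, latency-tolerant work to CPU; keep accelerators for the rest."""
    small = req.prompt_tokens < 512          # illustrative cutoff
    relaxed = req.max_latency_ms > 2000      # illustrative cutoff
    return run_on_cpu(req) if (small and relaxed) else run_on_accelerator(req)

print(route(Request(prompt_tokens=200, max_latency_ms=5000)))   # -> CPU
print(route(Request(prompt_tokens=4000, max_latency_ms=300)))   # -> accelerator
```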
In other words, the “old IT” bid is partly about resilience. It’s about ensuring that AI systems can keep operating even as models and tooling change. That’s a value proposition enterprises understand deeply, even if it’s less exciting than a new benchmark.
Software and platforms: the real battleground is deployment and operations
If servers and chips are the visible layer, software is where the incumbents can differentiate most meaningfully. AI adoption fails more often because of operational friction than because of shortfalls in raw model capability. Teams struggle with versioning, reproducibility, monitoring, access control, and cost tracking. They also struggle with the messy reality of production: noisy inputs, unexpected user behavior, latency spikes, and the need for human oversight.
Old IT vendors are therefore focusing on software platforms that help companies deploy and manage AI. This includes orchestration layers, model management, security controls, and observability tooling. It also includes integration with enterprise identity systems, compliance workflows, and data governance policies.
A key insight is that AI is not a single application. It’s a set of interacting components: data sources, embedding pipelines, retrieval indexes, prompt templates, model endpoints, post-processing logic, and sometimes agent tools that call external systems. Managing this ecosystem requires more than a simple “deploy model” button. It requires a lifecycle approach: build, test, validate, deploy, monitor, update, and retire.
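To make "a set of interacting components" tangible, here is a deliberately toy sketch of a retrieval-augmented request path, with every component (embedding, index, prompt template, model endpoint, post-processing) stubbed out. The point is how many separately versioned pieces a single request touches, not the implementation.

```python
# Toy retrieval-augmented generation (RAG) request path. Every component is a
# stub; the point is how many separately versioned pieces one request touches.

def embed(text: str) -> list[float]:
    """Stub embedding: a real system would call an embedding model."""
    return [float(len(text)), float(text.count(" "))]

def retrieve(query_vec: list[float], index: dict[str, list[float]], k: int = 2) -> list[str]:
    """Stub retrieval: nearest documents by a crude distance over the toy index."""
    def dist(v: list[float]) -> float:
        return sum((a - b) ** 2 for a, b in zip(query_vec, v))
    return sorted(index, key=lambda doc: dist(index[doc]))[:k]

PROMPT_TEMPLATE = "Answer using only this context:\n{context}\n\nQuestion: {question}"

def call_model(prompt: str) -> str:
    """Stub model endpoint; in production this is its own versioned service."""
    return f"[model answer based on {len(prompt)} prompt chars]"

def postprocess(answer: str) -> str:
    """Stub post-processing: filtering, formatting, citation checks, etc."""
    return answer.strip()

index = {"returns policy": embed("items may be returned within 30 days"),
         "shipping policy": embed("orders ship within two business days")}
question = "How long do customers have to return an item?"
docs = retrieve(embed(question), index)
prompt = PROMPT_TEMPLATE.format(context="\n".join(docs), question=question)
print(postprocess(call_model(prompt)))
```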
Incumbents have an advantage here because they already sell into environments where governance and uptime matter. Their enterprise software ecosystems are built around permissions, audit trails, change management, and standardized operations. AI teams can benefit from these capabilities without reinventing them from scratch.
But there’s a risk too. Enterprise software can become a bottleneck if it’s too rigid or if it doesn’t keep pace with fast-moving AI frameworks. The incumbents’ challenge is to make their platforms flexible enough for modern AI development while still delivering the reliability enterprises expect. That means supporting multiple model providers, integrating with popular open-source tooling, and enabling teams to move quickly without sacrificing controls.
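One common way to keep a platform flexible across model providers is a thin abstraction boundary. The sketch below uses a Python Protocol with hypothetical provider classes to show the shape of the idea; it is not any vendor's actual SDK.

```python
# Sketch of a provider-neutral interface so a platform is not coupled to a
# single model vendor. The provider classes are hypothetical placeholders.

from typing import Protocol

class TextModel(Protocol):
    def generate(self, prompt: str, max_tokens: int) -> str: ...

class HostedProvider:
    """Stand-in for a commercial API-backed model."""
    def generate(self, prompt: str, max_tokens: int) -> str:
        return f"[hosted model reply, <= {max_tokens} tokens]"

class LocalOpenModel:
    """Stand-in for an open-weights model served on in-house hardware."""
    def generate(self, prompt: str, max_tokens: int) -> str:
        return f"[local model reply, <= {max_tokens} tokens]"

def answer(model: TextModel, question: str) -> str:
    """Application code depends only on the interface, so providers can be swapped."""
    return model.generate(prompt=question, max_tokens=256)

for backend in (HostedProvider(), LocalOpenModel()):
    print(answer(backend, "Summarize this quarter's support tickets."))
```

The design choice is the same one incumbents already apply elsewhere in enterprise software: depend on interfaces, not on any single upstream implementation.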
The pendulum swing is also visible in how vendors talk about “AI readiness.” Instead of only emphasizing compute capacity, they emphasize end-to-end readiness: data connectivity, workflow integration, and operational guardrails. This is a subtle but important shift. It reframes AI from a novelty project into a managed capability.
Why the timing is right: AI scaling exposes the gaps
The renewed push by old IT vendors aligns with a broader market transition. Many organizations are now past the stage where they can treat AI as an isolated experiment. They’re moving toward scaling across departments, integrating AI into business processes, and meeting regulatory and security requirements.
At this stage, the limitations of purely AI-first stacks become clearer. New entrants may excel at specific model hosting or developer experiences, but enterprises often need deeper integration with existing infrastructure and operational practices. They need to connect AI systems to data warehouses, streaming platforms, customer relationship management tools, and internal knowledge bases. They need to ensure that AI outputs are logged, that sensitive data is handled correctly, and that systems can be audited.
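The logging and auditability requirement is straightforward to picture: every model call gets recorded with enough metadata to reconstruct who asked what, when, and which model answered. The fields and the hashing-based redaction rule below are illustrative assumptions, not a compliance recipe.

```python
# Sketch of an audit-logged inference wrapper. Log fields and the redaction
# rule are illustrative; real deployments follow their own data-handling policies.

import hashlib
import json
import time

def redact(text: str) -> str:
    """Store a digest rather than raw text when inputs may contain sensitive data."""
    return hashlib.sha256(text.encode()).hexdigest()[:16]

def call_model(prompt: str) -> str:
    """Placeholder for the actual model endpoint."""
    return "[model output]"

def audited_call(user: str, prompt: str, model_id: str) -> str:
    output = call_model(prompt)
    record = {
        "timestamp": time.time(),
        "user": user,
        "model_id": model_id,
        "prompt_digest": redact(prompt),
        "output_digest": redact(output),
    }
    # Append-only audit trail; in production this goes to a durable log store.
    print(json.dumps(record))
    return output

audited_call("analyst-42", "Summarize the churn report for Q3.", model_id="internal-llm-v1")
```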
Old IT vendors are positioning themselves as the bridge between AI innovation and enterprise reality. Their pitch is essentially: you can keep experimenting with models, but you need a stable foundation to run them safely and efficiently.
This is also where cost becomes a decisive factor. AI infrastructure costs can balloon quickly due to inefficient utilization, redundant deployments, and lack of visibility into where compute is being spent. Incumbents can offer tools that track usage, optimize scheduling, and enforce policies that prevent runaway costs. They can also help standardize deployments so that teams don’t rebuild the same pipeline repeatedly.
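A minimal sketch of the "prevent runaway costs" idea: track spend per team against a budget and refuse, or at least flag, new work once the budget is exhausted. The rates and budgets are hypothetical; a real platform meters actual usage rather than approving requests up front.

```python
# Minimal sketch of per-team spend tracking with a hard budget check.
# Rates and budgets are hypothetical; real platforms meter actual usage.

from collections import defaultdict

HOURLY_RATE = {"gpu": 4.00, "cpu": 0.20}                # illustrative prices per hour
BUDGET = {"search-team": 500.0, "support-bot": 200.0}   # monthly budgets, illustrative

spent: dict[str, float] = defaultdict(float)

def request_capacity(team: str, resource: str, hours: float) -> bool:
    """Approve the request only if it fits in the team's remaining budget."""
    cost = HOURLY_RATE[resource] * hours
    if spent[team] + cost > BUDGET[team]:
        print(f"denied: {team} would exceed budget by ${spent[team] + cost - BUDGET[team]:.2f}")
        return False
    spent[team] += cost
    print(f"approved: {team} spends ${cost:.2f}, total ${spent[team]:.2f}")
    return True

request_capacity("support-bot", "gpu", hours=40)   # approved ($160 of $200)
request_capacity("support-bot", "gpu", hours=20)   # denied (would reach $240)
```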
A unique take: “AI relevance” is becoming synonymous with operational leverage
There’s a temptation to interpret the incumbents’ return as a simple defensive move—an attempt to regain market share from newer AI-focused companies. But the more interesting interpretation is that “AI relevance” is evolving into something else entirely: operational leverage.
In earlier waves of technology adoption, enterprises often bought new tools and then struggled to integrate them. With AI, the integration burden is heavier because AI systems are probabilistic and dynamic. They require continuous monitoring and iterative improvement. They also require careful handling of data quality and drift. That means the operational layer isn’t optional; it’s part of the product.
Old IT vendors are therefore trying to own the operational layer of AI. Servers and chips are necessary, but they’re not sufficient. The differentiator is the ability to run AI workloads predictably across environments—on-premises, in private clouds, and in hybrid setups. It’s the ability to manage upgrades, patching, and compatibility. It’s the ability to provide security controls that match enterprise standards.
This is why the renewed focus on “more general-purpose chips” is significant. It suggests a strategy that values flexibility and compatibility, not just peak performance. It’s a bet that enterprises will prefer systems that can absorb change, as models, frameworks, and optimization strategies evolve, over systems tuned to the peak performance of a single moment in the AI cycle.
