Project Maven Shows How the US Military Rapidly Adopted AI-Powered Targeting

When people talk about “AI in warfare,” they often jump straight to the dramatic end of the pipeline: autonomous drones, swarms, and systems that decide and act with minimal human involvement. But the story of Project Maven—and the broader shift it represents inside the U.S. military—has been less about sci‑fi autonomy and more about something far more consequential: speed.

In the first 24 hours of the reported U.S. assault on Iran, the military struck more than 1,000 targets, a pace described as nearly double that of the early “shock and awe” campaign in Iraq two decades earlier. That kind of acceleration doesn’t come from a single breakthrough weapon. It comes from compressing time across many steps—finding, identifying, prioritizing, and routing information—until the targeting process becomes fast enough to match the tempo of modern operations. AI systems have been used to help with parts of that pipeline, particularly where large volumes of sensor data must be sifted quickly.

At the center of this evolution is the Maven Smart System, the outgrowth of a program that began as an experiment in computer vision applied to drone footage and grew into a tool that helped reshape how targeting information is processed. A new book, Project Maven: A Marine Colonel, His Team, and the Dawn of AI Warfare by journalist Katrina Manson, traces how the project moved from a technical pilot to a capability embedded in the military’s approach to using machine learning for operational decision-making. The book also revisits the controversy that accompanied the program’s development, especially the internal and contractor-side protests that erupted when the system’s purpose became clearer.

What makes Maven worth studying isn’t only what it did. It’s how it did it, and what it reveals about the way institutions adopt AI: not through a single leap, but through iterative deployments, shifting definitions of “human in the loop,” and a gradual reallocation of attention from the battlefield to the data pipeline.

A program that started as a vision problem

Project Maven’s origin story is often summarized as “computer vision for drone video.” That’s accurate, but incomplete. The real challenge wasn’t simply recognizing objects in images; it was turning messy, high-volume streams of imagery into actionable intelligence at a pace that humans alone could not sustain.

Drone footage produces enormous amounts of visual data. Even when analysts are highly trained, the bottleneck is rarely raw capability—it’s time. Someone has to watch, label, interpret, and connect what’s seen to a broader operational context. Maven’s early work focused on automating parts of that interpretation: detecting and classifying objects or events in video frames so that analysts could spend less time on routine scanning and more time on verification and judgment.

This is a crucial distinction. Maven was not introduced as a system that would replace analysts. It was positioned as a way to make analysts faster and more effective—by surfacing relevant moments and reducing the amount of footage that needed to be reviewed manually. In practice, that means the system can highlight potential targets or areas of interest, then pass those leads to human operators for confirmation and downstream decisions.
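
To make that workflow concrete, here is a minimal sketch of an assistive triage step of the kind described above. It is an illustration under stated assumptions, not Maven’s actual design: every name, label, and threshold in it is hypothetical.

```python
from dataclasses import dataclass

# Hypothetical illustration of an assistive triage step: a detector scores
# objects in video frames, and only detections above a review threshold are
# queued for a human analyst. None of these names come from Maven itself.

@dataclass
class Detection:
    frame_id: int
    label: str         # e.g. "vehicle", "structure" (invented labels)
    confidence: float  # model score in [0, 1]

REVIEW_THRESHOLD = 0.6  # tuning this trades analyst workload against recall

def triage(detections: list[Detection]) -> list[Detection]:
    """Return only the detections worth a human's time, highest score first."""
    leads = [d for d in detections if d.confidence >= REVIEW_THRESHOLD]
    return sorted(leads, key=lambda d: d.confidence, reverse=True)

if __name__ == "__main__":
    feed = [
        Detection(101, "vehicle", 0.91),
        Detection(102, "structure", 0.42),
        Detection(103, "vehicle", 0.77),
    ]
    for lead in triage(feed):
        # In the workflow described above, a human confirms or rejects each lead.
        print(f"frame {lead.frame_id}: {lead.label} ({lead.confidence:.2f}) -> analyst review")
```

The design choice is visible even in a sketch this small: the human never sees frame 102. The model’s threshold, not the analyst, decides what is routine and what deserves attention.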

That design choice—automation as assistance rather than replacement—has been a recurring theme in military AI adoption. It also helps explain why programs like Maven can move from experimentation to operational use even amid public debate. If the system is framed as a tool that supports human decision-making, it can be easier to justify technically and politically than a system that claims to “decide” on its own.

Still, assistance can be transformative. When you accelerate the front end of targeting, you change the entire rhythm of operations. Faster identification and triage can lead to faster escalation, faster coordination, and faster execution. In other words, even if the final decision remains human, the environment in which humans decide can become dramatically different.

The tempo shift: why “speed” is the real capability

The reported scale of strikes in the first day of the Iran operation underscores a point that often gets lost in debates about AI: the most important effect may be timing. Modern conflicts are characterized by rapid changes in location, posture, and intent. Targets move, communications shift, and opportunities can vanish quickly. If the targeting process takes too long, the “best” information arrives after the moment it matters.

Maven’s contribution, as described in coverage and in the book’s framing, is tied to accelerating targeting by helping analysts work through imagery more quickly. That acceleration can mean fewer hours spent reviewing footage and more minutes spent validating leads. It can also mean that the system’s outputs are available sooner to the teams that compile intelligence packages and coordinate strike planning.

This is where AI becomes more than a technical feature. It becomes an operational lever. Once you can reduce the time between sensing and actionable intelligence, you can increase the number of cycles you run in a given period. That can translate into more targets addressed, more frequently, and with less delay between discovery and action.
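
A back-of-envelope calculation shows why cycle time compounds. The numbers below are invented for illustration, not reported figures:

```python
# Illustrative arithmetic only: the numbers are invented, not reported figures.
# Shrinking the sensing-to-intelligence cycle multiplies how many cycles fit
# into a fixed operational window.

manual_cycle_min = 60    # hypothetical: one full cycle per hour without assistance
assisted_cycle_min = 15  # hypothetical: AI triage cuts the same cycle to 15 minutes
window_min = 24 * 60     # one day of operations

print(f"manual:   {window_min // manual_cycle_min} cycles/day")    # 24
print(f"assisted: {window_min // assisted_cycle_min} cycles/day")  # 96, a 4x tempo increase
```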

The analogy isn’t perfect, but it’s useful: think of how search engines changed research. They didn’t replace reading; they changed how quickly you could find what mattered. Maven similarly changes how quickly analysts can locate candidate information within a flood of sensor data.

From pilot to program: the institutional path

Project Maven’s development is described as beginning in 2017 as an experiment, and its evolution from there matters because it shows how AI programs become durable inside large bureaucracies.

In many organizations, pilots remain pilots. They demonstrate feasibility but never become core infrastructure. Maven’s trajectory suggests a different pattern: once a system proves it can reduce workload and improve throughput, it can be pulled into operational workflows. The key is not only performance metrics; it’s integration—how the system fits into existing command structures, how outputs are delivered, and how users learn to trust and verify them.

The book’s focus on a Marine colonel and his team highlights another aspect of adoption: champions. Large-scale AI programs often require advocates who can translate technical capability into operational value. They also need to navigate procurement, contracting, and the politics of risk. In defense contexts, “risk” isn’t just about whether the model works; it’s about whether it can be used safely, reliably, and in ways that satisfy oversight requirements.

As Maven matured, it became part of a broader ecosystem of tools and processes. That ecosystem includes not only the AI model itself but also the labeling pipelines, the data management practices, and the human review steps that determine whether the system’s suggestions are treated as credible leads.

This is also where the “human in the loop” concept becomes complicated. Human review is often described as a safeguard, but the nature of review can change. If the AI reduces the amount of footage humans must examine, reviewers may see fewer cases overall. That can reduce fatigue and improve consistency. But it can also create new failure modes: if the AI’s confidence thresholds are tuned aggressively to maximize speed, humans may be presented with a narrower set of candidates that reflect the model’s biases. Reviewers then validate what the system surfaces, not everything that exists.

In other words, the AI doesn’t just accelerate work; it shapes the menu of options humans see.
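
A toy example makes that filtering effect visible. The scores below are randomly generated stand-ins, not real model output, but the pattern holds for any detector: raise the confidence threshold, and reviewers see a smaller slice of what the sensors captured.

```python
import random

# Toy demonstration (invented scores, not real model output): raising the
# confidence threshold shrinks what reviewers ever see. Everything below the
# cut is never presented, so review validates the model's selections only.

random.seed(0)
scores = [random.random() for _ in range(10_000)]  # stand-in detector scores

for threshold in (0.5, 0.8, 0.95):
    surfaced = sum(s >= threshold for s in scores)
    print(f"threshold {threshold}: {surfaced} of {len(scores)} candidates reach a reviewer")
```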

The controversy: when contractors and employees push back

One of the most striking elements of Maven’s story is the controversy that emerged around its development. Coverage of the program has included references to employee protests at Google, an early contractor involved in building the system. Those protests were significant because they reflected a tension between commercial AI development and the ethical implications of military applications.

The protests weren’t merely symbolic. They were part of a broader debate about whether companies should participate in defense projects that involve targeting and surveillance. For employees, the concern often centered on the idea that their work could contribute to harm, even when the system was framed as assisting analysts rather than making final decisions.

For leadership, the argument typically revolves around national security, contractual obligations, and the belief that the technology can be developed responsibly with appropriate safeguards. The Maven case illustrates how these positions collide inside the same organization.

This contractor-side controversy also matters for understanding how AI spreads. Many AI systems used in government contexts are built by private companies. That means the adoption of AI in warfare is not only a military decision; it’s also a corporate governance question. Who decides what projects to pursue? What internal review processes exist? How are employees heard? And what happens when internal dissent meets external demand?

The Maven story suggests that even when a program continues, the social friction around it can influence how it is managed—through changes in staffing, transparency, documentation, and sometimes the scope of what is deployed.

The unique take: AI warfare as pipeline engineering

It’s tempting to describe Maven as “AI for targeting.” But the deeper story is that it’s AI for pipeline engineering—turning a complex, multi-step intelligence workflow into something that can be scaled.

Targeting is not a single act. It’s a chain of tasks: collect data, detect relevant features, classify and contextualize, verify, prioritize, and then integrate into planning. Each step has its own uncertainties. Humans handle uncertainty differently than models do. Humans can reason with incomplete information, but they are limited by attention and time. Models can process patterns quickly, but they can be brittle when conditions change.

Maven’s approach—using computer vision to assist with detection and classification—addresses one part of the chain. But once that part is accelerated, the rest of the chain must adapt. If the AI produces leads faster than downstream teams can verify them, the bottleneck shifts. If verification teams cannot keep up, the system’s outputs may be used differently, potentially with higher reliance on automated confidence signals.
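
A simple queue model, with invented rates, illustrates that shift. If the AI stage produces leads faster than verification teams can clear them, a backlog accumulates even though every stage is working at capacity:

```python
# A toy queue model (all rates invented) of the bottleneck shift described
# above: detection produces leads faster than verification can clear them,
# so unverified leads pile up even though every stage is "working".

detect_rate = 120   # hypothetical leads produced per hour by the AI stage
verify_rate = 40    # hypothetical leads a verification team can clear per hour

backlog = 0
for hour in range(1, 9):  # one eight-hour shift
    backlog += detect_rate
    backlog -= min(backlog, verify_rate)
    print(f"hour {hour}: {backlog} leads awaiting verification")

# The backlog grows by 80 leads per hour: speeding up detection alone just
# relocates the bottleneck to the humans downstream.
```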

This is why pipeline changes can be more consequential than model changes. A small improvement in detection speed can cascade into a larger operational shift. Conversely, a model that performs well in controlled settings may fail in the messy reality of varied lighting, angles, weather, and sensor quality. In that case, the pipeline might still run fast—but with more false positives, requiring more human correction later.
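
The cost of those false positives can be sketched with the same kind of invented arithmetic: a fast but imprecise detector can consume more analyst time than a slower, more precise one.

```python
# Invented numbers illustrating how precision drives downstream correction
# costs: every false lead still consumes real review time.

leads_per_hour = 100
minutes_per_review = 3

for precision in (0.9, 0.5):
    false_leads = leads_per_hour * (1 - precision)
    wasted_min = false_leads * minutes_per_review
    print(f"precision {precision:.0%}: {wasted_min:.0f} analyst-minutes/hour spent on false positives")
```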

So the question isn’t only “Can the AI detect objects?” It’s “How does the rest of the pipeline absorb what the AI produces, and at what cost in human attention?”