Prophecy and Power: Carissa Véliz on Why Forecasting Algorithms Shape Real-World Decisions

Oxford philosopher Carissa Véliz has a way of making familiar technology feel newly strange. In her latest survey of forecasting, she argues that predictions—especially those produced by modern statistical and algorithmic systems—are rarely just attempts to describe the world as it is. More often, they are instruments for arranging the world as someone wants it to be. Forecasting, in this view, is not merely about truth; it is about power.

The argument lands at a moment when predictive tools have moved from the margins of research and finance into the everyday infrastructure of public life. Governments use risk scores to decide who gets extra scrutiny. Insurers price policies based on predicted likelihoods. Employers and platforms increasingly rely on models that estimate future performance, churn, fraud, or “engagement.” Even where the language is careful—“probabilistic,” “scenario-based,” “decision support”—the practical effect can be blunt: a forecast becomes a gate, a lever, or a justification.

Véliz’s central provocation is that forecasting systems often claim epistemic humility while performing political certainty. They present themselves as neutral translators of data into likelihoods, yet they frequently end up authorizing decisions with real consequences. The uncertainty that is inherent in prediction does not disappear; instead, it is redistributed. It may be acknowledged in documentation, but it is converted into operational categories—approved or denied, flagged or cleared, prioritized or deprioritized. In other words, the model’s uncertainty can be treated as an engineering detail, while the forecast’s outputs become governance.

What makes her critique distinctive is not simply that algorithms can be biased or wrong—those points are now widely discussed. Her focus is more structural: what forecasting does to authority, how it changes the relationship between institutions and individuals, and why the act of predicting can itself become a form of control.

Forecasting as a quiet form of governance

Forecasting has always been part of human decision-making. Economists forecast growth; meteorologists forecast storms; clinicians forecast risk. The difference today is scale and automation. Predictive systems can be deployed across millions of cases, updated continuously, and integrated into workflows so tightly that the forecast becomes the workflow. When that happens, forecasting stops being a separate activity—something you consult—and becomes the mechanism by which decisions are made.

Véliz’s work draws attention to the way these systems can create a new kind of institutional voice. A forecast can sound like knowledge because it is expressed in numbers, probabilities, and dashboards. But the authority of the output is not the same thing as the reliability of the underlying inference. A model can be statistically sophisticated and still be misaligned with the real-world process it is trying to capture. It can also be trained on historical data that reflects past policies, past discrimination, or past enforcement patterns. In such cases, the forecast is not predicting an objective future; it is extending an existing system into tomorrow.

This is where “power” enters the story. Forecasting can shape behavior by shaping incentives. If a model predicts that certain people are likely to default, then lending terms may change, which affects repayment behavior. If a model predicts that certain neighborhoods are likely to experience crime, then policing resources may shift, which affects reported crime rates. The forecast does not merely anticipate outcomes; it helps produce them. The future becomes partly a function of the prediction itself.

Véliz’s critique therefore points to a feedback loop: forecasts can become self-fulfilling or self-defeating, and even when they are not, they can still steer institutions toward particular actions. The result is a subtle shift from deliberation to administration. Instead of asking what should be done, institutions can ask what the model says is likely, and then treat that likelihood as a reason to act.
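The feedback loop is easy to sketch. The following is a toy simulation with invented numbers—two areas with identical true incident rates, and patrols allocated to wherever more incidents were previously recorded—not a model of any real deployment:

```python
# Toy feedback loop: a forecast that allocates attention based on
# recorded incidents amplifies a small skew in the historical record,
# even though true behavior is identical everywhere.
detected = [6, 5]          # slight skew in the historical record
TRUE_INCIDENTS = 50        # identical in both areas, every period

for period in range(20):
    # "Forecast": the area with more recorded incidents is flagged hot
    # and receives 7 of 10 patrol units; the other receives 3.
    hot = 0 if detected[0] >= detected[1] else 1
    patrols = [3, 3]
    patrols[hot] = 7
    for area in (0, 1):
        # Incidents are only recorded where patrols are present.
        detected[area] += TRUE_INCIDENTS * patrols[area] // 10

print(detected)  # [706, 305] -- the initial one-incident skew, amplified
```

After twenty periods, the area that started one recorded incident ahead appears more than twice as "risky," purely because the forecast directed where observation happened.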

The problem is not only error; it is the conversion of uncertainty into legitimacy

A common defense of predictive systems is that they are probabilistic. They do not claim certainty; they provide estimates. Véliz challenges the comfort this language can offer. Probabilities can be used to justify decisions that are effectively deterministic. A risk score might be described as a likelihood, but if the institution uses thresholds—above this number, deny; below it, approve—the system behaves like a binary gate.
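The threshold mechanism can be made concrete in a few lines. In this hypothetical sketch, the threshold value and the scores are invented for illustration:

```python
# A probabilistic score used as a binary gate.
THRESHOLD = 0.30  # a policy choice, presented as a technical setting

def decide(risk_score: float) -> str:
    """Collapse a probability into an operational category."""
    return "deny" if risk_score >= THRESHOLD else "approve"

# Scores that differ by a rounding error produce categorically
# different outcomes: the "probabilistic" system behaves as a gate.
print(decide(0.299))  # approve
print(decide(0.301))  # deny
```

Nothing in the model changed between the two calls; the difference in outcome is entirely a product of where the institution placed the cut.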

Even when thresholds are justified as “risk management,” the moral and political question remains: who gets to decide what level of risk is acceptable, and based on what values? A probability is not value-neutral once it is tied to consequences. The same forecast can lead to different actions depending on institutional priorities, legal frameworks, and resource constraints. Yet the forecast’s numerical form can obscure those choices. It can make a policy decision look like a technical necessity.

Véliz’s framing suggests that forecasting can manufacture legitimacy. When a model produces a number, it can be treated as evidence. When it is treated as evidence, it can displace deliberation. The institution can say: we did not choose to exclude; the model indicated elevated risk. That shift matters because it changes accountability. Responsibility migrates from human judgment to machine output, even though humans designed the model, selected the features, defined the target variable, and set the thresholds.

In this sense, forecasting is not only about predicting the future. It is about building a chain of justification that can be difficult to contest. If the system is complex, if the training data is proprietary, and if the model is updated frequently, then challenging the forecast can become a technical exercise rather than a democratic one. The forecast becomes a barrier to meaningful appeal.

Profiles, categories, and the politics of measurement

Another theme in Véliz’s critique is the way forecasting depends on profiling. To predict, systems must represent people, markets, or behaviors in measurable terms. That representation is never neutral. It selects which aspects of life matter and which do not. It also determines how individuals are categorized.

When profiling is used for forecasting, categories can harden. A person is not only observed; they are classified into a risk group, a propensity segment, a likelihood tier. Those categories can persist across time and contexts, even when the underlying circumstances change. The forecast then becomes a kind of identity—an administrative label that follows someone through systems.

This is particularly consequential when the categories are used in high-stakes domains. In healthcare, a risk score can influence treatment intensity. In employment, predicted performance can affect hiring or promotion. In finance, predicted default can determine access to credit. In public services, predicted compliance can determine surveillance intensity. In each case, the forecast can become a substitute for individualized understanding.

Véliz’s argument implies that the politics of measurement are inseparable from the politics of prediction. If you measure the wrong things, or if your measurements reflect historical inequities, your forecasts will reproduce those inequities while appearing to be objective. The system can claim to be “data-driven,” but data is not destiny; it is a record shaped by prior decisions, prior enforcement, and prior opportunities.

There is also the issue of what forecasting systems do not measure. Many models rely on proxies—variables that correlate with the target outcome but are not the outcome itself. Proxies can be convenient, but they can also embed assumptions. A proxy for “risk” might actually be a proxy for exposure to surveillance, or for access to resources, or for differences in reporting. When those proxies are used to forecast, the system can treat the consequences of inequality as if they were natural signals.
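A toy calculation shows how such a proxy misleads. The groups, counts, and recording rates below are invented; the only point is that the recorded signal tracks observation, not behavior:

```python
# Both groups have identical true incident counts; only the fraction
# of incidents that gets *recorded* differs (exposure to surveillance).
TRUE_INCIDENTS = 1000                            # same real behavior in each group
recorded_per_ten = {"group_a": 9, "group_b": 3}  # incidents recorded per 10

proxy_signal = {
    group: TRUE_INCIDENTS * k // 10
    for group, k in recorded_per_ten.items()
}
print(proxy_signal)  # {'group_a': 900, 'group_b': 300}
# A model trained on recorded incidents "learns" that group_a is
# three times riskier, when it is only three times more observed.
```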

The target variable is a choice, not a fact

One of the most underappreciated aspects of forecasting is that the “thing being predicted” is not given by nature. It is defined. Institutions choose the target variable: default, recidivism, fraud, churn, disease progression, job performance. Each target variable encodes a theory of what matters and a definition of what counts as failure or success.

Véliz’s critique emphasizes that these definitions are political. If the target is “fraud,” then the system is trained on cases labeled as fraud, which may reflect enforcement practices. If the target is “recidivism,” then the system depends on arrest and conviction patterns, which vary by jurisdiction and policy. If the target is “engagement,” then the system depends on what platforms choose to count as engagement, which can incentivize certain behaviors over others.

Once the target is chosen, the model learns patterns in the historical data. But historical patterns are not neutral. They reflect the institution’s past decisions and the broader social context. Forecasting then becomes a method for turning those patterns into future expectations. The future is not discovered; it is inferred from the past under a particular definition of relevance.
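The point that the target is defined rather than given can be made concrete. The records below are fabricated; the same four rows yield different training labels depending on which column the institution declares to be the outcome:

```python
# The "ground truth" a model learns is fixed by the chosen target variable.
records = [
    {"arrested": True,  "convicted": False},
    {"arrested": True,  "convicted": True},
    {"arrested": False, "convicted": False},
    {"arrested": True,  "convicted": False},
]

def labels(records, target):
    """Extract training labels under a given definition of the outcome."""
    return [int(r[target]) for r in records]

print(labels(records, "arrested"))   # [1, 1, 0, 1]
print(labels(records, "convicted"))  # [0, 1, 0, 0]
```

A model trained on "arrested" learns enforcement patterns; one trained on "convicted" learns a different, equally constructed, outcome—same data, different future.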

This is why Véliz’s critique can feel both scathing and oddly clarifying. It suggests that many forecasting systems are less like crystal balls and more like mirrors—mirrors that reflect institutional priorities back at themselves, then dress those priorities in the language of prediction.

Why forecasting feels irresistible

If forecasting is so fraught, why does it keep expanding? Véliz’s work implicitly answers: because forecasting offers a compelling blend of convenience, authority, and cost control. Predictive systems promise to reduce uncertainty, allocate resources efficiently, and standardize decisions. They also offer a way to manage responsibility: if outcomes are bad, the institution can point to the model’s probabilistic nature and argue that no system could guarantee perfect results.

There is also a psychological and organizational appeal. Forecasts compress complexity into a single number or a ranked list. They allow managers to compare cases quickly and to justify decisions in a consistent format. In bureaucracies, consistency is often treated as a virtue. But consistency can become a mask for injustice when the underlying categories are flawed.

Forecasting also aligns with a broader cultural shift toward quantification. Numbers can appear more objective than narratives. A forecast can be presented as “evidence,” even when it is an estimate built from incomplete information. The model’s output can become a substitute for explanation, and the institution can treat the forecast as a final answer rather than a starting point for deliberation.

Véliz’s critique therefore targets not