AI Needs Stronger Oversight to Protect Against Tech and Government Power

Artificial intelligence is often discussed as if it were a technical race: better models, faster chips, more data, sharper benchmarks. But the most urgent question raised by a growing body of policy work is less about what AI can do than about who gets to decide what it should be allowed to do—and how quickly those decisions can be reversed when something goes wrong.

A new report highlighted in the Financial Times argues that the critical challenge in the AI era is not simply building advanced systems. It is building institutions that can protect people from the combined pressures of two powerful actors: technology companies with strong incentives to move fast, and governments that may deploy AI in ways that are not always aligned with the public interest. The report’s central claim is blunt: without adult supervision—meaning enforceable oversight, accountability mechanisms, and real constraints—AI progress will repeatedly outrun safety, rights, and democratic control.

That framing matters because it shifts the debate away from vague assurances. “Responsible AI” has become a familiar phrase, but it often functions like a promise rather than a safeguard. Companies can publish model cards, run internal red-teaming exercises, and adopt voluntary standards. Governments can issue guidelines. Yet when the incentives are misaligned—when speed brings market advantage, when secrecy protects competitive position, when enforcement is weak—voluntary measures tend to degrade into public relations. The report’s emphasis on institutions is essentially an argument for turning ethics into infrastructure: rules that can be audited, penalties that can be imposed, and oversight that can intervene before harm becomes irreversible.

The problem is not that every company or every agency is malicious. It’s that power—whether corporate or governmental—creates predictable failure modes. In both cases, decision-making can become insulated from those affected. In both cases, the people with the most influence may not bear the full costs of mistakes. And in both cases, the temptation is to treat AI deployment as a one-way door: once systems are integrated into hiring, policing, credit scoring, education, healthcare, or public services, rolling them back is politically and operationally difficult.

What makes AI different from earlier waves of automation is the scale and ambiguity of its impact. Many AI systems are not just tools; they are decision engines that can shape outcomes while remaining difficult for outsiders to fully understand. Even when a model is technically explainable in narrow terms, the broader system—data pipelines, feedback loops, human workflows, and downstream uses—can be opaque. That opacity creates a governance gap. If the public cannot see what is being done, and regulators cannot reliably test what is being claimed, then oversight becomes performative.

The report’s “two-direction” warning—protection from both tech companies and the state—reflects a realistic view of how AI is likely to be governed. Corporate power is obvious: companies control the models, the training data, the deployment interfaces, and often the evaluation metrics. They also control the pace at which new capabilities reach the market. But state power is equally relevant. Governments can use AI for surveillance, border control, welfare eligibility, and law enforcement. They can also set procurement standards that effectively determine which systems get deployed. Even when governments act with good intentions, the surrounding pressures can still produce harm: national security imperatives, bureaucratic risk aversion, and the political need to demonstrate effectiveness can all encourage rapid adoption.

In other words, the report is not arguing that one actor is inherently worse. It is arguing that both actors have the capacity to move faster than accountability. And when accountability lags, the result is not merely technical risk; it is institutional risk—risk to rights, due process, and the ability of individuals to contest decisions.

One of the most important insights in this kind of argument is that governance is not a single lever. It is a stack. Oversight requires multiple layers that reinforce each other: transparency requirements, independent testing, documentation standards, audit rights, incident reporting, procurement constraints, and meaningful penalties. If any layer is missing, the system can be gamed. For example, transparency without auditability can become selective disclosure. Auditability without enforcement can become a checkbox exercise. Enforcement without due process can become arbitrary or politicized. The report’s institutional focus implicitly recognizes that AI governance must be designed as a system, not a slogan.

Consider how harm typically emerges in AI deployments. It rarely appears as a single catastrophic event. More often, it accumulates through small decisions: a model used to rank applicants, a tool that flags “risk” in a welfare case, a system that recommends interventions in healthcare, a predictive policing workflow that influences patrol patterns. Each step may be justified as an incremental improvement. But the cumulative effect can be discriminatory outcomes, reduced access to services, chilling effects on speech, or procedural unfairness. When these harms are discovered, the systems are already embedded. That is why the report’s emphasis on institutions that can intervene early is so consequential. Oversight that reacts only after widespread harm has been detected arrives too late.

The “adult supervision” metaphor also points to a deeper governance challenge: AI is moving into domains where legitimacy matters. In many technical fields, performance improvements can be evaluated by experts. In social domains, legitimacy depends on values—fairness, privacy, autonomy, and due process. Those values are not automatically captured by accuracy metrics. A model can be statistically “good” while still violating rights or undermining trust. Institutions are needed to translate societal values into enforceable requirements and to ensure that those requirements are applied consistently.

This is where the report’s concern about both corporate and state incentives becomes especially relevant. Corporate incentives often prioritize competitiveness and growth. State incentives can prioritize security, efficiency, and political optics. Both can lead to underinvestment in safeguards, especially when safeguards slow down deployment or reduce flexibility. Institutions can counterbalance these incentives by making safety and rights compliance a condition of operation, not an optional feature.

So what would “institutional protection” look like in practice? While the report itself is summarized here rather than reproduced in full, the logic it advances suggests several concrete directions that policymakers and regulators are increasingly discussing across jurisdictions.

First, there is the need for enforceable risk classification and licensing-like regimes for high-impact AI. Not all AI systems pose the same level of risk. Some are low-stakes tools; others can affect fundamental rights. Institutions should be able to distinguish between these categories and impose stronger obligations on higher-risk uses. This could include requirements for pre-deployment assessments, independent evaluations, and ongoing monitoring. The key is that the obligations must be enforceable and tied to consequences for noncompliance.

Second, there is the need for independent testing and audit rights that are not controlled solely by the deployer. If the same entity that builds or sells the system controls the evaluation, the process can become biased toward favorable outcomes. Independent oversight bodies—whether regulators, accredited labs, or third-party auditors—can provide a check. But independence alone is not enough; auditors need access to sufficient information to test claims. That means institutions must define what documentation must be provided, what data access is required (within privacy constraints), and how results are reported.

Third, there is the question of transparency that actually helps affected individuals. Public disclosure of model details is not always feasible or appropriate, especially for proprietary systems. But affected people need meaningful explanations when AI is used to make or influence decisions about them. Institutions can require that individuals receive notice when automated systems are used, that they can request human review, and that they can contest decisions. Without these procedural rights, transparency becomes a technical exercise rather than a civic safeguard.

Fourth, there is the need for incident reporting and learning loops. AI systems can fail in unexpected ways, especially when deployed in changing environments. Institutions should require reporting of significant failures, near misses, and harmful outcomes. They should also mandate corrective actions and allow regulators to update requirements based on emerging evidence. This turns governance into a feedback system rather than a one-time approval.

Fifth, procurement is a governance lever that is often underestimated. Governments do not just regulate; they buy. If public agencies procure AI systems without strong contractual safeguards, they can become conduits for unsafe or rights-violating technologies. Institutional protection therefore includes procurement standards: requirements for documentation, auditability, privacy protections, and liability allocation. Procurement rules can also prevent “race to the bottom” dynamics in which vendors offer the cheapest solution that clears only the minimum compliance bar.

Sixth, there is the question of liability and accountability. If harm occurs, who is responsible? The answer cannot be “everyone and no one.” Institutions need clear allocation of responsibility across developers, deployers, and integrators. Liability frameworks can create incentives for safer design and more careful deployment. They can also ensure that victims have pathways to remedy.

These elements are not revolutionary. Many are already being discussed in AI governance debates worldwide. What the report adds—at least in the way it is framed—is a sharper insistence that governance must be institutional, not merely aspirational. It is a call to treat AI oversight as a permanent feature of society, akin to food safety, aviation regulation, financial supervision, or consumer protection—not as a temporary phase while technology matures.

A unique angle in the report’s framing is the symmetry of the threat. People often talk about AI risk as if it were primarily a corporate problem: companies release powerful tools, and regulators scramble to catch up. But the report emphasizes that the state can also be a source of risk, particularly when AI is used to expand surveillance or automate decisions that should be subject to human judgment and legal safeguards. This symmetry matters because it helps keep the governance debate from becoming partisan. It also encourages a more balanced approach: oversight should constrain power wherever it resides.

That balance is not easy. Institutions that constrain corporate power can be captured by industry lobbying. Institutions that constrain state power can be undermined by political pressure or secrecy. The report’s underlying message is that adult supervision must be designed to resist capture. That means transparency about regulatory processes, independence of oversight bodies, clear conflict-of-interest rules, and judicial review mechanisms. It also means that enforcement must be credible. If regulators cannot impose meaningful penalties, the institution becomes symbolic.

There is