From Paper Policies to Real Oversight: How AI Governance Is Becoming Insurable

January 14, 2026
5 min read

AI governance is entering a new phase.

At the Mozilla Ventures Portfolio Convening, leaders from AI governance, assurance, and insurance came together to challenge a familiar pattern: governance that looks robust on paper but fails under real-world pressure. The resulting discussions, captured in Mozilla Ventures’ Portfolio Convening Report 2025, make one thing clear: AI governance is no longer a theoretical exercise. It is becoming a prerequisite for trust, deployment, and insurance.

Armilla AI was proud to participate in this conversation, with Philip Dawson, Head of AI Policy at Armilla AI, joining the panel to explore how governance intersects with financial risk, enterprise accountability, and AI insurance.

Governance That Lives Beyond the Policy Binder

Section 2.1 of the report, “AI Governance: From Paper Policies to Real Oversight,” focuses on a core problem facing enterprises today: governance is often treated as documentation rather than infrastructure.

The panel rejected the idea that governance lives in slide decks, certifications, or compliance checklists. Instead, they emphasized that real governance must be embedded directly into how AI systems are designed, deployed, monitored, and insured.

As Philip Dawson explained during the session, governance is frequently positioned as a blocker, a function that slows innovation and adds cost. In practice, the opposite is true.

“Governance is a reactive necessity… when actually there’s a much broader array of objectives and other reasons, commercial as well.”
Philip Dawson, Head of AI Policy, Armilla AI

This reframing is critical. When governance is treated as an operational capability rather than an afterthought, it enables organizations to deploy AI systems that would otherwise be too risky to launch.

Why AI Governance Now Determines Insurability

One of the most important contributions from Armilla’s perspective was the explicit link between governance maturity and insurability.

From an insurer’s point of view, governance gaps are not abstract concerns; they are risk signals.

The report highlights a recurring issue we encounter when working with enterprises: many organizations cannot clearly answer basic questions about their AI footprint. Where are AI systems deployed? Who owns them? What are they doing in production? How are incidents detected and escalated?

When those answers are missing, financial exposure becomes impossible to quantify.

Dawson described how this lack of visibility directly affects underwriting outcomes. Applications for AI insurance have been declined not because the models were inherently unsafe, but because governance and monitoring processes were too thin to support risk transfer.

In other words, AI insurance is forcing governance to move from theory into practice.

As insurers, risk committees, and procurement teams increasingly demand evidence rather than intentions, governance is becoming a hard requirement, not just a best practice.

The Limits of Standards Alone

The discussion also addressed a growing tension in the AI ecosystem: the gap between certification and outcomes.

Frameworks such as the NIST AI RMF and ISO/IEC 42001 play an important role, but the panel warned against treating them as guarantees of safety or fairness. It is possible to pass a conformity assessment while leaving material uncertainties unresolved at the system level.

Dawson emphasized that most assessments today focus on whether processes exist, not whether systems perform acceptably under real conditions. Performance thresholds, bias tolerance, and system-level behavior are often left undefined.

This creates the risk of what the panel described as “certification washing” — relying on standards as reputational shields rather than tools for accountability.

For Armilla, this gap reinforces why insurance underwriting cannot rely on checklists alone. Insurable AI requires evidence that systems are tested, monitored, and governed over time, not just certified at a single point.

Key Takeaways for Enterprises and Risk Leaders

  • Governance cannot be reduced to paperwork. It must be embedded in product design, deployment, and monitoring.

  • AI governance is an enabler of innovation, not an obstacle. It unlocks higher-stakes deployment and accelerates insurance approval.

  • Inventory and visibility are non-negotiable. Organizations must know where AI is used and how it behaves in production.

  • Standards are necessary but insufficient. Outcomes, thresholds, and system-level impacts matter.

  • Governance must be collaborative and cross-functional, spanning builders, risk teams, insurers, and policymakers.

  • Regulation alone will not solve AI risk. Insurance, procurement pressure, and industry-driven best practices will play a decisive role.

As Dawson noted toward the end of the discussion, the industry must also prepare for what comes next:

“In 2030… the biggest topic will be incident response and systemic risks.”
Philip Dawson, Head of AI Policy, Armilla AI

Armilla AI and Mozilla Ventures

Armilla AI participated in the convening as one of Mozilla Ventures’ portfolio companies, sharing a commitment to responsible, trustworthy AI.

Mozilla Ventures exists to support companies building technology that advances human agency, transparency, and accountability, particularly in areas where markets alone fail to protect the public interest. Its mission aligns closely with Armilla’s own goals: ensuring that AI systems can be deployed confidently, responsibly, and with clear mechanisms for accountability when things go wrong.

By connecting governance, evaluation, and insurance, Armilla works to make trust in AI operational, not aspirational.

Read the full Mozilla Ventures Portfolio Convening Report 2025 here:
https://mozilla.vc/wp-content/uploads/2026/01/Mozilla-Ventures-Portfolio-Convening-Report-2025.pdf

Read the CNBC report on Mozilla Ventures building an AI ‘rebel alliance’ by deploying its reserves to support “mission driven” companies, with a particular focus on AI.


Ready to Insure Your AI?

Armilla’s Affirmative AI Coverage is your fail-safe against fast-evolving AI risks. We combine deep technological insight with robust insurance solutions so you can focus on innovation without interruption.
Get in Touch