A Framework for Responsible AI in Practice
Governments and enterprises around the world are discovering the same thing: AI governance principles are the easy part. The hard part is translating them into operational frameworks that teams can actually execute. Through my research at the Center for AI and Digital Policy (CAIDP), I have identified the patterns that separate governance architectures that work from those that sit on a shelf.
This paper introduces the Nested Governance Architecture™, a design pattern for embedding AI governance within broader organizational transformation so that governance operates as a built-in standard rather than an isolated compliance exercise. It includes the MPBP Framework™ (Map, Prioritize, Build, Pilot), the implementation methodology for moving from governance principles to operational frameworks, and a Risk-Proportional Governance™ model that matches governance intensity to actual impact.
To demonstrate how the architecture applies in practice, this paper uses Missouri's AI Executive Orders (26-02 and 26-03) as its primary case study.
On January 13, 2026, Missouri Governor Mike Kehoe signed Executive Order 26-02, directing four state departments to develop frameworks for the safe and effective integration of Artificial Intelligence within state government operations. The same day, he signed Executive Order 26-03, establishing the Missouri GREAT initiative, a government-wide efficiency and transformation program that explicitly ties AI adoption to its safety and security standards.
Together, these orders did something distinctive. Rather than treating AI as a standalone policy problem, Missouri embedded AI governance inside a broader government transformation agenda. The departments tasked with building Missouri's AI future (the Office of Administration, Economic Development, Natural Resources, and Higher Education and Workforce Development) now have until November 30, 2026 to deliver recommendations.
That's an ambitious timeline. And the question facing each department is no longer whether Missouri should govern AI, but how to translate five governance principles into operational frameworks that state employees, citizens, and businesses can actually use. The Nested Governance Architecture™ and MPBP Framework™ provide one structured answer to that question.
Most governments that have taken action on AI governance have done so through a single mechanism: an executive order focused narrowly on AI, or a legislative bill addressing specific AI applications. This creates a structural weakness. When AI governance exists as a standalone layer, it competes for attention with operational priorities, and operational priorities usually win.
The Nested Governance Architecture™ solves this by embedding AI-specific governance principles within a broader operational transformation agenda rather than standing them up as a separate compliance function. The architecture creates accountability at multiple levels: the outer layer sets the organizational mission and transformation goals, while the inner layer establishes AI-specific guardrails that operate within that mission context. This design prevents AI governance from becoming an isolated function that organizational leaders can deprioritize or route around.
Missouri's executive orders exhibit this architecture.
Executive Order 26-02 establishes the AI-specific governance principles (the inner layer). It directs four departments to develop frameworks organized around five governance pillars.
Beyond these five pillars, EO 26-02 also addresses AI's infrastructure demands, directing the Department of Natural Resources and the Public Service Commission to ensure data center energy demands don't drive up rates for residential and small business customers. And it directs the Department of Higher Education and Workforce Development to evaluate AI education programs and launch workforce training initiatives. These additions demonstrate systems-level thinking: AI governance isn't just about the technology, it's about the infrastructure, the workforce, and the economic ecosystem around it.
Executive Order 26-03 then wraps this inside the broader Missouri GREAT initiative (the outer layer). Its Section III.3.c explicitly states that departments exploring AI applications must adhere to the safety and security standards established in EO 26-02. This is the Nested Governance Architecture™ at work: the GREAT initiative sets the operational transformation agenda, and EO 26-02's AI governance principles function as guardrails embedded within that agenda.
Nested Governance Architecture™: AI governance is not a siloed compliance exercise. It's embedded within how Missouri intends to operate. Departments encounter AI governance as a built-in standard, not an afterthought.
The Nested Governance Architecture™ is what makes Missouri's approach structurally sound. Because governance is nested within the transformation agenda itself, departments working on efficiency, business partnerships, and modernization cannot route around it: it is part of the work, not a separate review they must seek out.
But architecture alone does not produce outcomes. The next question is implementation: how do Missouri's departments translate this structure into operational governance that their teams can execute against a November 2026 deadline? That requires an implementation methodology and an operational decision framework, both of which sit within the Nested Governance Architecture™.
The distance between governance principles and operational frameworks is significant. Every jurisdiction that has committed to AI governance discovers this: the principles are the easy part. The hard part is translating them into something people can actually use when they're deciding whether to deploy an AI tool in a state agency next Tuesday morning.
The Nested Governance Architecture™ provides the structural foundation. But architecture without implementation is just a diagram. Missouri's four departments, like any organization implementing AI governance, now face the execution question: How do you operationalize it?
Through my research at CAIDP, several patterns have emerged about what accelerates this transition and what creates friction:
Jurisdictions that start by mapping what already exists move faster. Before building new frameworks, understanding which AI tools are already in use, which decisions they support, and what data they access creates the foundation everything else builds on. Jurisdictions that skip mapping end up building frameworks for theoretical scenarios while real AI use grows ungoverned.
Jurisdictions that prioritize by impact rather than trying to govern everything at once get further. Not all AI applications carry the same risk. An AI tool that automates scheduling is different from one that supports eligibility determinations. Prioritizing which use cases need robust governance first, and which can operate under lighter standards, prevents the framework from becoming so comprehensive that nothing gets implemented.
Jurisdictions that learn from existing frameworks rather than starting from scratch avoid unnecessary delay. The NIST AI Risk Management Framework, the OECD AI Principles, and the governance structures emerging from the EU AI Act have all been tested at scale. They offer structures, terminology, and assessment methods that Missouri's departments could adapt rather than reinvent, freeing time and energy for the Missouri-specific questions that no external framework can answer.
Jurisdictions that build governance around real deployments, not hypothetical ones, produce frameworks that survive contact with reality. A governance framework developed in isolation, then handed to agencies for implementation, almost always requires significant revision once it meets actual workflows. A framework developed alongside a real pilot deployment embeds practical wisdom from the start.
These four patterns, Map, Prioritize, Build, and Pilot, are not random observations. They represent a consistent implementation sequence that I have codified into the MPBP Framework™, the implementation methodology within the Nested Governance Architecture™.
The MPBP Framework™ provides a structured, four-phase pathway from governance principles to operational governance. Applied to Missouri, it offers one approach the departments could consider as they build toward the November 30, 2026 deadline: practical, phased, and aligned with the five governance pillars in EO 26-02. Each phase guards against a specific failure mode:
Without mapping: Frameworks get built for theoretical scenarios while real AI use grows ungoverned.
Without prioritization: Governance becomes so comprehensive that nothing gets implemented.
Without leverage: Teams spend scarce time reinventing what other jurisdictions have already solved.
Without pilots: Frameworks don't survive contact with reality, producing documents that sit on a shelf.
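The four phases and the failure modes they guard against can be sketched as a small lookup. This is purely illustrative; the names and ordering come from the framework description above, and the helper function is a hypothetical convenience, not part of the MPBP Framework™ itself.

```python
# Illustrative sketch: each MPBP phase paired with the failure mode
# it guards against, as summarized above.
MPBP_PHASES = [
    ("Map", "frameworks built for theoretical scenarios"),
    ("Prioritize", "governance too comprehensive to implement"),
    ("Build", "reinventing what existing frameworks already solve"),
    ("Pilot", "frameworks that don't survive contact with reality"),
]

def next_phase(current: str) -> str:
    """Return the phase that follows `current` in the MPBP sequence."""
    names = [name for name, _ in MPBP_PHASES]
    i = names.index(current)
    return names[i + 1] if i + 1 < len(names) else "done"
```

The point of the ordering is that each phase produces the input the next one needs: the map feeds prioritization, priorities shape what gets built, and the pilot tests what was built.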
Phase 1: Map

Before building governance, understand what you're governing.
Each department conducts an inventory of its current and planned AI use: which tools are already deployed or planned, which decisions they support, and what data they access.
This mapping doesn't need to be exhaustive to be useful. A working inventory, even an imperfect one, is dramatically more valuable than a theoretical governance framework built without visibility into actual AI use. The inventory also naturally surfaces the use cases that most urgently need governance attention.
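As a minimal sketch of what a working inventory might look like, the structure below uses hypothetical field names; EO 26-02 does not prescribe a schema, and the two example entries are drawn from the use cases discussed later in this paper.

```python
from dataclasses import dataclass, field

# Illustrative sketch of an AI-use inventory entry. Field names are
# assumptions for illustration, not a schema prescribed by EO 26-02.
@dataclass
class AIUseEntry:
    department: str           # owning department
    tool: str                 # AI tool or system
    decision_supported: str   # the decision or task the tool supports
    data_accessed: list = field(default_factory=list)  # data sources touched
    status: str = "in_use"    # "in_use" or "planned"

# A working inventory is just a list of entries; even a partial one
# gives governance work a concrete starting point.
inventory = [
    AIUseEntry("Office of Administration", "RFP summarizer",
               "procurement review", ["MissouriBUYS submissions"]),
    AIUseEntry("Natural Resources", "permit pre-screener",
               "permit completeness check", ["MoGEM applications"],
               status="planned"),
]

# The inventory immediately answers basic governance questions,
# e.g. which AI use is live today versus merely planned.
in_use = [e for e in inventory if e.status == "in_use"]
```

Even this crude structure surfaces the questions that matter for the next phase: which entries touch citizen-facing decisions, and which data they draw on.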
The OECD AI Policy Observatory maintains a repository of how member countries have approached AI use inventories within government. Korea's Framework Act on AI requires ongoing monitoring of AI deployment. The EU AI Act mandates registration of high-risk AI systems. Missouri's departments could draw on these models as reference points for designing their own inventory approach.
Phase 2: Prioritize

Not all AI use cases need the same level of governance.
The Prioritize phase of the MPBP Framework™ introduces the Risk-Proportional Governance™ model, a three-tier AI impact classification that determines how much governance each AI application requires. Once the landscape is mapped, departments categorize AI applications by the level of impact they have on Missourians:
Tier 1: Routine Automation. AI that automates internal administrative tasks with no direct citizen impact.

Governance: Lightweight. Department-level governance, standard quality checks, scheduled review cycles.

Missouri example: The Office of Administration's Division of Purchasing processes procurement for most state agencies through MissouriBUYS. AI summarizing RFP submissions or flagging vendor registration gaps would streamline high-volume internal operations with no citizen decision impact.

Tier 2: Decision Support. AI that informs human decisions affecting citizens or resource allocation.

Governance: Moderate. Human-in-the-loop requirements, documented accountability at the decision point, and privacy review before deployment.

Missouri example: The Department of Natural Resources processes hundreds of permit applications annually across air, water, and land reclamation programs, many still via PDF forms submitted through the MoGEM portal. AI pre-screening applications for completeness before a permit writer's review would support, not replace, the human decision on each permit.

Tier 3: Direct Citizen Impact. AI that directly affects citizen outcomes, rights, or access to services.

Governance: Robust. Full governance architecture applies: all five pillars active. Oversight at the decision point, citizen-facing transparency, validated data quality, and a documented mechanism for contesting automated outcomes.

Missouri example: The Department of Higher Education and Workforce Development processes Access Missouri Grant eligibility for over 40,000 students annually, disbursing more than $81 million in need-based aid. AI introduced into eligibility verification, fraud detection, or outreach targeting would directly affect whether individual Missourians can afford higher education, requiring all five governance pillars.
This tiered approach allows departments to allocate governance resources proportionally. Tier 1 applications can move quickly under lightweight standards. Tier 3 applications get the full governance architecture. This prevents the common failure mode where governance becomes so comprehensive that even routine automation requires months of review, which drives agencies to adopt AI informally and ungoverned.
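The classification logic behind the three tiers described above can be sketched in a few lines. The input flags and governance summaries are assumptions for illustration; in practice, each department's own assessment criteria would drive the classification.

```python
# Illustrative sketch of the three-tier classification logic. The flag
# names and governance summaries are illustrative assumptions, not
# official Risk-Proportional Governance™ criteria.
GOVERNANCE = {
    1: "lightweight: department-level governance, standard quality checks",
    2: "moderate: human-in-the-loop, documented accountability, privacy review",
    3: "robust: all five pillars, transparency, contestability mechanism",
}

def classify_tier(directly_affects_citizens: bool,
                  informs_human_decisions: bool) -> int:
    """Map an AI use case to a governance tier by its citizen impact."""
    if directly_affects_citizens:
        return 3  # direct effect on outcomes, rights, or access to services
    if informs_human_decisions:
        return 2  # decision support affecting citizens or resources
    return 1      # internal routine automation

# Examples drawn from the Missouri cases discussed above
rfp_summarizer = classify_tier(False, False)    # internal automation
permit_screener = classify_tier(False, True)    # decision support
grant_eligibility = classify_tier(True, True)   # direct citizen impact
```

The design choice worth noting is that citizen impact dominates: a tool that directly affects outcomes lands in Tier 3 regardless of how routine its internal mechanics are.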
Phase 3: Build

Missouri's departments don't need to invent AI governance from scratch.
Several established frameworks offer tested structures that can be adapted for Missouri's context:
NIST AI Risk Management Framework: A voluntary framework organized around four functions (Govern, Map, Measure, Manage) with practical implementation guidance for each. It provides a structured methodology Missouri's departments could adapt for their governance framework design, particularly useful for operationalizing EO 26-02's five pillars into actionable processes.

OECD AI Principles: Five principles adopted by 46 countries: inclusive growth, human-centered values, transparency, robustness/security, accountability. EO 26-02's five pillars align closely with the OECD principles, so Missouri can benchmark its frameworks against an internationally recognized standard, which also supports the federal alignment goal.

EU AI Act risk classification: A tiered approach classifying AI systems by risk level (unacceptable, high, limited, minimal) with corresponding governance requirements. The Risk-Proportional Governance™ model in Phase 2 draws on this approach, and departments could study the EU's classification criteria to inform how Missouri categorizes its own AI use cases.

Nested Governance Architecture™: A governance design pattern that embeds AI-specific governance within broader organizational transformation agendas, with the MPBP Framework™ providing structured implementation and Risk-Proportional Governance™ providing the operational decision mechanism. Missouri's dual executive order structure already exhibits this architecture; the NGA™ provides the analytical framework for understanding why that structure works and the methodology for operationalizing it across all four departments.
Adapting these frameworks is not about importing foreign regulation. It's about learning from the substantial investment other jurisdictions have already made in solving the same operational questions Missouri's departments now face. The Missouri-specific questions, how these principles apply to Missouri's agencies, workforce, data systems, and citizen expectations, are what the departments are uniquely positioned to answer.
Phase 4: Pilot

Governance frameworks designed alongside real deployments outperform frameworks designed in isolation.
Rather than developing a comprehensive governance framework in theory and then implementing it, each department could identify one well-scoped AI pilot: a practical deployment where the governance framework is built and tested in real time.
The pilot approach also fits Missouri's timeline: governance can be tested and revised before the November 2026 reporting deadline rather than after.
A good pilot candidate for each department would be a Tier 1 or Tier 2 use case (routine automation or decision support) that serves a clear operational need, uses data the department already manages, and can be implemented with existing or readily available tools. High-impact applications (Tier 3) are better served by a more mature governance framework developed after the pilot provides lessons learned.
A governance framework tells departments what to do. A governance operating system tells them how to do it, who owns each step, and what evidence to produce.
The Nested Governance Architecture™ provides the design pattern. The MPBP Framework™ provides the implementation sequence. The Risk-Proportional Governance™ model provides the decision mechanism. What remains is the execution infrastructure: the operational layer that program managers, governance leads, and department staff would use when an AI use case arrives on their desk next Tuesday morning.
This operational layer is the bridge between governance on paper and governance in practice. It specifies who owns what, at what threshold oversight activates, and what evidence accumulates as use cases move through the system. Designed well, it becomes self-reinforcing. Each decision produces documentation that improves the next.
Building this layer effectively requires the context only the implementing organization can provide: its existing approval chains, its data governance posture, its tolerance for friction, and the specific use cases it is most urgently trying to govern. That context shapes every design choice. A governance operating system built without it is, at best, a generic template.
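One hedged sketch of what that execution layer might record is shown below. Every name here is a hypothetical assumption for illustration; the point is only the shape: each use case carries an owner, an oversight threshold, and an evidence log that grows as the case moves through review.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical sketch of the operating layer described above: who owns
# what, at what threshold oversight activates, and what evidence
# accumulates. All field and method names are illustrative assumptions,
# not a prescribed schema.
@dataclass
class UseCaseRecord:
    name: str
    owner: str            # who owns each step
    tier: int             # governance tier from the Prioritize phase
    evidence: list = field(default_factory=list)

    def log(self, step: str) -> None:
        """Append a dated evidence entry; each decision leaves a trail."""
        self.evidence.append(f"{date.today().isoformat()}: {step}")

    def oversight_required(self) -> bool:
        """Tiers 2 and 3 activate human oversight at the decision point."""
        return self.tier >= 2

record = UseCaseRecord("permit pre-screener", "DNR governance lead", tier=2)
record.log("classified as Tier 2 (decision support)")
record.log("privacy review completed")
```

The self-reinforcing property described above falls out of the structure: the evidence log produced by one use case becomes precedent and documentation for the next.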
Missouri was among the first states to take comprehensive executive action on AI governance in 2026. Being early is an advantage, but only if the execution matches the ambition.
If the four departments execute the MPBP Framework™ across its four phases, Missouri could arrive at the November 30, 2026 reporting deadline with governance frameworks that have already been tested against real deployments, not just documents that describe intent.
This outcome is ambitious but achievable. The executive orders provide clear direction. The Nested Governance Architecture™ and MPBP Framework™ offer a structured path. The existing global knowledge base, from NIST, OECD, and jurisdictions worldwide, provides tested tools. What Missouri's departments bring is the context, the operational knowledge, and the proximity to the citizens who will ultimately be served by AI governance done well.
Ultimately, this work is about ensuring that when organizations adopt AI, they do so with the governance architecture that protects the people they serve.
The Nested Governance Architecture™ provides a repeatable design pattern for any government or enterprise navigating this transition. The MPBP Framework™ provides the implementation sequence. The Risk-Proportional Governance™ model provides the decision mechanism. And the operating layer provides the execution infrastructure. Together, they form an integrated system for building governance that operates, not just complies.
Missouri's case demonstrates what this looks like in practice. Governor Kehoe's Executive Orders 26-02 and 26-03 set the direction. The governance principles are clear. The departments are tasked. The deadline is set. What happens between now and November 30, 2026 will determine whether Missouri builds an AI governance architecture that serves as a national model, or produces frameworks that sit on a shelf.
The difference comes down to execution. And execution, in governance as in technology, benefits from structure, learning from what others have done, and the willingness to build governance in practice rather than in theory.
The architecture is here. The methodology is here. The next step belongs to the organizations, and the people they serve, who will live with the governance they build.
Executive Order 26-02: Establishing Frameworks for the Safe and Effective Integration of Artificial Intelligence within State Government Operations. State of Missouri, January 13, 2026. Available at sos.mo.gov.
Executive Order 26-03: Establishing the Missouri GREAT Initiative. State of Missouri, January 13, 2026. Available at sos.mo.gov.
Missouri Office of Administration. Division of Purchasing, Division of Personnel, Information Technology Services Division. oa.mo.gov.
Missouri Department of Natural Resources. Permits, Certifications, Registrations and Licenses; Missouri Gateway for Environmental Management (MoGEM); Air Pollution Control Program; Water Protection Program; Land Reclamation Program. dnr.mo.gov.
Missouri Department of Economic Development. Division of Business and Community Solutions; Missouri One Start; Regional Engagement. ded.mo.gov.
Missouri Department of Higher Education and Workforce Development. Access Missouri Financial Assistance Program; Division of Workforce Development; Coordinating Board for Higher Education. dhewd.mo.gov.
National Institute of Standards and Technology. AI Risk Management Framework (AI RMF 1.0). NIST AI 100-1, January 2023. nist.gov/ai.
Organisation for Economic Co-operation and Development. OECD Principles on AI. OECD, May 2019, updated 2024. oecd.ai.
European Parliament and Council of the European Union. Regulation (EU) 2024/1689 (EU AI Act). Official Journal of the European Union, July 2024.
OECD AI Policy Observatory. National AI policies and strategies repository. oecd.ai/en/dashboards.
The White House. Executive Order 14179: Removing Barriers to American Leadership in Artificial Intelligence. January 23, 2025.
The White House. Executive Order 14365: Ensuring a National Policy Framework for Artificial Intelligence. December 11, 2025.
Office of Management and Budget. Memorandum M-25-21: Accelerating Federal Use of AI Through Innovation, Governance, and Public Trust. April 2025.
Center for AI and Digital Policy (CAIDP). Artificial Intelligence and Democratic Values Index. 2025 Edition. caidp.org.
CAIDP AI Policy Clinic. Research methodology and comparative governance analysis across OECD member states.
Autonomous AI is already operational. The organizational architecture to scale it does not exist.
The Nested Governance Architecture™ is built for this, embedding AI governance inside your organizational transformation so it functions as a built-in operating standard, not a compliance layer that gets bypassed the moment it creates friction.
When governance runs, every AI question in your organization has an answer.
When your board asks: "What is our AI strategy?"
You present an architecture, not an aspiration. Risk registers, accountability structures, and regulatory positioning built for your organization's context. The board question becomes a demonstration of organizational maturity, not a gap you are managing around.
When a vendor pitches an AI solution:
Every pitch lands inside a framework that already exists. Classification criteria, consequence tier, governance requirements: defined before the vendor called. You evaluate on fit, not on exposure you are discovering in real time.
When your team asks: "What does this mean for our roles?"
Your people get a structural answer: what AI can and cannot do in your context, where human authority is protected, and what oversight looks like for decisions that affect them. Not a town hall that manages anxiety. A governance framework that tells people where the lines are.
When regulators ask: "Show us your AI governance."
Documentation that reflects what actually operates, not what was intended when the policy was written. EU AI Act classification, accountability structures, audit trails. Evidence of governance, not evidence that governance was planned.
When the CEO asks: "Show me the ROI."
AI governance built inside your transformation mandate produces measurable outcomes from day one: success metrics, tracking frameworks, and a board reporting structure that connects AI initiatives to business value. Governance as an enablement function, not a cost center.
The Governance Readiness Assessment identifies which of the five operational layers is your actual breaking point. Eight questions, immediate score, no login required. Takes three minutes.
Organizations that complete the assessment and identify material gaps are offered a Governance Debrief: 60 minutes with Dr Adetayo, working through what your score means for your specific deployment context, where governance sits structurally in your organization, and what needs to happen first. A written diagnostic is delivered within 48 hours.
There is no proposal required to begin. The assessment is the entry point.