Methodology

From AI principles to operational reality

We design governance architectures that make responsible AI enforceable, train leaders to drive AI transformation, and build the execution infrastructure organizations need to adopt AI at scale.

Architecture Reference

The intellectual foundation behind this methodology — the Nested Governance Architecture™, MPBP Framework™, and Risk-Proportional Governance™ in full.

Read the NGA™ White Paper →
Level 01 · Diagnose

Find the layer where your governance
actually breaks down, not where you think it does.

There are five operational layers where governance either holds or fails: Risk-Tiered Capability Boundaries, Constraint Encoding, Human-Authority Mechanisms, Behavioral Monitoring, and Policy Versioning & Testing. Most organizations discover they have one or two that look functional and two or three with nothing structural behind them.

The assessment identifies which. Eight questions, immediate score, no login required. The score tells you where you stand. It does not tell you what that means for your organization, because that requires a conversation.

The Governance Debrief is that conversation: 60 minutes with Dr. Adetayo, working through your layer results and completing the second diagnostic that no self-service tool can produce. Where does governance sit structurally in your organization? Does it carry real authority to enforce what you build, or does that need to be addressed before anything else will stick?

That distinction — governance with structural authority versus governance that exists in name only — is why organizations spend money building things that get bypassed the first time a team moves fast on a deployment. The debrief names it. The written output documents it.

Regulatory & Risk Alignment
  • NIST AI RMF — Govern and Map functions: establishes the baseline for identifying where AI risk accountability is absent or unassigned
  • ISO/IEC 42001 — Gap analysis against AI management system requirements, surfacing where governance documentation diverges from operational practice
  • EU AI Act — Preliminary risk classification: identifying whether use cases in scope fall under prohibited, high-risk, or limited-risk categories before formal compliance work begins
  • OECD AI Principles — Accountability and transparency baseline: where structural gaps undermine the organization's ability to explain or justify AI decisions
Level 02 · Design

A roadmap that starts with what AI you actually have,
not what you assumed you had.

Before anything can be governed, it has to be inventoried. Most organizations discover during this engagement that AI use is more widespread than leadership knew, including tools not labeled as AI, approved by no one in particular, touching data with no classification. The inventory that should exist does not. That is the starting point.

The Gap Report maps that reality using the MPBP Framework: we Map what actually exists across the organization, Prioritize by actual consequence level rather than perceived risk, Build controls adapted from tested frameworks and calibrated to what your organization can operationalize, and identify which use cases are ready to Pilot governance in real conditions.

The primary deliverable is the Executive Heatmap: a portfolio-level view that shows which AI use cases can move forward now, which need redesign before they do, which are creating regulatory concentration risk that no one has named yet, and which are suitable governance pilots.

Regulatory & Risk Alignment
  • EU AI Act (Annex III) — High-risk use case classification and conformity assessment requirements, mapped against your actual AI inventory rather than assumed scope
  • Colorado SB24-205 — Developer and deployer obligations for algorithmic discrimination risk, including impact assessment triggers tied to deployment context
  • NIST AI RMF — Measure and Manage functions: consequence-based prioritization aligned to the RMF's risk treatment hierarchy
  • ISO/IEC 42001 — Implementation planning against management system requirements: controls mapped to what the organization can operationalize at current capacity
Level 03 · Operationalize

Governance that runs in your workflows,
not in a document your team forgot about.

The failure mode we see most often is not organizations that ignore AI governance. It is organizations that build something real — a policy, a committee, a framework — and then watch it get bypassed the first time a team moves fast on a deployment. Not because people are careless. Because governance had no operational home, no structural authority, and no enforcement condition that did not depend on someone remembering.

The Governance Architecture Engagement is built around a live deployment, not a hypothetical. That is the only way to test whether a governance architecture survives contact with how your organization actually operates. We design the control structure alongside real AI work: use case classification by consequence, control design proportionate to actual risk level, and operating model construction with defined ownership and enforcement conditions.

The output is a governance operating model your team owns. Not a policy update. Not a set of recommendations filed in a shared drive. A functioning architecture with a structural home, accountability at every layer, and enforcement conditions that do not require good intentions to hold.

Regulatory & Risk Alignment
  • EU AI Act — Operational requirements for high-risk AI systems: post-market monitoring obligations, human oversight mechanisms, and logging requirements embedded into the governance architecture
  • NIST AI RMF — Govern and Manage functions at the implementation layer: defined roles, enforcement conditions, and incident response pathways aligned to the RMF operating model
  • ISO/IEC 42001 — Continual improvement requirements: governance versioning, audit readiness, and management review cycles built into the operating model from day one
  • Colorado SB24-205 — Deployer obligations: impact assessment processes, consumer notification conditions, and risk management documentation requirements operationalized within the governance architecture