About This Tool
AI Governance Readiness Assessment
Who Built This
This tool was designed and built by Krystal Martinez, an AI governance and data systems professional based in the United States. Krystal works at the intersection of AI adoption, compliance risk, and organizational readiness — helping organizations understand their exposure and build practical governance programs.
This assessment was created to fill a gap: most governance checklists are either vendor-gated, dependent on consultants, or generic in their output. This tool is free, takes about five minutes, and gives you a scored, actionable result grounded in real regulatory requirements.
Methodology
The assessment evaluates AI governance readiness across five categories using 18 weighted questions:
- AI Inventory — Can your organization see all the AI tools in use, including unsanctioned ones?
- Policy — Do you have documented governance that covers approval, roles, and communication?
- Risk Management — Are AI-specific risks formally assessed and controlled?
- Compliance — Are you tracking and addressing applicable regulatory requirements?
- Human Oversight — Are consequential AI decisions subject to meaningful human review?
Each question is scored 0–3 (no capability → partial → substantial → full capability), weighted by governance importance, and aggregated into a category score (0–100%). The overall score is the average of all category scores. Risk levels:
- Critical (0–25%) — Immediate action required
- High (26–50%) — Material gaps; prioritized remediation needed
- Moderate (51–75%) — Partial controls in place; improvement plan recommended
- Low (76–100%) — Solid foundation; focus on maintenance and continuous improvement
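The scoring model described above can be sketched in a few lines. This is an illustrative reconstruction, not the tool's actual implementation: the question weights and sample answers below are invented for the example, and the tool itself runs client-side in the browser rather than in Python.

```python
def category_score(answers):
    """answers: list of (score, weight) pairs, score in 0-3.

    Returns the weighted percentage of the maximum possible score (0-100).
    """
    earned = sum(score * weight for score, weight in answers)
    possible = sum(3 * weight for _, weight in answers)
    return 100.0 * earned / possible

def risk_level(overall):
    """Map an overall 0-100 score to the risk bands listed above."""
    if overall <= 25:
        return "Critical"
    if overall <= 50:
        return "High"
    if overall <= 75:
        return "Moderate"
    return "Low"

# Hypothetical responses: two of the five categories, with made-up weights.
categories = {
    "AI Inventory": [(2, 2), (1, 1), (3, 1)],  # partial/partial/full capability
    "Policy": [(0, 2), (1, 1)],                # no capability / partial
}

scores = {name: category_score(a) for name, a in categories.items()}
overall = sum(scores.values()) / len(scores)   # overall = mean of category scores
print(round(overall, 1), risk_level(overall))
```

With these sample answers, AI Inventory scores about 67% and Policy about 11%, for an overall score near 39%, which lands in the High band.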
Governance Frameworks Referenced
EU AI Act (Regulation (EU) 2024/1689)
The European Union's comprehensive AI regulation. High-risk system requirements take effect August 2, 2026. Covers conformity assessments, technical documentation, human oversight, transparency, and post-market monitoring for AI systems in high-risk categories.
Colorado AI Act (SB 24-205)
The first comprehensive US state AI law, effective June 30, 2026 (subject to legislative amendment). Requires risk assessments, disclosure obligations, and appeal mechanisms for algorithmic decision-making in consequential decisions (employment, credit, education, housing).
NIST AI Risk Management Framework (AI RMF 1.0)
Published by the National Institute of Standards and Technology in January 2023. Provides a voluntary, flexible framework for organizations to manage AI risks across the GOVERN, MAP, MEASURE, and MANAGE functions. Used in this assessment as a structural reference for governance maturity.
OWASP LLM Top 10 (2025)
The Open Worldwide Application Security Project's top 10 security risks for large language model applications. Covers prompt injection, sensitive information disclosure, supply chain vulnerabilities, data and model poisoning, and unbounded consumption — the most commonly exploited attack surfaces in production AI systems.
Privacy by Design
This tool is 100% client-side. Your assessment responses, scores, and PDF report are processed entirely within your browser. Nothing is transmitted to any server. No account is required. No cookies are set by this application.
Limitations & Disclaimer
This assessment is a self-reported diagnostic tool, not a formal audit or legal compliance determination. Results reflect your answers as provided. The tool cannot verify claims, audit systems, or account for jurisdiction-specific variations in regulatory interpretation.
This tool does not constitute legal advice. Organizations facing significant AI regulatory exposure should engage qualified legal counsel and compliance professionals.