The EU AI Act Is Now Enforceable — Does Your AI System Have the Documentation to Prove Compliance?

The EU AI Act has moved from proposal to enforcement: its obligations for high-risk systems apply from 2026. High-risk AI systems now require risk assessments, bias audits, transparency documentation, and human oversight mechanisms, with penalties of up to 7% of global annual revenue. The NIST AI Risk Management Framework and ISO/IEC 42001 provide complementary voluntary frameworks increasingly expected by enterprise buyers and investors. This guide maps regulatory requirements to engineering tasks, provides an audit checklist, and explains what documentation you actually need, distinguishing between what regulations require, what is considered best practice, and what is unnecessary compliance theater.

Regulatory Landscape — The Three Frameworks

Framework Comparison

| Dimension | EU AI Act | NIST AI RMF | ISO 42001 |
| --- | --- | --- | --- |
| Type | Mandatory regulation | Voluntary framework | Certifiable standard |
| Jurisdiction | EU (affects global companies serving EU) | US (influences global practices) | International |
| Enforcement | Government authority + penalties | Self-assessment | Third-party audit |
| Penalty | Up to 7% global revenue | None (voluntary) | Loss of certification |
| Risk classification | 4 levels (Unacceptable, High, Limited, Minimal) | Custom risk assessment | Organization-defined |
| Primary focus | Safety, fairness, transparency | Risk identification and management | AI management system |
| Documentation burden | High for high-risk; low for minimal risk | Moderate (proportional to risk) | High (certification requirement) |
| Maturity | Enforcement 2026 | Version 1.0 (2023) | Published 2023, certifications 2024+ |

Which Frameworks Apply to Your System

| Your situation | EU AI Act | NIST AI RMF | ISO 42001 |
| --- | --- | --- | --- |
| Serving EU users | Mandatory | Recommended | Optional (but signals maturity) |
| US-only, no regulated industry | Not applicable | Recommended | Optional |
| US-only, regulated industry (finance/healthcare) | Not applicable | Strongly recommended | Recommended |
| Selling to enterprise customers | May be contractually required | Often referenced in RFPs | Increasingly requested |
| Startup, pre-revenue | Assess applicability now; comply before scaling | Good foundation | Too expensive until Series A+ |

EU AI Act — The Compliance Requirements

Risk Classification Decision Tree

| Question | If yes | If no |
| --- | --- | --- |
| Does the system manipulate behavior, exploit vulnerabilities, or conduct social scoring? | Unacceptable risk — banned | Continue |
| Is it used for: biometric identification, critical infrastructure, education/employment decisions, law enforcement, migration, or access to essential services? | High risk — full compliance required | Continue |
| Does the system interact directly with users (chatbot, content generation)? | Limited risk — transparency obligations | Minimal risk — voluntary codes of practice |
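The decision tree above can be sketched as a small classifier. The boolean inputs are hypothetical flags you would determine per system; this is an illustration of the ordering of the questions, not legal advice:

```python
def classify_risk(manipulates_behavior: bool,
                  high_risk_use_case: bool,
                  interacts_with_users: bool) -> str:
    """Walk the three questions in order and return the EU AI Act risk tier."""
    if manipulates_behavior:       # manipulation, exploitation, social scoring
        return "unacceptable"      # banned outright
    if high_risk_use_case:         # biometrics, hiring, law enforcement, etc.
        return "high"              # full Art. 9-15 compliance required
    if interacts_with_users:       # chatbots, content generation
        return "limited"           # transparency obligations
    return "minimal"               # voluntary codes of practice

# A customer-support chatbot with no high-risk use case:
print(classify_risk(False, False, True))  # limited
```

Note the ordering matters: a hiring tool that also chats with candidates is still high risk, because the second question is answered before the third.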

High-Risk AI System Requirements

| Requirement | EU AI Act article | What it means in practice | Audit evidence needed |
| --- | --- | --- | --- |
| Risk management system | Art. 9 | Documented process for identifying, evaluating, and mitigating AI-specific risks | Risk register, mitigation log, residual risk assessment |
| Data governance | Art. 10 | Training data documented, relevant, representative, and as error-free as possible | Data card, quality metrics, representativeness analysis |
| Technical documentation | Art. 11 | Complete description of system: design, development, testing, performance | Model card, architecture document, test reports |
| Record keeping | Art. 12 | Automatic logging of system operation enabling traceability | Log retention policy, audit trail, event logging architecture |
| Transparency | Art. 13 | Users can interpret outputs and understand the system | Explanation mechanism, user-facing documentation |
| Human oversight | Art. 14 | Humans can understand, monitor, and override the system | Override mechanism, monitoring dashboard, intervention process |
| Accuracy, robustness, cybersecurity | Art. 15 | System performs consistently and is protected against adversarial threats | Test results, security assessment, robustness testing |
| Bias testing | Art. 10(2)(f) | Measures to detect and mitigate bias, especially regarding protected groups | Bias audit report, fairness metrics, mitigation documentation |
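Article 12's record-keeping requirement in practice means structured, retained event logs. A minimal sketch of one traceability record as a JSON line; the field names are assumptions chosen to cover the evidence in the table, not fields mandated by the Act:

```python
import datetime
import json

def log_inference_event(model_version: str, input_hash: str,
                        output_summary: str, operator: str) -> str:
    """Emit one traceability record as a JSON line, suitable for
    append-only storage under a documented retention policy."""
    event = {
        # UTC timestamp makes events orderable across services
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,  # ties the event to the change log
        "input_hash": input_hash,        # hash rather than raw input, for privacy
        "output_summary": output_summary,
        "operator": operator,            # supports human-oversight audits (Art. 14)
    }
    return json.dumps(event)

line = log_inference_event("v2.3.1", "sha256:ab12", "loan_denied", "svc-account-7")
```

Writing JSON lines to append-only storage keeps the audit trail cheap to produce and easy to replay when an auditor asks for the history of a specific decision.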

Limited-Risk AI System Requirements

| Requirement | What it means | Implementation effort |
| --- | --- | --- |
| AI disclosure | Users must know they're interacting with AI | 1-2 engineering days (add disclosure text/badge) |
| Deepfake labeling | AI-generated content must be labeled | 1-3 engineering days (metadata + visual label) |
| Chatbot disclosure | Users must be informed they're chatting with AI, not a human | 1 engineering day (disclosure banner) |
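The chatbot-disclosure obligation usually reduces to a one-line wrapper around session start. A sketch; the wording and placement of the disclosure are product choices, not text prescribed by the Act:

```python
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def start_session(first_bot_message: str) -> list[str]:
    """Prepend the disclosure so it is the first message users see."""
    return [AI_DISCLOSURE, first_bot_message]

messages = start_session("Hi! How can I help today?")
```

Putting the disclosure in the message stream (rather than only in a footer) also leaves evidence in the conversation logs that the obligation was met.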

NIST AI Risk Management Framework — The Practical Guide

The NIST AI RMF is organized around four functions:

GOVERN — Organizational Risk Culture

| Practice | What to document | Who's responsible |
| --- | --- | --- |
| AI governance policy | Roles, responsibilities, decision rights for AI systems | Executive leadership |
| Risk tolerance thresholds | What level of AI risk the organization accepts | Risk committee or equivalent |
| Impact assessment process | How AI systems are evaluated before deployment | Product + legal + engineering |
| Stakeholder engagement | How affected parties are consulted | Product management |

MAP — Risk Identification

| Practice | What to document | Output |
| --- | --- | --- |
| System purpose and context | What the AI does, who it affects, in what context | Context document |
| Known limitations | What the AI cannot do reliably | Limitation inventory |
| Potential harms | How the AI could cause harm (direct, indirect, systemic) | Harm taxonomy |
| Stakeholder impacts | Who is affected and how, including underserved populations | Impact assessment |

MEASURE — Risk Assessment

| Practice | What to measure | Method |
| --- | --- | --- |
| Accuracy and reliability | Task-specific quality metrics on representative data | Task-specific evaluation (see evaluation guide) |
| Fairness and bias | Fairness metrics across protected groups | Bias audit (see bias detection guide) |
| Robustness | Performance under adversarial and out-of-distribution inputs | Red team testing (see safety testing guide) |
| Transparency and explainability | Can decisions be explained to stakeholders | Explainability assessment |
| Privacy | Data handling, consent, retention, minimization | Privacy impact assessment |
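Fairness measurement typically starts with per-group rates on labeled predictions. A minimal demographic-parity sketch; the metric choice is an assumption for illustration, and a real bias audit would compute several metrics (equalized odds, calibration) with confidence intervals:

```python
from collections import defaultdict

def demographic_parity_gap(records):
    """records: iterable of (group, predicted_positive) pairs.
    Returns the largest difference in positive-prediction rate
    between any two groups (0.0 means perfectly equal rates)."""
    pos = defaultdict(int)
    total = defaultdict(int)
    for group, positive in records:
        total[group] += 1
        pos[group] += int(positive)
    rates = {g: pos[g] / total[g] for g in total}
    return max(rates.values()) - min(rates.values())

data = [("a", True), ("a", True), ("a", False),
        ("b", True), ("b", False), ("b", False)]
gap = demographic_parity_gap(data)  # |2/3 - 1/3| = 1/3
```

Saving the computed metrics with a timestamp and model version turns each run into audit evidence for the bias-testing requirement.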

MANAGE — Risk Treatment

| Practice | What to document | Frequency |
| --- | --- | --- |
| Risk mitigation actions | What was done to reduce each identified risk | Per risk |
| Residual risk acceptance | What risk remains after mitigation and why it's acceptable | Per risk |
| Monitoring plan | How risks are tracked in production | Ongoing |
| Incident response | How AI-caused incidents are handled | Event-driven |
| Decommission plan | How the AI system is safely retired | End-of-life |
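The MANAGE documentation above maps naturally onto a structured risk register. A sketch with assumed field names; the key point is that mitigation, residual risk, sign-off, and monitoring live on the same record:

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One row of a risk register covering MANAGE's documentation needs."""
    risk_id: str
    description: str
    mitigation: str       # risk mitigation actions taken
    residual_risk: str    # what remains after mitigation
    accepted_by: str      # who signed off on the residual risk
    monitoring: str       # how the risk is tracked in production
    status: str = "open"

register = [
    RiskEntry("R-001",
              "Model under-performs on non-English queries",
              "Added multilingual eval set; fine-tuned on translated data",
              "Small residual accuracy gap on low-resource languages",
              "Head of ML",
              "Weekly per-language accuracy dashboard"),
]
```

Keeping the register as structured data (rather than a free-form document) makes it trivial to report open risks per owner or export the register as audit evidence.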

ISO 42001 — The Management System Standard

ISO 42001 requires an AI Management System (AIMS) — a formal, documented management system specifically for AI:

Required Documentation

| Document | Purpose | Approximate effort |
| --- | --- | --- |
| AI policy | Organization's commitment to responsible AI | 2-5 pages, executive sign-off |
| Scope statement | Which AI systems are covered | 1-2 pages |
| Risk assessment methodology | How AI risks are identified and evaluated | 5-10 pages |
| Statement of applicability | Which ISO 42001 controls apply and which don't | 3-5 pages |
| AI impact assessment | Impact on individuals, groups, and society | 10-20 pages per high-risk system |
| Data management procedures | Training data governance | 5-10 pages |
| Testing and validation procedures | How AI systems are tested before deployment | 5-10 pages |
| Monitoring and measurement procedures | How AI performance is tracked in production | 5-10 pages |
| Incident management procedure | How AI incidents are handled | 3-5 pages |
| Internal audit procedure | How compliance is verified internally | 3-5 pages |
| Management review records | Evidence of leadership engagement | Meeting minutes, quarterly |

Total documentation effort: 60-120 pages for the management system, plus per-system documentation (model cards, test reports, risk assessments).

Certification cost: Third-party ISO 42001 audit typically costs $15,000-50,000 depending on organization size and number of AI systems. Annual surveillance audits: $8,000-25,000.

The Audit Checklist — Cross-Framework

This checklist maps requirements across all three frameworks. Items marked “Required” are mandatory under the applicable framework; “Recommended” items are best practice.

| # | Audit item | EU AI Act (high-risk) | NIST AI RMF | ISO 42001 |
| --- | --- | --- | --- | --- |
| 1 | Risk classification documented | Required (Art. 6) | Required (MAP) | Required (§6.1) |
| 2 | Risk management system established | Required (Art. 9) | Required (GOVERN) | Required (§6.1) |
| 3 | Training data documented | Required (Art. 10) | Required (MAP 2.3) | Required (Annex B) |
| 4 | Bias testing performed | Required (Art. 10(2)(f)) | Required (MEASURE 2.6) | Required (Annex B) |
| 5 | Technical documentation complete | Required (Art. 11) | Recommended | Required (§7.5) |
| 6 | Automatic logging operational | Required (Art. 12) | Recommended | Required (§8.1) |
| 7 | Transparency mechanism in place | Required (Art. 13) | Required (MEASURE 2.11) | Required (Annex B) |
| 8 | Human oversight mechanism | Required (Art. 14) | Recommended (MANAGE 4.1) | Required (Annex B) |
| 9 | Accuracy tested on representative data | Required (Art. 15) | Required (MEASURE 2.5) | Required (§8.1) |
| 10 | Robustness/adversarial testing | Required (Art. 15) | Required (MEASURE 2.7) | Recommended |
| 11 | Cybersecurity assessment | Required (Art. 15) | Required (MANAGE 2.3) | Required (§6.1) |
| 12 | Incident response plan | Required (Art. 73) | Required (MANAGE 4.2) | Required (§10.2) |
| 13 | Post-market monitoring | Required (Art. 72) | Required (MANAGE 1.1) | Required (§9.1) |
| 14 | Conformity assessment | Required (Art. 43) | N/A | Third-party audit |
| 15 | EU database registration | Required (Art. 49) | N/A | N/A |

Evidence Collection — What Auditors Actually Look For

| Evidence type | What it proves | How to collect it |
| --- | --- | --- |
| Model card | System is documented per Art. 11 | Maintain in version control, update with every model change |
| Test reports | Accuracy and robustness tested per Art. 15 | Automated test pipelines with saved results |
| Bias audit report | Fairness testing performed per Art. 10 | Scheduled bias evaluation with saved metrics |
| Risk register | Risks identified and managed per Art. 9 | Maintained document with risk owners and status |
| Monitoring dashboards | Production monitoring in place per Art. 72 | Screenshots or exports showing ongoing measurement |
| Incident logs | Incident response functional per Art. 73 | Incident tickets with timeline, resolution, root cause |
| Override logs | Human oversight functional per Art. 14 | Logs showing human interventions and overrides |
| Change log | Traceability per Art. 12 | Version control history for model, prompts, guardrails |
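Keeping the model card in version control is easiest when it is a plain data file regenerated on every model change. A minimal sketch; the schema here is an assumption for illustration and should be aligned with the contents Art. 11 actually requires:

```python
import json

def write_model_card(path, *, name, version, intended_use,
                     limitations, eval_results):
    """Serialize a minimal model card to JSON; commit the file
    alongside the model so its history doubles as a change log."""
    card = {
        "name": name,
        "version": version,
        "intended_use": intended_use,
        "limitations": limitations,    # feeds the NIST limitation inventory
        "eval_results": eval_results,  # ties to Art. 15 test reports
    }
    with open(path, "w") as f:
        json.dump(card, f, indent=2)
    return card

card = write_model_card("model_card.json",
                        name="support-triage",
                        version="2.3.1",
                        intended_use="Routing customer-support tickets",
                        limitations=["Not for medical or legal queries"],
                        eval_results={"accuracy": 0.91})
```

Because the card is regenerated per release, the version-control diff itself becomes part of the traceability evidence.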

How to Apply This

Use the token-counter tool to estimate evaluation pipeline costs — bias testing, accuracy measurement, and robustness testing all require inference calls.

Start with risk classification — determine which EU AI Act risk level applies to your system. This determines the scope of compliance required.

If high-risk: work through the 15-item cross-framework checklist systematically. Items 1-4 (risk classification, risk management, data documentation, bias testing) are the highest priority and the most commonly missing.

If limited-risk: implement transparency obligations (AI disclosure) — these are low effort and high impact.

Build evidence collection into your development process — post-hoc evidence gathering for an audit is 5-10x more expensive than continuous documentation.

Budget for ISO 42001 certification only after Series A+ — the documentation burden is real and premature certification diverts resources from building the AI system itself.

Honest Limitations

EU AI Act implementation guidance is still being published by the European Commission — specific requirements may be refined. NIST AI RMF is voluntary and self-assessed, meaning there’s no external validation of compliance claims. ISO 42001 certification costs are estimates based on early certification bodies; market pricing is still stabilizing. The cross-framework checklist covers the most common requirements but is not exhaustive — legal counsel should verify jurisdiction-specific obligations. Regulatory requirements apply to AI providers and deployers differently — this guide primarily addresses deployer obligations. The documentation effort estimates assume a single AI system; organizations with multiple AI systems face additional coordination overhead.