A comprehensive 58-control checklist across 9 compliance domains to help organisations achieve full conformity with the EU AI Act (Regulation (EU) 2024/1689). Covers AI system classification, prohibited practice screening, high-risk requirements, transparency obligations, data governance, human oversight, GPAI model compliance, risk management, and documentation requirements - mapped to specific Articles and Annexes of the regulation.
The EU AI Act imposes fines of up to 35 million EUR or 7% of global annual turnover (whichever is higher) for deploying prohibited AI practices under Article 5 - making compliance a board-level financial priority, not just a legal formality.
Enforcement is phased: prohibited practices have been enforceable since 2 February 2025, GPAI model obligations apply from 2 August 2025, and the full high-risk AI system requirements under Annex III take effect on 2 August 2026 - organisations need to be working toward compliance now, not waiting for deadlines.
High-risk AI systems listed in Annex III (covering employment, education, law enforcement, critical infrastructure, and more) must satisfy requirements across Articles 8-15 including risk management, data governance, technical documentation, record-keeping, transparency, human oversight, and accuracy - this checklist maps each obligation to its specific Article.
General-purpose AI (GPAI) model providers face distinct obligations under Articles 51-56, including technical documentation, downstream information sharing, copyright compliance, and - for models posing systemic risk - adversarial testing and incident reporting to the EU AI Office.
Deploying high-risk AI triggers operational duties: providers (and public-body deployers) must register the system in the EU database (Article 49), deployers must retain automatically generated logs for at least six months (Article 26), public bodies and providers of essential services must conduct a fundamental rights impact assessment before first use (Article 27), and all deployers must ensure meaningful human oversight (Article 14) - requirements that demand operational processes, not just documentation.
58 actionable controls across 9 compliance domains to achieve full conformity with Regulation (EU) 2024/1689.
Catalogue every AI system your organisation develops, deploys, or uses, and classify each by the EU AI Act's four risk tiers.
Screen all AI systems against the Article 5 prohibited practices list. Enforceable since 2 February 2025 with penalties up to 35M EUR or 7% of global turnover.
Map EU AI Act obligations to existing compliance programmes and establish cross-functional accountability for AI regulation conformity
Implement technical controls for high-risk AI systems including logging, access control, cybersecurity resilience, and serious incident reporting to market surveillance authorities under Article 73
Assess legal exposure across the EU AI Act risk tiers, review conformity assessment procedures, and prepare for market surveillance authority engagement
Build compliant AI development pipelines with proper technical documentation, data governance, bias testing, and human oversight mechanisms per Articles 8-15
Operationalise the risk management system required under Article 9, establish post-market monitoring per Article 72, and maintain the EU database registration under Article 49
AI systems intended for use as safety components of medical devices are classified as high-risk under Article 6(1), because the Medical Devices Regulation (MDR 2017/745) and In Vitro Diagnostic Regulation (IVDR 2017/746) are listed in Annex I. This includes AI-driven diagnostic tools, treatment recommendation engines, and clinical decision support systems. The AI Act conformity assessment is integrated into the existing CE marking process under Article 43.
AI used for creditworthiness assessment and credit scoring of natural persons is explicitly high-risk under Annex III, Section 5(b). AI systems evaluating insurance pricing, fraud detection affecting individuals, and algorithmic trading decisions also fall within scope. Financial institutions must align AI Act compliance with the Digital Operational Resilience Act (DORA) and MiFID II algorithmic trading requirements, particularly around human oversight (Article 14) and transparency (Article 13).
AI systems used in the administration of justice and democratic processes (Annex III, Section 8) and those influencing access to essential private and public services (Section 5) are high-risk. Legal tech AI for contract analysis, case outcome prediction, and dispute resolution must meet transparency obligations under Article 13 and human oversight under Article 14. Where AI produces legal effects on individuals, GDPR Article 22 automated decision-making protections apply in parallel.
AI systems used by public authorities for law enforcement (Annex III, Section 6), migration and border control (Section 7), and critical infrastructure management (Section 2) face the strictest requirements. Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes is prohibited under Article 5, subject to narrow exceptions. Government deployers must conduct fundamental rights impact assessments (Article 27) and register in the EU database (Article 49) before putting any high-risk system into service.
Catalogue every AI system your organisation develops, deploys, or uses, and classify each by the EU AI Act's four risk tiers: unacceptable (prohibited), high, limited, and minimal. Article 6 and Annex III define high-risk categories, while Article 5 sets out prohibited practices. Accurate classification is the foundation of all downstream compliance obligations.
Screen all AI systems against the Article 5 prohibited practices list. These have been enforceable since 2 February 2025 and carry the highest penalties under the regulation - up to 35 million EUR or 7% of global annual turnover. There are no grace periods or exceptions for organisations that were unaware of the prohibitions.
For each AI system classified as high-risk under Annex III or Article 6, implement the full set of mandatory requirements from Articles 8-15. These requirements are legally binding for providers placing systems on the EU market and for deployers using them. Full compliance is required by 2 August 2026 for Annex III systems.
Article 50 imposes transparency obligations on providers and deployers of certain AI systems regardless of risk classification. These requirements ensure that natural persons are informed when they interact with AI, when content is AI-generated, and when emotion recognition or biometric categorisation is used. Transparency obligations apply to all organisations using covered systems.
Take our 2-minute assessment and get a personalised AI governance readiness report with specific recommendations for your organisation.
Start Free Assessment
Article 10 establishes binding data governance requirements for high-risk AI systems. Training, validation, and testing datasets must meet strict quality criteria. These obligations aim to prevent biased, inaccurate, or discriminatory AI outputs and apply to both initial development and ongoing model updates.
Article 14 requires high-risk AI systems to be designed for effective human oversight, including the ability to fully understand the system, monitor its operation, and override or reverse outputs. Human oversight is not a formality - it must be meaningful, informed, and operationally effective. Deployers bear primary responsibility for ensuring oversight is actually exercised.
Articles 51-56 establish a distinct compliance regime for general-purpose AI models (foundation models, large language models). Obligations apply primarily to GPAI model providers but also affect downstream deployers who integrate GPAI into their systems. GPAI rules apply from 2 August 2025, with a transitional period for models already on the market.
The EU AI Act requires an ongoing, iterative risk management process throughout the AI system lifecycle (Article 9) and a post-market monitoring system proportionate to the nature of the technology and risks (Article 72). Providers of high-risk AI must also report serious incidents to market surveillance authorities (Article 73). These are continuous obligations, not one-time activities.
The EU AI Act requires comprehensive documentation, registration in the EU database, and record-keeping obligations that enable regulatory oversight and public accountability. Providers must prepare technical documentation before placing systems on the market (Article 11, Annex IV) and register high-risk systems in the EU database (Article 49), while the automatically generated logs required by Article 12 must be retained for defined periods (Articles 19 and 26).
Build a complete AI governance programme with these complementary templates.
A structured 48-item risk register with a 5x5 scoring matrix to help CISOs identify, assess, treat, and track AI-specific risks across 8 risk domains - including data privacy, model reliability, bias, security, compliance, operational, and reputational risk - with board-ready reporting dashboards.
Download Free
A ready-to-customise 52-provision AI acceptable use policy template covering 8 policy domains. Built for CISOs and compliance teams who need a professional, board-ready policy document that employees actually understand and follow. Maps to HIPAA, SOC 2, GDPR, EU AI Act, ISO 42001, and NIST AI RMF.
Download Free
A 56-control gap analysis checklist for ISO/IEC 42001:2023 AI Management Systems covering all normative clauses (4-10) plus Annex A controls. Designed for organisations preparing for AIMS certification, this checklist provides clause-by-clause conformity assessment, certification readiness scoring, remediation priority planning, and Stage 1/Stage 2 audit preparation guidance - mapped to specific sub-clauses and Annex A control objectives throughout.
Download Free
The EU AI Act creates binding obligations for AI systems in the European market. This guide covers risk tiers, compliance timelines, documentation requirements, and practical steps for mid-market companies.
A comprehensive guide to every major AI regulation in effect or pending in 2026, including the EU AI Act, NIST AI RMF, Colorado AI Act, UK principles, Australia Privacy Act amendments, and Singapore's Agentic AI framework. Comparison tables, enforcement dates, and penalties included.
The definitive AI compliance checklist for enterprises: 50 essential controls mapped across 12 regulatory frameworks including EU AI Act, NIST AI RMF, ISO 42001, GDPR, Colorado AI Act, and more. Prioritised by risk level with implementation guidance.
Fill in your details below for instant access to the full 20-page checklist.
“This framework saved us 3 months of policy development. We went from zero AI governance to audit-ready in under 2 weeks.”
— Security Leader, Mid-Market Healthcare Organisation
Need more than a checklist?
See how Areebi automates and enforces every control in this checklist across your entire organisation.
Book a Demo
The checklist tells you what to do. Areebi does it for you - automated DLP, audit logging, policy enforcement, and compliance reporting across every AI interaction.