AI Compliance Checklist for Startups (2025)
Quick Facts About This Checklist
- Purpose: Comprehensive checklist for AI/ML compliance with EU AI Act, GDPR, and emerging US regulations
- Key Laws: EU AI Act (Regulation 2024/1689), GDPR, NIST AI RMF, US Executive Orders on AI, state AI laws
- Who Needs This: Any company developing, deploying, or using AI systems (especially ML models, automated decision-making, LLMs)
- Last Updated: September 2025 (reflects EU AI Act effective dates, GDPR AI guidance, ISO/IEC 42001:2023)
- Penalty Risk: Up to €35 million or 7% of global annual turnover for EU AI Act violations; €20 million or 4% for GDPR violations
- Format: Actionable checklist with Yes/No questions and action items
Why You Need an AI Compliance Checklist
AI compliance is critical for:
- Avoiding Massive Fines: EU AI Act fines up to €35M or 7% of global revenue; GDPR fines up to €20M or 4%
- Market Access: Required to deploy AI systems in EU (EU AI Act) and use personal data (GDPR)
- Trust and Reputation: Demonstrates responsible AI development and ethical practices
- Risk Management: Identifies and mitigates AI-specific risks (bias, fairness, safety, security)
- Product Quality: Robust AI governance improves model performance and reliability
- Competitive Advantage: Early compliance positions you ahead of competitors
Critical 2025 Deadlines:
- February 2, 2025: EU AI Act prohibitions on unacceptable-risk AI systems (already in effect)
- February 2, 2025: AI literacy requirements for all AI system providers and deployers
- August 2, 2025: EU AI Act obligations for general-purpose AI (GPAI) models (already in effect)
- August 2, 2026: Most remaining EU AI Act obligations apply, including for high-risk AI systems listed in Annex III
- August 2, 2027: Extended deadline for high-risk AI systems embedded in products regulated under Annex I (e.g., medical devices, machinery)
1. EU AI Act Risk Classification
The EU AI Act uses a risk-based approach with four risk levels:
Step 1: Determine Your AI System's Risk Level
- [ ] Unacceptable Risk (Prohibited): AI systems that pose a clear threat to safety, livelihoods, and rights (see Section 2)
- [ ] High Risk: AI systems used in critical sectors or that significantly impact fundamental rights (see Section 3)
- [ ] Limited Risk (Transparency Obligations): AI systems with specific transparency risks (chatbots, deepfakes, emotion recognition, biometric categorization) (see Section 5)
- [ ] Minimal/No Risk: All other AI systems (no specific AI Act requirements, but GDPR may still apply)
Risk Classification Decision Tree
Does your AI system fall into any of the categories below?
Prohibited (Unacceptable Risk):
- [ ] Social scoring by governments or public authorities?
- [ ] Exploiting vulnerabilities of children, elderly, or disabled persons?
- [ ] Subliminal manipulation causing harm?
- [ ] Biometric categorization to infer sensitive attributes (race, political opinions, religion, sexual orientation)?
- [ ] Real-time remote biometric identification in public spaces for law enforcement (with narrow exceptions)?
- [ ] Predictive policing based solely on profiling or location?
- [ ] Emotion recognition in workplace or education (with narrow exceptions)?
→ If YES to any above: Your AI system is PROHIBITED. Do not deploy. See Section 2.
High Risk:
- [ ] Safety component of product subject to EU product safety legislation (machinery, medical devices, aviation, automotive)?
- [ ] Biometric identification or categorization?
- [ ] Critical infrastructure management (transport, energy, water)?
- [ ] Education or vocational training (admissions, exam scoring)?
- [ ] Employment decisions (recruitment, promotion, termination, task allocation, monitoring)?
- [ ] Access to essential services (credit scoring, insurance pricing, emergency services dispatch)?
- [ ] Law enforcement (crime prediction, profiling, evidence evaluation, truth assessment)?
- [ ] Migration, asylum, border control (visa applications, authenticity verification)?
- [ ] Justice and democratic processes (judicial decisions, election outcomes)?
→ If YES to any above: Your AI system is HIGH-RISK. See Section 3 for compliance requirements.
General-Purpose AI (GPAI) Model:
- [ ] Model trained on broad data and capable of wide range of tasks (e.g., LLMs like GPT, Claude, Llama)?
- [ ] Model has systemic risk (>10^25 FLOPs training compute, or designated by EU Commission)?
→ If YES: See Section 4 for GPAI compliance requirements.
Limited Risk (Transparency):
- [ ] Chatbot or conversational AI interacting with humans?
- [ ] AI-generated content (deepfakes, synthetic images/audio/video)?
- [ ] AI system that categorizes individuals based on biometric data?
- [ ] Emotion recognition system?
→ If YES: See Section 5 for transparency obligations.
Minimal/No Risk:
- [ ] None of the above apply
→ If YES: No specific EU AI Act requirements, but GDPR compliance still required if processing personal data.
2. Prohibited AI Systems (Unacceptable Risk)
Effective Date: February 2, 2025 (already in effect)
The following AI systems are BANNED in the EU:
- [ ] Have you confirmed your AI system does NOT:
- [ ] Deploy subliminal techniques to manipulate behavior causing physical or psychological harm?
- [ ] Exploit vulnerabilities of specific groups (age, disability, social/economic situation) causing harm?
- [ ] Enable social scoring by public authorities leading to detrimental treatment?
- [ ] Assess risk of individuals committing criminal offenses based solely on profiling or personality traits (predictive policing)?
- [ ] Create/expand facial recognition databases through untargeted scraping of internet or CCTV?
- [ ] Infer emotions in workplace or educational institutions (except safety or medical reasons)?
- [ ] Categorize individuals based on biometric data to infer sensitive characteristics (race, political opinions, trade union membership, religious beliefs, sexual orientation)?
- [ ] Perform real-time remote biometric identification in public spaces for law enforcement (except narrow exceptions)?
If your AI system falls into any prohibited category:
→ Action Required: DO NOT deploy this AI system in the EU. Consult legal counsel immediately.
3. High-Risk AI Systems Compliance
Effective Date: August 2, 2026 for high-risk AI systems listed in Annex III (August 2, 2027 for AI systems that are safety components of products regulated under Annex I)
If your AI system is classified as high-risk, you must comply with ALL of the following:
3.1 Risk Management System
- [ ] Have you established a continuous risk management system that:
- [ ] Identifies and analyzes known and foreseeable risks?
- [ ] Estimates and evaluates risks that may emerge during AI system use?
- [ ] Evaluates risks based on intended use and reasonably foreseeable misuse?
- [ ] Adopts risk mitigation measures (design changes, safeguards, user information)?
- [ ] Tests risk mitigation measures for effectiveness?
- [ ] Monitors and updates risk management throughout AI system lifecycle?
- [ ] Documents all risk management activities?
3.2 Data Governance and Quality
- [ ] Is your training, validation, and testing data:
- [ ] Subject to appropriate data governance practices (quality assessment, bias detection, data gaps analysis)?
- [ ] Relevant, sufficiently representative, and to the best extent possible free of errors?
- [ ] Complete (covering all relevant scenarios)?
- [ ] Statistically appropriate for intended purpose?
- [ ] Examined for possible biases and mitigated?
- [ ] Protected against data poisoning attacks?
- [ ] Have you documented:
- [ ] Data sources and data collection methodology?
- [ ] Data labeling procedures and quality assurance?
- [ ] Data preprocessing, feature engineering, data augmentation?
- [ ] Assumptions, limitations, and known biases in training data?
3.3 Technical Documentation
- [ ] Have you created comprehensive technical documentation including:
- [ ] General description of AI system (intended purpose, architecture, development process)?
- [ ] Detailed design specifications (algorithms, data, training methodology)?
- [ ] Data governance and management practices?
- [ ] Risk management system and identified risks?
- [ ] Conformity assessment procedures and results?
- [ ] EU declaration of conformity?
- [ ] Instructions for use?
- [ ] Human oversight measures?
- [ ] Cybersecurity measures?
- [ ] Performance metrics (accuracy, robustness, cybersecurity)?
- [ ] Validation and testing results?
- [ ] Is technical documentation:
- [ ] Kept up-to-date throughout AI system lifecycle?
- [ ] Available to national competent authorities upon request?
3.4 Record-Keeping and Logging
- [ ] Does your AI system automatically generate logs that:
- [ ] Record events relevant to identifying risks and analyzing AI system performance?
- [ ] Enable traceability throughout AI system lifecycle?
- [ ] Are protected by appropriate cybersecurity measures?
- [ ] Are retained for a period appropriate to the intended purpose (at least six months under the AI Act, unless applicable law provides otherwise)?
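To make the logging items above concrete, here is a minimal sketch of structured, machine-readable event logging in Python. The logger name, file path, and event fields are illustrative assumptions; a real deployment would write to tamper-evident, access-controlled storage:

```python
import json
import logging
import uuid
from datetime import datetime, timezone

# Illustrative audit logger for AI system events (names are hypothetical).
logger = logging.getLogger("ai_audit")
logger.setLevel(logging.INFO)
logger.addHandler(logging.FileHandler("ai_audit.log"))  # production: append-only, access-controlled storage

def log_inference_event(model_version: str, input_ref: str, output_ref: str, confidence: float) -> str:
    """Record one inference event with enough context for later traceability."""
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": "inference",
        "model_version": model_version,
        "input_ref": input_ref,    # reference to stored input, not raw personal data
        "output_ref": output_ref,
        "confidence": confidence,
    }
    logger.info(json.dumps(event))
    return event["event_id"]

log_inference_event("credit-model-2.3.1", "req-8812", "decision-8812", 0.87)
```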
3.5 Transparency and Information to Users
- [ ] Have you provided clear, concise, and easily accessible information to users/deployers about:
- [ ] AI system's intended purpose, capabilities, and limitations?
- [ ] Level of accuracy, robustness, and cybersecurity?
- [ ] Known or foreseeable circumstances where AI system may not perform as intended?
- [ ] Human oversight measures and competencies required?
- [ ] Expected lifetime and maintenance procedures?
- [ ] Is information:
- [ ] Written in clear, understandable language?
- [ ] Available in official EU languages as required?
3.6 Human Oversight
- [ ] Have you implemented human oversight measures enabling humans to:
- [ ] Fully understand AI system capabilities and limitations?
- [ ] Remain aware they are interacting with an AI system?
- [ ] Monitor AI system operation and detect anomalies, dysfunctions, or unexpected performance?
- [ ] Interpret AI system outputs correctly?
- [ ] Decide not to use AI system or override/disregard AI system output?
- [ ] Intervene or interrupt AI system operation (stop button)?
- [ ] Have you identified:
- [ ] Who is responsible for human oversight?
- [ ] What training and competencies are required?
- [ ] What technical measures enable effective oversight (explainability, override mechanisms)?
3.7 Accuracy, Robustness, and Cybersecurity
- [ ] Have you ensured your AI system:
- [ ] Achieves appropriate level of accuracy, as declared in technical documentation?
- [ ] Is resilient against errors, faults, and inconsistencies during operation?
- [ ] Is robust against adversarial attacks (data poisoning, model evasion, model extraction)?
- [ ] Fails safely (fallback procedures, graceful degradation)?
- [ ] Has cybersecurity protections against unauthorized access, manipulation, or data breaches?
- [ ] Have you tested AI system:
- [ ] Against validation and test datasets?
- [ ] Under various conditions (edge cases, stress testing, adversarial testing)?
- [ ] For bias, fairness, and discriminatory outcomes?
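As a starting point for the robustness testing items above, here is a crude perturbation smoke test, assuming a scikit-learn classifier trained on toy data; it is not a substitute for proper adversarial testing with dedicated tooling:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Stand-in model and data; replace with your own.
X, y = make_classification(n_samples=300, n_features=8, random_state=0)
model = LogisticRegression().fit(X, y)

# Check how often predictions flip under small input perturbations.
rng = np.random.default_rng(0)
X_noisy = X + rng.normal(scale=0.05, size=X.shape)
flip_rate = (model.predict(X) != model.predict(X_noisy)).mean()
print(f"Prediction flip rate under small noise: {flip_rate:.1%}")  # alert if above your tolerance
```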
3.8 Conformity Assessment
- [ ] Have you completed conformity assessment to demonstrate compliance with EU AI Act?
- [ ] Internal conformity assessment (for most high-risk AI systems)?
- [ ] OR Third-party conformity assessment (if required by specific product safety legislation)?
- [ ] Issued EU declaration of conformity?
- [ ] Affixed CE marking to AI system (if applicable)?
3.9 Registration in EU Database
- [ ] Have you registered your high-risk AI system in the EU database for high-risk AI systems?
- [ ] Provided required information (provider name, AI system description, intended purpose, risk classification, etc.)?
- [ ] Updated registration when AI system is modified?
3.10 Post-Market Monitoring
- [ ] Have you established post-market monitoring system to:
- [ ] Collect and analyze data on AI system performance in real-world use?
- [ ] Identify and address issues, malfunctions, or unexpected behaviors?
- [ ] Monitor for bias, fairness issues, or discriminatory outcomes?
- [ ] Track incidents and near-misses?
- [ ] Update risk management system based on monitoring findings?
4. General-Purpose AI (GPAI) Compliance
Effective Date: August 2, 2025 (already in effect)
If you are a provider of a general-purpose AI model (e.g., LLM like GPT, Claude, Llama), you must comply with:
4.1 Standard GPAI Model Requirements
- [ ] Have you provided technical documentation including:
- [ ] General description of model (architecture, parameters, training data)?
- [ ] Training methodology and computational resources used?
- [ ] Data governance (data sources, data curation, data filtering, bias mitigation)?
- [ ] Model evaluation results (capabilities, limitations, known biases)?
- [ ] Intended use cases and foreseeable misuse?
- [ ] Have you provided information and documentation to downstream providers:
- [ ] Enabling downstream providers to comply with their obligations?
- [ ] Instructions for use and recommended risk mitigation measures?
- [ ] Have you implemented:
- [ ] Copyright and intellectual property policy (compliance with EU copyright law)?
- [ ] Publicly available summary of training data content?
4.2 GPAI Models with Systemic Risk
If your GPAI model has systemic risk (>10^25 FLOPs training compute, or designated by EU Commission), additional requirements:
- [ ] Have you conducted model evaluation:
- [ ] Assessing systemic risks (e.g., risks to public health, safety, security, fundamental rights, society, environment)?
- [ ] Using standardized protocols and tools (e.g., red-teaming, adversarial testing)?
- [ ] Evaluating capabilities that could lead to systemic risks (e.g., capability to automate cyber-attacks, create bioweapons, manipulate elections)?
- [ ] Have you assessed and mitigated systemic risks:
- [ ] Implementing risk mitigation measures (safety filters, alignment techniques, usage restrictions)?
- [ ] Monitoring for emerging risks?
- [ ] Have you ensured adequate level of cybersecurity protection:
- [ ] Against unauthorized access, model theft, adversarial attacks?
- [ ] Incident response procedures for security breaches?
- [ ] Have you tracked and reported:
- [ ] Serious incidents (incidents with significant impact on health, safety, fundamental rights)?
- [ ] To EU AI Office and national competent authorities?
- [ ] Have you ensured energy efficiency:
- [ ] Measured and documented energy consumption during training and inference?
- [ ] Implemented energy efficiency measures where feasible?
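For a rough first-pass check against the 10^25 FLOPs presumption mentioned at the start of Section 4.2, the common 6 x parameters x tokens heuristic for dense transformer training can be sketched as below. This is only an approximation; an actual determination should follow the Commission's measurement guidance:

```python
# Rough training-compute estimate using the common 6 * parameters * tokens
# heuristic for dense transformer training. Approximation only; the AI Act
# threshold refers to cumulative training compute.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Article 51 presumption threshold

def estimate_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    return 6.0 * n_parameters * n_training_tokens

# Hypothetical example: a 70B-parameter model trained on 2T tokens.
flops = estimate_training_flops(n_parameters=70e9, n_training_tokens=2e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Presumed systemic risk" if flops > SYSTEMIC_RISK_THRESHOLD_FLOPS
      else "Below the 10^25 FLOPs presumption")
```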
5. Limited-Risk AI (Transparency Obligations)
If your AI system is a chatbot, generates synthetic content, or uses biometric categorization/emotion recognition:
5.1 Chatbots and Conversational AI
- [ ] Have you designed your chatbot to disclose to users that they are interacting with an AI system?
- [ ] Disclosure is clear, prominent, and displayed before or at the beginning of interaction?
- [ ] Disclosure is in plain language understandable to average user?
- [ ] Exception: If it is obvious from context that user is interacting with AI (e.g., clearly labeled "AI Assistant")?
Example disclosure:
"This is an AI chatbot. Responses are generated by artificial intelligence and may not be accurate."
5.2 AI-Generated Content (Deepfakes)
- [ ] If your AI generates synthetic images, audio, video, or text content, have you:
- [ ] Disclosed in machine-readable format that content was AI-generated?
- [ ] Disclosed in human-readable format that content was AI-generated (e.g., watermark, label, metadata)?
- [ ] Ensured disclosure is clear, prominent, and visible to average user?
Example disclosure:
"This image was generated by AI." "This video contains AI-generated or manipulated content."
Exception for creative or artistic purposes:
- [ ] If AI-generated content is part of creative or artistic work, disclosure may be less prominent but still required
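As one minimal illustration of the machine-readable disclosure item in Section 5.2, the sketch below tags a PNG with AI-generation metadata using Pillow. The metadata keys and model name are illustrative; in practice you would likely pair this with an industry standard such as C2PA content credentials plus a visible label:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Attach a simple machine-readable AI-generation label as PNG text metadata.
img = Image.new("RGB", (512, 512))  # stand-in for your generated image

meta = PngInfo()
meta.add_text("ai_generated", "true")
meta.add_text("generator", "example-image-model-v1")  # hypothetical model name
img.save("generated.png", pnginfo=meta)

# Downstream tools can read the disclosure back:
print(Image.open("generated.png").text)  # {'ai_generated': 'true', 'generator': ...}
```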
5.3 Biometric Categorization and Emotion Recognition
- [ ] If your AI system categorizes individuals based on biometric data or recognizes emotions, have you:
- [ ] Informed individuals that they are subject to such system?
- [ ] Disclosure is clear, prominent, and provided before or at the beginning of exposure?
6. GDPR Compliance for AI Systems
GDPR applies to AI systems that process personal data of EU residents.
6.1 Lawful Basis for Processing (GDPR Article 6)
- [ ] Have you identified a lawful basis for processing personal data in your AI system?
- [ ] Consent (Article 6(1)(a)): Freely given, specific, informed, unambiguous consent?
- [ ] Contract (Article 6(1)(b)): Processing necessary to perform contract with data subject?
- [ ] Legal Obligation (Article 6(1)(c)): Processing necessary to comply with legal obligation?
- [ ] Vital Interests (Article 6(1)(d)): Processing necessary to protect vital interests?
- [ ] Public Task (Article 6(1)(e)): Processing necessary for public interest or official authority?
- [ ] Legitimate Interest (Article 6(1)(f)): Processing necessary for legitimate interests (must conduct balancing test)?
- [ ] If processing sensitive data (health, biometric, racial/ethnic origin, political opinions, religious beliefs, sexual orientation), have you identified a lawful basis under GDPR Article 9?
- [ ] Explicit consent?
- [ ] Processing necessary for employment, social security, health/social care?
- [ ] Processing necessary for substantial public interest?
- [ ] Other Article 9(2) condition?
6.2 Privacy Notices and Transparency
- [ ] Have you updated your privacy policy to explain:
- [ ] What personal data is collected for AI system?
- [ ] How personal data is used in AI system (training, inference, evaluation)?
- [ ] Lawful basis for processing?
- [ ] Whether AI system uses automated decision-making (see Section 7)?
- [ ] How long personal data is retained?
- [ ] Who personal data is shared with (third parties, processors, sub-processors)?
- [ ] Data subject rights (access, rectification, erasure, restriction, portability, objection)?
- [ ] Whether personal data is transferred outside EU (and safeguards)?
- [ ] Is privacy policy:
- [ ] Written in clear, plain language?
- [ ] Easily accessible (prominent link on website, provided before data collection)?
6.3 Data Minimization and Purpose Limitation
- [ ] Have you ensured:
- [ ] AI system collects only personal data necessary for specified purpose (data minimization)?
- [ ] Personal data is not used for incompatible purposes beyond what was disclosed (purpose limitation)?
- [ ] Training data does not include unnecessary personal data?
6.4 Data Retention and Deletion
- [ ] Have you defined:
- [ ] Retention periods for personal data used in AI system?
- [ ] Procedures for deleting or anonymizing personal data when no longer needed?
- [ ] How to delete personal data from trained models (if feasible)?
Challenge: Personal data "learned" by AI models is difficult to remove (machine unlearning is an emerging research area).
Best Practice: Use anonymized or pseudonymized data for training where possible.
6.5 Data Subject Rights
- [ ] Have you implemented processes to respond to data subject rights requests:
- [ ] Right of Access (Article 15): Provide copy of personal data used in AI system, information about AI system
- [ ] Right to Rectification (Article 16): Correct inaccurate personal data in AI system
- [ ] Right to Erasure (Article 17): Delete personal data from AI system (if applicable)
- [ ] Right to Restriction (Article 18): Limit processing of personal data in AI system
- [ ] Right to Data Portability (Article 20): Provide personal data in machine-readable format
- [ ] Right to Object (Article 21): Stop processing of personal data for AI system (especially for legitimate interests basis or direct marketing)
6.6 Privacy by Design and Default (Article 25)
- [ ] Have you implemented privacy by design principles in AI system development:
- [ ] Data minimization (collect only necessary data)?
- [ ] Pseudonymization and encryption of personal data?
- [ ] Transparency (explainability, auditability)?
- [ ] User control (consent mechanisms, opt-out options)?
- [ ] Security measures (access controls, encryption, anomaly detection)?
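A minimal sketch of the pseudonymization item above: replace direct identifiers with keyed HMAC-SHA256 pseudonyms before data enters training pipelines. The key handling shown is a placeholder, and note that pseudonymized data is still personal data under GDPR (unlike anonymized data):

```python
import hashlib
import hmac

# Placeholder key: in production, generate randomly, rotate, and store
# separately in a secrets manager under strict access control.
SECRET_KEY = b"rotate-me-and-store-in-a-secrets-manager"

def pseudonymize(identifier: str) -> str:
    """Derive a stable pseudonym; not reversible without the key."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

print(pseudonymize("user@example.com"))  # same input always maps to same pseudonym
```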
- [ ] Have you implemented privacy by default:
- [ ] Default settings ensure only necessary personal data is processed?
- [ ] Users must opt-in to processing beyond what is strictly necessary?
7. Automated Decision-Making (GDPR Article 22)
GDPR Article 22 gives individuals the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects, unless:
- [ ] Decision is necessary for contract
- [ ] Authorized by EU/Member State law
- [ ] Based on individual's explicit consent
7.1 Identify Automated Decisions
- [ ] Does your AI system make decisions that:
- [ ] Produce legal effects (e.g., termination of contract, denial of legal benefit)?
- [ ] Similarly significantly affect individuals (e.g., credit denial, insurance pricing, job application rejection, school admission)?
→ If YES: Continue below. If NO, skip this section.
7.2 Ensure Compliance with Article 22
- [ ] If your AI system makes automated decisions with legal/significant effects, have you:
- [ ] Obtained explicit consent from individuals?
- [ ] OR Ensured decision is necessary for contract (e.g., automated credit scoring for loan approval)?
- [ ] OR Ensured decision is authorized by EU/Member State law with safeguards?
7.3 Implement Safeguards
- [ ] Have you implemented safeguards for automated decision-making:
- [ ] Right to human intervention: Allow individuals to request human review of decision?
- [ ] Right to express views: Allow individuals to provide input or contest decision?
- [ ] Right to explanation: Provide meaningful information about logic, significance, and consequences of automated decision?
7.4 Prohibitions on Sensitive Data
- [ ] Have you ensured automated decisions are NOT based on sensitive data (health, biometric, racial/ethnic origin, political opinions, religious beliefs, sexual orientation) UNLESS:
- [ ] You have explicit consent?
- [ ] OR Processing is necessary for substantial public interest and suitable safeguards are in place?
8. Data Protection Impact Assessments (DPIAs) for AI
GDPR Article 35 requires DPIAs for processing likely to result in high risk to individuals.
8.1 Determine if DPIA Required
- [ ] Does your AI system involve:
- [ ] Systematic and extensive profiling with automated decision-making?
- [ ] Large-scale processing of sensitive data (health, biometric, racial/ethnic origin, etc.)?
- [ ] Systematic monitoring of publicly accessible areas on large scale (facial recognition, surveillance)?
- [ ] Innovative use of technology (novel AI techniques, emerging ML methods)?
- [ ] Processing that prevents individuals from exercising rights or using services?
- [ ] Large-scale processing of personal data?
- [ ] Matching or combining datasets?
- [ ] Data concerning vulnerable individuals (children, elderly, employees)?
- [ ] Biometric identification or categorization?
- [ ] Processing likely to result in high risk (based on nature, scope, context, purposes)?
→ If YES to any above: DPIA is REQUIRED. Continue below.
8.2 Conduct DPIA
- [ ] Have you completed a DPIA that includes:
- [ ] Description of processing operations:
- Nature, scope, context, purposes of processing
- Categories of personal data processed
- Categories of data subjects
- AI system architecture, algorithms, data flows
- [ ] Assessment of necessity and proportionality:
- Is processing necessary for specified purpose?
- Is processing proportionate (data minimization, least intrusive means)?
- Are there less privacy-invasive alternatives?
- [ ] Assessment of risks to data subjects:
- What risks does AI system pose (discrimination, inaccurate predictions, loss of privacy, security breaches)?
- Likelihood and severity of risks?
- Impact on fundamental rights and freedoms?
- [ ] Measures to address risks:
- Technical safeguards (encryption, pseudonymization, access controls, differential privacy)?
- Organizational safeguards (data governance, staff training, audits)?
- Bias mitigation measures?
- Explainability and transparency measures?
- Human oversight mechanisms?
8.3 Consult Data Protection Officer (DPO)
- [ ] Have you consulted your Data Protection Officer (if you have one) during DPIA?
8.4 Consult Supervisory Authority (if necessary)
- [ ] If residual risk remains high after mitigation measures, have you consulted supervisory authority before deploying AI system?
8.5 Review and Update DPIA
- [ ] Have you established process to:
- [ ] Review DPIA when AI system changes (new features, new data sources, new use cases)?
- [ ] Update DPIA at least annually or when risks change?
9. AI Governance and Risk Management
Best Practice: Establish AI governance framework (even if not legally required)
9.1 AI Governance Structure
- [ ] Have you established:
- [ ] AI governance committee or AI ethics board?
- [ ] Designated AI product owner or AI lead responsible for compliance?
- [ ] Clear roles and responsibilities for AI development, deployment, monitoring, and auditing?
9.2 AI Governance Policies
- [ ] Have you documented:
- [ ] AI ethics principles (fairness, transparency, accountability, safety, privacy, human-centricity)?
- [ ] AI use case approval process (who approves new AI projects)?
- [ ] AI risk assessment process (before deploying new AI system)?
- [ ] AI procurement policy (vendor due diligence for third-party AI—see Section 16)?
- [ ] AI incident response plan (how to handle AI failures, biases, security breaches)?
- [ ] AI audit schedule (internal and external AI audits)?
9.3 AI Risk Management Framework
- [ ] Have you adopted an AI risk management framework?
- [ ] NIST AI Risk Management Framework (RMF)?
- [ ] ISO/IEC 42001 (AI Management System)?
- [ ] ISO/IEC 23894 (AI Risk Management)?
- [ ] ISO/IEC 42005 (AI System Impact Assessment)?
- [ ] Other industry-specific framework?
- [ ] Does your risk management framework address:
- [ ] Technical risks (inaccuracy, unreliability, security vulnerabilities, adversarial attacks)?
- [ ] Ethical risks (bias, fairness, discrimination, privacy violations)?
- [ ] Legal and compliance risks (GDPR, EU AI Act, sector-specific regulations)?
- [ ] Reputational risks (negative publicity, loss of trust)?
- [ ] Operational risks (dependency on AI, single points of failure)?
10. Algorithmic Transparency and Explainability
10.1 Explainability Requirements
- [ ] Have you assessed:
- [ ] Whether your AI system requires explainability (high-risk AI, automated decisions affecting individuals, regulated sectors)?
- [ ] What level of explainability is appropriate (global explanations, local explanations, counterfactuals)?
- [ ] Who needs explanations (data subjects, auditors, regulators, internal teams)?
10.2 Implement Explainability Techniques
- [ ] Have you implemented explainability methods such as:
- [ ] Model-agnostic methods (LIME, SHAP, counterfactual explanations)?
- [ ] Model-specific methods (attention maps, feature importance, decision trees)?
- [ ] Human-understandable explanations (natural language explanations, visualizations)?
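As an example of the model-agnostic methods above, the sketch below computes SHAP values for a toy scikit-learn model; the data and model are stand-ins for your own, and the exact output shape varies by SHAP version and model type:

```python
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy model on synthetic data (stand-in for your real model and features).
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# SHAP attributes each prediction to input features (local explanations).
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])

# For the first five cases, inspect which features pushed each prediction.
print(shap_values)
```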
10.3 Provide Explanations to Users
- [ ] If your AI system affects individuals (e.g., automated decisions), have you:
- [ ] Provided meaningful information about logic involved in decision?
- [ ] Explained significance and consequences of decision?
- [ ] Used clear, non-technical language understandable to average person?
- [ ] Made explanations accessible (via user interface, on request)?
10.4 Internal Transparency
- [ ] Have you documented:
- [ ] How AI system makes decisions (model architecture, feature importance, decision logic)?
- [ ] Training data and data sources used?
- [ ] Model performance metrics (accuracy, precision, recall, fairness metrics)?
- [ ] Known limitations and edge cases?
11. Bias Testing and Fairness
11.1 Identify Potential Sources of Bias
- [ ] Have you assessed potential sources of bias in:
- [ ] Training data:
- Historical bias (data reflects past discrimination)?
- Representation bias (underrepresented groups in dataset)?
- Measurement bias (proxy variables that correlate with protected attributes)?
- Labeling bias (subjective or inconsistent labels)?
- [ ] Model design:
- Algorithmic bias (model amplifies biases in data)?
- Feature selection (inclusion of features correlated with protected attributes)?
- Optimization objective (model optimized for majority group, not all groups)?
- [ ] Deployment context:
- Interaction bias (biased user behavior influences model predictions)?
- Feedback loops (model predictions influence future data, reinforcing bias)?
11.2 Define Fairness Metrics
- [ ] Have you defined appropriate fairness metrics for your AI system?
- [ ] Demographic parity (equal positive prediction rates across groups)?
- [ ] Equalized odds (equal true positive and false positive rates across groups)?
- [ ] Equal opportunity (equal true positive rates across groups)?
- [ ] Predictive parity (equal positive predictive values across groups)?
- [ ] Individual fairness (similar individuals receive similar predictions)?
- [ ] Counterfactual fairness (predictions would be same if individual's protected attribute changed)?
Note: Different fairness metrics may conflict—prioritize based on use case and stakeholder input
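A minimal sketch of computing two of the metrics above with Fairlearn, using illustrative toy labels, predictions, and a sensitive attribute:

```python
import numpy as np
from fairlearn.metrics import demographic_parity_difference, equalized_odds_difference

# Toy data (illustrative only): true labels, model predictions, group membership.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 0])
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

# 0.0 means parity; larger values mean bigger gaps between groups.
print("Demographic parity difference:", demographic_parity_difference(y_true, y_pred, sensitive_features=group))
print("Equalized odds difference:", equalized_odds_difference(y_true, y_pred, sensitive_features=group))
```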
11.3 Test for Bias and Discrimination
- [ ] Have you tested AI system for bias across protected attributes:
- [ ] Race/ethnicity?
- [ ] Gender?
- [ ] Age?
- [ ] Disability?
- [ ] Religion?
- [ ] Sexual orientation?
- [ ] National origin?
- [ ] Other relevant protected attributes?
- [ ] Have you tested AI system across intersectional groups (e.g., Black women, elderly disabled individuals)?
- [ ] Have you used bias detection tools such as:
- [ ] AI Fairness 360 (AIF360)?
- [ ] Fairlearn?
- [ ] What-If Tool?
- [ ] Other bias auditing tools?
11.4 Mitigate Bias
- [ ] If bias is detected, have you implemented mitigation strategies:
- [ ] Pre-processing (training data):
- Reweighting samples from underrepresented groups?
- Resampling to balance dataset?
- Removing biased features or proxies?
- [ ] In-processing (model training):
- Fairness constraints during training (adversarial debiasing, fairness-aware optimization)?
- Using fairness-aware algorithms?
- [ ] Post-processing (model outputs):
- Adjusting decision thresholds for different groups?
- Calibration to ensure equal predictive parity?
- [ ] Have you documented:
- [ ] Bias mitigation techniques used?
- [ ] Trade-offs between fairness metrics and model performance?
- [ ] Residual bias that could not be eliminated (and reasons why)?
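As one example of the in-processing strategies above, the sketch below trains a classifier under a demographic-parity constraint using Fairlearn's reductions API; the synthetic data and group labels are illustrative only:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

# Synthetic training data with a sensitive attribute (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 4))
sensitive = rng.choice(["A", "B"], size=400)
y = (X[:, 0] + (sensitive == "A") * 0.5 + rng.normal(scale=0.5, size=400) > 0).astype(int)

# In-processing mitigation: fit under a demographic-parity constraint.
mitigator = ExponentiatedGradient(LogisticRegression(), constraints=DemographicParity())
mitigator.fit(X, y, sensitive_features=sensitive)

# Compare positive prediction rates across groups after mitigation.
y_pred = mitigator.predict(X)
print("Positive rate, group A:", y_pred[sensitive == "A"].mean())
print("Positive rate, group B:", y_pred[sensitive == "B"].mean())
```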
11.5 Ongoing Bias Monitoring
- [ ] Have you established processes to:
- [ ] Monitor AI system for bias in production (ongoing fairness audits)?
- [ ] Track fairness metrics over time?
- [ ] Detect and address emerging biases (e.g., distribution shift, feedback loops)?
- [ ] Retrain or update model when bias is detected?
12. AI Training Data Quality and Documentation
12.1 Data Collection and Sourcing
- [ ] Have you documented:
- [ ] Data sources (where data was collected from)?
- [ ] Data collection methodology (how data was collected)?
- [ ] Date range of data collection?
- [ ] Data licensing and permissions (do you have legal right to use data)?
- [ ] Data provenance (origin and chain of custody)?
- [ ] Have you ensured:
- [ ] Data was collected lawfully (compliance with GDPR, copyright, ToS)?
- [ ] Data is representative of intended use cases and populations?
- [ ] Data does not contain illegal or harmful content?
12.2 Data Quality Assessment
- [ ] Have you assessed training data for:
- [ ] Completeness (missing values, data gaps)?
- [ ] Accuracy (errors, outliers, noise)?
- [ ] Consistency (conflicting or duplicate records)?
- [ ] Relevance (data is appropriate for intended purpose)?
- [ ] Timeliness (data is sufficiently recent)?
- [ ] Have you documented:
- [ ] Data quality issues identified?
- [ ] Data cleaning and preprocessing steps taken?
12.3 Data Labeling and Annotation
- [ ] If using labeled data, have you documented:
- [ ] Labeling instructions and guidelines?
- [ ] Who labeled data (in-house annotators, crowdsourced, automated)?
- [ ] Inter-annotator agreement (consistency across labelers)?
- [ ] Quality assurance procedures for labels?
- [ ] Have you assessed:
- [ ] Labeling bias (subjective or inconsistent labels)?
- [ ] Label accuracy (spot-checks, expert review)?
12.4 Data Security and Protection
- [ ] Have you implemented security measures to protect training data:
- [ ] Access controls (limit who can access training data)?
- [ ] Encryption (at rest and in transit)?
- [ ] Data loss prevention (backups, redundancy)?
- [ ] Protection against data poisoning attacks (adversarial samples injected into training data)?
12.5 Data Documentation (Datasheets for Datasets)
- [ ] Have you created a "datasheet" for your training dataset documenting:
- [ ] Motivation (why dataset was created)?
- [ ] Composition (what data is included, how many instances, data types)?
- [ ] Collection process (how data was collected, sampled, labeled)?
- [ ] Preprocessing (data cleaning, transformations, feature engineering)?
- [ ] Uses (recommended uses, prohibited uses)?
- [ ] Distribution (how dataset is distributed, licensing)?
- [ ] Maintenance (who maintains dataset, how is it updated)?
- [ ] Known limitations, biases, and ethical considerations?
13. Human Oversight and Control
13.1 Human-in-the-Loop (HITL)
- [ ] For high-risk AI systems, have you implemented human-in-the-loop:
- [ ] Human reviews AI system outputs before decisions are executed?
- [ ] Human has authority to override or reject AI system recommendations?
- [ ] Human understands AI system capabilities and limitations?
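A minimal human-in-the-loop gating sketch, assuming a confidence score is available for each prediction; the threshold and action names are hypothetical and should be set per use case and risk level:

```python
# Low-confidence (or high-impact) predictions are routed to a human reviewer
# instead of being executed automatically.

REVIEW_THRESHOLD = 0.80  # hypothetical; tune per use case and risk level

def decide(prediction: str, confidence: float) -> dict:
    if confidence < REVIEW_THRESHOLD:
        return {"action": "route_to_human", "prediction": prediction, "confidence": confidence}
    return {"action": "auto_apply", "prediction": prediction, "confidence": confidence}

print(decide("approve", 0.95))  # {'action': 'auto_apply', ...}
print(decide("deny", 0.62))     # {'action': 'route_to_human', ...}
```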
13.2 Human-on-the-Loop (HOTL)
- [ ] For lower-risk AI systems, have you implemented human-on-the-loop:
- [ ] Human monitors AI system performance and intervenes when necessary?
- [ ] Anomaly detection alerts human to unusual outputs or behaviors?
- [ ] Human periodically audits AI system decisions?
13.3 Stop Button and Override Mechanisms
- [ ] Have you implemented:
- [ ] Emergency stop button or kill switch to halt AI system immediately?
- [ ] Override mechanisms allowing human to reverse or correct AI decisions?
13.4 Human Competence and Training
- [ ] Have you ensured:
- [ ] Humans overseeing AI system have necessary training and competencies?
- [ ] Humans understand AI system limitations and risks?
- [ ] Humans are not over-reliant on AI system (automation bias)?
- [ ] Humans are empowered to intervene (no pressure to rubber-stamp AI decisions)?
14. Technical Documentation and Model Cards
14.1 Model Card
- [ ] Have you created a "model card" documenting:
- [ ] Model details (architecture, parameters, training data, training procedure)?
- [ ] Intended use (primary use cases, out-of-scope uses)?
- [ ] Performance metrics (accuracy, precision, recall, AUC, fairness metrics)?
- [ ] Limitations (known failure modes, edge cases, biases)?
- [ ] Ethical considerations (fairness, privacy, security, societal impact)?
- [ ] Caveats and recommendations (how to use model responsibly)?
Reference: https://modelcards.withgoogle.com/about
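A minimal machine-readable model card skeleton is sketched below; the fields and values are illustrative and should be adapted to your governance requirements and the guidance referenced above:

```python
import json

# Illustrative model card as a dict; every value here is hypothetical.
model_card = {
    "model_details": {
        "name": "credit-risk-classifier",  # hypothetical model
        "version": "2.3.1",
        "architecture": "gradient-boosted trees",
        "training_data": "internal loan applications, 2019-2024",
    },
    "intended_use": {
        "primary_use_cases": ["pre-screening of consumer credit applications"],
        "out_of_scope_uses": ["employment decisions", "insurance pricing"],
    },
    "metrics": {"accuracy": 0.91, "recall": 0.84, "demographic_parity_difference": 0.03},
    "limitations": ["performance degrades on applicants with thin credit files"],
    "ethical_considerations": ["human review required for all denials"],
}

with open("model_card.json", "w") as f:
    json.dump(model_card, f, indent=2)
```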
14.2 Technical Documentation for High-Risk AI
- [ ] If AI system is high-risk under EU AI Act, have you created comprehensive technical documentation (see Section 3.3)?
14.3 Internal Documentation
- [ ] Have you documented for internal teams:
- [ ] Model development process (experiments, hyperparameter tuning, model selection)?
- [ ] Model versioning and change history?
- [ ] Dependencies (libraries, frameworks, APIs)?
- [ ] Deployment architecture (infrastructure, scalability, redundancy)?
- [ ] Monitoring and alerting procedures?
- [ ] Incident response procedures?
15. AI Incident Reporting and Monitoring
15.1 Define AI Incidents
- [ ] Have you defined what constitutes an "AI incident" for your organization?
- [ ] Model failure or significant performance degradation?
- [ ] Discriminatory or biased outputs?
- [ ] Privacy violations or data breaches?
- [ ] Security incidents (adversarial attacks, model theft)?
- [ ] Unexpected or harmful behaviors?
- [ ] User complaints or negative feedback?
15.2 Incident Detection and Monitoring
- [ ] Have you implemented monitoring to detect AI incidents:
- [ ] Performance monitoring (accuracy, latency, throughput)?
- [ ] Fairness monitoring (ongoing bias audits)?
- [ ] Anomaly detection (unusual inputs or outputs)?
- [ ] User feedback channels (complaints, bug reports)?
- [ ] Security monitoring (intrusion detection, adversarial attack detection)?
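A minimal sketch of the performance-monitoring item above: track rolling accuracy over recent labeled outcomes and alert on degradation. The window size and alert threshold are illustrative assumptions:

```python
from collections import deque

WINDOW = 200        # illustrative rolling window of labeled outcomes
ALERT_BELOW = 0.85  # illustrative accuracy floor

recent_outcomes = deque(maxlen=WINDOW)  # 1 = correct prediction, 0 = incorrect

def record_outcome(correct: bool) -> None:
    recent_outcomes.append(1 if correct else 0)
    if len(recent_outcomes) == WINDOW:
        accuracy = sum(recent_outcomes) / WINDOW
        if accuracy < ALERT_BELOW:
            # In production: page on-call, open an incident, consider rollback (Section 15.3).
            print(f"ALERT: rolling accuracy {accuracy:.2f} below {ALERT_BELOW}")
```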
15.3 Incident Response Procedures
- [ ] Have you established incident response procedures:
- [ ] Who is responsible for responding to AI incidents?
- [ ] How are incidents triaged (severity levels, escalation paths)?
- [ ] What immediate actions are taken (rollback, disable feature, manual override)?
- [ ] How are incidents investigated (root cause analysis, forensics)?
- [ ] How are incidents remediated (model retraining, data correction, policy updates)?
- [ ] How are affected individuals notified (if applicable)?
15.4 Incident Reporting to Authorities
- [ ] For high-risk AI systems under EU AI Act, have you established procedures to:
- [ ] Report serious incidents to national competent authorities?
- [ ] Report incidents that result in death or serious harm to health, safety, fundamental rights?
- [ ] Report immediately, and in any event within the AI Act's deadlines (generally no later than 15 days after becoming aware of the incident, with shorter deadlines for the most serious cases)?
15.5 Incident Documentation and Learning
- [ ] Have you established processes to:
- [ ] Document all AI incidents (incident log)?
- [ ] Conduct post-incident reviews?
- [ ] Identify lessons learned and improvements?
- [ ] Update AI governance policies, risk assessments, and training programs based on incidents?
16. Third-Party AI Systems (Vendor Due Diligence)
If you use third-party AI systems (e.g., OpenAI API, Google Vertex AI, AWS SageMaker, AI-powered SaaS tools):
16.1 Vendor Risk Assessment
- [ ] Before procuring third-party AI, have you assessed:
- [ ] What personal data will be processed by third-party AI?
- [ ] Where is third-party AI provider located (EU, US, other)?
- [ ] Is third-party AI system high-risk under EU AI Act?
- [ ] What are vendor's AI governance and compliance practices?
- [ ] What are vendor's security and privacy practices?
- [ ] What are vendor's bias testing and fairness practices?
16.2 Vendor Contracts and DPAs
- [ ] Have you ensured vendor contract includes:
- [ ] Data Processing Agreement (DPA) compliant with GDPR Article 28 (if processing personal data)?
- [ ] Specification of data processing purposes, duration, data types?
- [ ] Vendor's obligations to implement security measures, assist with data subject rights, notify of breaches?
- [ ] Vendor's obligations to comply with EU AI Act (if applicable)?
- [ ] Audit rights (right to audit vendor's compliance)?
- [ ] Liability and indemnification (who is liable for AI failures, biases, compliance violations)?
- [ ] Data deletion obligations upon termination?
16.3 Vendor Transparency
- [ ] Have you obtained from vendor:
- [ ] Information about AI system's capabilities and limitations?
- [ ] Model card or technical documentation?
- [ ] Information about training data sources and biases?
- [ ] Performance metrics and fairness assessments?
- [ ] Information about security measures and incident response?
16.4 Ongoing Vendor Monitoring
- [ ] Have you established processes to:
- [ ] Monitor vendor's compliance with contract and applicable laws?
- [ ] Review vendor's security and privacy certifications (SOC 2, ISO 27001, etc.)?
- [ ] Track vendor incidents and breaches?
- [ ] Reassess vendor risk periodically (annually or when vendor changes)?
17. US Federal and State AI Compliance
Note: US AI regulation is evolving rapidly. Check for latest requirements.
17.1 US Federal AI Laws
- [ ] If you are a federal contractor or subcontractor, have you reviewed:
- [ ] Executive Order 14110 on Safe, Secure, and Trustworthy AI (October 2023; revoked in January 2025, so verify which federal AI directives currently apply)?
- [ ] NIST AI Risk Management Framework?
- [ ] OMB guidance on AI use by federal agencies?
- [ ] Sector-specific AI requirements (e.g., FDA for medical AI, NHTSA for autonomous vehicles)?
17.2 US State AI Laws
- [ ] If you operate in specific US states, have you reviewed state AI laws:
- [ ] California: AI transparency and disclosure requirements, including SB 942 (AI-generated content disclosure) and AB 2013 (generative AI training data transparency)? (SB 1047 on frontier models was vetoed in 2024.)
- [ ] Colorado: Colorado AI Act (SB 24-205)—requires risk management for high-risk AI systems?
- [ ] Illinois: Biometric Information Privacy Act (BIPA), AI in employment (video interview analysis)?
- [ ] New York: NYC Local Law 144 (automated employment decision tools—bias audits required)?
- [ ] Washington: Facial recognition regulation, proposed AI bills?
- [ ] Texas: Proposed AI transparency and deepfake regulations?
- [ ] Other states with AI laws or proposals?
17.3 US Sector-Specific AI Regulations
- [ ] If you operate in regulated sectors, have you reviewed:
- [ ] Financial Services: Fair lending laws (ECOA, FCRA) applied to AI credit scoring, algorithmic trading regulations?
- [ ] Healthcare: FDA regulation of AI/ML medical devices, HIPAA privacy for health AI?
- [ ] Employment: EEOC guidance on AI discrimination in hiring, ADA compliance for AI screening tools?
- [ ] Insurance: State insurance regulators' guidance on AI underwriting and pricing?
- [ ] Automotive: NHTSA autonomous vehicle regulations?
18. AI Literacy and Training
EU AI Act Requirement (Effective February 2, 2025): All providers and deployers must ensure sufficient AI literacy among staff.
18.1 AI Literacy for All Staff
- [ ] Have you provided AI literacy training to ALL employees covering:
- [ ] What AI is and how it works (basic concepts)?
- [ ] Company's AI systems and their uses?
- [ ] Risks associated with AI (bias, inaccuracy, privacy violations, security vulnerabilities)?
- [ ] Company's AI governance policies and ethics principles?
- [ ] How to report AI incidents or concerns?
18.2 Role-Specific AI Training
- [ ] Have you provided specialized training for:
- [ ] AI Developers/Engineers:
- Responsible AI development practices (bias mitigation, fairness testing, explainability)?
- EU AI Act and GDPR compliance requirements?
- Secure AI development (adversarial robustness, model security)?
- [ ] Product Managers:
- AI risk assessment and use case approval?
- AI governance and compliance?
- Ethical considerations for AI products?
- [ ] Legal/Compliance Teams:
- EU AI Act, GDPR, and other AI regulations?
- AI risk management and audit procedures?
- [ ] HR/Hiring Teams:
- AI bias and discrimination in employment decisions?
- Compliance with employment AI laws (NYC Local Law 144, Illinois video interview law, etc.)?
- [ ] Customer Support:
- How to explain AI system outputs to customers?
- How to handle AI-related complaints?
18.3 Training Documentation
- [ ] Have you documented:
- [ ] Training curriculum and materials?
- [ ] Who has completed training (training records)?
- [ ] Training completion dates and renewal schedule?
Summary: Priority Actions
If you're just getting started with AI compliance, prioritize these actions:
Immediate Actions (Do Today):
- Classify your AI system's risk level under EU AI Act (Section 1)
- Check if your AI is prohibited (Section 2)—if yes, DO NOT deploy in EU
- Identify lawful basis for processing personal data in AI system (Section 6.1)
- Provide AI literacy training to staff (Section 18)—EU AI Act requirement as of Feb 2, 2025
Short-Term Actions (Next 30 Days):
- Update privacy policy to explain AI system and automated decision-making (Section 6.2)
- Conduct DPIA if AI system is high-risk (Section 8)
- Test AI system for bias across protected attributes (Section 11)
- Implement transparency disclosures for chatbots, deepfakes, biometric systems (Section 5)
- Establish AI governance structure and policies (Section 9)
- Document AI system (model card, technical documentation, datasheet for datasets) (Sections 12, 14)
Medium-Term Actions (Next 90 Days):
- Implement human oversight mechanisms (Section 13)
- Establish AI incident monitoring and response procedures (Section 15)
- Conduct vendor due diligence for third-party AI systems (Section 16)
- Prepare for EU AI Act high-risk compliance if applicable (Section 3)—required by August 2, 2026 for Annex III systems (August 2, 2027 for high-risk AI embedded in Annex I regulated products)
Ongoing Actions:
- Monitor AI system performance for accuracy, bias, security (Section 15.2)
- Update risk assessments and DPIAs when AI system changes (Sections 8.5, 9.3)
- Stay informed on evolving AI regulations (EU AI Act implementation, US state laws, sector-specific rules)
Common Mistakes to Avoid
- Assuming AI is low-risk: Many startups underestimate their AI system's risk level. Carefully review EU AI Act risk classification.
- Ignoring GDPR: Even if your AI is not high-risk under EU AI Act, GDPR still applies if you process personal data.
- No bias testing: Deploying AI without testing for bias across protected attributes is a major compliance and ethical failure.
- Using personal data without lawful basis: You cannot train AI models on personal data without a GDPR lawful basis (consent, legitimate interest, etc.).
- No explainability: High-risk AI and automated decisions affecting individuals require explainability.
- Overlooking training data quality: Biased or poor-quality training data leads to biased or inaccurate AI systems.
- No human oversight: High-risk AI requires meaningful human oversight—not rubber-stamping AI decisions.
- Vendor blindness: Using third-party AI (OpenAI, Google, AWS) without due diligence creates compliance and security risks.
- No incident response plan: AI failures, biases, or security incidents WILL occur—have a response plan ready.
- No AI governance: Lack of an AI governance structure leads to fragmented, inconsistent AI practices across the organization.
Next Steps
- Work Through This Checklist: Go through each section systematically
- Document Your Compliance: Create compliance documentation for each requirement
- Engage Legal Counsel: Consult AI/privacy lawyers for complex issues
- Implement AI Governance: Establish AI governance committee, policies, and procedures
- Train Your Team: Provide AI literacy training to all staff
- Test and Audit: Conduct bias testing, security testing, and compliance audits
- Monitor and Update: Continuously monitor AI systems and update compliance as regulations evolve
Related Resources
From Promise Legal:
- AI Regulations Guide - Overview of EU AI Act, GDPR AI compliance, and US AI laws
- Privacy Policy Template - GDPR/CCPA privacy policy including AI disclosures
- Data Processing Agreement (DPA) - GDPR Article 28 DPA for AI processors
- Privacy Audit Template - GDPR/CCPA compliance audit including AI systems
- GDPR Compliance Guide - Complete GDPR overview
- Data Security Guide - Cybersecurity for AI systems
External Resources:
- EU AI Act Official Text: https://artificialintelligenceact.eu/
- EU AI Act Compliance Checker: https://artificialintelligenceact.eu/assessment/eu-ai-act-compliance-checker/
- NIST AI Risk Management Framework: https://www.nist.gov/itl/ai-risk-management-framework
- ISO/IEC 42001 (AI Management System): https://www.iso.org/standard/81230.html
- Model Cards: https://modelcards.withgoogle.com/about
- AI Fairness 360 (AIF360): https://aif360.res.ibm.com/
- EDPB Guidelines on Automated Decision-Making (GDPR): https://www.edpb.europa.eu/
Get Legal Help
Need help with AI compliance?
Promise Legal helps startups navigate AI regulations, including:
- EU AI Act risk classification and compliance strategy
- GDPR compliance for AI systems (lawful basis, DPIAs, data subject rights)
- AI governance framework design
- AI vendor contracts and DPAs
- Bias testing and fairness assessments (legal review)
- Automated decision-making compliance (GDPR Article 22)
- AI incident response planning
- US federal and state AI compliance
Schedule a consultation or email us at [email protected].
Disclaimer: This checklist is provided for informational purposes only and does not constitute legal advice. AI regulations are complex, rapidly evolving, and vary by jurisdiction. You should consult with qualified legal counsel specializing in AI, privacy, and technology law to ensure compliance with all applicable laws and to address your specific circumstances. Promise Legal assumes no liability for any damages arising from use of this checklist.
Last Updated: September 30, 2025