
What is the EU AI Act?

The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) is the world’s first comprehensive legal framework for artificial intelligence. It entered into force on August 1, 2024, with obligations phasing in through August 2027.
The EU AI Act applies to any company offering AI systems in the EU market, regardless of where the company is based.

Risk-Based Approach

The Act categorizes AI systems into four risk levels:
  • Unacceptable Risk
  • High Risk
  • Limited Risk
  • Minimal Risk
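As a rough illustration (not legal advice), the four-tier model can be thought of as a lookup from intended use to risk tier. The use-case keys below are hypothetical examples, not official terms from the Regulation’s annexes:

```javascript
// Sketch: mapping an AI system's intended use to an EU AI Act risk tier.
// Use-case keys are illustrative; real classification requires checking
// Article 5 (prohibitions) and Annex III (high-risk areas).
const RISK_TIERS = {
  "social-scoring": "unacceptable",  // prohibited outright
  "recruitment-screening": "high",   // employment is an Annex III area
  "customer-chatbot": "limited",     // transparency obligations only
  "spam-filter": "minimal",          // no specific obligations
};

function classifyRisk(useCase) {
  // Systems not caught by a higher tier default to minimal risk.
  return RISK_TIERS[useCase] ?? "minimal";
}

console.log(classifyRisk("recruitment-screening")); // "high"
```

In practice the tier drives everything that follows: unacceptable-risk systems may not be offered at all, while the high-risk tier triggers the full requirement set described below.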
Prohibited AI Practices

These AI systems are banned outright:
  • Social scoring by governments
  • Real-time biometric identification in public spaces (with exceptions)
  • Emotion recognition in workplace/education
  • Manipulation of human behavior causing harm
  • Exploiting vulnerabilities of specific groups
⚠️ Penalties: Up to €35M or 7% of global turnover

Key Requirements for High-Risk AI

If your AI system is classified as high-risk, you must:

1. Risk Management System

1. Identify Risks: Document potential risks throughout the AI lifecycle
2. Risk Mitigation: Implement measures to eliminate or reduce risks to acceptable levels
3. Residual Risk Evaluation: Assess remaining risks after mitigation
4. Continuous Monitoring: Test and update risk management throughout deployment
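The four steps above can be sketched as a minimal risk register. The field names (id, severity, mitigation, residualSeverity) are our own convention, not terminology mandated by the Act:

```javascript
// Minimal risk-register sketch: identify, mitigate, evaluate residual
// risk, and monitor anything still above an acceptable threshold.
const riskRegister = [];

function identifyRisk(id, description, severity) { // step 1
  riskRegister.push({ id, description, severity, mitigation: null, residualSeverity: severity });
}

function mitigateRisk(id, mitigation, residualSeverity) { // steps 2-3
  const risk = riskRegister.find(r => r.id === id);
  risk.mitigation = mitigation;
  risk.residualSeverity = residualSeverity;
}

function risksAboveThreshold(threshold) { // step 4: continuous monitoring
  return riskRegister.filter(r => r.residualSeverity > threshold);
}

identifyRisk("R1", "Biased ranking of candidates", 5);
mitigateRisk("R1", "Re-weighting plus quarterly bias audits", 2);
console.log(risksAboveThreshold(3).length); // 0
```

The point of the structure is that mitigation never deletes a risk entry: the residual severity stays on record for evaluation and ongoing monitoring.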

2. Data Governance

High-quality training data requirements:
  • Relevant, representative, and free of errors
  • Appropriate statistical properties
  • Consider biases and limitations
  • Document data collection processes
  • Maintain data provenance records
  • Ensure GDPR compliance
  • Examine training data for biases
  • Implement bias detection and correction
  • Regular bias testing and monitoring
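One common bias check that such testing can include is comparing selection rates between groups. The sketch below uses the "four-fifths rule" heuristic from US hiring practice purely as an illustrative metric; the Act does not prescribe a specific fairness measure:

```javascript
// Selection-rate comparison between a protected group and a reference
// group. A ratio well below ~0.8 is a conventional signal to investigate.
function selectionRate(outcomes) {
  // outcomes: array of booleans (true = positive decision)
  return outcomes.filter(Boolean).length / outcomes.length;
}

function disparateImpactRatio(groupA, groupB) {
  return selectionRate(groupA) / selectionRate(groupB);
}

const ratio = disparateImpactRatio(
  [true, false, false, false], // group A: 25% selected
  [true, true, false, false]   // group B: 50% selected
);
console.log(ratio); // 0.5 — below the 0.8 heuristic, worth investigating
```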

3. Technical Documentation

Must maintain comprehensive technical documentation:
Required Documentation:
├── General Description
│   ├── Intended purpose
│   ├── Risk classification
│   └── Deployment conditions
├── Technical Specifications
│   ├── System architecture
│   ├── Data requirements
│   └── Model details
├── Risk Management
│   ├── Risk assessment
│   ├── Mitigation measures
│   └── Residual risks
├── Data Governance
│   ├── Training data description
│   ├── Data quality metrics
│   └── Bias assessments
└── Testing & Validation
    ├── Test procedures
    ├── Validation results
    └── Performance metrics
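The tree above can also be kept as a machine-readable object that a documentation generator fills in. The key names below simply mirror the tree; they are our own convention, not a schema defined by the Act:

```javascript
// Sketch: the required documentation tree as an object with placeholder
// values, plus a quick completeness check for still-empty sections.
const technicalDocumentation = {
  generalDescription: { intendedPurpose: "", riskClassification: "", deploymentConditions: "" },
  technicalSpecifications: { systemArchitecture: "", dataRequirements: "", modelDetails: "" },
  riskManagement: { riskAssessment: "", mitigationMeasures: "", residualRisks: "" },
  dataGovernance: { trainingDataDescription: "", dataQualityMetrics: "", biasAssessments: "" },
  testingAndValidation: { testProcedures: "", validationResults: "", performanceMetrics: "" },
};

// List any top-level sections that still contain blank fields.
function missingSections(doc) {
  return Object.entries(doc)
    .filter(([, fields]) => Object.values(fields).some(v => v === ""))
    .map(([section]) => section);
}

console.log(missingSections(technicalDocumentation).length); // 5
```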

4. Record Keeping

Automatic logging requirements:
  • Operation Logs: All significant events during operation
  • User Interactions: Complete audit trail
  • Performance Data: Accuracy, errors, and anomalies
  • Incident Records: Malfunctions and corrective actions
RegPilot automatically handles this through the AI Gateway logging system.
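If you need to reason about what such a trail looks like, a structured record per event covering the four categories above is a reasonable shape. The schema here is our own sketch; the Act requires logging but does not prescribe a format:

```javascript
// Sketch of a structured audit-log record. Category values map to the
// four logging requirements: operation, interaction, performance, incident.
function logEvent(store, category, payload) {
  store.push({
    timestamp: new Date().toISOString(),
    category, // "operation" | "interaction" | "performance" | "incident"
    payload,
  });
}

const auditTrail = [];
logEvent(auditTrail, "interaction", { userId: "u42", decision: "approved" });
logEvent(auditTrail, "incident", { error: "timeout", correctiveAction: "retried" });
console.log(auditTrail.length); // 2
```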

5. Transparency & Information

Provide clear information to users:

User Instructions

  • Operating instructions
  • System capabilities and limitations
  • Expected performance levels
  • Known risks

Technical Specifications

  • Input/output formats
  • Performance metrics
  • Hardware requirements
  • Integration guidelines

6. Human Oversight

High-risk systems must provide for:
  • The ability to stop or interrupt the system
  • The ability to disregard, override, or reverse AI decisions
  • Real-time monitoring capabilities
  • Clear assignment of oversight responsibility
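In code, oversight often reduces to making every automated decision overridable and attributing the override to a named person. The wiring below is an illustrative sketch, not RegPilot’s API:

```javascript
// Sketch: wrap an AI decision so a human overseer can reverse it, with
// the override attributed for clear responsibility assignment.
function makeOverridableDecision(aiDecision, overseer) {
  return {
    value: aiDecision,
    overriddenBy: null,
    override(newValue) { // disregard/reverse the AI decision
      this.value = newValue;
      this.overriddenBy = overseer;
    },
  };
}

const decision = makeOverridableDecision("reject", "jane.doe@example.com");
decision.override("approve");
console.log(decision.value, decision.overriddenBy); // "approve" "jane.doe@example.com"
```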

7. Accuracy, Robustness & Cybersecurity

Ensure:
  • Accuracy: Achieve and maintain specified performance levels
  • Robustness: Resilience against errors, faults, and inconsistencies
  • Security: Protection against unauthorized access or manipulation
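"Achieve and maintain" implies checking accuracy at runtime, not just at release. A sliding-window monitor against the declared performance level is one simple pattern; the threshold and window size below are illustrative:

```javascript
// Sketch: sliding-window accuracy monitor compared against the accuracy
// level declared in the system's documentation.
function accuracyMonitor(declaredAccuracy, windowSize) {
  const results = [];
  return {
    record(correct) {
      results.push(correct);
      if (results.length > windowSize) results.shift();
    },
    withinSpec() {
      const acc = results.filter(Boolean).length / results.length;
      return acc >= declaredAccuracy;
    },
  };
}

const monitor = accuracyMonitor(0.9, 100);
for (let i = 0; i < 10; i++) monitor.record(i !== 0); // 9/10 correct
console.log(monitor.withinSpec()); // true (0.9 >= 0.9)
```

A drop below spec would typically feed into the incident records described under Record Keeping.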

How RegPilot Helps You Comply

Automatic Classification

RegPilot helps determine your AI system’s risk level based on its intended use

Documentation Generator

Auto-generate required technical documentation from your AI systems

Automated Logging

Complete audit trail of all AI interactions through AI Gateway

Bias Detection

Built-in bias monitoring and fairness metrics in analytics dashboard

Risk Management

Track and manage compliance risks with the violations system

Transparency Tools

Generate user-facing disclosures and compliance statements

Implementation Timeline

The EU AI Act has a phased implementation:
  • February 2, 2025: Prohibitions on unacceptable-risk AI practices apply
  • August 2, 2025: Obligations for general-purpose AI models apply
  • August 2, 2026: Most remaining provisions, including high-risk requirements, apply
  • August 2, 2027: Extended transition ends for high-risk AI embedded in regulated products
Start preparing now! Even if full compliance isn’t required until 2026, beginning implementation early reduces risk and demonstrates due diligence.

Compliance Checklist

Use this checklist to track your compliance progress:
1. Risk Classification
  • Determine AI system risk level
  • Document classification rationale
  • Review classification annually

2. Risk Management
  • Establish risk management system
  • Conduct risk assessments
  • Implement mitigation measures
  • Test and validate effectiveness

3. Data Governance
  • Document training data sources
  • Assess data quality and biases
  • Implement data protection measures
  • Establish data governance policies

4. Technical Documentation
  • Create system description
  • Document technical specifications
  • Maintain risk management records
  • Record testing and validation

5. Logging & Monitoring
  • Implement automatic logging
  • Set up performance monitoring
  • Establish incident reporting
  • Create audit procedures

6. Human Oversight
  • Define oversight mechanisms
  • Assign responsibilities
  • Implement override capabilities
  • Train oversight personnel

7. Transparency
  • Create user documentation
  • Implement AI disclosures
  • Provide performance information
  • Establish communication channels

Common Compliance Scenarios

Scenario 1: Customer Service Chatbot

Risk Level: Limited Risk (transparency obligations)

Requirements:
  • Disclose to users they’re interacting with AI
  • Provide option to escalate to human
  • Log conversations for quality assurance
RegPilot Solution:
// Automatic disclosure with RegPilot
const response = await regpilot.chat.create({
  messages: [...],
  metadata: {
    disclosureType: 'chatbot',
    escalationAvailable: true
  }
});

// Response includes disclosure text
console.log(response.disclosure); 
// "You are chatting with an AI assistant..."

Scenario 2: HR Recruitment Tool

Risk Level: High Risk

Requirements:
  • Full EU AI Act compliance
  • Risk management system
  • Bias testing and monitoring
  • Technical documentation
  • Human oversight
RegPilot Solution:
  • Register model in Models Registry
  • Enable Governor for all interactions
  • Automatic compliance documentation
  • Bias detection in analytics
  • Complete audit trail

Scenario 3: Content Recommendation

Risk Level: Limited Risk (transparency obligations)

Requirements:
  • Inform users about AI-powered recommendations
  • Allow users to opt-out or customize
RegPilot Solution:
// Include transparency metadata
const recommendations = await regpilot.complete({
  prompt: "Generate recommendations for user",
  transparency: {
    type: 'recommendation_system',
    customizable: true
  }
});

Penalties for Non-Compliance

The EU AI Act includes substantial penalties:
Violation                               Fine
Prohibited AI practices                 Up to €35M or 7% of global turnover
Non-compliance (high-risk)              Up to €15M or 3% of global turnover
Incorrect information to authorities    Up to €7.5M or 1.5% of global turnover
Penalties are based on the higher amount between the fixed sum and percentage of turnover.
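The "whichever is higher" rule is simple arithmetic. Using the figures from the table above:

```javascript
// The fine is the greater of the fixed cap and the percentage of
// worldwide annual turnover.
function fine(fixedCapEur, turnoverPct, globalTurnoverEur) {
  return Math.max(fixedCapEur, turnoverPct * globalTurnoverEur);
}

// Prohibited-practice violation for a company with €1bn global turnover:
// 7% of €1bn = €70M, which exceeds the €35M fixed cap.
console.log(fine(35_000_000, 0.07, 1_000_000_000)); // 70000000
```

For smaller companies the fixed cap dominates: at €100M turnover, 7% is €7M, so the €35M cap applies instead.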

Next Steps

1. Classify Your AI Systems: Use our risk assessment tool to determine which systems need compliance
2. Start Monitoring: Integrate RegPilot AI Gateway to begin automatic compliance logging
3. Generate Documentation: Use RegPilot’s documentation generator for required technical docs
4. Regular Review: Schedule quarterly compliance reviews and updates
Need personalized guidance? Our compliance experts can help assess your specific situation and create a compliance roadmap. Contact us.