The EU AI Act is the world’s first comprehensive, legally binding framework governing the development, deployment, and use of AI systems. It introduces strict security, governance, and risk‑management controls—especially for high‑risk and general‑purpose AI (GPAI)—and applies to any organisation whose AI systems affect people in the EU, including those outside Europe. Below is a clear, structured breakdown based on authoritative EU sources.
The EU Artificial Intelligence Act (AI Act) is the first global, horizontal regulatory framework for AI.
Its purpose is to ensure trustworthy, safe, transparent, and rights‑respecting AI across the EU.
It applies to:
- Developers (providers) of AI systems
- Deployers (users) operating AI in a professional capacity
- Providers of general‑purpose AI (GPAI) models
- Third‑country organisations whose AI outputs are used in the EU
This makes it extraterritorial—relevant even for organisations in Scotland and the wider UK.
The Act categorises AI systems into four risk levels:
| Risk Level | Status | Examples |
|---|---|---|
| Unacceptable risk | Banned | Social scoring, manipulative AI, biometric categorisation inferring sensitive attributes, emotion recognition in workplaces and schools |
| High risk | Strictly regulated | Credit scoring, HR recruitment, essential public services, medical devices, law enforcement |
| Limited risk | Transparency obligations | Chatbots, deepfakes |
| Minimal risk | Unregulated | Video games, spam filters |
Most obligations fall on high‑risk systems and GPAI models.
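The four tiers above can be pictured as a simple lookup from example use cases to regulatory status. This is only an illustrative sketch based on the table's examples; real classification requires legal analysis of the Act's annexes, and the enum values and mapping below are assumptions for demonstration.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned"
    HIGH = "strictly regulated"
    LIMITED = "transparency obligations"
    MINIMAL = "unregulated"

# Illustrative mapping of the table's example use cases to tiers;
# not a substitute for legal classification under the Act.
EXAMPLE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "credit scoring": RiskTier.HIGH,
    "hr recruitment": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def tier_for(use_case: str) -> RiskTier:
    """Look up the risk tier for a known example use case."""
    return EXAMPLE_TIERS[use_case.lower()]
```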
The EU AI Act introduces explicit security requirements, especially for high‑risk and systemic‑risk GPAI models.
Key obligations include:
High‑risk AI and systemic‑risk GPAI must implement:
- Robust cybersecurity measures
- Protections against adversarial attacks
- Monitoring for model drift and misuse
- Secure logging and auditability
- Incident detection and reporting
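One common way to support the secure-logging and auditability obligation is a hash-chained audit trail, where each record commits to the previous one so tampering is detectable. The Act does not prescribe any particular mechanism; the field names and structure below are illustrative assumptions.

```python
import datetime
import hashlib
import json

def audit_record(prev_hash: str, event: dict) -> dict:
    """Create an audit record whose hash covers the event, the
    timestamp, and the previous record's hash (a tamper-evident chain)."""
    payload = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "event": event,
        "prev_hash": prev_hash,
    }
    payload["hash"] = hashlib.sha256(
        json.dumps({k: payload[k] for k in ("timestamp", "event", "prev_hash")},
                   sort_keys=True).encode()
    ).hexdigest()
    return payload
```

Because each record embeds its predecessor's hash, altering any earlier entry breaks verification of every later one.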
Systemic‑risk GPAI providers must conduct:
- Model evaluations
- Adversarial testing
- Red‑teaming
Providers must ensure:
- Integrity of training data
- Secure development processes
- Documentation of model lineage
The Act mandates strong governance structures for high‑risk AI:
Organisations must maintain a continuous risk‑management process, including:
- Hazard identification
- Mitigation planning
- Monitoring and review
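The identify-mitigate-review cycle above could be tracked in a simple risk register. This is a minimal sketch of one possible internal tool; the class and field names are assumptions, not structures mandated by the Act.

```python
from dataclasses import dataclass, field

@dataclass
class Hazard:
    description: str
    mitigation: str = ""
    status: str = "identified"  # identified -> mitigated

@dataclass
class RiskRegister:
    """Tracks hazards through a continuous identify -> mitigate -> review loop."""
    hazards: list[Hazard] = field(default_factory=list)

    def identify(self, description: str) -> Hazard:
        hazard = Hazard(description)
        self.hazards.append(hazard)
        return hazard

    def mitigate(self, hazard: Hazard, plan: str) -> None:
        hazard.mitigation = plan
        hazard.status = "mitigated"

    def review(self) -> list[Hazard]:
        """Return hazards still awaiting a mitigation plan."""
        return [h for h in self.hazards if h.status == "identified"]
```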
High‑risk AI must use:
- High‑quality, representative, bias‑controlled datasets
- Documented data provenance
- Data protection aligned with GDPR
Systems must include:
- Clear human‑in‑the‑loop mechanisms
- Ability to override or stop AI decisions
- Training for human operators
Providers must maintain:
- Model design documentation
- Training data summaries
- Evaluation results
- Instructions for use
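The documentation items listed above could be kept in a machine-readable "model card" so completeness can be checked automatically. The Act does not prescribe this format; the class and field names below are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Illustrative container for a provider's mandatory documentation."""
    design_documentation: str = ""
    training_data_summary: str = ""
    evaluation_results: dict[str, float] = field(default_factory=dict)
    instructions_for_use: str = ""

    def is_complete(self) -> bool:
        """True only when all four documentation items are present."""
        return all([
            self.design_documentation,
            self.training_data_summary,
            self.evaluation_results,
            self.instructions_for_use,
        ])
```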
The Act introduces a dedicated regime for GPAI models, including foundation and generative models.

All GPAI providers must:
- Publish a summary of the training data
- Provide technical documentation
- Provide usage instructions
- Comply with EU copyright rules

Providers of GPAI models posing systemic risk must additionally:
- Implement model evaluations and adversarial testing
- Track and report serious incidents
- Ensure cybersecurity protections
- Assess and mitigate systemic risks
This applies to major LLMs and multimodal models.
The Act bans eight categories of AI practices, including:
- Manipulative or deceptive AI
- Exploitation of vulnerable groups
- Social scoring
- Emotion recognition in workplaces and education
- Untargeted scraping of facial images to build facial‑recognition databases
- Real‑time remote biometric identification in publicly accessible spaces (with narrow law‑enforcement exceptions)