The European Union’s Artificial Intelligence Act entered its full enforcement phase on Tuesday, marking the first time a comprehensive regulatory framework for AI systems has moved from legislative text to active enforcement backed by meaningful financial penalties.

What the Act Requires

The regulation establishes a risk-based framework with four tiers. Systems deemed “unacceptable risk” — including social scoring systems and real-time biometric identification in public spaces — are banned outright. High-risk systems, including AI used in medical devices, employment decisions, credit scoring, and critical infrastructure, must meet strict requirements for transparency, human oversight, data governance, and technical robustness. Limited-risk systems, such as chatbots, carry lighter transparency obligations, while minimal-risk applications face no new requirements under the Act.

Compliance Status

Industry surveys conducted in the weeks before the enforcement date suggested that approximately 60% of companies with AI systems deployed in EU markets believe they are fully compliant. The remaining 40% cited challenges with documentation requirements, algorithm transparency obligations, and the cost of implementing the required human oversight processes.

First Enforcement Actions

Regulators in Germany and France have already confirmed that preliminary investigations are underway against several companies, though they declined to name specific targets. The enforcement agency has 600 staff dedicated to AI regulation, with plans to expand to 1,400 by 2027.

Global Implications

The EU’s approach has already influenced legislative discussions in the United Kingdom, Canada, Brazil, and Japan. Technology companies are increasingly designing AI systems to EU standards as a de facto global baseline, creating what some observers call a “Brussels Effect” in AI governance.