The European Commission is reportedly preparing to postpone important portions of the EU AI Act, potentially delaying enforcement until technical standards and guidelines are finalized.
The EU AI Act, the world’s first comprehensive AI law, was adopted in late 2023 and began phased implementation in August 2024. Provisions banning “unacceptable risk” AI went live in February 2025, with the next major compliance date—covering general‑purpose AI models and high‑risk applications—scheduled for August 2, 2025.
However, industry representatives have raised alarms, arguing that the requisite technical standards and “code of practice” are not ready, making it impossible to comply on schedule. In response, EU tech chief Henna Virkkunen has acknowledged that if standards fail to materialize, delaying compliance deadlines may be justified.
Why This Matters: Balancing Standards with Innovation
🚫 Industry Voices Concern:
- Delays in harmonized standards from CEN-CENELEC and the AI Office are cited as major obstacles.
- U.S. officials have also urged caution, warning that over-regulation could hamper innovation and job growth.
- Lobbyists cite the need for “stop-the-clock” mechanisms to ensure fairness and clarity before enforcing compliance.
🛡️ Regulatory Context:
- The AI Act assigns risk levels to AI systems:
  - Unacceptable risk – banned (from February 2025)
  - High risk – subject to stringent compliance (from August 2025)
  - Minimal risk – largely unregulated
- The enforcement ramp-up carries risks: some firms have built systems on the assumption that deadlines would slip, and lingering uncertainty threatens the competitiveness of SMEs.
What the Delay Could Change
- Compliance Deadlines Pushed Back:
Companies gain additional time to prepare documentation, risk assessments, and testing before the August 2025 rollout.
- AI Code of Practice Strengthened:
Completion of this blueprint is crucial—it specifies how general-purpose LLMs should be documented, governed, and trained.
- Global Ripple Effects:
The AI Act is widely regarded as a global regulatory benchmark, with nations across LATAM and Asia shaping their own models around it. Any delay could reshape others’ timelines.
Mixed Reactions: Innovation vs. Safeguards
- Supporters of a delay argue that rushing enforcement amid legal ambiguity risks setting unrealistic expectations, especially for small and medium-sized enterprises.
- Critics caution that pausing could weaken the Act’s integrity and undermine Europe’s role as a pioneer in ethical AI governance.
EU digital ministers, including representatives from Poland, Finland, and Germany, have expressed openness to negotiation—suggesting a cautious recalibration rather than outright rollback.
How Businesses Should Navigate This
- 📌 Track official communications from the European AI Office and national regulators for updated timelines.
- 📌 Initiate compliance groundwork now: risk and robustness documentation, human oversight systems, transparency logs—even if enforcement is delayed. That builds goodwill and readiness.
- 📌 Engage in consultation: The current public consultations on implementation challenges will help shape future standards.
What’s Next
A public consultation on high-risk AI implementation closed on June 6 and drew broad feedback from across the EU. The Commission is expected to deliver its decision ahead of the July European Council summit; whether a “stop-the-clock” measure is formally adopted, and for how long, will be crucial.
Verdict
Europe is at a regulatory crossroads. Delaying rules could offer much-needed flexibility and buy time for small innovators—but also risks sowing confusion and diluting the credibility of landmark legislation. For stakeholders worldwide, staying compliance-ready remains essential, regardless of when enforcement rolls out.