The Next Phase of the EU AI Act Is Here: Are You Leading or Lagging?
31/07/2025
12 MIN READ
A year has passed since the EU AI Act entered into force. The Act has moved from policy to execution, and the next 12 months will separate the reactive from the proactive. Here is what’s coming next for businesses, citizens, and organizations.
The Next Major Milestone: August 2025
August 2, 2025, marks the next significant implementation deadline under the EU AI Act. At this stage, obligations for General-Purpose AI (GPAI) model providers come into force alongside the establishment of national governance and oversight structures.
What’s Changing?
- GPAI Compliance Requirements: Providers of general-purpose AI models must disclose detailed technical documentation, ensure transparency around training data, and establish copyright compliance policies. Heavier systemic-risk obligations attach to the largest models: under the Act, training compute above 10^25 floating-point operations triggers a presumption of systemic risk (see the compute sketch after this list).
- National Authorities Activated: Each EU Member State must designate and empower market surveillance authorities and notifying bodies. These entities will be responsible for enforcement and conformity assessments.
- AI Office and European AI Board: The central AI Office will coordinate implementation and issue interpretive guidance, while the Board will ensure consistency across Member States.
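To make the compute threshold concrete, here is a minimal back-of-the-envelope sketch, assuming the widely used approximation that dense-transformer training costs roughly 6 FLOPs per parameter per training token. The function names and example figures are illustrative, not an official calculation method.

```python
# Minimal sketch (assumptions noted): estimate training compute with the
# common ~6 * parameters * tokens heuristic for dense transformers, then
# compare it against the Act's 10^25 FLOP systemic-risk presumption.
# Names and figures below are illustrative, not an official method.

SYSTEMIC_RISK_FLOPS = 1e25  # presumption threshold under Article 51


def estimate_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Rough dense-transformer estimate: ~6 FLOPs per parameter per token."""
    return 6.0 * n_parameters * n_tokens


def presumed_systemic_risk(n_parameters: float, n_tokens: float) -> bool:
    return estimate_training_flops(n_parameters, n_tokens) >= SYSTEMIC_RISK_FLOPS


# Example: a hypothetical 70B-parameter model trained on 15T tokens
flops = estimate_training_flops(70e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")  # ~6.3e24
print("Presumed systemic risk:", presumed_systemic_risk(70e9, 15e12))  # False
```

Note that a model below the threshold still carries the baseline GPAI transparency obligations; the threshold only governs the additional systemic-risk tier.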
What Should Businesses Do?
- Audit Your AI Technology Stack: Determine whether you are a GPAI model provider, system integrator, or downstream deployer. Each role carries distinct obligations.
- Prepare Technical Documentation: Create or update records detailing data sources, training processes, evaluation methods, and risks (see the documentation sketch after this list).
- Establish a Copyright Strategy: The Act requires GPAI providers to adopt a policy for complying with EU copyright law and to publish a sufficiently detailed summary of the content used for training.
- Engage with the Code of Practice: A voluntary GPAI Code of Practice, issued in July 2025, provides a soft-landing path for early compliance and collaboration.
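One lightweight way to start on the documentation item above is to keep a structured, machine-readable record per model. The sketch below is a minimal illustration; the field names are our own shorthand for the kinds of information the Act asks for, not an official schema.

```python
# Illustrative sketch only: a structured record for GPAI technical
# documentation (data sources, training process, evaluation, risks).
# Field names are our own shorthand, not the Act's official schema.
from dataclasses import dataclass, asdict
import json


@dataclass
class ModelDocumentation:
    model_name: str
    version: str
    data_sources: list[str]        # provenance of training data
    training_process: str          # high-level description of training
    evaluation_methods: list[str]  # benchmarks, red-teaming, etc.
    known_risks: list[str]         # identified limitations and risks
    copyright_policy_url: str      # link to your copyright compliance policy

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)


doc = ModelDocumentation(
    model_name="example-gpai-model",  # hypothetical model
    version="1.2.0",
    data_sources=["licensed corpus A", "filtered public web crawl"],
    training_process="Pre-training followed by supervised fine-tuning.",
    evaluation_methods=["held-out perplexity", "safety red-teaming"],
    known_risks=["hallucination", "training-data bias"],
    copyright_policy_url="https://example.com/copyright-policy",
)
print(doc.to_json())
```

Keeping records like this versioned alongside the model makes the August 2025 disclosure duties, and any later audit, far less painful.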
Looking Ahead: The Phased Roadmap to 2027
While 2025 kicks off the first wave of compliance, the Act follows a structured rollout through 2027. Each phase introduces new requirements and corresponding strategic implications.
February 2025: Prohibitions and AI Literacy Obligations
February 2, 2025, marked the first major compliance deadline. From this date, the Act’s prohibitions on unacceptable AI practices, outlined in Article 5, became enforceable. These include bans on manipulative AI techniques, exploitative systems targeting vulnerable groups, and real-time biometric surveillance in public spaces (with narrow exceptions). Organizations also became responsible for ensuring that staff interacting with AI systems possess adequate AI literacy.
August 2026: High-Risk AI Takes Centre Stage
This is where the Act’s real teeth show. Providers and deployers of high-risk AI systems, such as those used in healthcare, education, finance, or critical infrastructure, must comply with rigorous risk management, data governance, and post-market monitoring requirements.
Action Plan:
- Classify AI systems based on the Act’s Annex III use cases (see the triage sketch after this list).
- Conduct Conformity Assessments: Many high-risk applications will require certification through third-party notified bodies.
- Develop a Fundamental Rights Impact Assessment (FRIA) process (if applicable).
- Implement robust cybersecurity and human oversight mechanisms.
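To illustrate the classification step in the plan above, here is a deliberately naive triage sketch against an abridged list of Annex III areas. The area keys and matching logic are our own and purely illustrative; any real classification decision needs legal review.

```python
# Hedged sketch: first-pass triage of AI systems against an abridged
# list of the Act's Annex III high-risk areas. The matching is
# deliberately naive; real classification requires legal review.
ANNEX_III_AREAS = {
    "biometrics": "Biometric identification and categorisation",
    "critical_infrastructure": "Management of critical infrastructure",
    "education": "Education and vocational training",
    "employment": "Employment and worker management",
    "essential_services": "Access to essential private and public services",
    "law_enforcement": "Law enforcement",
    "migration": "Migration, asylum and border control",
    "justice": "Administration of justice and democratic processes",
}


def triage(system_name: str, areas: list[str]) -> str:
    """Flag a system as potentially high-risk if it touches an Annex III area."""
    hits = [ANNEX_III_AREAS[a] for a in areas if a in ANNEX_III_AREAS]
    if hits:
        return f"{system_name}: potentially HIGH-RISK ({'; '.join(hits)}) -> full assessment needed"
    return f"{system_name}: no Annex III area matched -> document rationale and re-check"


print(triage("cv-screening-tool", ["employment"]))
print(triage("internal-chatbot", []))
```

A triage like this only flags candidates for deeper review; the Act’s actual classification also depends on the conditions and exemptions in Article 6.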
August 2027: Grandfathered GPAI Models Must Comply
General-purpose AI models placed on the market before August 2, 2025 must be brought into full compliance by this date. In addition, the Article 6(1) classification rules for high-risk AI systems that serve as safety components of regulated products, and their corresponding obligations, start to apply.
Action: Begin retroactive audits now. If your model is already widely used, adapting it later could lead to more complexity, friction with partners, or market disruption.
Navigating Ambiguity: What Are Stakeholders Saying?
The EU AI Act has been widely applauded for its risk-based, innovation-preserving approach. However, as implementation begins, stakeholders are raising concerns and requests for clarification.
1. Industry Requests for Clarity
AI providers and developers are asking for clearer technical guidance on topics like:
- How to demonstrate sufficient transparency in foundation models.
- How to balance compliance with IP protection for proprietary models.
For example, CCIA Europe, which represents big-tech companies like Alphabet, Meta, and Apple, has urged EU leaders to pause or delay implementation of the AI Act, citing inconsistent guidance, regulatory fragmentation, and potential disruption to innovation ecosystems.
The AI Office is expected to publish further interpretive guidance and best practices throughout 2025 and 2026. Keeping track of this guidance will be essential.
2. Concerns About Fragmentation
National competent authorities will have room for interpretation, especially regarding high-risk assessments and audits. Businesses fear that regulatory fragmentation across Member States may increase compliance costs or delay innovation. A letter from 44 European CEOs (e.g., Airbus, BNP Paribas, Philips) argued that the Act’s complexity could hamper competitiveness unless simplified or postponed.
The EU AI Board is working to harmonize enforcement interpretations—but until then, businesses operating in multiple countries should prepare for local variation.
3. Voluntary Compliance vs. Legal Certainty
The Code of Practice for GPAI providers is voluntary, but it is fast becoming a de facto compliance guide. Stakeholders note the need for quicker feedback loops to avoid grey zones between voluntary best practice and enforceable regulation.
The Opportunity: Compliance as Competitive Advantage
While the EU AI Act introduces a complex regulatory structure, it also gives first-movers a chance to lead.
- Responsible AI as Brand Differentiator: Consumers and partners increasingly value trustworthy and transparent AI. Compliance can be a market signal.
- Streamlined Operations: The Act encourages lifecycle documentation and monitoring—these are also excellent practices for internal quality control.
- Global Readiness: The EU Act may become the global benchmark. Early adopters will be better positioned to scale across jurisdictions.
Final Thoughts: From Compliance to Capability
The EU AI Act is not just a legal milestone—it’s a test of strategic and technical maturity. The businesses that will emerge stronger aren’t just those that tick the right boxes, but those that integrate compliance into their core development workflows, build AI systems that are trustworthy by design, and view transparency as a catalyst for better products.
At code4thought, we see regulation as an opportunity to strengthen the link between engineering, ethics, and innovation. Whether you’re a downstream user of a general-purpose model or integrating AI into high-risk workflows, the real challenge isn’t just understanding what the Act demands—it’s operationalizing it with clarity, speed, and precision.
Our EU AI Act Assurance service, combined with our iQ4AI platform for comprehensive AI quality testing, gives businesses confidence that they are investing in trustworthy, explainable, rigorously governed, and compliant AI solutions. The result is readiness for external scrutiny: not only avoiding fines, but building systems people trust and ecosystems others want to be part of.
Want to explore how your AI systems can align with the EU AI Act without slowing down innovation? Let’s talk about building compliance into your codebase from the ground up.