LexSutra
Live Intelligence Feed

EU AI Act Regulatory Intelligence

Official EU AI Act developments, guidance documents, and compliance milestones — monitored and analysed by the LexSutra team.

What do these updates mean for your company?

Enter your website and we'll analyse your public footprint to tell you exactly which obligations apply to your AI system — and where your gaps are. Based on publicly available information only.

Analyse my company →

GPAI Model Obligations Now in Effect as of August 2025

High Impact
EU AI Office Published 02 August 2025

The AI Act rules governing General-Purpose AI (GPAI) models became applicable on 2 August 2025. Providers of GPAI models are now legally required to meet transparency and copyright obligations, and those posing systemic risks must assess and mitigate those risks. This marks a live enforcement milestone, not merely a future deadline.

What this means for AI companies

A company offering a foundation model integrated into recruitment or credit-scoring tools must immediately comply with transparency and systemic risk assessment requirements or face regulatory action from the AI Office.

Transparency · Risk Management System · Technical Documentation

High-Risk AI System Rules Confirmed for August 2026 and August 2027 Application

Medium Impact
EU AI Office Published 01 August 2024

Obligations for high-risk AI systems will come into effect in August 2026, with an extended transition period until August 2027 for high-risk AI embedded into regulated products. This reaffirms the existing deadlines and clarifies the two-tier timeline for high-risk compliance.

What this means for AI companies

An HR tech firm using automated CV-screening (classified as high-risk) must have its risk management system, logging infrastructure, technical documentation, and human oversight mechanisms in place by August 2026, while a manufacturer embedding AI into a medical device has until August 2027.

Risk Management System · Data Governance · Technical Documentation · Logging and Record Keeping · Transparency · Human Oversight · Accuracy and Robustness · Conformity Assessment

Transparency and High-Risk AI Rules Now Applicable as of August 2026

High Impact
EU AI Office Published 02 August 2026

The full suite of transparency obligations for AI systems (including chatbot disclosure and AI-generated content labelling) and the obligations for high-risk AI systems became applicable on 2 August 2026. An extended deadline of 2 August 2027 applies specifically to high-risk AI embedded into regulated products. Companies that are not yet compliant must close remaining gaps immediately.

What this means for AI companies

An HR tech firm deploying AI-based CV-sorting software must have its risk management system, data governance controls, human oversight mechanisms, technical documentation, and conformity assessment in place by 2 August 2026 in order to place its product lawfully on the EU market.

Transparency · Risk Management System · Data Governance · Technical Documentation · Logging and Record Keeping · Human Oversight · Accuracy and Robustness · Conformity Assessment

Prohibited AI Practices Guidelines Published and Effective (February 2025)

High Impact
EU AI Office Published 02 February 2025

Eight categories of AI practices were banned as of February 2025, including social scoring, real-time biometric identification in public spaces, and emotion recognition in workplaces. The Commission published guidelines on prohibited AI practices and on the AI system definition to help stakeholders understand the scope of these bans and ensure compliance.

What this means for AI companies

An HR tech firm using an emotion recognition tool to assess job candidates during video interviews must immediately cease this practice, as it falls under the prohibited uses effective from February 2025.

Risk Management System · Conformity Assessment · Human Oversight

Code of Practice on Marking and Labelling of AI-Generated Content (Under Preparation)

Medium Impact
EU AI Office Added 21 March 2026

The AI Office has selected developers to create a voluntary Code of Practice on marking and labelling AI-generated content, covering deepfakes, images, audio, and text. Guidelines on transparent AI systems are also under preparation. Both instruments are expected to be published in Q2 2026 ahead of the August 2026 transparency obligations deadline.

What this means for AI companies

A media company using generative AI to produce news articles should monitor the forthcoming Code of Practice closely, as it will provide practical guidance on how to label AI-generated text to comply with the August 2026 transparency obligations.

Transparency

Is your AI system ready for the August 2026 deadline?

LexSutra provides a full EU AI Act compliance diagnostic — graded report, legal citations, remediation roadmap.

Get your free diagnostic preview →