EU AI Act · August 2026 Compliance Deadline

A compliance inspection
for your AI system.

Like a professional house inspection — but for AI. We assess your system against EU AI Act obligations, produce a graded diagnostic report with legal citations, and give you a prioritised remediation roadmap.

8
Obligations assessed
80+
Diagnostic questions
10
Days to delivery
EU AI Act High-Risk Compliance Deadline
August 2, 2026
Get your free snapshot

What makes us different

Not a chatbot.
A defensible document.

ChatGPT gives you a conversation. LexSutra gives you a legally cited, version-stamped, human-reviewed, audit-ready compliance report.

Legally cited

Every finding is referenced to specific EU AI Act articles. No vague summaries — citations your legal team can verify.

Version-stamped

Your report is permanently tied to the exact regulation version at time of assessment. Defensible forever.

Human-reviewed

Every diagnostic is reviewed by a qualified expert before delivery. AI drafts, humans verify. That's the standard.

Audit-ready

Structured for regulatory inspection. Not a summary document — a compliance instrument with legal weight.

The process

How a diagnostic works

01
Day 1–2

Pre-Scan

We scan your AI system's public digital footprint — website, documentation, app stores — to build an initial risk and obligation profile.

02
Day 3–5

Assessment

80+ structured questions across all 8 EU AI Act obligation areas. You complete the questionnaire; we analyse every response.

03
Day 6–8

AI + Human Review

Claude AI drafts the findings. A qualified compliance expert reviews, adjusts, and validates every clause before the report is finalised.

04
Day 9–10

Report Delivery

Your graded PDF diagnostic report — colour-coded by obligation, legally cited, version-stamped, with a prioritised remediation roadmap.

The EU AI Act framework

The 8 obligations we diagnose

Every high-risk AI system must comply with all 8 areas. We assess each one and tell you exactly where you stand — with legal citations.

Art. 9

Risk Management System

Continuous identification, analysis, and mitigation of risks throughout the AI system lifecycle.

Art. 10

Data Governance

Training, validation, and testing data quality. Bias examination and data management practices.

Art. 11 & Annex IV

Technical Documentation

Complete technical file before market placement — architecture, capabilities, limitations, intended purpose.

Art. 12

Logging & Record Keeping

Automatic event logging to enable post-market monitoring, investigation, and regulatory audit.

Art. 13

Transparency

Clear disclosure of AI capabilities, limitations, and intended purpose to deployers and end users.

Art. 14

Human Oversight

Built-in mechanisms enabling human supervision, intervention, and override of AI outputs.

Art. 15

Accuracy & Robustness

Performance levels, cybersecurity resilience, and continuity of operation under adverse conditions.

Art. 43 & Annex VI/VII

Conformity Assessment

Third-party or self-assessment procedures required before CE marking and market placement.

Live — EU AI Act Intelligence

The law is moving.
Is your AI system keeping up?

We monitor official EU sources continuously and flag what matters for high-risk AI companies.

High Impact · 21 Mar 2026 · Transparency · Risk Management System

GPAI Model Obligations Now in Effect as of August 2025

The AI Act rules governing General-Purpose AI (GPAI) models became applicable on 2 August 2025. Providers of GPAI models are now legally required to meet transparency and copyright obligations, and those posing systemic risks must assess and mitigate those risks. This marks a live enforcement milestone, not merely a future deadline.

Medium Impact · 21 Mar 2026 · Risk Management System · Data Governance

High-Risk AI System Rules Confirmed for August 2026 and August 2027 Application

Obligations for high-risk AI systems will come into effect in August 2026, with an extended transition period until August 2027 for high-risk AI embedded into regulated products. This reaffirms existing deadlines and clarifies the two-tier timeline for high-risk compliance.

High Impact · 21 Mar 2026 · Transparency · Risk Management System

Transparency and High-Risk AI Rules Application Deadline: August 2026

The full suite of transparency obligations for AI systems (including chatbot disclosure and AI-generated content labelling) and the obligations for high-risk AI systems will come into effect on 2 August 2026. A further extended deadline of 2 August 2027 applies specifically to high-risk AI embedded into regulated products. Companies must use this period to achieve full compliance.

August 2, 2026 — high-risk AI compliance deadline

Pricing

Priced to your context.

Every AI system is different. So is every quote. Tell us what you have and we'll come back with a number that makes sense for your situation.

Starter

Public footprint assessment — your AI system's first compliance snapshot.

  • Public Footprint Pre-Scan
  • AI system risk classification
  • Initial obligation gap summary
  • PDF snapshot report
Request a Quote →
MOST POPULAR

Core

The full diagnostic report — legally cited, human-approved, and structured to hand to your legal counsel, investors, or a regulator with confidence.

  • Everything in Starter
  • Full 80+ question diagnostic
  • All 8 EU AI Act obligations assessed
  • Graded PDF report with legal citations
  • Compliance scorecard
  • Prioritised remediation roadmap
  • Human expert review before delivery
  • Policy version stamp
Request a Quote →

Premium

Core diagnostic plus a 1-hour strategy session with a compliance expert and an Investor Readiness Pack. Built for teams preparing for due diligence, audit, or market launch.

  • Everything in Core
  • 1-hour strategy session with expert
  • Investor Readiness Pack
  • 30-day regulatory Q&A support
Request a Quote →

Not sure which tier fits?

Start by building your AI system inventory — it's free, and it's the first thing we'll need to give you an accurate quote. Get the free template →

Client results

From the founders who went first

What AI startup founders say after receiving their LexSutra diagnostic report.

Testimonials from our founding clients — coming soon.

The diagnostic gave us a clear, defensible picture of where we stood against the EU AI Act. The legal citations meant our counsel could verify every finding directly. Worth every euro.

Founding Client · CTO

HR Tech Startup

We tried using ChatGPT for compliance advice and got vague answers. LexSutra gave us a graded report with article references we could actually act on. The remediation roadmap alone saved us weeks.

Founding Client · CEO

Fintech Scale-up

The human review step made the difference. We had an edge case in our training data that an automated tool would have missed. LexSutra caught it and flagged it with the exact Article 10 reference.

Founding Client · Head of Product

AI Infrastructure Company

Free first step

See where you stand — free

Share your company details. We'll analyse your AI system's public footprint and send you 5 concrete EU AI Act compliance insights within 24 hours. No questionnaire. No commitment.

Personal email providers (Gmail, Outlook, etc.) are not accepted.

No account needed · No spam · Results within 24 hours

LexSutra provides compliance infrastructure tools, not legal advice. By submitting you agree to our Privacy Policy and Terms & Conditions.