On August 2, 2026, the EU AI Act's most significant provisions become enforceable. High-risk AI system obligations. Transparency requirements. Full enforcement powers for market surveillance authorities. Fines up to €35 million or 7% of global annual turnover.
That's four months from now.
And here's what most businesses don't realize: you don't have to build AI to be regulated by the AI Act. If you're using AI tools for recruiting, credit scoring, customer interactions, or workforce management, you have independent legal obligations as a "deployer." Your vendor's compliance doesn't cover you.
The regulation is live. The guidance is incomplete. The standards aren't finalized. And the clock is ticking.
What the EU AI Act Actually Regulates
The Act uses a four-tier risk classification that determines what you can and can't do with AI in the EU:
Prohibited (banned outright): Social scoring systems, manipulative AI exploiting vulnerabilities, real-time biometric identification in public spaces. If you're doing any of this, stop. These prohibitions have been enforceable since February 2025.
High-risk (heavy obligations): This is where most business impact lives. AI used for:
- Recruiting, screening, and hiring decisions
- Employee performance evaluation, promotion, and termination
- Creditworthiness assessment and financial eligibility
- Critical infrastructure management
- Education and exam scoring
- Law enforcement and border control
Limited risk (transparency obligations): Chatbots must disclose they're AI. Deepfakes and synthetic content must be labeled. AI-generated or manipulated text published to inform the public on matters of public interest must be disclosed as such.
Minimal risk (unregulated): Spam filters, video game AI, recommendation engines for entertainment. No specific obligations.
The critical insight: high-risk classification is based on the use case, not the technology. The same large language model is minimal risk when generating marketing copy and high-risk when screening job applicants. It's how you deploy it that determines your obligations.
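To make the use-case rule concrete, here's an illustrative sketch in Python. The mapping is a loose simplification of Annex III categories, an assumption for demonstration, not a legal classification tool:

```python
# Illustrative only: a simplified mapping from deployment use case to
# EU AI Act risk tier. Real classification requires legal review against
# Annex III and Article 6; these categories are assumptions.
RISK_TIERS = {
    "resume_screening": "high",            # employment decisions (Annex III)
    "credit_scoring": "high",              # access to essential services
    "exam_scoring": "high",                # education
    "customer_chatbot": "limited",         # transparency obligations only
    "marketing_copy_generation": "minimal",
    "spam_filtering": "minimal",
}

def classify(use_case: str) -> str:
    """Default to 'unclassified' so unknown uses get reviewed, not ignored."""
    return RISK_TIERS.get(use_case, "unclassified")

# The same model lands in different tiers depending on deployment:
assert classify("marketing_copy_generation") == "minimal"
assert classify("resume_screening") == "high"
```

The useful habit here is the default: anything you can't classify goes to review, not to production.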
Not sure where your AI usage falls? We help European businesses classify their AI systems, identify compliance gaps, and build a roadmap to August 2026. No legal jargon—just clear, actionable guidance.
If You're Using AI, You're a "Deployer"—And You Have Obligations
This is the part that catches most companies off guard.
The EU AI Act creates two categories of regulated entities: providers (who build AI systems) and deployers (who use them). Most businesses are deployers. And deployer obligations are substantial.
If you're deploying a high-risk AI system, starting August 2, 2026, you must:
Assign qualified human oversight. Not a checkbox—actual people with the competence, training, and authority to review, question, and override AI decisions. This must be documented.
Inform your workers. Before you deploy a high-risk AI system that affects employees, they must be notified. Using AI to screen resumes? Your HR team and candidates need to know.
Maintain logs for at least six months. Every decision the system makes, logged and auditable. (A minimal sketch of what such a log record could look like appears at the end of this section.)
Monitor for discrimination and bias. Ongoing, not one-time. You need processes to detect when the AI produces discriminatory outcomes.
Conduct fundamental rights impact assessments. Before deploying certain high-risk systems, you must assess the impact on people's fundamental rights.
Use systems according to provider instructions. If you're using an AI tool outside its intended purpose, that's on you.
Here's what this means in practice: if you're using HireVue, Pymetrics, or any AI-powered recruitment platform, you have deployer obligations. If you're using AI to score loan applications, evaluate employee performance, or prioritize customer service requests in ways that affect access to essential services—you have deployer obligations.
Your vendor being compliant does not make you compliant. These are independent requirements.
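To make the logging obligation concrete, here's a minimal sketch of a deployer-side decision log, assuming an append-only JSON Lines file. The field names are our own illustration, not a format the Act prescribes:

```python
import json
from datetime import datetime, timezone

# Sketch of a deployer-side decision log. Field names are illustrative
# assumptions; the point is that every AI-assisted decision leaves a
# timestamped, auditable record you can retain for six months or more.
def log_decision(path: str, system_id: str, input_ref: str,
                 output: str, reviewer: str | None) -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,      # which AI system produced the output
        "input_ref": input_ref,      # pointer to the input, not raw personal data
        "output": output,            # the decision or score produced
        "human_reviewer": reviewer,  # who exercised oversight, if anyone
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_decision("decisions.jsonl", "cv-screener-v2",
             "application/4821", "shortlisted", reviewer="hr.lead")
```

Storing a reference to the input rather than the input itself keeps the audit trail from becoming its own GDPR problem.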
What's Still Unclear—And Why That's a Problem
The EU AI Act is law. But the implementation landscape is, frankly, a mess.
The Commission missed its own deadlines. Critical guidance on how operators of high-risk AI systems should meet their obligations was due by February 2, 2026. It hasn't been published. Businesses are expected to comply with rules whose practical interpretation hasn't been officially clarified.
Technical standards aren't ready. The standardization bodies CEN and CENELEC were supposed to deliver harmonized standards by fall 2025. They missed that deadline and are now targeting end of 2026—after the compliance deadline. This matters because following harmonized standards creates a "presumption of conformity," which is the clearest path to compliance. That path doesn't fully exist yet.
Member states aren't prepared. At least 12 EU member states missed the deadline to appoint competent authorities. France, Germany, and Ireland haven't enacted relevant legislation. There's no consistent enforcement infrastructure across the EU.
The definition of "AI system" is ambiguous. The Act's definition relies on open-ended criteria like "varying levels of autonomy" and "adaptiveness." This creates grey areas where conventional software might fall within scope—or might not, depending on interpretation.
Interaction with existing regulations is uncertain. How does the AI Act interact with GDPR? With the Digital Services Act? With sector-specific regulations in finance and healthcare? These boundaries are still being worked out.
None of this is reason to wait. The law is the law, regardless of implementation gaps. But it means compliance requires judgment calls, not just following a checklist.
Navigating regulatory uncertainty is what we do. We help teams make defensible compliance decisions even when the guidance is incomplete. Pragmatic, risk-based approaches that protect your business without paralysis.
The Penalties Are Real
The EU AI Act's enforcement teeth are sharp. In each case, the cap is whichever amount is higher:
- €35 million or 7% of global annual turnover for prohibited AI practices
- €15 million or 3% of global annual turnover for violations of high-risk system obligations
- €7.5 million or 1.5% of global annual turnover for supplying incorrect information to authorities
Starting August 2, 2026, market surveillance authorities gain full investigatory powers: requesting documentation, conducting evaluations, ordering product withdrawals, and levying fines. The European AI Office can directly investigate general-purpose AI model providers and demand source code access.
These aren't theoretical maximums designed to generate headlines. GDPR showed us that EU regulators use their enforcement powers. Meta was fined €1.2 billion. Amazon, €746 million. The apparatus for AI Act enforcement is being built on the same foundation.
The risk calculation is straightforward: the cost of compliance is a fraction of the cost of a single enforcement action.
What Compliance Actually Looks Like
Forget compliance theater. Here's what a defensible AI compliance posture requires for businesses deploying high-risk AI systems:
1. Inventory Your AI Usage
You can't comply with regulations for systems you don't know about. Most organizations have AI embedded in more places than they realize:
- Recruitment and HR platforms with AI screening
- Customer service chatbots and virtual assistants
- Credit scoring and financial decision tools
- Marketing platforms with AI-driven personalization
- Internal tools using AI for performance evaluation or task allocation
Map every AI system, classify its risk level, and identify whether you're a provider, deployer, or both.
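A structured register beats a spreadsheet here. A minimal sketch; the fields are our suggestion, not a schema the Act prescribes:

```python
from dataclasses import dataclass

# Sketch of an AI system register. Fields are illustrative suggestions.
@dataclass
class AISystem:
    name: str
    vendor: str
    use_case: str               # what decisions it influences
    risk_tier: str              # prohibited / high / limited / minimal
    role: str                   # "provider", "deployer", or "both"
    affects_employees: bool = False
    owner: str = "unassigned"   # a named person, not a department

inventory = [
    AISystem("ATS resume screener", "ExampleVendor", "recruiting",
             risk_tier="high", role="deployer", affects_employees=True,
             owner="head.of.hr"),
    AISystem("Support chatbot", "ExampleVendor", "customer service",
             risk_tier="limited", role="deployer", owner="cx.lead"),
]

# Surface the systems that carry deployer obligations first:
high_risk = [s for s in inventory if s.risk_tier == "high"]
```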
2. Establish Human Oversight Procedures
For every high-risk system, document:
- Who is responsible for oversight (named roles, not departments)
- What training they've received on the specific AI system
- What authority they have to override AI decisions
- How they exercise that oversight in practice
- When human review is triggered vs. automated processing
This isn't about having a human rubber-stamp every AI output. It's about meaningful oversight by people who understand both the system and its limitations.
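One common implementation pattern, purely as an illustration: route low-confidence or adverse outputs to a named reviewer instead of processing them automatically. The threshold and role names below are assumptions to adapt, not values from the Act:

```python
# Sketch of an oversight trigger: send AI outputs to a named human
# reviewer when confidence is low or the outcome is adverse.
CONFIDENCE_THRESHOLD = 0.85      # assumed cutoff; tune per system
REVIEWER = "hiring.manager"      # a named, trained role with override authority

def route(decision: str, confidence: float) -> str:
    adverse = decision == "reject"
    if adverse or confidence < CONFIDENCE_THRESHOLD:
        return f"queue for review by {REVIEWER}"
    return "proceed automatically (logged)"

print(route("shortlist", 0.93))  # proceeds automatically
print(route("reject", 0.97))     # adverse outcomes always get human eyes
```

Routing every adverse outcome to a human, regardless of model confidence, is one way to make "meaningful oversight" more than a slogan.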
3. Build Your Documentation
The Act requires extensive technical documentation and record-keeping. At minimum:
- Risk management records for each high-risk system
- Data governance documentation
- System performance testing results
- Incident logs and response procedures
- Fundamental rights impact assessments
- Evidence of human oversight implementation
Start building these records now. Retroactive documentation is never as credible as contemporaneous records.
4. Implement Monitoring and Logging
High-risk AI systems need continuous monitoring, not periodic audits. That means:
- Automated logging of all AI-driven decisions
- Bias and discrimination detection pipelines (a minimal example follows this list)
- Performance drift monitoring
- Incident detection and escalation procedures
- Log retention for at least six months (longer for some sectors)
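For the bias-detection item, one widely used starting point is the disparate impact ratio: compare positive-outcome rates across groups and flag ratios below 0.8, the traditional "four-fifths" heuristic. A minimal sketch; the cutoff is a screening convention, not a threshold set by the AI Act:

```python
# Sketch of a disparate impact check over logged decisions. The 0.8
# cutoff is the classic four-fifths heuristic, a screening convention,
# not a legal threshold defined by the AI Act.
def selection_rate(outcomes: list[tuple[str, bool]], group: str) -> float:
    rows = [selected for g, selected in outcomes if g == group]
    return sum(rows) / len(rows) if rows else 0.0

def impact_ratio(outcomes: list[tuple[str, bool]],
                 group_a: str, group_b: str) -> float:
    rate_a = selection_rate(outcomes, group_a)
    rate_b = selection_rate(outcomes, group_b)
    high = max(rate_a, rate_b)
    return min(rate_a, rate_b) / high if high else 1.0

# (group, was_selected) pairs pulled from your decision logs
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]

ratio = impact_ratio(decisions, "A", "B")
if ratio < 0.8:
    print(f"Flag for review: impact ratio {ratio:.2f}")  # 0.50 here
```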
5. Review Your Vendor Contracts
If you're using third-party AI systems, your vendor agreements need to address:
- What compliance obligations the provider is meeting
- What documentation and transparency they provide to you
- How they handle incidents and notifications
- What happens if the system is found non-compliant
- Your rights to audit and verify their compliance claims
Remember: vendor compliance doesn't eliminate your deployer obligations. But vendor cooperation makes meeting those obligations possible.
Need a structured compliance roadmap? We build practical, prioritized plans that get European businesses from "we should probably look at this" to documented compliance. Technical implementation, not just policy documents.
Start your compliance roadmap →
General-Purpose AI: The Upstream Problem
If your AI systems are built on foundation models—GPT, Claude, Gemini, Llama—there's another layer to consider.
The EU AI Act imposes specific obligations on General-Purpose AI (GPAI) model providers. These have been applicable since August 2025 and include requirements for technical documentation, training data transparency, and copyright compliance. Models with systemic risk (trained with more than 10²⁵ FLOPs) face additional obligations around adversarial testing, incident reporting, and cybersecurity.
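For a sense of scale, the threshold can be sanity-checked with the scaling literature's rule of thumb that training compute is roughly 6 × parameters × training tokens. A back-of-envelope sketch using that approximation (which the Act itself doesn't prescribe):

```python
# Back-of-envelope check against the 10^25 FLOPs systemic-risk threshold,
# using the common approximation: training FLOPs ~ 6 * params * tokens.
# The heuristic comes from the scaling literature, not from the Act.
SYSTEMIC_RISK_THRESHOLD = 1e25

def training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

# Hypothetical model: 70B parameters trained on 15T tokens
flops = training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs")              # 6.30e+24
print(flops > SYSTEMIC_RISK_THRESHOLD)   # False: below the presumption line
```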
As a deployer, you don't directly bear GPAI obligations. But your compliance depends on your upstream providers meeting theirs. If the foundation model powering your high-risk system isn't compliant, your conformity assessment has a gap.
Practical steps:
- Verify your GPAI providers' compliance status. Ask for documentation. Major providers are publishing compliance information—review it.
- Understand the model's limitations. GPAI providers must provide instructions for use. Read them. Your compliance depends on using the model within its documented parameters.
- Plan for provider changes. If your GPAI provider fails to comply, you need the architectural flexibility to switch. Avoid hard dependencies on a single model provider; a thin abstraction seam, sketched below, keeps switching costs low.
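On that last point, switching is only cheap if model access goes through a single thin seam. A minimal sketch, with hypothetical provider clients behind a common interface:

```python
from typing import Protocol

# Sketch of a provider-agnostic seam: all GPAI calls pass through one
# interface, so a non-compliant provider can be swapped without touching
# business logic. The provider classes here are hypothetical placeholders.
class TextModel(Protocol):
    def generate(self, prompt: str) -> str: ...

class ProviderA:
    def generate(self, prompt: str) -> str:
        return f"[ProviderA] response to: {prompt}"

class ProviderB:
    def generate(self, prompt: str) -> str:
        return f"[ProviderB] response to: {prompt}"

def summarize_limitations(model: TextModel, system: str) -> str:
    # Business logic depends only on the interface, not the vendor.
    return model.generate(f"List documented limitations of {system}")

model: TextModel = ProviderA()   # one config change away from ProviderB()
print(summarize_limitations(model, "cv-screener-v2"))
```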
The Businesses That Will Navigate This Best
The companies that will handle the EU AI Act well aren't the ones with the biggest legal teams. They're the ones that treat compliance as an engineering problem, not just a legal one.
That means:
Technical compliance infrastructure. Logging, monitoring, bias detection, and audit trails built into your systems—not maintained in spreadsheets.
Cross-functional ownership. AI compliance touches engineering, legal, HR, product, and operations. It can't live in one department.
Pragmatic risk management. Perfect compliance with ambiguous regulations isn't possible. Defensible, documented, good-faith compliance is. Focus on material risks, not theoretical edge cases.
Continuous adaptation. The guidance, standards, and enforcement landscape will evolve throughout 2026 and beyond. Build processes that adapt, not static compliance snapshots.
The EU AI Act is the most comprehensive AI regulation in the world. It's also the first. Every major economy is watching how it plays out. The compliance infrastructure you build now won't just serve you in the EU—it's practice for whatever comes next.
The deadline is August 2, 2026. Whether you're just starting to assess your AI usage or you need help closing specific compliance gaps, we help European businesses build practical, defensible AI compliance programs. Engineering-led, not checkbox-driven.
