The EU AI Act for US Boards: A Reference Guide
When the EU AI Act applies to US companies, what high-risk classification means, the Digital Omnibus deferral of high-risk obligations to December 2027, and the practical compliance steps US-headquartered organizations should take in 2026.
Summary
The EU AI Act (Regulation (EU) 2024/1689) is the European Union's comprehensive regulation governing artificial intelligence, in force since August 1, 2024 and applying in phases through 2028. It applies extraterritorially to US companies that place AI systems on the EU market, deploy AI systems in ways that affect EU users, or provide general-purpose AI models used in the EU. The Act classifies AI systems into four risk tiers — prohibited, high-risk, limited-risk, and minimal-risk — with the most extensive obligations falling on high-risk systems and on providers of general-purpose AI (GPAI) models. As of May 2026, prohibited AI practices and GPAI obligations are already in force; high-risk system obligations were originally scheduled to apply on August 2, 2026, but a May 7, 2026 political agreement under the Digital Omnibus would defer those obligations to December 2, 2027 for stand-alone systems and August 2, 2028 for systems embedded in regulated products. US boards should plan compliance against both the original and the proposed timelines until the Omnibus is formally adopted.
Overview
Regulation (EU) 2024/1689, commonly known as the EU AI Act, is the European Union's comprehensive regulation governing artificial intelligence. It entered into force on August 1, 2024, and applies in phases through 2028. The Act is the world's first comprehensive legal framework for AI, predating comparable comprehensive legislation in the United States, the United Kingdom, China, and Canada — though all of those jurisdictions are pursuing AI regulation through other instruments.
For US companies, the EU AI Act is significant for two reasons that are easy to overlook. First, it applies extraterritorially in a range of scenarios that capture most US companies operating in or selling into the EU market. Second, the obligations it imposes — documentation, conformity assessment, post-market monitoring, incident reporting — produce compliance infrastructure that is influencing US AI governance practice generally, similar to how GDPR's privacy infrastructure influenced US privacy practice starting in 2018.
This reference is for US boards, audit committees, GCs, and CCOs. It covers when the Act applies to US companies, how its risk-tier classification works, what high-risk system and GPAI provider obligations require, the application timeline (and the proposed Digital Omnibus deferral of high-risk deadlines), penalties, and practical compliance steps.
When does the EU AI Act apply to US companies?
The Act's extraterritorial reach is articulated in Article 2 and is broader than many US companies initially appreciate. Five distinct scenarios trigger application:
1. The US company is a provider placing an AI system on the EU market
A US company that develops an AI system and licenses, sells, or otherwise makes the system available to EU customers is a provider under the Act. This includes US software companies with EU customers, US cloud-AI services available in the EU, and US AI products distributed by EU resellers. The provider bears the primary substantive obligations under the Act.
2. The US company is a deployer using AI in ways that affect the EU
A US company using an AI system whose output is used in the EU is a deployer under the Act. This includes US companies using AI hiring tools to screen EU candidates, US companies using AI underwriting tools to evaluate EU loan applications, and US companies using AI customer service tools to interact with EU customers.
3. The US company provides a general-purpose AI (GPAI) model used in the EU
A US company that develops a GPAI model (a model that displays significant generality and can competently perform a wide range of distinct tasks) is subject to GPAI provider obligations where that model is placed on the EU market or integrated into AI systems available in the EU. This category captures OpenAI, Anthropic, Google, Meta, and similar developers of foundation models.
4. The US company's AI affects EU residents
Where a US company uses AI to make decisions about EU residents — even without offering products or services in the EU — the Act may apply on the theory that the AI system is being deployed in the EU through its effects on EU persons. This is the most legally contested scenario; companies should treat it as creating risk until the Commission and the Court of Justice of the European Union clarify the boundary.
5. Authorized representatives, importers, and distributors
Article 2 also captures authorized representatives, importers, and distributors of AI systems in the EU. US companies acting through these intermediaries should ensure the intermediaries understand their own obligations under the Act.
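A simple way to operationalize this screen, anticipating Step 1 of the compliance sequence later in this article, is a checklist over the five scenarios. Below is a minimal Python sketch; the scenario keys and one-line descriptions are this article's paraphrases, not Article 2's text:

```python
# The five Article 2 trigger scenarios, paraphrased from this section.
SCENARIOS = {
    "provider_eu_market": "Places an AI system on the EU market",
    "deployer_eu_output": "Deploys an AI system whose output is used in the EU",
    "gpai_provider": "Provides a GPAI model used in the EU",
    "effects_on_eu_residents": "AI makes decisions affecting EU residents (contested)",
    "intermediary": "Acts as authorized representative, importer, or distributor",
}

def applicability_screen(company_activities: set[str]) -> dict[str, str]:
    """Return the triggered scenarios; document the result in writing."""
    return {k: v for k, v in SCENARIOS.items() if k in company_activities}

triggered = applicability_screen({"provider_eu_market", "deployer_eu_output"})
for key, description in triggered.items():
    print(f"{key}: {description}")
```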
Risk-tier classification
The Act's defining structural feature is its four-tier risk classification. Each tier carries a different set of obligations.
Prohibited AI practices
Article 5 prohibits certain AI practices outright. These include:
- AI systems that deploy subliminal or purposefully manipulative techniques that materially distort behavior
- AI systems that exploit vulnerabilities of specific groups (e.g., children, persons with disabilities)
- Social scoring systems (the final Act extends the ban beyond public authorities to private actors)
- Real-time remote biometric identification in publicly accessible spaces by law enforcement (with limited exceptions)
- AI systems that infer emotions in workplace and educational settings (with narrow medical and safety exceptions)
- Biometric categorization based on protected characteristics
- Untargeted scraping of facial images to build facial recognition databases
The May 7, 2026 political agreement on the Digital Omnibus added a new prohibition on AI systems used to generate non-consensual sexual or intimate content or child sexual abuse material (CSAM).
High-risk AI systems
Annex III of the Act lists AI systems classified as high-risk. The high-risk category includes AI systems used in:
- Biometric identification and categorization (when not prohibited)
- Critical infrastructure (energy, traffic, water)
- Education and vocational training (admissions, evaluation, grading)
- Employment and workforce management (recruitment, candidate selection, performance evaluation, task allocation, monitoring, promotion, termination)
- Access to essential services (credit scoring, social benefits, emergency services)
- Law enforcement
- Migration, asylum, and border control
- Administration of justice and democratic processes
Additionally, AI systems that are safety components of products covered by EU harmonization legislation listed in Annex I (medical devices, toys, lifts, machinery, in vitro diagnostic devices, watercraft, and similar regulated product categories) are automatically classified as high-risk.
Limited-risk AI systems
AI systems that interact with humans, that generate or manipulate content, or that recognize emotions are subject to transparency obligations even when not high-risk. Users must be informed when they are interacting with AI, and AI-generated content must be identifiable as such (with the deepfake transparency provisions advancing on a separate, accelerated timeline under the Omnibus).
Minimal-risk AI systems
AI systems not falling into the prohibited, high-risk, or limited-risk categories are minimal-risk. The vast majority of AI systems in commercial use fall into this category. The Act imposes no specific obligations on minimal-risk systems beyond voluntary codes of conduct.
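Taken together, the four tiers apply in strict precedence: check the Article 5 prohibitions first, then the high-risk categories, then the transparency triggers, and default to minimal-risk. Below is a minimal decision sketch in Python; the category sets are abbreviated illustrations, not the full Article 5 and Annex texts:

```python
# Abbreviated illustrations only; the authoritative lists are Article 5
# (prohibitions) and Annexes I and III (high-risk).
PROHIBITED_USES = {"social_scoring", "workplace_emotion_inference"}
HIGH_RISK_USES = {"employment_screening", "credit_scoring", "critical_infrastructure"}
TRANSPARENCY_TRIGGERS = {"chatbot", "content_generation", "emotion_recognition"}

def classify(use_case: str, annex_i_safety_component: bool = False) -> str:
    """Apply the Act's four-tier precedence to a single use case."""
    if use_case in PROHIBITED_USES:
        return "prohibited"
    if use_case in HIGH_RISK_USES or annex_i_safety_component:
        return "high-risk"
    if use_case in TRANSPARENCY_TRIGGERS:
        return "limited-risk"
    return "minimal-risk"

print(classify("employment_screening"))  # high-risk
```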
High-risk system obligations
Companies that provide or deploy high-risk AI systems must comply with an extensive set of substantive obligations. The provider obligations are the most extensive: Article 16 requires providers to ensure their systems meet the requirements of Articles 8-15, and Articles 17-22 add further duties. These include:
- Risk management system. Establishment of a continuous, iterative risk management process across the AI system lifecycle (Article 9).
- Data governance. Training, validation, and testing data must meet quality criteria, including being relevant, sufficiently representative, and, to the best extent possible, free of errors and complete (Article 10).
- Technical documentation. Comprehensive technical documentation must be maintained, available to authorities on request (Article 11).
- Record-keeping. Automatic logging of events throughout the AI system lifecycle (Article 12).
- Transparency and information for deployers. Deployers must receive sufficient information to understand and use the system appropriately (Article 13).
- Human oversight. The system must be designed to allow effective human oversight (Article 14).
- Accuracy, robustness, and cybersecurity. Systems must achieve appropriate levels of accuracy, robustness, and cybersecurity (Article 15).
- Quality management system. Providers must establish a quality management system (Article 17).
- Conformity assessment. Before placing a system on the market, providers must conduct a conformity assessment, in some cases involving notified bodies (Article 43).
- EU declaration of conformity and CE marking. Providers must draw up an EU declaration of conformity and affix the CE marking (Articles 47-48).
- Registration. High-risk systems must be registered in the EU database for high-risk AI systems (Articles 49 and 71).
- Post-market monitoring. Ongoing monitoring of system performance and reporting of serious incidents (Articles 72-73).
Deployers of high-risk AI systems (Article 26) bear narrower but still substantive obligations: using the system in accordance with the provider's instructions for use, ensuring human oversight, monitoring operation, and, for public authorities and certain other deployers, conducting fundamental rights impact assessments.
GPAI provider obligations
The Act devotes a separate set of obligations to providers of general-purpose AI models (Chapter V, Articles 51-56). These obligations have been in force since August 2, 2025. They include:
- Technical documentation for the model and its training process
- Information for downstream providers who integrate the GPAI model into their AI systems
- Compliance with Union copyright law, including respect for opt-out mechanisms in text and data mining
- A sufficiently detailed public summary of the content used for model training, enabling copyright holders to identify potentially protected material
Providers of GPAI models that present systemic risk — a category that captures the largest and most capable foundation models — face additional obligations including model evaluations, serious incident tracking, cybersecurity protections, and reporting to the AI Office.
Application timeline
The EU AI Act phases in over four years following entry into force on August 1, 2024:
- February 2, 2025 — Prohibited AI practices ban and AI literacy obligations took effect
- August 2, 2025 — Governance provisions and GPAI provider obligations took effect; the AI Office became operational
- August 2, 2026 (original) — High-risk system obligations for stand-alone systems take effect
- August 2, 2027 (original) — Full applicability, including obligations for high-risk systems embedded in regulated products and certain transitional rules
The Digital Omnibus proposal would adjust the high-risk timeline (see next section).
The Digital Omnibus deferral
On November 19, 2025, the European Commission published the Digital Omnibus on AI — a legislative proposal to amend the AI Act in light of delays in the development of harmonized standards and the establishment of national competent authorities. The most consequential change is the deferral of high-risk system obligations.
On May 7, 2026, the Council of the EU and the European Parliament reached a political agreement on the Omnibus. The provisional agreement includes:
- Stand-alone high-risk AI systems: compliance deadline deferred from August 2, 2026 to December 2, 2027
- High-risk AI systems embedded in regulated products: compliance deadline deferred from August 2, 2027 to August 2, 2028
- Transparency provisions for AI-generated content: grace period reduced from 6 months to 3 months, with new deadline of December 2, 2026
- New prohibition on AI used to generate non-consensual sexual or intimate content or child sexual abuse material
- Regulatory sandbox deadline postponed to August 2, 2027
As of the date of this article (May 9, 2026), the political agreement has not yet been formally adopted. Formal adoption typically follows political agreement by 2-4 months. Until formal adoption, the original August 2, 2026 deadline for high-risk obligations remains the operative legal date.
For US boards, the practical implication is to continue compliance preparation against the original timeline. The Omnibus does not relieve any substantive obligation; it only shifts the deadline. All preparation work — risk management systems, data governance, technical documentation, conformity assessment procedures, post-market monitoring — remains required. The additional time provided by the Omnibus is best used for refinement and external review of compliance posture, not for delaying the work.
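For planning purposes, the dual-timeline logic reduces to a lookup keyed on system type and adoption status. Below is a minimal Python sketch using the dates from this section; the `omnibus_adopted` flag is an input the reader supplies, not a legal determination:

```python
from datetime import date

# Deadlines from the AI Act and the May 7, 2026 Omnibus political agreement.
ORIGINAL = {"stand_alone": date(2026, 8, 2), "embedded": date(2027, 8, 2)}
OMNIBUS = {"stand_alone": date(2027, 12, 2), "embedded": date(2028, 8, 2)}

def operative_deadline(system_type: str, omnibus_adopted: bool) -> date:
    """Operative high-risk compliance deadline.

    Until the Omnibus is formally adopted, the original AI Act dates
    remain legally binding; plan against the earlier date.
    """
    return (OMNIBUS if omnibus_adopted else ORIGINAL)[system_type]

# As of May 9, 2026 the Omnibus is agreed but not formally adopted:
print(operative_deadline("stand_alone", omnibus_adopted=False))  # 2026-08-02
```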
Penalties
The EU AI Act establishes tiered administrative fines based on the severity of the violation:
| Violation category | Maximum fine |
|---|---|
| Prohibited AI practices (Article 5) | €35 million or 7% of worldwide annual turnover, whichever is higher |
| Other Act violations (high-risk obligations, GPAI obligations, etc.) | €15 million or 3% of worldwide annual turnover, whichever is higher |
| Supplying incorrect, incomplete, or misleading information to authorities | €7.5 million or 1% of worldwide annual turnover, whichever is higher |
Fines apply to both EU and non-EU companies offering AI systems in the EU. For SMEs and startups, the Act inverts the cap: each fine is limited to the lower of the fixed amount and the turnover percentage, rather than the higher.
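A worked example makes the cap mechanics concrete. The sketch below uses hypothetical figures to compute maximum exposure for a given violation tier, including the inverted lower-of-the-two cap for SMEs:

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float,
             turnover_pct: float, is_sme: bool = False) -> float:
    """Maximum administrative fine under the EU AI Act's tiered structure.

    Standard companies: the HIGHER of the fixed cap and the turnover
    percentage. SMEs and startups: the LOWER of the two.
    """
    pct_cap = turnover_eur * turnover_pct
    return min(fixed_cap_eur, pct_cap) if is_sme else max(fixed_cap_eur, pct_cap)

# Hypothetical: a company with EUR 2B worldwide annual turnover violates
# the prohibited-practices ban (EUR 35M or 7%, whichever is higher).
exposure = max_fine(2_000_000_000, 35_000_000, 0.07)
print(f"Maximum exposure: EUR {exposure:,.0f}")  # EUR 140,000,000
```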
EU governance and enforcement
Enforcement is shared between the EU level and the Member State level, supported by a coordination body:
The AI Office
Established within the European Commission and operational since August 2025, the AI Office is responsible for supervising and enforcing GPAI provider obligations, coordinating across Member States, and supporting consistent application of the Act. The AI Office is the primary point of contact for US companies on GPAI obligations.
National competent authorities
Each EU Member State designates one or more national competent authorities to oversee and enforce the Act's rules within its territory. These authorities handle enforcement of high-risk system obligations, market surveillance, and most penalty determinations. The pace of authority designation has been slower than originally anticipated, contributing to the Digital Omnibus deferral rationale.
The European AI Board
The European AI Board is a coordination body composed of representatives from each Member State; it supports consistent application of the Act across the EU.
Interaction with US frameworks
US companies subject to the EU AI Act are typically also subject to one or more US frameworks. The interaction patterns to understand:
NIST AI Risk Management Framework
The NIST AI RMF (NIST AI 100-1) is voluntary in the US but increasingly referenced in federal procurement, sector-specific guidance, and state regulation. Adopting NIST AI RMF practices produces documentation and processes that materially overlap with EU AI Act obligations. Most US multinationals build a unified AI governance program using NIST as the architectural framework, then layer in EU-specific obligations (CE marking, EU database registration, EU declaration of conformity) where applicable.
ISO/IEC 42001
ISO/IEC 42001 (the AI management system standard) maps closely to EU AI Act obligations. Companies pursuing ISO 42001 certification are building infrastructure substantially equivalent to what the EU AI Act requires. The European Commission has indicated that harmonized standards under the Act will likely incorporate or reference ISO 42001, though the formal harmonization process is one source of the standards delays that prompted the Omnibus deferral.
State AI laws
For US companies subject to both the EU AI Act and state laws such as Illinois HB 3773, the Colorado AI Act, NYC Local Law 144, or California's ADMT regulations, the practical compliance approach is to build a unified governance program that satisfies the most rigorous applicable requirement for each program component. The EU AI Act's high-risk system obligations typically set the most rigorous bar; satisfying those generally satisfies parallel US state obligations as a matter of substantive compliance, though the formal compliance artifacts (notices, audit trails, regulatory filings) differ across jurisdictions.
Practical compliance steps for US companies
A US company evaluating its EU AI Act exposure should work through the following sequence:
Step 1 — Determine applicability
Map the company's AI activities against the five extraterritoriality scenarios above. Document the determination in writing. For activities that do not clearly fall in or out of scope, treat them as in-scope and revisit if and when the Commission or Court of Justice clarifies the boundary.
Step 2 — Build the AI inventory
Catalog every AI system in scope. For each, document: the function, the EU exposure, the role (provider, deployer, both), the risk-tier classification, and the relevant obligations.
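One lightweight way to keep the inventory auditable is to capture each system as a structured record. Below is a minimal Python sketch; the schema is this article's suggestion, not a format mandated by the Act:

```python
from dataclasses import dataclass, field
from enum import Enum

class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    BOTH = "both"

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

@dataclass
class AISystemRecord:
    """One row of the EU AI Act inventory described in Step 2."""
    name: str
    function: str                       # what the system does
    eu_exposure: str                    # how the system reaches the EU
    role: Role                          # provider, deployer, or both
    risk_tier: RiskTier
    obligations: list[str] = field(default_factory=list)
    classification_rationale: str = ""  # documented per Step 3

record = AISystemRecord(
    name="resume-screener-v2",
    function="Screens job applicants",
    eu_exposure="Used to evaluate candidates located in the EU",
    role=Role.DEPLOYER,
    risk_tier=RiskTier.HIGH,
    obligations=["Article 26 deployer duties", "human oversight"],
    classification_rationale="Annex III employment category",
)
```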
Step 3 — Classify each system into risk tiers
For each in-scope AI system, determine whether it is prohibited, high-risk, limited-risk, or minimal-risk. The high-risk determination requires careful analysis of the Annex III categories and the Annex I product types. Document the classification and the rationale.
Step 4 — Build compliance posture for high-risk and GPAI systems
For each high-risk system or GPAI model, build the substantive compliance infrastructure: risk management system, data governance, technical documentation, record-keeping, human oversight, accuracy and robustness measures, quality management system, conformity assessment, EU database registration, and post-market monitoring.
Step 5 — Establish governance and reporting
Designate the board committee responsible for EU AI Act oversight (typically the audit, risk, or technology committee). Establish a quarterly reporting cadence. Document the oversight in committee minutes.
Step 6 — Engage with the EU regulatory process
Monitor the Commission's guidance, the AI Office's pronouncements, the development of harmonized standards, and the formal adoption of the Digital Omnibus. Participate in public consultations where applicable. Build relationships with the relevant national competent authorities in Member States where the company has significant operations.
Step 7 — Coordinate with US compliance frameworks
Build a unified governance program that satisfies the EU AI Act alongside applicable US frameworks (NIST, state laws, sector regulations). Do not run parallel programs; they will diverge over time and produce contradictory outputs.
This article was last reviewed on May 9, 2026, two days after the Council of the EU and the European Parliament reached political agreement on the Digital Omnibus. The article will be updated when the Omnibus is formally adopted (expected mid-to-late 2026), when the AI Office issues additional guidance, and quarterly otherwise. For US companies preparing for EU AI Act compliance alongside applicable US state and federal frameworks, see the Multi-Jurisdictional AI Compliance Review service. The Illinois AI Legislative Ecosystem tracker provides parallel real-time tracking of US state AI regulatory developments.
Frequently asked questions
- Does the EU AI Act apply to a US company with no EU office?
- Potentially yes. The Act applies extraterritorially in several scenarios: (a) the US company places an AI system on the EU market (e.g., licenses software to EU customers, hosts a SaaS product available in the EU); (b) the US company is a deployer of an AI system whose output is used in the EU; (c) the US company is a provider of a general-purpose AI model that is used in the EU; and (d) the US company's AI system is used to make decisions affecting EU residents. Having no EU office does not exempt the company; the test is whether the AI activity reaches the EU market or affects EU persons.
- What is the difference between a "provider" and a "deployer" under the EU AI Act?
- A provider develops an AI system or has it developed and places it on the EU market or puts it into service under its own name or trademark. A deployer is a company or person using an AI system under its authority for professional purposes. The same company can be both a provider and a deployer in different scenarios. A US software company that licenses an AI hiring tool to EU employers is a provider of that AI system. The EU employers using the tool are deployers. Provider obligations are more extensive than deployer obligations under the Act.
- When do high-risk system obligations actually apply — August 2026 or December 2027?
- As of May 9, 2026, both dates are in play. The original AI Act timeline applies high-risk obligations on August 2, 2026 for stand-alone systems and August 2, 2027 for systems embedded in regulated products. On May 7, 2026, the Council of the EU and the European Parliament reached political agreement on the Digital Omnibus, which would defer these dates to December 2, 2027 (stand-alone) and August 2, 2028 (embedded). The Omnibus must still be formally adopted, which typically follows political agreement by 2-4 months. Until the Omnibus is formally adopted, the original August 2, 2026 deadline remains the legal effective date. Companies should plan against both timelines.
- What systems are classified as "high-risk" under the EU AI Act?
- High-risk systems are listed in Annex III of the Act and include AI systems used in: critical infrastructure, education and vocational training, employment and workforce management (recruitment, candidate selection, performance evaluation, task allocation, monitoring), access to essential services (credit, social benefits, emergency services), law enforcement, migration and border control, administration of justice, and biometric categorization and identification. AI systems that are safety components of products covered by EU harmonization legislation (medical devices, toys, lifts, machinery) are also high-risk. The Annex III list is subject to revision by the Commission.
- Does the EU AI Act apply to the use of ChatGPT, Claude, or other GPAI tools by US companies operating in the EU?
- Merely using a third-party GPAI tool internally generally does not trigger provider obligations; the company remains a deployer (though deployer-side duties, such as the AI literacy obligation in force since February 2025, still apply). However, if the company integrates the GPAI model into a product or service that is then provided in the EU, the company may become a provider of the integrated AI system, with corresponding provider obligations. The GPAI model providers themselves (OpenAI, Anthropic, Google, etc.) bear the provider obligations under the GPAI provisions in force since August 2025.
- What are the penalties for noncompliance with the EU AI Act?
- Penalties are tiered. Violations of the prohibited AI practices ban carry the highest penalties: up to €35 million or 7% of worldwide annual turnover, whichever is higher. Violations of high-risk system obligations or GPAI obligations carry penalties up to €15 million or 3% of worldwide annual turnover. Supplying incorrect, incomplete, or misleading information to authorities carries penalties up to €7.5 million or 1% of worldwide annual turnover. The penalties apply to both EU and non-EU companies offering AI systems in the EU.
- How does the EU AI Act interact with the NIST AI Risk Management Framework?
- The NIST AI RMF is a voluntary US framework; the EU AI Act is binding EU regulation. They cover overlapping subject matter — AI risk identification, governance, documentation, monitoring — but with different legal status and different levels of specificity. Adopting NIST AI RMF practices does not satisfy EU AI Act obligations, but it produces documentation and processes that significantly reduce the marginal effort required for EU AI Act compliance. Most US companies subject to both frameworks build a unified governance program that satisfies both, rather than running parallel programs.
- Should US companies wait for the Digital Omnibus to be formally adopted before starting compliance work?
- No. Three reasons: (1) the Omnibus is not yet formally adopted, so the original August 2, 2026 deadline is still legally in force as of May 2026; (2) even with the deferral to December 2, 2027, the substantive compliance obligations are unchanged — only the deadline shifts, so all preparation work remains relevant; (3) GPAI provider obligations and the prohibited practices ban are already in force regardless of the Omnibus, so any company with GPAI provider exposure or prohibited-practice exposure is already on the legal hook. The strategic move is to continue compliance work against the original timeline; the Omnibus deferral provides additional time for refinement, not relief from the underlying obligations.
How to cite this article
APA
Abdullahi, K. M. (2026, May 9). The EU AI Act for US Boards: A Reference Guide. Techné AI. https://techne.ai/insights/eu-ai-act-for-us-boards
MLA
Abdullahi, Khullani M. "The EU AI Act for US Boards: A Reference Guide." Techné AI, May 9, 2026, https://techne.ai/insights/eu-ai-act-for-us-boards.
Plain text
Abdullahi, Khullani M. "The EU AI Act for US Boards: A Reference Guide." Techné AI, May 9, 2026. Available at: https://techne.ai/insights/eu-ai-act-for-us-boards
About the author
Khullani M. Abdullahi, JD, is an AI governance and compliance consultant and the founder of Techné AI, an independent advisory firm based in Chicago. She submitted written testimony to the Illinois Senate Executive Subcommittee on AI and Social Media; the substance of one of her recommendations was incorporated into an AI-risk impact study bill. She authored the AI Governance & D&O Liability briefing now in active distribution to boards and broker partners, maintains the Illinois AI Legislative Ecosystem tracker, and hosts the AI in Chicago podcast. Techné AI is an advisory firm, not a law firm.