EU AI Act vs US Executive Order vs UK AI Safety: Global Regulation Compared
If you're shipping AI products to customers in more than one country, you're not dealing with one set of rules. You're dealing with four. The EU has a 144-page law with tiered risk categories and fines that scale to 7% of global revenue. The US has no federal AI law at all, but a growing patchwork of state regulations and an executive order trying to preempt them. The UK spent two years betting on voluntary frameworks, then started drafting binding legislation. China has been quietly enforcing AI registration requirements since 2023 and has already approved over 5,000 algorithm filings.
Each regime reflects a different theory about what AI regulation should do. The EU wants to classify and control. The US wants to avoid stifling innovation while states fill the vacuum. The UK wants to be the place where AI companies set up shop. China wants to maintain control over information flows while building a domestic AI industry. These aren't just philosophical differences. They create concrete compliance obligations that vary by jurisdiction, company size, and use case.
Here's what each framework actually requires, where they overlap, and where the gaps create real problems for companies building AI systems in 2026.
| | EU AI Act | US (Federal + State) | UK | China |
|---|---|---|---|---|
| Primary Law | AI Act (Regulation 2024/1689) | No federal AI law; state laws + executive orders | No AI-specific law yet; binding bill expected 2026 | Three overlapping regulations + Cybersecurity Law amendments |
| Approach | Risk-based classification | Sector-specific, market-led | Pro-innovation, regulator-led | Content control + algorithm registration |
| Key Deadline | Aug 2, 2026 (high-risk systems) | Jan–Aug 2026 (varies by state) | Spring 2026 (King's Speech decision) | Jan 1, 2026 (Cybersecurity Law amendments) |
| Max Fine | €35M or 7% global revenue | Varies by state (no federal penalty) | TBD (no penalty framework yet) | Up to 5% of revenue + service suspension |
| Who Enforces | EU AI Office + national authorities | State AGs + FTC + sector regulators | Existing regulators (FCA, Ofcom, ICO, CMA) | Cyberspace Administration of China (CAC) |
| Registration Required | Yes (EU database for high-risk) | No federal registry | Proposed for frontier models | Yes (mandatory algorithm filing) |
| Covers GPAI/Foundation Models | Yes (specific GPAI rules since Aug 2025) | California SB 53 (frontier models only) | Planned for frontier models | Yes (generative AI interim measures) |
The EU: Classification, Compliance, and Real Consequences

The EU AI Act is the most detailed AI regulation anywhere in the world. It's also the most consequential for companies shipping globally, because it applies to anyone whose AI systems affect people in the EU, regardless of where the company is headquartered.
The Act organizes AI systems into four risk tiers: unacceptable (banned outright), high-risk, limited risk with transparency requirements, and minimal risk with no specific obligations. The prohibited practices took effect on February 2, 2025. Social scoring, manipulative AI targeting vulnerable people, untargeted facial recognition scraping, and emotion recognition in workplaces and schools are already illegal.
The big enforcement date is August 2, 2026. That's when requirements for high-risk AI systems become enforceable. If your AI touches hiring decisions, credit scoring, education admissions, medical triage, law enforcement, or critical infrastructure, you fall into Annex III and must comply with the full regulatory apparatus: risk management systems, technical documentation, conformity assessments, human oversight, data governance, and registration in the EU database.
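To see how this tiering plays out in practice, here's a minimal Python sketch of the classification logic a compliance triage tool might use. The use-case strings and the tier assignments are simplifications for illustration, not a legal reading of the Act or Annex III.

```python
# Illustrative only: a simplified map of EU AI Act risk tiers.
# Use-case names and tier assignments are assumptions for this sketch.

RISK_TIERS = {
    "prohibited": {"social_scoring", "workplace_emotion_recognition",
                   "untargeted_face_scraping"},
    "high": {"hiring", "credit_scoring", "education_admissions",
             "medical_triage", "law_enforcement", "critical_infrastructure"},
    "limited": {"chatbot", "synthetic_media"},  # transparency duties only
}

def risk_tier(use_case: str) -> str:
    """Return the risk tier for a use case, defaulting to minimal risk."""
    for tier, cases in RISK_TIERS.items():
        if use_case in cases:
            return tier
    return "minimal"

print(risk_tier("hiring"))          # "high" -> full Annex III obligations
print(risk_tier("spam_filtering"))  # "minimal" -> no specific obligations
```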
GPAI and Foundation Model Rules
General-purpose AI model obligations kicked in on August 2, 2025. Providers of GPAI models must publish training data summaries, maintain technical documentation, comply with EU copyright law, and draw up acceptable use policies. Models designated as posing "systemic risk" face additional requirements: adversarial testing, incident reporting to the EU AI Office, and cybersecurity protections.
The enforcement teeth arrive on August 2, 2026, when the AI Office gains full powers to investigate, request information, order model recalls, mandate mitigations, and impose fines. For GPAI violations, fines run up to €15 million or 3% of global annual turnover. For broader AI Act breaches, fines reach €35 million or 7% of revenue.
To put those numbers in context: 7% of Meta's 2024 revenue would be roughly $11.5 billion. For Google, about $24.5 billion. For Microsoft, around $17 billion. These aren't theoretical maximums. They're calibrated to hurt.
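The penalty structure is "whichever is higher" between the fixed cap and the revenue percentage, which is why the percentage dominates for any large company. A quick sketch of the arithmetic (the revenue figure in the example is an arbitrary assumption):

```python
def max_ai_act_fine(revenue_eur: float, cap_eur: float = 35e6,
                    pct: float = 0.07) -> float:
    """Fines are the higher of a fixed cap or a share of worldwide
    annual turnover. GPAI violations use the lower tier (15M / 3%)."""
    return max(cap_eur, pct * revenue_eur)

# A hypothetical company with 150B EUR in annual revenue:
print(max_ai_act_fine(150e9) / 1e9)                          # 10.5 (EUR bn)
print(max_ai_act_fine(150e9, cap_eur=15e6, pct=0.03) / 1e9)  # 4.5 (GPAI tier)
```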
The Provider-Deployer Split
The Act distinguishes between providers, who develop AI systems, and deployers, who use them. Providers carry the heavier compliance burden. But the line blurs fast in practice. If you fine-tune a foundation model, add tool-calling capabilities, or wrap it in a multi-step workflow that changes what it does, you've likely crossed from deployer to provider territory and inherited the full obligation set.
Models placed on the market before August 2, 2025 get a grace period until August 2, 2027. Everything after that date must comply from day one.
The US: No Federal Law, Maximum Confusion

The United States does not have a comprehensive federal AI law. It has something arguably worse: a growing collection of state laws, agency enforcement actions under existing statutes, and executive orders that may or may not survive the next election cycle.
The Executive Order Problem
On December 11, 2025, President Trump signed an executive order titled "Ensuring a National Policy Framework for Artificial Intelligence." The order's central aim is preemption: establishing a federal policy that overrides state AI laws deemed to burden interstate commerce or conflict with federal objectives.
The order directed the Secretary of Commerce to publish, by March 2026, an evaluation identifying state AI laws that merit legal challenge. It directed the FTC to issue a policy statement on preemption of state laws that require altering truthful AI outputs. And it created a task force, led by the Attorney General, to challenge state laws on constitutional grounds.
The order explicitly carves out a few areas from preemption: child safety regulations, AI compute and data center infrastructure except for permitting, and state government procurement rules. Everything else is potentially on the chopping block.
Here's the problem. Executive orders aren't legislation. They can be reversed by the next president. They don't create enforceable private rights. And the preemption claims haven't been tested in court. Companies building compliance programs around the executive order are building on sand.
The State Patchwork
Three states have significant AI laws that took effect on or around January 1, 2026:
Colorado's AI Act is the most comprehensive. It requires developers and deployers of high-risk AI systems to use reasonable care to protect consumers from algorithmic discrimination. The attorney general has exclusive enforcement authority. After pushback from industry, enforcement was delayed to June 30, 2026. No private right of action exists.
Texas's Responsible AI Governance Act (TRAIGA) initially looked aggressive, then got gutted during the legislative process. Most of the original requirements for impact assessments, consumer disclosures, and harm prevention were either deleted or limited to government entities. What survived: categorical bans on AI for behavioral manipulation, discrimination, deepfakes, and child exploitation.
California has taken a transparency-first approach. The AI Transparency Act (SB 942) requires generative AI providers to offer AI detection tools, include disclosures in AI-generated content, and maintain labeling capabilities. The compliance deadline was extended to August 2, 2026. Separately, the Transparency in Frontier AI Act (SB 53) creates registration and safety disclosure requirements for developers of frontier models.
Both Colorado and Texas offer a compliance safe harbor if companies implement recognized frameworks like the NIST AI Risk Management Framework or ISO 42001. That's a meaningful incentive to adopt voluntary standards even in a fragmented regulatory environment.
What This Means in Practice
If you're a US-based AI company, your compliance obligations depend entirely on where your customers are. A hiring tool used in Colorado faces different rules than one used in Texas. A chatbot deployed in California needs transparency features that aren't required in Florida. And all of this could be preempted by federal action that hasn't happened yet.
The result is that most US companies building AI for regulated use cases are treating the strictest state law as their baseline, because building separate compliance tracks per state is more expensive than just meeting the highest bar.
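One way to see why the strictest-baseline strategy wins: the compliance target is effectively the union of every state's requirement set. A toy sketch, where the per-state requirement names are rough summaries rather than statutory language:

```python
# Rough summaries of state requirements; names are illustrative.
STATE_REQUIREMENTS = {
    "colorado":   {"reasonable_care_program", "discrimination_risk_review"},
    "texas":      {"prohibited_use_screening"},
    "california": {"ai_detection_tool", "content_disclosure", "labeling"},
}

# The "strictest baseline" strategy: build once against the union of all
# state requirements instead of maintaining a separate track per state.
baseline = set().union(*STATE_REQUIREMENTS.values())
print(sorted(baseline))
```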
The UK: From Voluntary to Binding

For two years, the UK's position was clear: no AI-specific legislation. The 2023 white paper "A Pro-Innovation Approach to AI Regulation" laid out five principles (safety, transparency, fairness, accountability, and contestability) and told existing regulators to apply them using their current powers. The FCA would handle AI in financial services. Ofcom would cover AI in communications. The ICO would deal with AI and personal data. No new law needed.
That position is shifting. The government has signaled that binding AI legislation is coming, potentially as part of the spring 2026 King's Speech. The proposed framework would require frontier model developers to register with the government, conduct safety evaluations (including assessments of misuse potential for bioweapons, cyberattacks, and disinformation), and report serious safety incidents within defined timeframes.
The AI Safety Institute
The UK established the AI Safety Institute (now rebranded as the AI Security Institute) after the 2023 Bletchley Summit. It conducts pre-deployment testing of frontier models and has evaluated systems from major labs including OpenAI, Anthropic, Google DeepMind, and Meta. But its work has been voluntary. Labs cooperate because it's good PR and because the implicit threat of regulation motivates engagement.
An AI Assurance Innovation Fund is planned for spring 2026, alongside a voluntary code of ethics and a skills framework for a new AI assurance profession. The UK is also building a "Trusted Third-Party AI Assurance Roadmap" to formalize how external auditors evaluate AI systems.
The Strategic Bet
The UK's regulatory gap is deliberate. Post-Brexit, the government wants London to be the global hub for AI development. Lighter regulation is supposed to attract companies that find the EU's compliance burden too heavy. Whether this works depends on whether "lighter" regulation also means "less trustworthy." If the EU's AI Act becomes the de facto global standard the way GDPR did, the UK's permissive approach might leave British AI companies struggling to sell into the EU market.
China: Already Enforcing While Others Debate

China doesn't get enough attention in Western AI regulation discussions, which is a mistake. It's the only major jurisdiction that's been actively enforcing AI-specific rules since 2023, and its approach is fundamentally different from the EU's risk-based framework or America's market-led model.
Three Overlapping Regulations
China regulates AI through three sets of rules, each targeting a specific technology layer:
Algorithm Recommendation Measures (2022) require companies to register algorithms that have "public opinion properties or social mobilization capabilities." Developers must disclose how their algorithms work and what data they're trained on. Over 5,000 algorithms have been filed through the national registry administered by the Cyberspace Administration of China.
Deep Synthesis Measures (2023) cover AI-generated or AI-altered content, including video, audio, images, and text. Providers must label synthetic content, verify user identities, and maintain logs of generated content. This was China's answer to deepfakes, and it predates similar requirements in both the EU and US.
Generative AI Interim Measures (2023) made China the first country with binding regulations specifically for generative AI. Providers must register models with the Algorithm Registry, undergo security assessments, use legally sourced training data, respect intellectual property rights, and obtain consent for personal information used in training. Content generated by these systems must align with "core socialist values," which in practice means content moderation requirements that would be unconstitutional in the US.
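To make the Deep Synthesis labeling and logging duties concrete, here's a hypothetical sketch of the kind of record a provider might attach to generated content. The field names are invented for illustration; the CAC's actual filing and labeling schemas differ.

```python
import hashlib
import json
from datetime import datetime, timezone

def synthesis_record(content: bytes, user_id: str, model: str) -> str:
    """Hypothetical label-and-log entry for AI-generated content.
    Field names are illustrative, not the CAC's actual schema."""
    return json.dumps({
        "synthetic": True,                       # content labeling duty
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "verified_user": user_id,                # identity verification duty
        "model": model,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    })

print(synthesis_record(b"<video bytes>", "user-8861", "example-model-v2"))
```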
The 2026 Consolidation
On January 1, 2026, amendments to China's Cybersecurity Law took effect that bring AI governance into national law for the first time. The government is consolidating the three existing regulation sets into a single National AI Governance Code with mandatory registration for high-impact algorithms, standardized model evaluation, and government-approved datasets for sensitive industries.
The enforcement model differs sharply from the EU's. China doesn't just fine companies. It can suspend services entirely, require algorithm modifications, and demand that companies submit to government audits. For companies operating in China, non-compliance isn't a financial risk. It's an existential one, because you can lose your operating license.
When This Affects You
Not every AI company needs to worry about all four regimes. Your compliance map depends on what you build and where your users are. The sketch after this list shows one way to encode that map.
If you sell AI to enterprises globally: You need EU AI Act compliance by August 2, 2026, for any high-risk use case. Treat NIST AI RMF as your US baseline. Monitor UK developments but don't build to a framework that doesn't exist yet.
If you build foundation models: You've been subject to EU GPAI rules since August 2025. California's SB 53 adds frontier model requirements. If you have any Chinese users or partners, expect algorithm registration requirements.
If you deploy AI in HR, lending, or insurance: You're in the crosshairs everywhere. The EU classifies these as high-risk. Colorado's AI Act specifically targets algorithmic discrimination in these sectors. The UK's existing regulators (FCA, EHRC) are already investigating AI bias in these areas under current law.
If you're a startup with under 50 employees: The EU AI Act includes reduced obligations for SMEs and startups, including access to regulatory sandboxes. But "reduced" doesn't mean "exempt." You still need to know which tier your system falls into.
If you only operate in the US: You still can't ignore the EU. If a single EU resident uses your product, you're potentially in scope. The same extraterritorial reach that made GDPR a global standard applies here.
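Here's the promised sketch: a toy function mapping a product profile to the regimes discussed above. The profile fields and regime strings are simplifications for illustration, not a complete jurisdictional analysis.

```python
from dataclasses import dataclass

@dataclass
class Product:
    """Hypothetical product profile; fields are illustrative."""
    serves_eu_users: bool
    builds_foundation_model: bool
    high_risk_domain: bool   # hiring, lending, insurance, etc.
    operates_in_china: bool

def applicable_regimes(p: Product) -> list[str]:
    regimes = ["NIST AI RMF (US baseline, safe-harbor value)"]
    if p.serves_eu_users:
        regimes.append("EU AI Act: high-risk duties by Aug 2, 2026"
                       if p.high_risk_domain else "EU AI Act: check your tier")
    if p.builds_foundation_model:
        regimes += ["EU GPAI rules (live since Aug 2025)",
                    "California SB 53 (frontier models)"]
    if p.operates_in_china:
        regimes.append("CAC algorithm registration + security assessment")
    return regimes

print(applicable_regimes(Product(True, False, True, False)))
```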
What the Headlines Miss
The real story isn't in any single regulation. It's in the gaps between them.
Regulatory arbitrage is already happening. Companies are structuring operations to minimize exposure to the EU AI Act, routing AI services through jurisdictions with lighter rules. The EU anticipated this and included extraterritorial provisions, but enforcement across borders is slow and politically complicated.
Enforcement capacity doesn't match enforcement ambition. The EU AI Office became operational in August 2025 with a staff of roughly 140 people. They're responsible for overseeing GPAI compliance across every foundation model provider that serves the EU market. Compare that to the SEC, which has over 5,000 employees to regulate US securities markets. The EU has written ambitious rules. Whether it can enforce them is a separate question.
China's rules create a hard fork. Any AI system that operates in China must comply with content moderation requirements that are incompatible with how the same system would operate in the US or EU. This isn't a compliance burden you can solve with a settings toggle. It requires fundamentally different model behavior, training data, and output filtering. Most Western AI companies have already decided this isn't worth it.
The US preemption fight could blow up everything. If the Trump administration successfully preempts state AI laws, it would create a regulatory vacuum at the federal level with no replacement. If it fails, the state patchwork grows more complex. Either outcome creates uncertainty, and uncertainty is more expensive than regulation for companies trying to plan multi-year product roadmaps.
Voluntary frameworks are becoming compliance currency. Both the EU and US reward companies that adopt standards like NIST AI RMF and ISO 42001. These aren't legally required in most jurisdictions, but they're becoming the baseline that regulators and courts use to evaluate "reasonable care." Companies that ignore them are building without insurance.
Frequently Asked Questions
Does the EU AI Act apply to US companies?
Yes. The EU AI Act applies to any provider or deployer of AI systems that affect people in the EU, regardless of where the company is based. If your AI system is used by EU residents or if its outputs affect them, you're in scope. This extraterritorial reach mirrors GDPR.
Which US states have comprehensive AI laws?
Colorado and Texas have the most comprehensive AI-specific laws, both effective in 2026. California has multiple targeted AI laws covering transparency, frontier models, and training data disclosure. Over 40 states introduced AI-related bills in 2025, but most are narrow in scope. The federal executive order on AI preemption adds another layer of uncertainty about which state laws will survive legal challenge.
Is the UK going to pass an AI law in 2026?
Possibly. The government has signaled intent to introduce binding legislation for frontier AI models, potentially in the spring 2026 King's Speech. But as of March 2026, no formal bill has been published and no public consultation on legislation has occurred. The UK is still operating under the voluntary, regulator-led approach established in 2023. Companies should plan for a binding framework but not assume its specifics.
How does China's AI regulation differ from the EU's?
China's approach prioritizes content control and state oversight, while the EU focuses on risk classification and consumer protection. China requires algorithm registration, content moderation aligned with state values, and government security assessments. The EU classifies systems by risk tier and applies proportional obligations. China has been enforcing since 2023; the EU's high-risk enforcement starts August 2026. The two frameworks are largely incompatible, which is why most companies treat them as separate compliance tracks.
Sources: EU AI Act implementation timeline, EU AI Act GPAI guidelines, White House AI Executive Order (Dec 2025), DLA Piper on EU AI Act obligations, King & Spalding on US state AI laws, Baker Botts US AI law update, Drata on AI regulations 2026, White & Case UK AI tracker, WebProNews UK AI regulatory push, White & Case China AI tracker, IAPP China AI governance, Regulations.AI UK overview, SIG EU AI Act summary, Sumsub global AI laws guide