The Politics of Fragmentation and Capture in AI Regulation

In new research, Filippo Lancieri, Laura Edelson, and Stefan Bechtold explore how the political economy of artificial intelligence regulation is shaped by the strategic behavior of governments, technology companies, and other agents. As the capabilities of artificial intelligence (AI) systems continue to grow—particularly those based on large language models (LLMs)—there are increasing international discussions on whether and how to regulate them. For example, in recent months, the European Union passed the AI Act, the United States has issued (and rescinded) executive orders and state-level proposals, and China has doubled down on national data controls.

Amid these regulatory efforts, we often overlook a key question: What kind of global AI governance landscape is likely to emerge from this patchwork of national regulatory efforts? In a recent academic article, we explore how the political economy of AI regulation is shaped by the strategic behavior of governments, technology companies, and other agents. To do this, we think of AI regulation as a dynamic, international “game” between national regulators and companies.

We study both how this game plays out within a single jurisdiction—what we call the “local” game—and the “global” game, in which multiple jurisdictions and multiple companies compete against each other. These dynamics leave room for different forms of regulatory behavior, shaped either by the prevalence of national interests over business interests or by regulatory capture, in which national regulations mostly reflect business wishes. In an ever-evolving regulatory and political landscape, our stylized framework should help scholars and policymakers analyze the potential impact of regulatory actions and predict how other countries and companies may respond to regulatory initiatives.

The local game: four paths for individual jurisdictions

Our stylized model starts with a local-level, sequenced game that helps us understand the different alternatives available to the players engaged in international regulatory discussions. Governments begin by deciding whether to regulate AI technologies to address perceived harms to their citizens. Once a regulation is put in place, companies must then respond by choosing whether to comply with or evade these rules.

If companies choose evasion, then regulators decide whether to ramp up enforcement (e.g., by extending their laws extraterritorially through an “effects-based” regime or similar measures) or tolerate such evasion. Finally, companies may opt for a full market withdrawal. Each of these decisions has different costs and benefits given the interplay between these agents.
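To make the sequencing concrete, the sketch below encodes this local game as a small decision tree solved by backward induction. The payoff numbers are purely illustrative assumptions of ours, not estimates from the article; they only show how the equilibrium outcome shifts with the relative costs of compliance, evasion, and enforcement.

```python
# A minimal sketch of the local regulatory game, solved by backward
# induction. All payoff numbers are hypothetical assumptions, not
# values taken from the underlying article.

# Hypothetical payoffs for each terminal outcome, as
# (government payoff, company payoff) pairs.
PAYOFFS = {
    "no_regulation":     (0, 4),   # status quo: firm profits, harms unaddressed
    "compliance":        (3, 2),   # rules work, firm bears compliance costs
    "tolerated_evasion": (-1, 3),  # regulation on paper, evaded in practice
    "market_withdrawal": (1, 0),   # enforcement bites, firm exits the market
}

def solve_local_game(payoffs):
    """Backward induction over the stylized, sequenced local game."""
    # Stage 3: if the regulator escalates enforcement, the company
    # compares complying with withdrawing from the market.
    after_escalation = max(["compliance", "market_withdrawal"],
                           key=lambda o: payoffs[o][1])
    # Stage 2b: facing evasion, the regulator chooses between escalating
    # (anticipating the company's stage-3 response) and tolerating it.
    regulator_choice = max([after_escalation, "tolerated_evasion"],
                           key=lambda o: payoffs[o][0])
    # Stage 2a: facing regulation, the company compares complying with
    # evading (anticipating the regulator's response to evasion).
    company_choice = max(["compliance", regulator_choice],
                         key=lambda o: payoffs[o][1])
    # Stage 1: the government regulates only if the anticipated outcome
    # beats no regulation at all.
    return max(["no_regulation", company_choice],
               key=lambda o: payoffs[o][0])

print(solve_local_game(PAYOFFS))  # -> "compliance" under these payoffs
```

Changing the assumed payoffs, for example making evasion more attractive to the company and enforcement more costly to the government, tips the game toward the tolerated-evasion outcome instead; this is the intuition behind the scenarios that follow.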

Ultimately, this game can lead to four possible scenarios at the national level:

1. No Local Regulation

In some jurisdictions, governments may decide not to regulate AI at all. This may be because the harms associated with the new technology are perceived to be low, because of regulatory capture, or because the cost of enforcing rules is too high.

This laissez-faire approach allows companies to operate freely but may expose citizens to unmitigated risks.

2. Compliance and Local Adaptation

In more proactive jurisdictions, regulators may create enforceable rules that companies comply with.

This is the ideal case for local governance: companies adapt their models and services to the specific rules of the country, and enforcement is effective. This scenario is more likely when the cost of evasion exceeds the cost of compliance or when companies fear future regulatory escalation.

3. Partial Evasion and Regulatory Gaps

Often, when faced with regulation, some companies comply while others evade. This can occur when governments lack the technical capacity or political will to close loopholes, especially when enforcement requires cross-border cooperation or sophisticated monitoring. In this scenario, non-compliant products persist in the market, leading to uneven consumer protections and distorted competition.

4. Market Withdrawal

Finally, if a regulatory environment is too hostile or product adaptation and compliance are too costly, some companies may choose to exit the market entirely, forgoing the benefits of national presence. This happened, for example, when Google and Meta pulled out of China rather than comply with its censorship laws or refused to launch new products in Europe (and many have suggested that Apple or Meta could leave the EU altogether).

While this protects a country’s regulatory integrity and its citizens, it also potentially reduces access to cutting-edge technologies. These four scenarios capture the real-world trade-offs faced by national regulators, summarized in the figure below:

Figure 1: The Different Steps of the Local Regulatory Game

The global game: four futures for AI governance

The initial stylized local model provides some insights into available alternatives. However, we live in an interconnected digital economy, so local decisions rarely remain purely local.

Adding the international level makes the regulatory game more complex and more interesting. Countries do not regulate in a vacuum; they compete with other countries to attract AI entrepreneurs and investment, secure strategic advantages, and in some cases even export their regulatory values. At the same time, companies exploit differences between jurisdictions, effectively pitting them against one another, to shape or evade rules.

In this process, they can capture regulators and push them away from the public interest towards more private goals (as Mistral has partially done in the EU, as have tech companies in the U.S.). Regulatory capture is a recurring theme in digital regulation. Tech companies, for example, have successfully captured the Irish privacy regulator, precluding effective GDPR enforcement.
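Before turning to the resulting governance states, a minimal sketch of the international dynamic may help: a toy two-jurisdiction 2x2 game in which each government chooses between harmonizing its rules and going fully sovereign. The payoffs are hypothetical assumptions of ours, chosen only to illustrate how fragmentation can be an equilibrium even when harmonization is jointly better.

```python
# A minimal sketch of the global game as a two-jurisdiction, one-shot
# 2x2 game. All payoff numbers are hypothetical assumptions, chosen to
# show how fragmentation can emerge as an equilibrium.

from itertools import product

STRATEGIES = ("harmonize", "sovereign")

# payoffs[(a, b)] = (payoff to jurisdiction A, payoff to jurisdiction B)
payoffs = {
    ("harmonize", "harmonize"): (3, 3),  # shared rules, full economies of scale
    ("harmonize", "sovereign"): (0, 4),  # A bears adjustment costs, B free-rides
    ("sovereign", "harmonize"): (4, 0),
    ("sovereign", "sovereign"): (1, 1),  # strategic fragmentation
}

def nash_equilibria(payoffs):
    """Return strategy profiles where neither jurisdiction can gain
    by unilaterally deviating."""
    equilibria = []
    for a, b in product(STRATEGIES, repeat=2):
        best_a = all(payoffs[(a, b)][0] >= payoffs[(alt, b)][0]
                     for alt in STRATEGIES)
        best_b = all(payoffs[(a, b)][1] >= payoffs[(a, alt)][1]
                     for alt in STRATEGIES)
        if best_a and best_b:
            equilibria.append((a, b))
    return equilibria

print(nash_equilibria(payoffs))  # -> [('sovereign', 'sovereign')]
```

Under these assumed payoffs the game is a prisoner’s dilemma: each jurisdiction’s dominant strategy is to go sovereign, so fragmentation is the unique equilibrium even though mutual harmonization would leave both better off.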

Under our framework, these global interactions can give rise to four distinct global governance states.

1. Multiple Local Regimes

In this world, many countries regulate AI domestically, accepting some level of arbitrage or evasion to avoid conflict or maintain political autonomy.

This “benign fragmentation” allows for a diversity of approaches—say, stricter consumer protection rules in the EU versus more permissive regimes in the U.S.—while still enabling companies to operate across borders. This model respects national sovereignty and reflects differing societal values. It can work well when the cost of compliance is modest and economies of scale are not a significant factor.

2. International Harmonization

As divergences between national regulatory regimes grow too sharp, companies may struggle to navigate them. This may lead to pushes for international harmonization, where governments recognize that fragmented regulation imposes real costs and choose to bridge their differences through treaties, mutual recognition, or coordinated rulemaking. Companies may then support harmonization to reduce compliance burdens and preserve global markets.

This model can preserve both national input and operational efficiency. Antitrust enforcement is an example where bodies such as the OECD and the International Competition Network have historically facilitated greater convergence in merger review rules because the costs of having a large, international merger blocked by a single jurisdiction were seen as too high. But harmonization is politically difficult and often slow—especially in areas tied to national security or societal norms.

It is no surprise, for example, that the growing importance of industrial policy and strategic autonomy goals increasingly leads to a breakdown of harmonization in international antitrust enforcement.

3. Unilateral Imposition (The “Brussels Effect”)

Sometimes, one powerful jurisdiction can set de facto global standards by regulating early and strictly.

Faced with the cost of creating multiple product versions, companies may simply adopt the strictest standard worldwide. This is the so-called “Brussels Effect.” A recent example is Apple’s global shift to USB-C chargers after the EU mandated it. The EU AI Act aspires to this role for AI regulation—but here, the path is less clear.

Companies like Meta and Apple have delayed or avoided launching their latest AI models in Europe, citing legal uncertainty. The effectiveness of the Brussels Effect depends on the willingness of companies to internalize external rules and on the absence of strong counter-pressures from other countries.
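The logic is a simple cost comparison: a firm adopts the strictest rule everywhere when one globally compliant product is cheaper than maintaining a separate version per jurisdiction. A minimal sketch, with hypothetical cost figures of our own:

```python
# A minimal sketch of the cost comparison behind the Brussels Effect.
# All numbers are hypothetical assumptions, for illustration only.

# Per-market cost of shipping a version tailored to local rules,
# versus the cost of one global product built to the strictest standard.
local_adaptation_cost = {"EU": 5.0, "US": 2.0, "BR": 1.5, "JP": 1.5}
strictest_standard_cost = 7.0  # one global product meeting EU rules

multi_version_cost = sum(local_adaptation_cost.values())  # 10.0

# The firm adopts the strictest rule worldwide whenever one compliant
# product is cheaper than maintaining a version per jurisdiction.
if strictest_standard_cost < multi_version_cost:
    print("Adopt the strictest standard globally (Brussels Effect).")
else:
    print("Maintain jurisdiction-specific versions (fragmentation).")
```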

4. Global Fragmentation (Splinternet of AI)

In the most fractured scenario, countries insist on fully sovereign AI regimes, regardless of the costs. Regulatory divergence becomes so significant that companies must create entirely separate products—or abandon certain markets. Each country becomes its own AI island.

This is not hypothetical. China maintains strict controls over training data and foreign AI systems, pursuing a ‘sovereign AI strategy.’ The U.S. and EU are beginning to discuss restricting data exports to rival countries. Companies already face mounting barriers to accessing global markets.

This scenario imposes serious costs: companies lose economies of scale, and the global diffusion of innovation slows. But if AI is viewed as a core element of national security or identity, countries may be willing to bear these costs to further advance their national agendas. The four scenarios resulting from our model’s global game are summarized in the figure below:

Figure 2: Four Governance Scenarios

The real-world outlook on current events: strategic fragmentation

Recent developments—notably Executive Order 14179, issued by the Trump administration in January 2025—exemplify the dynamics of strategic fragmentation discussed in our model.

The Executive Order rescinds several Biden-era AI safety measures and transparency requirements, instead mandating the development of an “AI Action Plan” while directing federal agencies to eliminate “ideological bias” in AI regulation. And a provision in the current “One Big Beautiful Bill Act” would prevent U.S. states from regulating AI for ten years. Such measures align U.S. regulatory posture more explicitly with the U.S. AI industry’s commercial interests.

In the language of our model, this represents a decisive move by a major jurisdiction to orient its regulatory policies towards local industry interests, thereby also shaping the global regulatory game in the direction of business preferences. The U.S. is signaling a low-regulation environment aimed at attracting AI investment and innovation. For companies, this may serve as an invitation to shift operations to the U.S. in search of a more permissive environment—an example of regulatory arbitrage.

For other countries, particularly those pursuing more stringent AI rules, it raises the stakes of maintaining their current approach. Globally, the Trump administration’s Executive Order may thus be catalyzing a more fragmented AI regulatory landscape, pushing the world toward the strategic fragmentation scenario outlined in our global game. China is likely to double down on its strategy of centralized control over data and models, viewing the U.S. moves as validation of its own AI sovereignty framework.

The EU’s response is still pending. As countries regulate assertively in service of their national strategic goals, even at the cost of global interoperability, they generate significant country-level costs. Fragmentation erodes the economies of scale that have made AI model development efficient and globally deployable.

Companies are likely to be compelled to develop jurisdiction-specific models, which can fragment innovation efforts and raise economic costs. Over time, the inefficiencies of this scenario may spur selective harmonization within blocs of aligned countries—particularly among U.S. allies and partners—that seek to balance the gains of sovereignty against the losses associated with weaker economies of scale. Overall, this means that neither a globally harmonized AI governance regime nor a “Brussels Effect” of European AI regulation is a likely scenario, at least in the short to mid term.

AI regulation is too entangled with geopolitical competition, economic sovereignty, and industrial policy. Instead, we expect a world of strategic fragmentation: jurisdictions will assert their regulatory independence where it matters most—like compute infrastructure or training data—but may cooperate selectively in areas where alignment yields real economic benefits. The real and ultimate risk, though, is that regulatory harmonization occurs through companies successfully pushing a “let it RIP” agenda that promotes their interests at potentially significant costs to society at large.

This article originally appeared on the Oxford Business Law Blog here. Authors’ Disclosures: Filippo Lancieri and Stefan Bechtold report no conflicts of interest. Laura Edelson previously worked as the chief technologist for the United States Department of Justice Civil Rights Division.

You can read our disclosure policy here. Articles represent the opinions of their writers, not necessarily those of the University of Chicago, the Booth School of Business, or its faculty. Subscribe here for ProMarket’s weekly newsletter, Special Interest, to stay up to date on ProMarket’s coverage of the political economy and other content from the Stigler Center.
