In the decades following the end of the Cold War, global power has often been defined by GDP and the size of military arsenals. But in 2025, that calculus has changed. The most consequential struggle is no longer over territory, oil reserves, or the number of missiles. It is over artificial intelligence.
National sovereignty is being redefined, not by borders or arsenals, but by a country’s ability to command the new strategic trinity: energy, compute, and algorithms. States that fail to build capacity across these domains will not merely fall behind; they will become technologically subordinate.
Traditional sovereignty presumed a state’s ability to control its own fate within defined boundaries. But AI sovereignty is not territorial. It is infrastructural. The notion that a nation can achieve absolute autonomy in AI is a mirage. Even the most advanced countries are dependent on a brittle chain of global chokepoints — from Taiwan’s semiconductor fabs to U.S.-based GPU suppliers, from cloud platform monopolies to transnational flows of training data. Purity of control is unattainable. What matters is trajectory. Sovereignty must be understood as a spectrum — a multidimensional posture reflecting relative strength across five layers: energy, hardware, data, models, and talent.
Energy is foundational. AI models require massive power inputs, and demand is only accelerating. Global AI data centers are projected to consume an additional 100 terawatt-hours annually, equivalent to the yearly electricity usage of the Netherlands. This has made access to reliable, scalable, and affordable energy a precondition for AI capability. Nations that cannot guarantee this resource are effectively disqualified from the frontier.
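To put that figure in perspective, a rough back-of-envelope calculation helps; the Netherlands baseline of roughly 110 terawatt-hours a year is an assumption for illustration, not a figure from this article:

```python
# Back-of-envelope scale check for "an additional 100 TWh per year" of AI data center demand.
# Assumption (illustrative): the Netherlands consumes roughly 110 TWh of electricity annually.
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

def continuous_power_gw(annual_twh: float) -> float:
    """Average continuous draw, in gigawatts, implied by a given annual consumption in TWh."""
    return annual_twh * 1_000 / HOURS_PER_YEAR  # TWh -> GWh, divided by hours in a year

ai_demand_twh = 100
print(f"{continuous_power_gw(ai_demand_twh):.1f} GW of round-the-clock supply")   # ~11.4 GW
print(f"~{ai_demand_twh / 110:.0%} of an assumed 110 TWh/yr Dutch national demand")
```

One hundred terawatt-hours a year works out to more than eleven gigawatts of continuous generation, on the order of ten large nuclear reactors running around the clock.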
The second layer, hardware, is no less fraught. The dominance of NVIDIA, which controls roughly 80% of the AI chip market, and the geographic concentration of advanced chip manufacturing in a handful of Taiwanese and South Korean plants introduce systemic vulnerabilities. Sanctions, supply shocks, or conflict in East Asia could paralyze downstream AI efforts elsewhere. Autonomy here does not mean nationalizing manufacturing but reducing single points of failure through diversification and strategic alliances.
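One way to make the "single points of failure" point concrete is the Herfindahl-Hirschman index that competition authorities use to flag concentrated markets. In the sketch below, only NVIDIA's roughly 80 percent share comes from the text; the remaining shares are illustrative assumptions:

```python
# Illustrative concentration check using the Herfindahl-Hirschman index (HHI).
# Only NVIDIA's ~80% share is taken from the text; all other shares are assumptions.
def hhi(shares_pct) -> float:
    """HHI on a 0-10,000 scale; values above ~2,500 are conventionally 'highly concentrated'."""
    return sum(s ** 2 for s in shares_pct)

gpu_accelerators = {"NVIDIA": 80, "AMD": 10, "others": 10}          # ~80% per the text
leading_edge_fabs = {"Taiwan": 85, "South Korea": 10, "others": 5}  # assumed split

print("AI accelerators HHI:", hhi(gpu_accelerators.values()))        # 6,600
print("Advanced fabrication HHI:", hhi(leading_edge_fabs.values()))  # 7,350
```

Both figures sit far above the threshold regulators treat as highly concentrated, which is exactly the structural exposure that diversification and strategic alliances are meant to reduce.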
The third layer — data — is more than a technical input. It is a cultural and linguistic artifact. Large models trained on Western-centric corpora do not translate well across borders. Without high-quality, context-specific, and ethically governed datasets, nations risk deploying systems that misalign with local norms, languages, and institutional values. Data governance is thus not only a privacy issue; it is a sovereignty issue.
Model capacity follows. Training a frontier large language model today requires hundreds of millions of dollars and petabytes of data. Most governments must choose whether to build proprietary models, adapt open-source ones, or license foreign systems. Each path entails trade-offs in performance, control, and policy alignment. Full independence may be infeasible, but strategic competence is not.
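The "hundreds of millions of dollars" figure can be sanity-checked with the widely used rule of thumb that training compute is roughly six times the parameter count times the number of training tokens. The model size, token count, GPU throughput, utilization, and hourly price below are assumptions for illustration, not figures from this article:

```python
# Rough training-cost sketch using the common ~6 * parameters * tokens FLOPs heuristic.
# All inputs below are illustrative assumptions, not vendor or lab figures.
def training_cost_usd(params: float, tokens: float,
                      flops_per_gpu_per_s: float = 1e15,  # ~1 PFLOP/s peak per accelerator
                      utilization: float = 0.4,           # realized fraction of peak
                      usd_per_gpu_hour: float = 2.0) -> float:
    total_flops = 6 * params * tokens
    gpu_seconds = total_flops / (flops_per_gpu_per_s * utilization)
    return gpu_seconds / 3600 * usd_per_gpu_hour

# A hypothetical 1-trillion-parameter model trained on 15 trillion tokens:
print(f"${training_cost_usd(1e12, 15e12):,.0f}")  # ~$125,000,000 in compute alone
```

Even under these forgiving assumptions the compute bill alone lands in nine figures, before data acquisition, staffing, and the repeated runs that real training programs require.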
Finally, talent is the scarcest layer — and perhaps the most decisive. AI progress is still driven by a relatively small pool of elite researchers and engineers. The United States and China dominate this field, not merely by funding, but by retaining human capital. Nations that cannot cultivate, attract, or retain top-tier talent will remain dependent, regardless of how much compute or data they acquire.
This logic is already shaping national strategies. France, for instance, has adopted a partner-and-build approach: leveraging European regulatory power while nurturing domestic champions like Mistral. The UAE, by contrast, has opted for a capital-intensive model, backing compute infrastructure and open-source projects such as Falcon. These are not perfect blueprints, but they represent serious responses to the spectrum of AI power.

The consequences of ignoring this shift are not abstract. Countries that treat AI purely as a commercial sector — delegated to the private market or outsourced to foreign platforms — will soon find their economic systems, public services, and civic institutions encoded by decisions made elsewhere. They will import not just models, but epistemologies, categories, and control structures. Algorithmic dependence will become policy dependence.
In an age where predictive policing, automated welfare systems, and digital identity schemes are shaped by code, surrendering AI control is tantamount to political abdication. None of this implies techno-nationalism or isolationism. Collaboration remains essential. But it must be grounded in reciprocal capability, not asymmetrical reliance. Nations that can build, adapt, and govern AI infrastructure on their own terms will negotiate from strength. Those that cannot will be subject to a new form of digital tutelage.
The AI race is not a Silicon Valley spectacle or a geopolitical abstraction; it is a test of state capacity. States that move now—investing in energy resilience, diversifying chip supply, governing data with intent, building open models, and funding academic talent—can still shape the architecture of the coming era.
Sebastien Laye is an economist and AI entrepreneur.