AI sovereignty is not about building another ChatGPT

The real question is who controls the AI inside energy grids, water systems, hospitals, and defence. Foundation models are a distraction from the industrial AI question.

By VastBlue Editorial · 2026-03-26 · 18 min read

Series: The Chessboard · Episode 3


The wrong debate

In June 2024, Mistral AI — the French startup that had become Europe's most visible answer to OpenAI — raised €600 million at a valuation of roughly $6 billion. The round was celebrated across European technology circles as proof that the continent could compete in the foundation model race. French President Emmanuel Macron, who had personally championed Mistral's cause, used the occasion to reiterate his vision of European AI sovereignty. Germany's Aleph Alpha, once positioned as the German counterpart, had already pivoted away from foundation models toward enterprise AI, having concluded that competing head-to-head with OpenAI and Google on general-purpose large language models was financially suicidal for any European company without access to American-scale capital markets. The pivot was quietly noted. The Mistral fundraise was loudly applauded.

The applause was understandable. It was also misdirected. Not because Mistral is unimpressive — it is a technically sophisticated company building genuinely competitive models — but because the entire framing of the AI sovereignty debate has been structured around the wrong question. The question that dominates European policy discussions is: can Europe build its own foundation models to rival GPT-5, Gemini, and Claude? The question that actually matters is different, less glamorous, and far more consequential: who controls the AI that operates Europe's critical infrastructure?

These are not the same question. They are not even adjacent questions. A foundation model is a general-purpose system trained on broad data to perform a wide range of cognitive tasks — writing, analysis, coding, conversation. An industrial AI system is a specialised system designed to make operational decisions in a specific physical domain — balancing electrical loads across a grid, optimising water treatment chemical dosing, allocating intensive care beds during a pandemic surge, coordinating logistics for military deployment. The former is visible, marketable, and the subject of breathless media coverage. The latter is invisible, unglamorous, and the actual locus of strategic dependency.

To understand why this distinction matters, consider what happens when you disaggregate the word "AI" into its actual operational categories. The technology that powers ChatGPT and the technology that manages a national power grid share a common ancestry in machine learning and neural network architectures, but they diverge radically in their operational requirements, their failure modes, their data dependencies, and their strategic implications. A chatbot that hallucinates a fictional source is embarrassing. A grid-management AI that misallocates load during a winter peak is a blackout. The error tolerances are not comparable. The sovereignty implications are not comparable. And yet the European policy debate treats them as though they were the same thing — because they share the same three-letter label.

The AI that actually runs things

Beneath the surface of every modern economy lies a layer of automated decision-making that most citizens never see and most policymakers rarely discuss. It is not new — industrial control systems have existed since the 1960s, when Supervisory Control and Data Acquisition (SCADA) systems first began managing power plants and water treatment facilities. What is new is the transition from rule-based automation to machine-learning-driven optimisation. The old SCADA systems followed predetermined logic: if pressure exceeds threshold X, open valve Y. The new systems learn from data, adapt to changing conditions, and make probabilistic decisions that no human operator fully understands or could replicate at the same speed.
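The contrast between the two styles of automation can be sketched in a few lines. Everything here is illustrative — the function names, the 8.0-bar threshold, and the toy linear model are hypothetical stand-ins; real industrial controllers are vastly more complex:

```python
# Old style: fixed, human-readable rules. An engineer can audit every branch.
def scada_rule(pressure_bar: float) -> str:
    if pressure_bar > 8.0:          # "if pressure exceeds threshold X..."
        return "open_valve_Y"       # "...open valve Y"
    return "hold"

# New style: behaviour lives in fitted parameters, not in readable code.
# A toy linear model predicts pressure one step ahead and acts pre-emptively.
def fit_trend(history: list[float]) -> tuple[float, float]:
    """Least-squares slope and intercept over the recent pressure history."""
    n = len(history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

def learned_controller(history: list[float]) -> str:
    slope, intercept = fit_trend(history)
    predicted_next = slope * len(history) + intercept
    return "open_valve_Y" if predicted_next > 8.0 else "hold"

# The rule reacts only after the threshold is crossed;
# the learned controller acts on the predicted crossing.
rising = [7.0, 7.3, 7.6, 7.9]
print(scada_rule(rising[-1]))      # "hold" - 7.9 is still below 8.0
print(learned_controller(rising))  # "open_valve_Y" - the trend predicts a breach
```

Even in this toy form, the asymmetry is visible: the rule can be read and verified line by line, while the learned controller's behaviour depends on fitted numbers that change whenever the training data changes.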

In energy, this transition is already advanced. The European electricity grid is one of the most complex engineered systems on the planet: a set of interconnected synchronous areas, coordinated through ENTSO-E across 35 countries, whose largest, the Continental European area, spans from Portugal to Ukraine. Generation and demand must balance in real time; frequency deviations of a few hundred millihertz trigger alarms and, if uncorrected, protective disconnections that can cascade into regional blackouts. The integration of intermittent renewable generation — wind and solar, which produce electricity according to weather rather than demand — has made grid management exponentially more complex. A grid that relied on dispatchable coal and gas plants could be managed with relatively simple merit-order calculations. A grid where 40 percent of generation comes from sources whose output varies minute by minute requires predictive analytics, real-time optimisation, and automated demand response — in short, it requires AI.

35 Countries in Europe's interconnected electricity system — Coordinated through ENTSO-E, the Continental European synchronous area alone is one of the largest interconnected electrical grids in the world. Balancing it in real time with increasing renewable penetration requires AI-driven optimisation that no human operator can perform manually.
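The merit-order calculation mentioned above is simple enough to sketch. The plants, capacities, and costs below are illustrative numbers, not real market data:

```python
# Merit-order dispatch: sort generators by marginal cost and fill
# demand from the cheapest plant upward.
def merit_order_dispatch(plants, demand_mw):
    """Return (dispatch per plant, marginal price) for a given demand."""
    dispatch = {}
    remaining = demand_mw
    price = 0.0
    for name, capacity_mw, cost_eur_mwh in sorted(plants, key=lambda p: p[2]):
        take = min(capacity_mw, remaining)
        if take > 0:
            dispatch[name] = take
            price = cost_eur_mwh   # price is set by the last (marginal) unit
            remaining -= take
    if remaining > 0:
        raise RuntimeError("demand exceeds available capacity")
    return dispatch, price

# (name, capacity MW, marginal cost EUR/MWh) - illustrative figures only
plants = [
    ("wind",    400, 0.0),    # near-zero marginal cost, but weather-dependent
    ("nuclear", 900, 10.0),
    ("coal",    600, 45.0),
    ("gas",     500, 80.0),
]
dispatch, price = merit_order_dispatch(plants, demand_mw=1600)
print(dispatch)  # wind and nuclear fully used, coal tops up the rest
print(price)     # 45.0 - the marginal coal plant sets the price
```

The calculation breaks down exactly where the text says it does: once a large share of capacity is weather-driven, the wind entry's available capacity is itself a forecast, and producing that forecast minute by minute is where machine learning enters the dispatch problem.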

The companies building these grid-management AI systems are not the companies that dominate the AI headlines. They are industrial technology firms — Siemens, Schneider Electric, ABB, Honeywell, GE Vernova — and a growing cohort of specialised startups. Some are European. Many are not. And the software stack they deploy draws increasingly on cloud-based machine learning platforms provided by Amazon Web Services, Microsoft Azure, and Google Cloud Platform — all American companies, all subject to American jurisdiction, all operating under terms of service that grant the provider significant control over data processing, model updates, and service continuity.

In healthcare, the pattern is similar. Hospital operations — bed allocation, staff scheduling, surgical suite utilisation, diagnostic imaging analysis, pharmaceutical inventory management — are increasingly supported by AI-driven decision systems. The COVID-19 pandemic accelerated this adoption dramatically. Hospitals that had been cautious about algorithmic decision-making found themselves, during the surge of 2020-2021, relying on predictive models to allocate ventilators, triage patients, and forecast ICU capacity weeks in advance. Many of these systems were built on American cloud infrastructure, trained on datasets that combined local hospital data with aggregated training sets held by the platform provider, and updated through processes that the hospital's own IT staff could not audit or override.

In water management, AI optimisation now governs chemical dosing at treatment plants, leak detection across distribution networks, and flood prediction models that determine when and where to activate storm-water infrastructure. Thames Water in the United Kingdom, Veolia in France, and municipal water authorities across Germany and the Netherlands all deploy machine-learning systems for network optimisation. The Flemish Environment Agency uses AI-driven models to predict flood events across the Scheldt basin. The models work. They also create a dependency — on the data pipelines that feed them, on the cloud platforms that host them, and on the vendors that maintain them.

The question is not whether Europe uses AI in its critical infrastructure. It already does. The question is whether Europe controls the AI it uses — and what happens when the entity that does control it has different priorities.


In defence, the implications are most acute. NATO's adoption of AI for logistics, intelligence analysis, surveillance, and autonomous systems is well documented in public strategy papers. The alliance's AI strategy, first adopted in 2021 and revised in 2024, treats artificial intelligence as foundational to maintaining military advantage. But the AI systems that NATO members deploy are overwhelmingly built on American platforms, using American-designed chips (NVIDIA, AMD, Intel), running on American cloud infrastructure (AWS GovCloud, Azure Government), and maintained by American defence contractors (Palantir, Anduril, L3Harris). European defence ministries are not building their own military AI stacks. They are buying American ones. When a European military relies on an American cloud provider for its logistics optimisation and an American AI company for its intelligence analysis, the sovereignty question is not theoretical. It is operational.

The invisible dependency

The nature of this dependency is worth examining precisely, because it is structurally different from other forms of technological dependency that Europe has experienced and managed. When Europe depends on American software — Microsoft Office, Salesforce, Oracle databases — the dependency is real but bounded. The software performs a function, and if the supplier withdraws, the function can be replicated by an alternative provider, at cost but without catastrophe. When Europe depends on American semiconductors, the dependency is deeper but visible: everyone understands that chips are critical, and ASML's position in the supply chain gives Europe significant reciprocal leverage.

AI dependency in critical infrastructure is different in three ways that make it uniquely dangerous. First, it is learning dependency. A traditional software system runs the same code regardless of where it is deployed. An AI system trained on European grid data, European patient populations, or European water network topology contains learned knowledge that is specific to the European operating environment. If the system is hosted and maintained by a non-European provider, the learned knowledge — the patterns, correlations, and predictive models extracted from European operational data — resides outside European jurisdiction. The data itself may be stored in European data centres under GDPR-compliant arrangements. But the model weights — the mathematical representation of everything the AI has learned from that data — are typically held, updated, and controlled by the vendor. Europe provides the data. The vendor keeps the intelligence.

€1.3T Estimated value of data generated by EU critical infrastructure annually — Data from energy grids, transport networks, healthcare systems, and water infrastructure feeds AI models. When these models are hosted by non-European providers, the operational intelligence extracted from European data accrues outside European control.
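The weights-versus-data split described above can be made concrete with a toy model. This is a pure-stdlib sketch on made-up numbers: training compresses operational data into a small set of fitted parameters, and from then on those parameters alone carry the learned knowledge:

```python
# Training extracts a compact parametric summary - the weights - from data.
def train(data: list[tuple[float, float]]) -> tuple[float, float]:
    """Ordinary least squares fit over (input, output) pairs."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    cov = sum((x - mx) * (y - my) for x, y in data)
    var = sum((x - mx) ** 2 for x, _ in data)
    w = cov / var
    return w, my - w * mx

# The operator's data stays where it is...
grid_data = [(1.0, 3.1), (2.0, 4.9), (3.0, 7.2), (4.0, 8.8)]
weights = train(grid_data)

# ...but prediction needs only the weights. Whoever holds them can serve
# (or withhold) the intelligence without ever touching the data again.
def predict(weights: tuple[float, float], x: float) -> float:
    w, b = weights
    return w * x + b

print(round(predict(weights, 5.0), 2))
```

Scale the two parameters up to billions and the structure of the dependency is unchanged: GDPR governs where `grid_data` lives; nothing comparable governs where `weights` lives.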

Second, it is operational dependency. A traditional software system can be frozen — you can run the same version indefinitely, even if the vendor stops supporting it. An AI system in a dynamic operational environment cannot be frozen without degrading. Grid loads change as EV adoption grows. Patient demographics shift. Water demand patterns evolve with urbanisation and climate change. The models must be retrained, updated, and recalibrated continuously to remain accurate. This creates an ongoing dependency on the vendor's retraining pipeline, data infrastructure, and machine learning engineering capability. It is not a product you buy once. It is a relationship you cannot easily exit.

Third, it is decisional dependency. When an AI system makes operational decisions — when to shed load on the grid, which patients to prioritise for ICU admission, when to increase chlorination in the water supply — the logic of those decisions is embedded in the model architecture and training process. Unlike a rule-based system, where a human engineer can read the decision logic and verify it, a deep learning model's decision process is opaque even to its designers. The European operator who relies on a non-European AI system for critical decisions is not just outsourcing computation. They are outsourcing judgment — to a system whose decision logic they cannot fully inspect, built by engineers whose priorities they do not set, maintained by a company whose strategic interests may not align with theirs.

The US CLOUD Act, enacted in 2018, illustrates the jurisdictional dimension of this dependency. The act grants US law enforcement the authority to compel American technology companies to provide data stored on their servers regardless of where those servers are physically located. A European hospital's patient data, processed by an AI system running on AWS infrastructure in a Frankfurt data centre, is technically accessible to US authorities under the CLOUD Act — notwithstanding GDPR protections. The legal conflict between GDPR and the CLOUD Act has been extensively analysed but never resolved. For critical infrastructure operators, this unresolved conflict is not an academic curiosity. It is an operational risk that sits at the foundation of every AI system deployed on American cloud infrastructure in European critical environments.

What industrial AI sovereignty actually requires

If AI sovereignty is not about building another ChatGPT, then what is it about? The answer is unglamorous but precise. Industrial AI sovereignty requires four things: sovereign compute, sovereign data infrastructure, sovereign engineering capability, and sovereign operational doctrine. None of these are as exciting as a foundation model launch. All of them are more important.

Sovereign compute means European access to AI-capable processing hardware that cannot be interdicted by a foreign government's export controls. Today, this is a significant vulnerability. NVIDIA controls approximately 80 percent of the global market for AI training accelerators. NVIDIA is an American company, subject to American export controls — controls that the US government has already used aggressively against China, restricting exports of the A100 and H100 chips, and subsequently the modified A800 and H800 variants designed to circumvent the initial restrictions. There is no reason in principle why similar restrictions could not be applied to European purchasers in a scenario of severe transatlantic disagreement. The probability may be low. The consequence would be paralysing. Europe's semiconductor strategy — the European Chips Act, enacted in 2023 with €43 billion in planned public and private investment — addresses fabrication capacity but does not address AI accelerator design. Europe can make chips. It cannot yet design the specific chips that AI training requires.

~80% NVIDIA's share of the AI training accelerator market — NVIDIA's dominance in AI-capable GPUs creates a single point of dependency for any entity training or running AI models at scale. The company is American and subject to US export control policy.

Sovereign data infrastructure means European-controlled platforms for collecting, storing, processing, and governing the operational data that feeds industrial AI systems. The Gaia-X initiative, launched in 2019 by France and Germany, was conceived precisely for this purpose — a federated European data infrastructure that would enable data sharing across organisations and borders while maintaining European governance. The initiative's ambitions were correct. Its execution has been slow, fragmented, and plagued by the complexity of aligning 22 national hubs, hundreds of corporate members, and competing visions of what "European data sovereignty" actually means in practice. The European Health Data Space (EHDS), proposed in 2022, aims to create a unified framework for health data sharing across member states — a prerequisite for training healthcare AI on European patient populations rather than American ones. Both initiatives are necessary. Neither is yet operational at the scale required.

Sovereign engineering capability means European companies and institutions with the ability to design, build, train, deploy, and maintain AI systems for critical infrastructure applications — not importing turnkey solutions from American or Chinese vendors, but developing systems from European engineering talent using European-governed technology stacks. This is perhaps the most challenging requirement, because it is a human capital problem as much as a technology problem. Europe produces excellent AI researchers — the continent's universities are globally competitive in machine learning, robotics, and computer science. But Europe does not retain them at the rates required. DeepMind, Google's AI research laboratory, is headquartered in London yet answers to an American parent, and is staffed substantially by European-trained researchers who left European institutions for compensation, compute resources, and research freedom those institutions could not match. The pattern is consistent: Europe trains the talent, America employs it.

Sovereign operational doctrine means European-defined frameworks for how AI systems are permitted to operate in critical infrastructure — what decisions they can make autonomously, what decisions require human oversight, how they are audited, and how liability is allocated when they fail. The EU AI Act, which entered into force in August 2024, addresses some of these questions. It classifies AI systems by risk level, imposes transparency and conformity requirements on high-risk applications, and prohibits certain uses outright. But the AI Act is a regulatory instrument, not an industrial one. It tells companies what they cannot do with AI. It does not build the European AI systems that would make compliance possible without dependence on non-European providers. Regulating AI you do not build is like writing building codes for houses you cannot construct. The codes are important. They are also insufficient.

The infrastructure AI map of Europe

Despite the structural challenges, Europe is not starting from zero. A realistic assessment of European industrial AI capability reveals both genuine strengths and critical gaps — and the geography of these strengths and gaps matters as much as their existence.

In energy, European companies retain significant capabilities. Siemens Energy and Siemens Gamesa deploy AI-driven predictive maintenance and grid optimisation systems across European wind farms and grid infrastructure. Schneider Electric's EcoStruxure platform — developed substantially in France and Germany — provides AI-based energy management for industrial facilities and commercial buildings. The Danish company Ørsted uses proprietary machine learning models for offshore wind farm optimisation. Iberdrola, the Spanish utility, has invested heavily in AI for grid management and renewable integration across its European networks. These are not trivial capabilities. They represent real European engineering applied to European infrastructure challenges.

But the underlying compute and cloud infrastructure on which many of these systems run remains overwhelmingly American. A 2024 analysis by the European Court of Auditors found that over 70 percent of European cloud infrastructure spending went to three American providers: AWS, Microsoft Azure, and Google Cloud Platform. European cloud alternatives — OVHcloud (France), Deutsche Telekom's Open Telekom Cloud (Germany), Scaleway (France) — exist but lack the AI-specific capabilities (managed machine learning services, GPU clusters, pre-trained model APIs) that industrial AI applications increasingly require. The European capability sits at the application layer. The American capability sits at the infrastructure layer. In technology, the infrastructure layer typically captures more value and more control than the application layer. This is the lesson of the smartphone era: it did not matter how many European app developers built for iOS and Android. Apple and Google controlled the platform.

70%+ European cloud spending going to three US providers — AWS, Azure, and Google Cloud Platform dominate European cloud infrastructure. European alternatives exist but lack the AI-specific capabilities (GPU clusters, ML services) that industrial AI requires.

In healthcare, the picture is similarly bifurcated. European medical device companies — Philips (Netherlands), Siemens Healthineers (Germany), Elekta (Sweden) — are global leaders in diagnostic imaging, patient monitoring, and radiotherapy. These companies are increasingly embedding AI into their devices: Philips' AI-powered ultrasound systems, Siemens Healthineers' AI-assisted CT interpretation, Elekta's machine-learning-driven treatment planning. The AI in these systems is developed by European engineering teams, trained on European clinical data, and deployed in European hospitals. This is genuine sovereign capability in healthcare AI — at the device level.

At the hospital operations level, the picture changes. The AI systems that manage patient flow, predict demand, optimise staffing, and coordinate care pathways are increasingly provided by American software companies — Epic Systems, Cerner (now Oracle Health), Palantir Foundry — or built on American cloud platforms. The UK National Health Service's controversial data partnership with Palantir, which granted the American company access to NHS patient data for operational analytics, illustrated the tension: the NHS needed the capability, only a non-European provider could deliver it at scale, and the sovereignty implications were acute enough to generate parliamentary scrutiny and public protest.

In defence, Europe's industrial AI position is weakest. The continent's defence technology base is fragmented across national champions — Thales (France), Leonardo (Italy), Rheinmetall (Germany), Saab (Sweden), BAE Systems (UK) — each with its own AI research programmes but none with the scale or integration of American defence AI leaders like Palantir, Anduril, or the large US defence primes. The European Defence Agency's AI-related projects remain modest in scope and funding compared to the US Department of Defense's Joint Artificial Intelligence Center (now the Chief Digital and Artificial Intelligence Office), which coordinates AI adoption across the entire American military establishment with a mandate and budget that no European equivalent approaches.

Europe builds excellent AI applications for its infrastructure. America builds the infrastructure for AI. Until Europe closes that gap, its industrial AI sovereignty will remain aspirational rather than operational.


What this means for European companies and policymakers

The industrial AI sovereignty question is not a problem that can be solved by a single policy initiative, a single company, or a single funding programme. It is a structural challenge that requires coordinated action across multiple domains — and it requires a fundamental shift in how European policymakers think about AI strategy.

The first shift is from model sovereignty to stack sovereignty. The current European AI debate is dominated by the question of whether Europe can build competitive foundation models. This question, while not irrelevant, is secondary. What matters more is whether Europe controls the full technology stack on which industrial AI runs — from the silicon, through the cloud infrastructure, to the data platforms, the model training pipelines, the deployment frameworks, and the operational monitoring systems. A European energy company that runs a European-designed AI model on American cloud infrastructure using American GPUs is sovereign at the application layer and dependent at every layer below it. Sovereignty that exists only at the top of the stack is sovereignty that can be pulled out from underneath.

The second shift is from horizontal regulation to vertical capability. The EU AI Act regulates AI horizontally — the same framework applies whether the AI is used in energy, healthcare, defence, or consumer applications. This makes regulatory sense. It does not make industrial sense. The AI capabilities required for grid management are fundamentally different from those required for medical diagnostics, which are different again from those required for defence logistics. Building sovereign capability in each domain requires domain-specific investment, domain-specific talent, and domain-specific institutional frameworks. A horizontal regulation paired with vertical capability-building would be a coherent strategy. A horizontal regulation alone is a wish list.

The third shift is from academic excellence to engineering retention. Europe's AI research output is globally competitive. Its engineering retention is not. Closing this gap requires not just higher salaries — though those matter — but the creation of European institutions where world-class AI engineering can be done at world-class scale. CERN proved that Europe can build and sustain a research institution that attracts and retains global talent. The European Space Agency proved it again. There is no equivalent institution for industrial AI — no European entity that combines the compute resources, the data access, the engineering culture, and the mission clarity required to make staying in Europe more attractive than leaving for Google, Meta, or OpenAI.

The fourth shift is from procurement to partnership. European critical infrastructure operators — grid operators, water utilities, hospital systems, defence ministries — currently procure AI systems the way they procure any other technology: through vendor selection, contract negotiation, and service-level agreements. This model is inadequate for AI, because AI systems are not static products but evolving capabilities that learn from operational data and improve through continuous interaction with their operating environment. A procurement model treats AI as a commodity to be purchased. A partnership model treats AI as a capability to be co-developed, with the infrastructure operator retaining meaningful control over the data, the model, the training process, and the deployment decisions. The difference is between renting intelligence and building it.

For European companies operating in or adjacent to critical infrastructure, the implications are concrete. Companies that develop proprietary AI capabilities for infrastructure operations — energy optimisation, predictive maintenance, operational analytics — hold strategic assets whose value will increase as the sovereignty question intensifies. Companies that build on the full American cloud stack without developing portable or independent alternatives are accumulating technical debt that will compound as regulatory requirements for sovereign AI tighten. Companies that invest in retaining AI engineering talent — through meaningful technical challenges, competitive compensation, and genuine autonomy — will find themselves with a durable competitive advantage in a market where talent is the binding constraint.

The AI sovereignty debate will mature. The current fascination with foundation models will give way — is already giving way — to a more granular understanding of where AI creates strategic dependency and where it does not. When that maturation occurs, the companies and institutions that will be best positioned are not those that built the most impressive chatbot, but those that built the AI systems that keep the lights on, the water clean, the hospitals running, and the borders secure. Those are the systems that sovereignty is actually about. Everything else is a distraction — entertaining, commercially significant, but strategically peripheral to the question that will define whether Europe governs its own critical systems or relies on others to govern them.

The foundation model race is a spectacle. The industrial AI race is an infrastructure contest. Europe can afford to lose the spectacle. It cannot afford to lose the contest.
