Over the past year, the concept of “Sovereign AI” has evolved from an aspirational idea to a strategic priority for both governments and enterprises seeking to build AI systems that reflect their values, protect their data, and serve their unique societal or business objectives. As artificial intelligence becomes embedded in everything from public services to economic infrastructure, the ability to govern, control, and shape these systems is becoming a key differentiator.

This journey isn’t about isolationism or digital protectionism. Rather, it is about building AI that is trustworthy, performant, and inclusive, which means AI rooted in local languages, regulatory frameworks, and cultural norms, yet still able to connect to and benefit from a global innovation ecosystem. Let’s explore the major recent trends shaping the sovereign AI landscape and the growing role open source plays in this transformation.

From nation-states to enterprises: Who is pursuing sovereign AI?

At its core, sovereign AI is about having control over data, infrastructure, and the development and deployment of AI technologies. This drive toward sovereignty is being seen across both the public and private sectors.

Governments are pursuing sovereign AI to ensure alignment with national regulations such as GDPR or the EU AI Act, to mitigate risks to national security, and to reinforce cultural or linguistic relevance in AI systems. From ensuring data does not cross borders, to creating AI that reflects societal norms and democratic values, governments are investing in AI that they can trust and shape.

Enterprises, especially those in regulated industries like finance and healthcare, are also embracing the concept. These organizations are motivated by the desire to reduce dependency on third-party providers, maintain ownership of their proprietary data, and deploy AI in secure and cost-effective environments, often within hybrid or on-premise infrastructures.

Across both sectors, one theme is clear: open source is becoming central to realizing the promise of sovereign AI.

Open source: The cornerstone of a sovereign AI strategy

Open source is emerging as the critical foundation for achieving AI sovereignty. This is not because it enables isolation, but because it grants agency.

With access to “open source” (technically, open weights) models such as LLaMA, Falcon, Qwen, and Mistral, together with open source tooling that covers the full stack needed to build and maintain scalable AI platforms, governments and enterprises can inspect, modify, and fine-tune AI systems to suit their specific needs.

Ray, for example, is an open source framework for building and running distributed applications, particularly machine learning workloads; it lets teams build distributed AI pipelines across trusted compute infrastructure (on-premise, edge clusters, sovereign cloud). Another example is vLLM, an open source library that optimizes inference for large language models (LLMs) in high-throughput, low-latency settings and supports the popular open-weights models mentioned above (see the sketch below). Properly implemented, open source AI platforms give full visibility into the data flows and logic driving AI outputs, and offer the flexibility to innovate faster through collaboration with a global community.
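To make the second example concrete, here is a minimal sketch of offline inference with vLLM on locally governed infrastructure, so that neither the model weights nor the prompts leave the trusted environment. The local model path and the prompt are hypothetical placeholders; the sketch assumes vLLM is installed and the open-weights checkpoint has already been downloaded to local storage.

```python
# Minimal sketch: offline inference with vLLM on locally governed infrastructure.
# Assumes vLLM is installed and an open-weights model (e.g. a Mistral or Qwen
# checkpoint) has already been downloaded to trusted local storage.
from vllm import LLM, SamplingParams

# Hypothetical local path: weights load from disk, so neither model data nor
# prompts have to leave the on-premise or sovereign-cloud environment.
llm = LLM(model="/models/mistral-7b-instruct")

params = SamplingParams(temperature=0.2, max_tokens=256)
prompts = ["Summarize the data residency obligations for health records in our jurisdiction."]

# Batched, high-throughput generation runs entirely on local hardware.
for output in llm.generate(prompts, params):
    print(output.outputs[0].text)
```

Ray fits the same pattern one layer up: the same kind of script can be wrapped in Ray tasks and scheduled across an on-premise or edge cluster, so that orchestration, as well as inference, stays under local control.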

Recent research from the Linux Foundation indicates that 41 percent of organizations express a preference for open source GenAI technologies, while only 9 percent lean towards proprietary solutions. This shift is motivated by the need for transparency, performance optimization, and cost efficiency. The result is a new class of AI stacks designed around openness, supporting everything from multilingual chatbots to vertical-specific models in finance, law, and healthcare.

Models of sovereignty: Centralized, decentralized, or collaborative?

Globally, we are witnessing a diverse range of strategies in the pursuit of sovereign AI.

In Europe, countries are combining strong regulatory frameworks with investments in open AI infrastructure. Initiatives like the AI Act, the BLOOM language model, and the Gaia-X project reflect a philosophy that emphasizes control, trust, and open collaboration.

The United States is leaning on the strength of its private sector and open source community contributions, with state-level R&D investments complementing a broader innovation-led approach.

China, in contrast, is pursuing a centralized, state-led model of sovereignty, but this effort is increasingly powered by significant investments from both state-backed research institutions and leading technology firms. Major players like Alibaba, through its Qwen model series, and startups such as DeepSeek are actively developing frontier LLMs that rival global counterparts. These initiatives are aligned with national goals for technological self-reliance, while also adhering to strict content governance policies set by the government. The result is a rapidly advancing ecosystem where public mandates and private innovation converge to build end-to-end AI capabilities tailored to domestic needs and values.

Meanwhile, countries in ASEAN and the Middle East are making bold investments in regional AI capacity. Singapore’s SEA-LION and the UAE’s Falcon projects showcase how open source and regional collaboration can be leveraged to achieve sovereignty, especially in multilingual and culturally specific contexts.

While governance models differ, a shared thread unites these efforts: the ambition to tailor AI to local values, languages, needs, and goals.

The dimensions of digital sovereignty

Sovereign AI doesn’t exist in isolation. It is deeply connected to broader principles of digital sovereignty, which can be broken down into three key dimensions:

  • Technology sovereignty: As AI systems become increasingly foundational to public services and economic competitiveness, the ability to independently design, build, and operate these systems is critical. Technology sovereignty refers not only to visibility into model architecture, training data, and system behavior, but also to control over the hardware and platforms on which these models run. A key concern is the widespread dependence on foreign-made accelerators, such as GPUs from NVIDIA and AMD, which currently dominate the AI compute landscape. In response, countries and enterprises are investing in alternative supply chains, domestic chip manufacturing, and open hardware initiatives to reduce strategic vulnerabilities. Achieving technology sovereignty means being able to develop and deploy AI models on infrastructure that is both trusted and locally governed, minimizing risks associated with geopolitical tensions, export controls, or external platform dependencies.
  • Operational sovereignty: This dimension addresses not only where AI systems are deployed—such as on-premises or in a sovereign cloud—but also who has the authority, skills, and access to operate and maintain them. For governments and enterprises seeking greater autonomy, it is not sufficient to own the infrastructure; operational sovereignty means ensuring that AI systems can be managed by locally trusted personnel with the appropriate skills and clearance. This includes building a talent pipeline of AI engineers, MLOps specialists, and cybersecurity professionals, as well as reducing reliance on foreign managed service providers. In many cases, national policies are beginning to mandate that critical digital infrastructure must be supported by staff of specific nationality or within legal jurisdictions to safeguard sensitive data and systems from foreign influence or supply chain risks. Achieving operational sovereignty ensures that AI systems remain functional, secure, and accountable under local control, even in times of global disruption.
  • Data sovereignty: Data sovereignty pertains to the legal and ethical governance of data—specifically, ensuring that data is collected, stored, and processed within the boundaries of national laws and values. In a world increasingly reliant on AI, data is not just an asset; it is a strategic resource. Sovereign AI systems must operate in compliance with local regulations, including privacy laws, data residency requirements, and consent frameworks. Moreover, data governance must reflect cultural and societal expectations, particularly in areas like biometric data, healthcare, and finance. Countries and enterprises are therefore investing in trusted data infrastructures, federated data platforms, and national datasets to maintain control over critical information assets. The ability to govern who can access, analyze, and share data—especially in multi-cloud or cross-border contexts—is essential to maintaining trust, compliance, and competitive advantage.

Open source enhances each of these pillars. It supports transparency, enables interoperability, and provides a foundation for aligning systems with both national regulations and organizational strategies.

Challenges ahead: Compute, data, skills, and governance

Despite growing momentum, implementing sovereign AI at scale remains complex. Several challenges persist:

Access to high-performance computing remains a major constraint, with GPU shortages and the cost of training large models proving prohibitive for many governments and businesses. The availability of high-quality, localized datasets is also a limiting factor, particularly for underrepresented languages or niche domains.

Workforce development is another pressing issue. There is a global shortage of professionals with the skills to build, deploy, and govern AI systems responsibly. At the same time, the absence of shared technical and ethical standards across jurisdictions can create barriers to cross-border collaboration and model interoperability.

Overcoming these obstacles will require a combination of public investment, private innovation, international cooperation, and sustained support for open source communities.

What’s next? A sovereign, open, and responsible future

We are entering a critical phase where the capabilities of AI will help define national competitiveness and organizational resilience. Those that succeed will not necessarily be the ones with the largest models, but rather those with systems that are most aligned with their strategic priorities and stakeholder needs.

Sovereign AI, when grounded in open source principles, provides a powerful pathway forward. It enables localized innovation without duplicating global efforts. It fosters transparency and accountability without compromising on performance. And it supports a more ethical and sustainable AI ecosystem, with governance models that reflect the values of those who build and use it.

Open source is not just a tool for achieving AI sovereignty. In many ways, it is the model of sovereignty itself.

If your organization is exploring its own path toward Sovereign AI — whether in government, industry, or research — this is the time to embrace openness as a lever for control, not a concession. By doing so, we can build an AI future that is not only powerful and intelligent but also inclusive, transparent, and truly our own.


Vincent Caldeira is the Chief Technology Officer, APAC at Red Hat. In this role, he is primarily responsible for engaging and building partnerships with strategic customers, explaining Red Hat’s vision, establishing and reinforcing Red Hat as an industry leader, building trusted relationships with customers’ technology leaders, and advocating for relevant emerging technologies.

Vincent has spent more than 20 years of his career in the financial technology sector, both as a Chief Technology Officer shaping technology strategy and enterprise architecture and driving technology transformation roadmaps, and as a leader of talented engineering teams designing, building, and delivering software solutions in the financial software vendor industry.

Vincent also contributes to OS-Climate, a Linux Foundation-backed open source project that aims to build the breakthrough technology and data platforms needed to more fully integrate the impacts of climate change into global financial decision-making and risk management, where he acts as lead architect and Technical Advisory Council member.

Vincent holds a Master of Science in Management (majoring in Information Systems Management) from HEC Paris.

TNGlobal INSIDER publishes contributions relevant to entrepreneurship and innovation. You may submit your own original or published contributions subject to editorial discretion.

Featured image: Jack Stapleton on Unsplash
