
Albania’s AI Minister: A Signpost for the Future of Governance

In September 2025, when Albania unveiled Diella as the "Minister of State for Artificial Intelligence," it became the first country in the world to give an AI system a cabinet-level role. Diella started as a virtual assistant on the e-Albania portal, helping citizens navigate online services, but now has a formal mandate to oversee public procurement and, over time, take a central role in awarding tenders.

Diella signals that AI agents could be embedded in decision-making structures within the next five to ten years, particularly in small and mid-sized states that view digitalisation as a development strategy and a means to achieve cleaner governance.


The Promise: AI as Super-Expert in Law and Bureaucracy


Albania’s digital transformation has been accelerating for years. The e-Albania platform centralises hundreds of services and serves as the primary gateway through which citizens and businesses interact with the state. Diella, developed by the National Agency for the Information Society using Microsoft Azure infrastructure and OpenAI models, was designed to sit atop this system and guide users through everyday interactions.


From a governance perspective, the potential benefits are clear: an AI system that specialises in Albanian law, procurement rules, and administrative procedures can swiftly scan vast amounts of legislation and case law, draft memos and legal documents faster than human experts, and verify that decisions align with current regulations. In a state where EU institutions and transparency organisations have long identified public procurement as a hotspot for corruption and clientelism, the idea of an incorruptible, tireless "super-expert" is politically powerful.
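What might that look like in practice? The article gives no technical detail on Diella's internals, but the usual pattern behind such a system is retrieval: pull the most relevant fragments of legislation for a given question, then have a language model draft or check against exactly those fragments. The Python sketch below illustrates only the retrieval step, over an invented three-entry corpus with naive keyword scoring; every statute label and text here is a hypothetical placeholder, not real Albanian law.

```python
# Minimal sketch of the retrieval step in a legal "super-expert":
# score statute fragments against a question and return the best
# matches, which a language model would then quote when drafting a
# memo or checking a decision. Corpus and labels are invented.

from collections import Counter

CORPUS = {
    "Procurement law, art. X (illustrative)": "Contracting authorities must publish tender notices electronically.",
    "Procurement law, art. Y (illustrative)": "Bids submitted after the tender deadline shall be rejected.",
    "Administrative code, art. Z (illustrative)": "Administrative acts must state their legal basis and reasoning.",
}

def tokenize(text: str) -> Counter:
    """Split text into lowercase word counts, ignoring basic punctuation."""
    return Counter(word.strip(".,?").lower() for word in text.split())

def retrieve(question: str, k: int = 2) -> list[tuple[str, str]]:
    """Return the k fragments sharing the most words with the question."""
    q = tokenize(question)
    return sorted(
        CORPUS.items(),
        key=lambda item: sum((tokenize(item[1]) & q).values()),
        reverse=True,
    )[:k]

if __name__ == "__main__":
    for ref, text in retrieve("Can a bid be accepted after the tender deadline?"):
        print(f"{ref}: {text}")
```

The crucial design point is that the model answers from retrieved, citable text rather than from memory, which is what makes its output checkable by the humans who remain responsible for the decision.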


In the short term, this kind of system can reduce backlogs and inconsistent decisions, cut down on errors caused by human fatigue or outdated knowledge, and allow civil servants to focus on political and strategic choices instead of hunting for legal references all day. It's clear why a leader like Edi Rama would promote Diella as a symbol of a modern, efficient, and corruption-resistant state.


The Systemic Pain Points


Look further ahead, however, and the risks appear on several levels. Large language models like those powering Diella are known to hallucinate, confidently generating plausible-sounding but false statements. They are also trained on vast datasets whose composition is difficult to scrutinise, making their internal workings effectively opaque even to experts. When such systems are integrated into legal and bureaucratic workflows, the state is essentially relying on a black box to interpret and apply its own rules.


This poses a direct danger to political accountability. If a virtual minister pre-screens or formally awards procurement contracts, who is responsible when a decision proves flawed or unjust? The elected cabinet? The civil servants who designed the workflows? The foreign cloud provider? Commentators already warn that Diella risks becoming an example of "avatar democracy" and a spectacle of accountability: a visible digital face that conceals the ongoing influence of human actors while offering them a ready-made excuse, "Because the AI said so."


The risks extend beyond procurement. In defence and security, AI platforms are quickly becoming part of operational decision-making. Palantir, for example, has secured a long-term U.S. Army enterprise contract worth up to $10 billion that consolidates data analytics and AI tools into a single backbone for planning and targeting. When similar systems shape policing, border management, or national security strategies in smaller states, they can dramatically widen the power gap between a few large private tech companies and the citizens subject to their decisions.


There is also an environmental and infrastructural dimension. Running and scaling large AI systems requires energy-intensive data centres that often consume substantial amounts of water for cooling purposes. Recent research and industry analyses demonstrate that AI inference and training can significantly increase electricity demand and water usage. This raises difficult questions about sustainability and local environmental impacts. For countries that rely on foreign cloud infrastructure, this creates an invisible external dependency, with environmental costs often materialising elsewhere.
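To get a feel for the orders of magnitude involved, a back-of-envelope calculation helps. Every parameter in the sketch below is an illustrative assumption chosen to keep the arithmetic simple, not a measured figure for Diella, e-Albania, or any particular data centre.

```python
# Back-of-envelope resource footprint of a national AI assistant.
# All three parameters are illustrative assumptions, not measurements.

QUERIES_PER_DAY = 200_000   # assumed nationwide daily usage
WH_PER_QUERY = 3.0          # assumed energy per model response, in Wh
LITRES_PER_KWH = 1.8        # assumed cooling water per kWh of compute

daily_kwh = QUERIES_PER_DAY * WH_PER_QUERY / 1000
daily_litres = daily_kwh * LITRES_PER_KWH

print(f"~{daily_kwh:,.0f} kWh and ~{daily_litres:,.0f} litres of water per day")
# -> ~600 kWh and ~1,080 litres per day under these assumptions
```

The exact figures matter less than the shape of the relationship: both energy and water scale roughly linearly with adoption, and when the compute sits in a foreign cloud region, those costs land on someone else's grid and watershed.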


Finally, there is the risk of long-term dependency. If governments, particularly smaller ones, base their core administrative and decision-making processes on proprietary systems controlled by a few big tech companies, they will have less flexibility over time. Switching providers becomes both technically and politically costly, and policy choices may quietly be constrained by what the platform makes easy or difficult. 


Guardrails: How to Use AI Without Surrendering Democracy


Europe is starting to establish a regulatory framework that directly impacts experiments like Diella. The EU’s AI Act classifies many public-sector applications that affect fundamental rights as “high-risk” and imposes strict requirements on risk management, transparency, human oversight, and data quality. Article 27 requires public bodies deploying high-risk AI to conduct a fundamental rights impact assessment before initial use. This assessment must map who will be affected, how the system will be used, what risks will arise, and how humans will oversee it.


For Albania, which is seeking closer alignment with EU norms, this translates into several types of guardrails. Legally, it means drawing clear boundaries around where AI may be used for decision support and where final decisions must remain human, particularly when rights, freedoms, sanctions, or access to essential services are at stake. Institutionally, every AI-assisted decision must have a clearly identified human owner, and citizens must be told when AI is involved and how they can appeal. Technically, it requires detailed logging of inputs and outputs, regular bias and security audits, and a preference for narrow, domain-specific systems, which can be governed far more tightly than generic models whose behaviour is harder to predict and control.
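On the technical side, the logging requirement is concrete enough to sketch. The record below shows one plausible shape for an auditable AI-assisted decision, pairing every model recommendation with a named human owner, a final decision, and an appeal route; all field names and values are hypothetical, not a description of any system Albania actually runs.

```python
# Sketch of an audit record for an AI-assisted decision: the model's
# recommendation is logged alongside the accountable human's final
# call and the citizen's appeal route. Schema and values are invented.

import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    case_id: str
    model_version: str      # which system produced the recommendation
    inputs: dict            # what the model was shown
    recommendation: str     # what the model proposed
    human_owner: str        # the accountable official, always named
    final_decision: str     # may differ from the recommendation
    rationale: str          # why the owner agreed or overrode
    appeal_channel: str     # how an affected party can contest it
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_decision(record: DecisionRecord, path: str = "audit.log") -> None:
    """Append the record as one JSON line, building an auditable trail."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record), ensure_ascii=False) + "\n")

if __name__ == "__main__":
    log_decision(DecisionRecord(
        case_id="TENDER-2025-0042",                  # hypothetical
        model_version="procurement-assistant-0.1",   # hypothetical
        inputs={"bids_received": 4, "deadline_met": True},
        recommendation="shortlist bids 1 and 3",
        human_owner="named.official@example.gov.al",
        final_decision="shortlist bids 1, 3 and 4",
        rationale="Bid 4 qualifies; the model missed a late clarification.",
        appeal_channel="appeals desk, procurement review body",
    ))
```

A log like this is what turns "the AI said so" from an excuse into a checkable claim: it records both what the system recommended and which human decided, so responsibility never dissolves into the model.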


The key is not only to define guardrails, but also to ensure they are adaptive. As AI capabilities, political incentives, and social expectations evolve over the next decade, oversight mechanisms, legal frameworks, and technical standards must be regularly revisited rather than treated as one-time compliance exercises.


AI as Copilot, Not Pilot


The deeper question is cultural. In one plausible future, governments embrace tools like Diella while continuing to value human judgment. Ministers and civil servants use AI as a copilot: a powerful instrument for retrieving information, checking compliance, and spotting patterns. They remain fully responsible for their decisions, and when challenged, they can explain what the AI produced and why they agreed or disagreed with it.


In another future, AI quietly becomes the pilot. Officials, pressed for time, short on expertise, or simply following the path of least resistance, reflexively defer to the system's recommendations. Citizens encounter a wall of "the AI says no," with little transparency or recourse. Meanwhile, political leaders find it convenient to hide behind a digital avatar when their choices prove unpopular.


Diella does not settle which of these futures will prevail, but it makes the stakes tangible. If humans remain visibly in charge, if legal, technical, and institutional guardrails are enforced, and if transparency and accountability are treated as design constraints rather than afterthoughts, AI can genuinely strengthen state capacity and citizen agency in governance. Otherwise, the first AI minister may be remembered less as a symbol of innovation and more as an early step toward a less comprehensible and less accountable form of governance.
