From Albania to Alabama: Should AI Officials Shape Governance?

In a groundbreaking yet controversial move, Albania's Prime Minister Edi Rama has appointed the world's first artificial intelligence entity as a government official, tasked with combating corruption in public procurement. The new AI "minister," Diella, takes her name from the Albanian word for "sun" and is depicted as a virtual woman in traditional folk attire.

There is allegedly no wizard behind the curtain here: Diella is a fully autonomous AI agent that will oversee Albania's public procurement processes. The hope is that she will address longstanding governance issues by providing unbiased, efficient oversight free from human frailties like bribery or favoritism.

While this development marks a fascinating moment at the intersection of AI and public administration, it raises profound questions about accountability, ethics, and the human element in decision-making. For the United States, where AI is already infiltrating sectors like healthcare and law enforcement, Albania's experiment could inspire similar innovations or serve as a stark warning.

Here at the Southern Institute for Digital Futures (SIFDF), we explore what this means for American governance, with a particular eye on Southern states like Alabama, and why rushing to adopt such measures might be a misguided path.

Diella’s Role and Capabilities

Introduced in early 2025 on the e-Albania platform, Diella is an AI-powered virtual assistant designed to simplify access to state documents for citizens and businesses. Styled in traditional Albanian garb, she processes voice commands and issues digitally stamped documents, cutting through bureaucratic red tape.

Diella isn't a physical presence in cabinet meetings but a virtual AI construct, powered by advanced machine learning algorithms to analyze procurement bids, flag irregularities, and ensure compliance with anti-corruption standards. According to PM Rama, this non-human official will operate with transparency and speed, potentially rooting out the rampant corruption that has plagued Albania's public sector for decades.

Skeptics, however, point to potential pitfalls: Could Diella herself be "corrupted" through manipulated data inputs or algorithmic biases? And who holds ultimate responsibility if the AI errs: the programmers, the government, or the machine itself?

Implications for U.S. Governance

On the surface, Albania's initiative aligns with growing U.S. interest in AI for public service. Federal agencies like the Department of Defense and the IRS already employ AI for tasks such as fraud detection and resource allocation. The Biden administration's 2023 executive order on AI emphasized safe and trustworthy deployment, but it stopped short of envisioning AI "officials" in decision-making roles.

If Albania's model succeeds, it could pressure U.S. policymakers to explore similar applications, perhaps in areas like regulatory compliance or budget oversight, where human error or bias has led to inefficiencies.

However, the risks are substantial.

AI systems, no matter how sophisticated, lack true empathy, contextual understanding, and moral reasoning: qualities essential for governance. In the U.S., where democratic accountability is paramount, an AI official could undermine public trust: Who votes for or removes a machine? Moreover, vulnerabilities to cyberattacks or embedded biases could exacerbate issues rather than solve them. While the use of AI as a tool in governance is inevitable, and in some cases already underway in government departments, there is no need to elevate it into an official role with decision-making authority.

Should Alabama and the South Follow Suit?

Turning to the South, states like Alabama might find AI governance appealing as a cost-saving measure. Alabama has already started exploring AI for predictive policing and agricultural optimization. Implementing an AI "official" could theoretically streamline processes in areas prone to corruption.

Yet, from our perspective at SIFDF, this seems like a bad idea, at least in the near term.

The American system prioritizes human-centered governance, relying on elected community members to represent us in decision-making. When crimes occur, individuals are judged by a jury of their peers.

AI systems lack the nuance to address these distinctly human community needs. Whether AI could ever develop anything comparable to human empathy or emotion remains uncertain, and even if it could, the question persists: would we want to relinquish our governance and decision-making to it?

A Call for Caution

Albania's appointment of Diella makes for a bold tech headline, spotlighting AI's potential to disrupt traditional governance. But for the U.S., and particularly the South, it's a reminder to proceed with caution. Rather than rushing to appoint AI officials, we should prioritize robust regulations, ethical AI development, and pilot programs that augment, rather than replace, human leaders. At SIFDF, we'll continue monitoring this story and its ripple effects, advocating for digital futures that empower people, not machines.

 

 
