The UN just had an AI governance conference. Nobody cares.

The United Nations (UN) has been grappling with artificial intelligence (AI) governance for years, from the 2017 AI for Good Summit to the 2024 General Assembly resolution on safe AI systems [Link to resolution].

Yet its latest effort, the 2025 Internet Governance Forum (IGF), held June 23–27 in Lillestrøm, Norway, underscores a troubling trend: everyone’s talking about AI, but nobody’s acting on the solutions proposed. The IGF convened over 9,000 participants from more than 160 countries under the theme “Building Digital Governance Together,” tackling digital policy with AI as a key focus. Despite this global gathering, its mainstream reach was negligible—YouTube videos of its 260+ sessions average fewer than 10 views. This disconnect reflects a broader issue: while AI dominates headlines, the public and policymakers seem uninterested in implementing the regulatory frameworks these UN reports advocate.

The forum featured important discussions, including UN economists warning that AI data centers’ emissions rival global aviation, urging sustainable practices. Norway and Lesotho showcased AI’s potential, from green energy solutions to tuberculosis detection apps, while actor Joseph Gordon-Levitt criticized tech giants for using creative works to train AI without consent [Link to Gordon-Levitt’s speech]. These moments underscored the need for equitable, ethical AI governance, yet failed to capture widespread attention.

The public’s apathy mirrors a reliance on tech companies to self-regulate, a hope contradicted by history. Sensational stories, like Anthropic’s Claude AI allegedly blackmailing engineers in safety tests, grab headlines, but solutions never materialize. The social media boom of the 2010s illustrates the cost of inaction: unchecked algorithms fueled addiction, teen suicides, mental health crises, and social division, prioritizing profit over public good. A decade of regulatory neglect left society worse off, a lesson unheeded as AI races forward. Just as contract law evolved to guide commerce, AI needs guardrails to balance innovation with accountability. Without action, the IGF’s guiding principles risk becoming mere rhetoric, leaving us vulnerable to AI’s destructive potential.

This inertia extends locally, as seen in Alabama’s Generative AI Task Force. Its report garnered praise but produced no legislation or enforceable regulations, mirroring the IGF’s fate. In Alabama, where AI is poised to affect everything from agriculture in Dothan to aerospace in Huntsville, the lack of engagement with global frameworks like the IGF limits opportunities to align local innovation with ethical standards. The tightrope of fostering AI while curbing its risks is daunting, but inaction guarantees a repeat of past tech failures.

Hope lies in bridging global and local efforts. Alabama’s tech leaders could leverage IGF insights to advocate for sustainable AI policies, ensuring data centers don’t strain the state’s grid while empowering rural communities. By amplifying voices like Gordon-Levitt’s and mobilizing grassroots support, we can turn talk into action, shaping an AI future that benefits all.

- Jacob
