
AI Needs UN Oversight

By Peter G. Kirchschläger
Special to The Times Kuwait


Many scientists and tech leaders have sounded the alarm about artificial intelligence in recent years, issuing dire warnings not heard since the advent of the nuclear age. Elon Musk, for example, has said that “AI is far more dangerous than nukes,” prompting him to ask an important question: “Why do we have no regulatory oversight? This is insane.”

The late Stephen Hawking made a similar point: “Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many.”

Given the potentially catastrophic consequences of unchecked AI, there is a clear need for international guardrails to ensure that this emerging technology — more accurately called data-based systems — serves the common good. Specifically, that means guaranteeing that human rights are upheld globally, including online.

To that end, governments should introduce regulations that promote data-based systems protecting the powerless from the powerful, by ensuring that human rights are respected, protected, implemented, and realized throughout such systems’ entire life cycle, including design, development, production, distribution, and use.

Equally important, the United Nations must urgently establish an International Data-Based Systems Agency (IDA), a global AI watchdog that would promote safe, secure, sustainable, and peaceful uses of these technologies, ensure that they respect human rights, and foster cooperation in the field. It would also have regulatory authority to help determine market approval for AI products. Given the similarities between data-based systems and nuclear technologies, the International Atomic Energy Agency (IAEA) would be the best model for such an institution, not least because it is one of the few UN agencies with ‘teeth’.

The success of the IAEA has shown that we are capable of exercising caution and prohibiting the blind pursuit of technological advances when the future of humanity and the planet is at stake. After the bombings of Hiroshima and Nagasaki revealed the devastating humanitarian consequences of nuclear war, research and development in the field of nuclear technology was curtailed to prevent even worse outcomes. This was made possible by an international regime, the IAEA, with strong enforcement mechanisms.

A growing number of experts from around the world have called for the establishment of an IDA and supported the creation of data-based systems founded on respect for human rights. The Elders, an independent group of global leaders founded by Nelson Mandela, have recognized the enormous risks of AI and the need for an international agency like the IAEA “to manage these powerful technologies within robust safety protocols” and to ensure that they are “used in ways consistent with international law and human-rights treaties.” Consequently, they encourage countries to submit a request to the UN General Assembly for the International Law Commission to draft an international treaty establishing a new AI safety agency.

Among the influential supporters of a legally binding regulatory framework for AI is Sam Altman, the CEO of OpenAI, whose public release of ChatGPT in late 2022 kicked off the AI arms race. Last year, Altman called for an international authority that can, among other things, “inspect systems, require audits, test for compliance with safety standards, [and] place restrictions on degrees of deployment and levels of security.” Even Pope Francis has emphasized the need to establish a multilateral institution that examines the ethical issues arising from AI and regulates its development and use through “a binding international treaty.”

The UN, for its part, has highlighted the importance of promoting and protecting human rights in data-based systems. In July 2023, the Human Rights Council unanimously adopted a resolution on ‘New and emerging digital technologies and human rights’, which notes that these technologies “may lack adequate regulation” and stresses the need “for effective measures to prevent, mitigate, and remedy adverse human-rights impacts of such technologies.” To that end, the resolution calls for establishing frameworks for impact assessments, for exercising due diligence, and for ensuring effective remedies, human oversight, and legal accountability.

More recently, in March, the UN General Assembly unanimously adopted a resolution on “Seizing the opportunities of safe, secure and trustworthy artificial-intelligence systems for sustainable development.” This landmark resolution recognizes that “the same rights that people have offline must also be protected online, including throughout the lifecycle of artificial-intelligence systems.”

Now that the international community has recognized the imperative of protecting human rights in data-based systems, the next step is obvious: the UN must translate this global consensus into action by establishing an IDA.


Peter G. Kirchschläger, Professor of Ethics and Director of the Institute of Social Ethics ISE at the University of Lucerne, is a visiting professor at ETH Zurich’s AI Center.


Copyright: Project Syndicate, 2024.
www.project-syndicate.org






