
Governments Must Shape AI’s Future

By Mariana Mazzucato and Fausto Gernone
Special to The Times Kuwait


Last December, the European Union set a global precedent by finalizing the Artificial Intelligence Act, one of the world’s most comprehensive sets of AI rules. Europe’s landmark legislation could signal a broader trend toward more responsive AI policies. But while regulation is necessary, it is insufficient. Beyond imposing restrictions on private AI companies, governments must assume an active role in AI development by designing systems and shaping markets for the common good.

To be sure, AI models are evolving rapidly. When EU regulators released the first draft of the AI Act in April 2021, they hailed it as ‘future-proof’, only to be left scrambling to update the text in response to the release of ChatGPT a year and a half later. But regulatory efforts are not in vain. For example, the law’s ban on AI in biometric policing will likely remain pertinent, regardless of advances in the technology. Moreover, the risk frameworks contained in the AI Act will help policymakers guard against some of the technology’s most dangerous uses. While AI will develop faster than policy, the law’s fundamental principles will not need to change — though more flexible regulatory tools will be needed to tweak and update rules.

But thinking of the state as only a regulator misses the larger point. Innovation is not just some serendipitous market phenomenon. It has a direction that depends on the conditions in which it emerges, and public policymakers can influence these conditions. The rise of a dominant technological design or business model is the result of a power struggle between various actors — corporations, governmental bodies, academic institutions — with conflicting interests and divergent priorities. Reflecting this struggle, the resulting technology may be more or less centralized, more or less proprietary, and so forth.

The markets that form around new technologies follow the same pattern, with important distributive implications. As the software pioneer Mitch Kapor puts it, 'Architecture is politics.' More than regulation, a technology's design and surrounding infrastructure dictate who can do what with it, and who benefits. For governments, ensuring that transformational innovations produce inclusive and sustainable growth is less about fixing markets and more about shaping and co-creating them. When governments contribute to innovation through bold, strategic, mission-oriented investments, they can create new markets and crowd in private-sector investment.

In the case of AI, the task of directing innovation is currently dominated by large private corporations, leading to an infrastructure that serves insiders’ interests and exacerbates economic inequality. This reflects a longstanding problem. Some of the technology firms that have benefited the most from public support, such as Apple and Google, have also been among those accused of using their international operations to avoid paying taxes. These unbalanced, parasitic relationships between big firms and the state now risk being further entrenched by AI, which promises to reward capital while reducing the returns to labor.

The companies developing generative AI are already at the center of debates about extractive behavior, owing to their unfettered use of copyrighted text, audio, and images to train their models. By centralizing value within their own services, they reduce the flow of value to the artists on whose work they rely. As with social media, the incentives are aligned for rent extraction, whereby dominant intermediaries amass profits at others' expense.

Today's dominant platforms, such as Amazon and Google, have exploited their position as gatekeepers, using their algorithms to extract ever-larger fees ('algorithmic attention rents') for access to users. Once Google and Amazon became one big 'payola' scheme, information quality deteriorated, and value was extracted from the ecosystem of websites, producers, and app developers on which the platforms relied. AI systems could take the same route: value extraction, insidious monetization, and deteriorating information quality.

Governing generative AI models for the common good will require mutually beneficial partnerships, oriented around shared goals and the creation of public, rather than only private, value. This will not be possible with redistributive and regulatory states that act only after the fact; we need entrepreneurial states capable of establishing pre-distributive structures that will share risks and rewards ex ante. Policymakers should focus on understanding how platforms, algorithms, and generative AI create and extract value, so that they can create the conditions, such as equitable design rules, for a digital economy that rewards value creation.

The internet is a good example of a technology designed around principles of openness and neutrality. Consider the 'end-to-end' principle, which keeps intelligence at the network's endpoints and leaves the network itself a neutral carrier responsible only for data delivery. While the content delivered from computer to computer may be private, the code that runs the network is managed publicly. And while the physical infrastructure needed to access the internet is private, the original design ensured that, once online, the resources for innovation on the network are freely available.

This design choice, coordinated through the early work of the Defense Advanced Research Projects Agency (among other organizations), became a guiding principle for the development of the internet, allowing for flexibility and extraordinary innovation in both the public and private sectors. By envisioning and shaping new domains, the state can establish markets and direct growth, rather than just incentivizing or stabilizing it.

It is hard to imagine that private enterprises, had they developed the internet without government involvement, would have adhered to equally inclusive principles. Consider the history of telephone technology, where the government's role was predominantly regulatory and innovation was left largely in the hands of private monopolies. That centralization not only hampered the pace of innovation but also limited the broader societal benefits that could have emerged.

For example, in 1955, AT&T persuaded the Federal Communications Commission to ban a device designed to reduce noise on telephone receivers, claiming exclusive rights to network enhancements. The same kind of monopolistic control could have relegated the internet to being merely a niche instrument for a select group of researchers, rather than the universally accessible and transformative technology it has become.

Likewise, the transformation of GPS from a military tool into a universally beneficial technology highlights the need to govern innovation for the common good. GPS was initially designed by the US Department of Defense to coordinate military assets, and public access to its signals was deliberately degraded on national-security grounds. But as civilian use surpassed military use, the US government, under President Bill Clinton, made GPS more responsive to civil and commercial users worldwide.

That move not only democratized access to precise geolocation technology; it also spurred a wave of innovation across many sectors, including navigation, logistics, and location-based services. A policy shift toward maximizing public benefit had a far-reaching, transformational impact on technological innovation. But the example also shows that governing for the common good is a conscious choice, one that requires continuous investment, close coordination, and the capacity to deliver.

Making that choice for AI innovation will require inclusive, mission-oriented governance structures with the means to co-invest with partners that recognize the potential of government-led innovation. To coordinate inter-sectoral responses to ambitious objectives, policymakers should attach conditions to public funding so that risks and rewards are shared more equitably. That means clear goals to which businesses are held accountable; high labor, social, and environmental standards; and profit-sharing with the public. Such conditionalities can, and should, require Big Tech to be more open and transparent. We must insist on nothing less if we are serious about the idea of stakeholder capitalism.

Ultimately, addressing the perils of AI demands that governments extend their role beyond regulation. Yes, different governments have different capacities, and some are highly dependent on the broader global political economy of AI. The best strategy for the United States may not be the best one for the United Kingdom, the EU, or any other country. But everyone should avoid the fallacy of presuming that governing AI for the common good is in conflict with creating a robust and competitive AI industry. On the contrary, innovation flourishes when access to opportunities is open and the rewards are broadly shared.


Mariana Mazzucato, Founding Director of the UCL Institute for Innovation and Public Purpose, is Chair of the World Health Organization’s Council on the Economics of Health for All.

Fausto Gernone, a PhD student at the UCL Institute for Innovation and Public Purpose, is on a research visit at the Haas School of Business at the University of California, Berkeley.


Copyright: Project Syndicate, 2024.
www.project-syndicate.org



