From risk to revolution: How AI can revive democracy

srikumar sahoo
Industry and government leaders at the frontier of the latest technologies are meeting at Bletchley Park on November 1-2 to take part in Britain’s global AI Safety Summit.

Safety is a familiar theme in technology policy: technology can offer opportunities for business, but poses risks to society. First with the internet, then social media, the role of government has been to contain the fallout from the generational technologies deployed by industry.

Yet there is an alternative approach. AI could be the force behind a revolution in responsive, innovative public institutions – from local libraries to international organizations – tackling some of the most complex global challenges. But to get there, the role of government must go beyond mere regulation and move towards the implementation of public AI systems.

AI can reinvigorate the democratic process



The United States, Sweden and France are recognizing that AI does not have to be built to the blueprints of shareholders, but can and should be used by the public, for the public. While AI poses risks – potentially existential ones – it also offers opportunities to reverse decades of under-investment in public capacity while reinvigorating the democratic process.

Embracing these opportunities will rely on governments playing a central role in the design, development and deployment of technology that serves the public good. It will require investment in entrepreneurial government digital capacity, co-operation on technology and its governance with our friends and allies, and recognition that wealth and power do not confer legitimacy on decisions made in a boardroom.

AI is back in the limelight as the technology most in need of scrutiny, attention and investment

The latest technology to capture the attention of venture capitalists and headline writers is generative AI, best understood as software that looks for patterns in images or text or music and regurgitates plausible variations. These remarkable magic tricks have eclipsed cryptocurrencies and the metaverse, thrusting AI back into the limelight as the technology most in need of scrutiny, attention and investment.
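
To make that ‘find patterns, regurgitate plausible variations’ description concrete, here is a deliberately toy sketch – a word-level Markov chain in Python, offered purely as an illustration of the learn-then-sample intuition, not of how systems such as ChatGPT are actually built – that counts which word tends to follow which and then samples new, plausible-sounding sequences from those counts.

import random
from collections import defaultdict

# Toy illustration only: 'learn' patterns by counting which word follows which,
# then generate plausible variations by sampling from those counts.
corpus = (
    "the role of government is to serve the public and "
    "the role of technology is to serve the public good"
).split()

transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start, length=10):
    # Walk the learned transitions, picking a plausible next word at each step.
    words = [start]
    for _ in range(length):
        candidates = transitions.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))

A run might print something like ‘the role of technology is to serve the public and the’ – recognizably patterned on its input, which is, in miniature, what today’s far larger models do with much of the internet as their corpus.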

Technology is transformative. The way we work, shop and understand the world around us has been reshaped by digital technology and the companies that profit from its design. AI has been part of that transformation for 20 years.

Heading out without Google Maps or Citymapper in your pocket is brave. Ignoring your credit score is foolish. Running for office without targeted advertising and predictive electoral models is impossible.

The question now is what kind of world this technology might help build. Who gets to decide the answer to this question is critical.

Governments, the story goes, have been too slow to adapt. Technology appears beyond them. This is borne out by who is delivering technological milestones: in 2022, the AI Index recorded 32 significant AI breakthroughs produced by industry, only three by academia and none by government.

This has not always been the case. Some governments, chiefly the US administration, were once technology leaders. Over the past few decades that capacity has been hollowed out. Now, lengthy government budget cycles and lead times fail to match the pace of technological development and its need for constant improvement, so technology tends to be outsourced.

As a result, decisions affecting citizens have been left to technology companies or technocratic bodies – or simply not made at all. The limits and norms on speech or privacy are negotiated through app stores and in boardrooms. Digitally enabled crime asks us what justice looks like without jurisdiction.

International cooperation on technology governance has been thin, while national approaches often fit global systems into parochial governance models. Regulation targeting the previous generation of technologies, such as social media platforms, has taken decades to appear, often emerging as unwieldy and half-baked.

The political power of technology companies

The power of technology companies in the political and geopolitical spheres is undeniable. Myanmar and Ethiopia, where digital platforms stand accused of fomenting ethnic violence, are two examples. Elon Musk, founder of one of the latest AI companies, has propelled himself on to the geopolitical stage through the Ukrainian army’s reliance on his Starlink technology.

This will be the future of AI governance as well, we are told. AI companies are so wealthy or so technologically advanced that they have captured the future. The contrast with how we talk about democracy or multilateral institutions couldn’t be starker: slow, underfunded or out of date.

The latest such plea, in a Foreign Affairs article co-written by Mustafa Suleyman, co-founder of DeepMind, Google’s AI research lab, calls for AI companies to be invited to the international table as equals. That these commercial organizations operate without the legitimacy of a social contract, democracy or the provision of public goods rather says the quiet part out loud.

To empower public institutions, AI needs new stewards, new design imperatives and new funding models

2024 will not be the year of the Singularity, a technological ‘Rapture’ that Silicon Valley Twitterati grimly predict will wipe out humanity or deliver (some of) us into Elysium. But it will, in all likelihood, be another year in which the wealth and power that come with building the tools and services we rely on to keep society on the rails continue to accumulate in the hands of a handful of technology companies that have tended to put the needs of society and democracy second.

It would be foolish to expect technology’s next prophets to do any better. If the next generation of AI is to empower public institutions, it needs new, capable stewards, new design imperatives and new funding models.

Reversing 20 years of what the economist Mariana Mazzucato calls the ‘infantilization of Whitehall’ won’t happen overnight. But governments around the world are looking at ways to build sovereign technology capacity, often delivering results on the kinds of budgets consultancy and outsourcing firms wouldn’t look twice at. Cina Lawson, Togo’s minister of digital economy and transformation, for example, oversaw the building from scratch of a fully digital cash transfer programme serving a quarter of the population in just 10 days in response to the Covid pandemic.

Sweden is funding a Swedish large language model like the one behind OpenAI’s ChatGPT, recognizing the value of this technology to the public sector. The US’s quietly bipartisan CREATE AI Act aims to establish a National AI Research Resource providing academics, researchers and start-ups with access to much-needed compute capacity and datasets.

A good start

To its credit, Bletchley’s AI Safety Summit press release commits £900 million to Britain’s compute capacity. This is a good start. An absence of talent, knowledge or ownership of technology creates a spiral in which skills, investment and capacity flow out of the public sector and governments become dependent on tech monopolies.

Agile, well-funded UK institutions such as the Advanced Research and Invention Agency and the new Frontier AI Taskforce buck the trend by taking public institutions to the edge of innovation. Sovereign funding models, such as the Berlin-based Sovereign Tech Fund backed by the German Federal Ministry for Economic Affairs, strengthen the foundations of public digital infrastructure.

The requirement for transparency, replicability and accuracy in the context of government procurement should have a positive effect on AI development, creating a market for socially beneficial and trusted AI tools. After all, an AI tool that supports asylum decisions or issues fines demands far higher standards of accuracy and trustworthiness than a video recommendation system found on social media platforms.

Public AI is achievable and, many now argue, necessary 

Far from a mirage in a desert of bureaucracy, public AI is both achievable and, many now argue, necessary. Without it governments will be left powerless to mitigate digital disruption while their systems and processes become anachronistic.

At the same time scale matters. Critics of governments looking to strengthen their AI capacity point to the enormous disparity between the capacities of some states and the technology firms leading the charge. As Togo’s example shows, the size of a country isn’t everything, but international collaboration on AI provides a way of making public AI more robust.

There is a clear role for international rules to contain the risks from AI: in the near term by ensuring transparency about which datasets are used to train AI systems, and in the longer term by monitoring the use of AI to synthesize novel pathogens, or the threat from a human-unfriendly superintelligence.

The opportunity afforded by AI is to build and boost capacity, not just to react and regulate. There is no reason why public AI capacity could not be pooled across nations, much like the multinational research that takes place at CERN.

Given current disparities in wealth and power, global governance must listen carefully to companies at the forefront of AI development. But legitimacy matters and decisions directly affecting the lives of people around the world should be made in the open, not in a boardroom. From warfare to healthcare, from housing to prisons, the use of AI must be publicly accountable.

The example of Taiwan 



If the use of technology by governments has fallen behind in recent years, its use in democracy has stood still. The modern world may bewilder a time-travelling Edwardian but he or she might feel a little more at home come election day. The way governments make decisions – how they wield power – and the relationship between citizen and state have changed comparatively little in the past century.

Attempts to improve the way democracy works have been isolated. The people behind vTaiwan, a neutral discussion platform that brings together politicians, academics, civil society groups and citizens to discuss legislation, were praised for finding a useful way of reaching consensus, but their methods have not been widely taken up. Innovations such as participatory budgets, crowdsourcing and digital democracy have struggled to gain traction in most democracies.


In Britain, parliamentary debates still require speakers to be physically present and conclude with votes counted by walking through corridors. And on we go: a vote every few years, or a meeting around a table. AI promises much here, too. From Chile to Indonesia, experiments in using AI to allow citizens to reach collective decisions through referendums or participatory budgets have been enormously successful.

vTaiwan is one such example. One of the digital platforms vTaiwan uses is Pol.is, and discussions there have led to legislative change on everything from the regulation of the ride-hailing app Uber to non-consensual intimate image sharing. Colin Megill, founder of Pol.is, said: ‘The application of machine intelligence to deliberative democracy presents a generational opportunity to transform policymaking worldwide.’

Technology shapes society. Technological emaciation starves states and democracies of their credibility, both as geopolitical actors and in the eyes of their citizens. It is time now to halt that slide.
