The Opportunity

  • There’s a window of opportunity to influence policy-making, and balancing urgency with interrogation will be key to co-creating better policies.

    The sense of a moment of opportunity to influence policy-making emerged strongly from almost all the roundtables. To an unusual degree, government officials at the highest level are deeply engaged in questions of AI governance. The call to action to civil society, academia and those creating the technology was clear: now is the time to present policy frameworks and ideas that work. At the same time, and particularly in the sector- or topic-specific roundtables, participants emphasised that rushing to apply AI to complex systems without fully understanding the historical challenges and dynamics that characterise those systems won’t work. Education, food security and disability in the workforce are complicated topics, with challenges and intricacies that predate the advent of AI-powered technologies. Likewise, the forces that make the global data landscape unequal, or those that contribute to a possible stalling in ‘disruptive’ scientific research, are multifaceted, and not all of their facets can be addressed by AI.

    For most individuals and groups, AI is one strand in a web of social systems impacting their lives and opportunities. Applying AI to systems that are fundamentally broken won’t lead to equitable outcomes. As one roundtable participant, representing a not-for-profit organisation working on AI in education, put it, “AI needs to help reframe the paradigm, not support it”. That said, it’s hard to ignore the sense of possibility inherent in policy-makers’ appetite for new thinking. Somehow, a balance must be struck between exploring and understanding complex societal and economic issues, and taking timely action so that communities can enjoy the benefits of AI.

    The tension between urgency and interrogation also arises in the question of defining terms. The fast tempo of AI progress means that certain key terms have been ushered into use before they’ve been fully defined. There is currently no agreed definition of the term ‘frontier models’, and ‘data equity’ can be - and is - understood in a variety of ways. Even apparently self-explanatory terms, like ‘education’, were questioned during the roundtables. (What does it mean to be educated, and does this change in a changing society?) This may not matter - fixating on the need to define terms that are inherently broad and fluid may impede progress. In some contexts, though, it may help industry, governments and civil society to work together if all parties share a clear understanding of a term and if definitions are intentionally inclusive.

  • While AI-powered technologies can entrench inequalities, they also have the potential to address them.

    The fact that AI-powered systems and technologies can entrench existing biases and inequalities is well documented. These risks, and the urgent need to address them, featured prominently in this year’s discussions.

    Less obviously, it emerged that there are ways in which AI could help increase equity, if policymakers make that a goal. AI could increase employment and earnings for people with disabilities, making the workforce more inclusive. Tools that improve targeted recruitment of people with disabilities, or that help disabled job-seekers find the right opportunities, could open doors; AI-powered tools could also improve a disabled employee’s experience on the job. Personalised learning could help level the playing field for children and adults alike, improving educational outcomes across the board. Generative AI opens up the potential for more genuinely personalised learning than we’ve seen in the past, while large AI models offer scope to improve the capabilities of AI tutors.

    Re-thinking the collection, governance, architecture and management of data could unlock benefits across virtually all applications of AI. Data governance and equity was the explicit subject of one roundtable, but it was striking how prevalent data was across all of them. Within science, data is the key difference between fields that use AI and those that don’t. If the data generated and held by fields like materials science, physics, chemistry, healthcare and criminology were structured the way much data is in the life sciences, those fields would likely see huge advances in the ways AI could be applied. They might have their own ‘AlphaFold’ moment, where a longstanding challenge is met and progress accelerated. To date, most AI-enabled advances have been in structural biology and genomics, partly because the life sciences have more historical experience in, and more developed frameworks for, dealing with data. AI alone can’t address the fact that the world is unequal, and that the global data landscape is therefore unequal too. But intentional effort to ensure datasets are representative, and to improve data collection practices in the Global South, could lead to greater equity in terms of who gets to access the progress and benefits of AI.

  • While some obstacles to equitable distribution of the benefits and risks of AI relate to geopolitical and geo-strategic realities, and will require political action to overcome, AI itself could help address others.

    Firstly, AI could usher in greater collaboration between disciplines, and between the sciences and the humanities. The roundtables universally reflected the need for more multidisciplinary collaboration as a prerequisite for equitable distribution of the benefits of AI, yet some conversations also raised the hope that AI could enable that collaboration. AI for scientific discovery is one area where more collaboration is needed, including between software development and scientific research. There is likewise a need for collaboration between people with different mindsets, such as builders and explorers, and different backgrounds, such as the arts and humanities. AI could help bridge gaps that currently exist. One participant pointed out that engineering as a career is currently very focused on analysis and modelling, whereas traditionally it could also be thought of as a creative field, in that it’s about conceiving and making things. Engineering deals intimately with social structures and patterns, such as how people use transport systems. It’s possible that AI will make engineering more creative again, and that in ten years’ time, science and the arts and humanities won’t look as far apart as they do now.

    Secondly, AI and machine learning can offer insights into patterns and gaps in data, including in the data related to AI itself - in which disciplines, and to address which problems, it’s being used most successfully, for example. AI systems could illuminate the question of where funding should go to enable further progress, or TBC

    In several contexts, participants advised that we focus on the potential of AI to enable solutions and fuel progress, rather than viewing it as a single answer to long-standing and complex policy questions.

  • There are clear actions governments and policymakers can take to help ensure responsible and beneficial AI.

    While the challenges of governing AI responsibly, inclusively and iteratively were very present in each discussion, ideas also emerged for all actors. The role of government in particular was discussed, as several policymakers explicitly invited input from the gathered representatives of other parts of the ecosystem. From developing privacy-enhancing technologies to unlock the public value of AI and data, to ensuring that governments around the world learn from each other, to incentivising talent into important but overlooked areas, the roundtables surfaced many ideas for policy and governance. Some are relatively obvious but not necessarily straightforward; others require paradigm shifts and new ways of thinking about long-standing challenges.

    The role of government, and how it may adapt, was discussed in several contexts. As has been suggested by the Tony Blair Institute and others [ref], some participants saw an opportunity to re-think government’s role. Some suggested that we need to start thinking of government as a platform, or the steward of many platforms. Others felt that this framing may be appropriate for the digital economy, but that a ‘control tower’ model of government may still be appropriate for other sectors, and that governments may need to take on multiple identities in the future. Others pointed out that government has “moved from a consumer to a driver of digital transformation.” Participants stressed the need to be clear about the role we want governments to play. Some felt that government has to nurture the core infrastructure, the “public goods” of AI - the things that no-one else in the ecosystem can provide, including the rules. Beyond its role, the actions that governments and international institutions can take were discussed at length; the outcomes of these conversations are reflected in Section 3 below.