Multilateralism and Artificial Intelligence:
What Role for the United Nations?
Eugenio V. Garcia *
ORCID: https://orcid.org/0000-0002-7207-4653
Chapter for the forthcoming book The Global Politics of Artificial Intelligence
Preprint, 10 August 2020
Abstract
The objective of this chapter is to discuss how multilateralism, and the United Nations (UN) in
particular, can play a role in encouraging further engagement in artificial intelligence (AI) at the
global level. In a fragmented landscape, despite numerous initiatives on AI principles by civil
society, the industry, and some governments, the international governance of AI lacks
coordination and has been plagued by competition. The UN has been active on several fronts,
including UNESCO’s Ad Hoc Expert Group on AI ethics and the follow-up of the Secretary-
General’s High-Level Panel on Digital Cooperation. The UN is well positioned to take advantage
of ongoing initiatives to offer a credible multilateral track, under the auspices of the
Organisation, towards ethical, human-centred, and peaceful uses of AI systems. At this juncture,
advisory bodies and voluntary platforms can help pave the way for the UN to bridge differences
by promoting international cooperation, prevention, and foresight.
Keywords
United Nations. Artificial Intelligence. Multilateralism. Global Governance.
*Senior adviser on peace and security, Office of the President of the United Nations General Assembly, New York.
Diplomat and PhD in International Relations from University of Brasilia. Academic researcher currently focused
on artificial intelligence and the impact of new technologies in world politics. I thank Maurizio Tinnirello for the
invitation and Thomas Campbell, Irakli Beridze, and the anonymous reviewers for their insightful comments on
the first draft. The views expressed herein are those of the author. Email: egarcia.virtual@gmail.com
Introduction
Applications of artificial intelligence (AI) in business, government, and everyday life
have been developing at a breakneck pace, fuelled by powerful computing hardware, abundant
data, and online training of machine learning algorithms. International relations, global security,
and diplomacy are likely to be profoundly affected as well, and many anticipate that such
ground-breaking technological developments will be capable of, inter alia, reshaping the
character of warfare in the 21st century.1
The level of AI readiness by governments and companies will have a dramatic effect upon
competitiveness and will probably become a critical factor in investments and economic growth.
AI-driven automation of labour-intensive jobs in developed countries could arguably
displace the traditional comparative advantages of developing countries, such as a cheap
workforce and raw materials. A widening gap in prosperity and wealth would mostly affect those
countries unable to develop digital skills and infrastructure to reap the rewards of AI
opportunities in productivity and innovation.2 If, not long ago, global inequality was gauged in
terms of have and have-not countries, a new divide could be emerging between AI-ready and
non-AI-ready countries.3
As a general-purpose technology with multiple capabilities, AI can pose challenges that
need to be prevented or mitigated by pooling resources and expertise in defining agreed
parameters to safely develop its full potential.4 AI governance is likely to gain traction over the
coming years, and the United Nations (UN), the most representative and wide-ranging multilateral
organisation in global politics, will eventually get further involved in providing space for
international cooperation and facilitating negotiations on how to deal with controversies
surrounding AI policymaking. With its unmatched range and universality, the UN can offer a
neutral, nonpartisan, multilateral platform for intergovernmental discussions on AI in several
domains. Some remain sceptical of any substantive agreement among the major players,
envisioning scenarios of inter-State jostling, rivalry redux, AI “arms race,” and “decoupling,”
1 C. Brose, “War’s sci-fi future: the new revolution in military affairs,” Foreign Affairs 98, no. 3 (May-June 2019): 122-
134; P. Gasser, et al., Assessing the strategic effects of artificial intelligence (Center for Global Security Research,
Lawrence Livermore National Laboratory, and Technology for Global Security, 2018),
https://www.tech4gs.org/assessing-the-strategic-effects-of-artificial-intelligence.html; K. F. Lee, AI superpowers:
China, Silicon Valley, and the new world order (Boston: Houghton Mifflin Harcourt, 2018); D. Wagner and K. Furst,
AI supremacy: winning in the era of machine learning (Scotts Valley, CA: CreateSpace, 2018); K. Payne, “Artificial
intelligence: a revolution in strategic affairs?” Survival 60, no. 5 (2018): 7-32.
2 Digital economy report 2019, UNCTAD, 2019, https://unctad.org/en; J. Manyika and J. Bughin, The promise and
challenge of the age of artificial intelligence (McKinsey Global Institute, 2018), 4,
https://www.mckinsey.com/featured-insights/artificial-intelligence.
3 There are no Latin American, Caribbean, or African countries in the top 20 ranking of government AI readiness.
Government artificial intelligence readiness index 2019 (Oxford Insights and Canada’s International Development
Research Centre, 2019), https://ai4d.ai/wp-content/uploads/2019/05/ai-gov-readiness-report_v08.pdf.
4 This chapter is not the right place for a discussion on the “definition” of AI, a controversial subject that would
require lengthy explanations. Please refer to S. Bringsjord and N. S. Govindarajulu, “Artificial Intelligence,”
Stanford Encyclopedia of Philosophy, https://plato.stanford.edu/entries/artificial-intelligence, and R. V.
Yampolskiy, ed. Artificial intelligence, safety and security (Boca Raton, FL: Chapman and Hall/CRC, 2018).
which could seriously undermine multilateralism.5 It is not difficult to foresee, nonetheless, that
the absence of collaborative forms of governance, combined with mistrust towards the
multilateral system, will make it more cumbersome for the international community to cope with
today's large-scale challenges.6
A broad definition of AI policymaking strategy has been proposed as “a research field
that analyses the policymaking process and draws implications for policy design, advocacy,
organisational strategy, and AI governance as a whole."7 Compared with the growing number
of studies addressing AI risk in domestic policy, there is much less research on foreign policy
and international governance. This chapter will discuss possible strategic approaches for the UN
to overcome obstacles and promote more engagement on this matter at the global level. What
are the next concrete steps the UN should take on AI policy? Should Member States support any
sort of AI oversight mechanism? Is there political will or a minimum consensus to do so? If not,
are there any alternatives?
The international governance of AI: a fragmented landscape
Concerns about the impact of disruptive technologies in society, particularly AI, can be
roughly divided into two basic categories: a) long-term trajectories that could lead to an artificial
general intelligence (AGI) being created with human-level cognitive capabilities and the
possibility of reaching a critical threshold through recursive self-improvement to achieve
superintelligence, thus potentially posing an existential risk to humanity; 8 and b) near-term AI
policy regarding safety measures, technical standards, performance metrics, norms, policies,
institutions, and other governance tools deemed necessary in view of the all-encompassing legal,
ethical, and societal implications of this technology, including codes of conduct, regimes, or
other normative instruments to be adopted at the international level.9
5 As discussed in a conference organised by the Foresight Institute, “general scepticism prevails about the chances
of success for any effort to engage national actors in a conversation about decreased application of AI in the military.”
A. Duettmann et al., Artificial general intelligence: coordination & great powers (San Francisco: Foresight Institute,
White paper, 2018), 6, https://foresight.org/wp-content/uploads/2018/11/AGI-Coordination-Great-Powers-
Report.pdf. See also The AI arms race, Financial Times, FT Series, https://www.ft.com/content/21eb5996-89a3-
11e8-bf9e-8771d5404543.
6 As one expert put it, “for the three gravest planetary challenges – technology, ecology, and nuclear annihilation
– we need an accurate, just, and timely multilateral approach.” A. H. Bajrektarevic, “The answer to AI is
intergovernmental multilateralism,” New Europe, 13 March 2020, https://www.neweurope.eu/article/the-
answer-to-ai-is-intergovernmental-multilateralism.
7 B. Perry and R. Uuk, “AI governance and the policymaking process: key considerations for reducing AI risk,” Big
Data and Cognitive Computing 3, no. 2 (June 2019): 3, https://www.mdpi.com/2504-2289/3/2.
8 N. Bostrom, Superintelligence: paths, dangers, strategies (Oxford: Oxford University Press, 2014); C. Chace,
Surviving AI: the promise and peril of artificial intelligence (San Mateo, CA: Three Cs Publishing, 2015); J. Barrat,
Our final invention: artificial intelligence and the end of the human era (New York: St. Martin’s Press, 2013); S. D.
Baum, A survey of artificial general intelligence projects for ethics, risk, and policy, Global Catastrophic Risk Institute,
Working Paper 17-1, November 2017, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3070741.
9 This near- and long-term distinction does not deny the fact that these problems are somehow interconnected and
there are also medium-term risks to reckon with. Cf. A. Dafoe, AI governance: a research agenda, Future of Humanity
Institute, University of Oxford, 2017, https://www.fhi.ox.ac.uk/wp-content/uploads/GovAIAgenda.pdf. A fine
overview of international governance arrangements for global catastrophic risks can be found in L. Kemp and C.
The fundamental question to be asked is how the UN can successfully link both
categories and be realistically proactive going forward, despite considerable scepticism about
its ability to deliver in such a complex and uncharted domain. It would be advisable for the
Organisation to start by addressing near-term AI policy concerns. Progress
achieved in this area could later help prepare future discussions on long-term risks, but the
opposite is much less palatable for Member States in the present circumstances. Besides, more
focused conversations could be easily derailed or glossed over if competing with speculative
anxieties addressed to a different audience.
One does not need to wait for ideal, Goldilocks conditions to frame the problem in search
of a desirable outcome within a reasonable timespan. Experts have been wrestling with policy
dilemmas, so that AI governance can work for all societies.10 To many, a do-nothing policy is
hardly an option.11 In a normative vacuum, governments and private companies, chasing after
strategic advantage and profit, may push even harder for rapid AI development regardless of
considerations based upon law, ethics, safety, or security. Driven by a logic of “AI race,”
perceived hostility would increase distrust among States and more investments could be
channelled to defence budgets, possibly turning confrontation into a self-fulfilling prophecy.
Responsible governance strategies could help prevent or minimise the impact of more
disturbing scenarios. Norms and other regulatory approaches to mitigate risks are one of the
possible responses to unintended consequences of AI technologies. Certainly, incentivising
predictability by means of norm-setting is not just a question of inducing good behaviour by
States or protecting the weak from the powerful. Rather, it is a matter of building commonly
accepted rules for all and minimum standards to avert, for instance, strategic uncertainty,
undesirable escalations, and unforeseen crises spinning out of control. Facing the danger of
unsafe AI systems without proper oversight, demands will grow stronger to set in motion
international cooperation to avoid mutual harm.12 From a military perspective, if left without
proper human control, smart machines running wild on a mass scale would be a commander’s
worst nightmare. Their destabilising fallout could increase turbulence rather than provide more
security to States.13
When it comes to laws, norms, and regulations, the AI landscape today is fraught with
fragmentation at all levels. It is worth asking whether a fragmented international
Rhodes, The cartography of global catastrophic governance (Stockholm, Global Challenges Foundation, 2019),
https://globalchallenges.org/the-cartography-of-global-catastrophic-governance.
10 For current initiatives, see J. Butcher and I. Beridze, “What is the state of artificial intelligence governance
globally?” The RUSI Journal 164, no. 5-6 (2019): 88-96, https://doi.org/10.1080/03071847.2019.1694260.
11 P. Engelke, AI, society, and governance: an introduction (Washington, DC: The Scowcroft Center for Strategy and
Security, March 2020), https://www.atlanticcouncil.org/wp-content/uploads/2020/03/Final-AI-Policy-Primer-
0220.pdf.
12 Danzig stressed this point when referring to pathogens, AI systems, computer viruses, and radiation released by
accident: “Agreed reporting systems, shared controls, common contingency plans, norms, and treaties must be
pursued as means of moderating our numerous mutual risks.” R. Danzig, Technology roulette: managing loss of
control as many militaries pursue technological superiority (Washington, DC: CNAS, June 2018), 2,
https://www.cnas.org/publications/reports/technology-roulette.
13 Military powers might have little choice but begin discussions over “whether some applications of AI pose
unacceptable risks of escalation or loss of control” and take measures to improve safety. P. Scharre, “Killer apps:
the real dangers of an AI arms race,” Foreign Affairs 98, no. 3 (May-June 2019): 135-145.
governance of AI could be effective or whether, failing that, there is a need for a centralised
international organisation. It has been argued that the risk of creating a slow and brittle
institution speaks against it, but a well-designed centralised regime covering a set of coherent
issues could be “beneficial.”14 Although fragmentation is likely to persist for the time being, if
States fail to coordinate their responses and adopt international standards, as Turner put it, then
regulation for AI can become “Balkanised, with each territory setting its own mutually
incompatible rules.” Norms, therefore, should be seen as an enabler of predictable and orderly
interactions reducing uncertainty in everyone’s interest.15
Despite the relative lack of clarity on the way forward, it should be expected that AI
governance will eventually go global and require more international coordination.16 Research
and policy proposals on this topic are beginning to shed light upon the likelihood of
international cooperation on transformative AI-related issues, incentives needed for the parties
to reach meaningful agreements, proper conditions for compliance, and costs of defection or
unilateral, non-cooperative measures. 17 Proposals range from informal mechanisms in
narrowly-focused domains to much more ambitious, institutionalised forums, such as the
establishment of an international regulatory agency, to be named, for example, “International
Artificial Intelligence Organisation” (IAIO), aimed at setting standards and benchmarks across
areas to be regulated. However, even the proponents of such a broad-spectrum organisation
concede that reaching a workable international consensus on this idea remains a remote
possibility in the short term.18
Another proposition aims at creating a new body modelled on the UN Intergovernmental
Panel on Climate Change (IPCC) to give policymakers technical, neutral assessments, subject to
review by States, underlining the opportunities, implications, and potential risks of AI, always
through evidence-based research by the tech and scientific communities. 19 Miailhe made the
case for an “Intergovernmental Panel on Artificial Intelligence” (IPAI) to gather a large
interdisciplinary group of experts with a mandate to collect, organise, and analyse up-to-date
14 P. Cihon, M. M. Maas and L. Kemp, “Should artificial intelligence governance be centralised? Design lessons
from history,” Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, February 2020, 228–234,
https://doi.org/10.1145/3375627.3375857. On informal organisations, trans-governmental networks, and
transnational public–private partnerships, see K. Abbott and B. Faude, “Choosing low-cost institutions in global
governance”, International Theory, June 2020,
https://www.researchgate.net/publication/342109046_Choosing_low-cost_institutions_in_global_governance.
15 J. Turner, Robot rules: regulating artificial intelligence (London: Palgrave Macmillan, 2019), 239-240.
16 It is important to say that in some domain-specific AI applications, a certain degree of coordination has been
under way within a few multilateral institutions, such as the International Telecommunication Union (ITU),
the International Civil Aviation Organization (ICAO), and the International Maritime Organization (IMO).
17 Dafoe, AI governance, 46.
18 O. J. Erdelyi and J. Goldsmith, “Regulating artificial intelligence: proposal for a global solution,” Paper presented
at the AAAI/ACM Conference on Artificial Intelligence, Ethics and Society, New Orleans, 1–3 February 2018, 6,
http://www.aies-conference.com/2018/accepted-papers.
19 The IPCC is the UN body for assessing the science related to climate change, created to provide policymakers
with regular scientific assessments and put forward adaptation and mitigation options. Cf. https://www.ipcc.ch.
information.20 As emerging technologies become more and more sophisticated, sound technical
advice will be in high demand if intergovernmental negotiations are to be launched to tackle
these thorny issues.
Civil society organisations have been putting forward numerous proposals on principles
and AI ethical frameworks, including the landmark 2017 Asilomar AI principles, championed
by the Future of Life Institute. 21 A recent paper surveyed dozens of contributions around the
world on normative guidance in ethical and rights-based approaches to principles for AI. 22
Linking government and business networks, the World Economic Forum (WEF) has been
bolstering public-private partnerships on the Fourth Industrial Revolution and the future of
technology governance (e.g. by suggesting an “AI regulator for the 21st century”).23 Also worth
recalling are the Partnership on AI, established by leading companies to study and formulate
best practices, and the Future Society’s Global Data Commons. 24 Another work in progress
relates to the outcome of the International Congress for the Governance of Artificial Intelligence
(ICGAI), to be held in May 2021 in Prague, and its focus upon initial steps towards putting in
place international mechanisms for an “agile governance” of AI.25
Motivations for adopting AI policy and ethics documents by governments, the private
sector, and civil society are manifold: genuine social responsibility, the search for competitive
advantage, signalling leadership, seeking to influence narratives on what should be done, and
marketing for corporations wishing to promote their brand or to pre-empt restrictive laws.
Ethics boards are indeed a welcome development for the industry, but self-regulation is not a
20 N. Miailhe, “AI & global governance: why we need an intergovernmental panel for artificial intelligence,” Articles
& insights, Centre for Policy Research, UNU, 20 December 2018, https://cpr.unu.edu/ai-global-governance-why-
we-need-an-intergovernmental-panel-for-artificial-intelligence.html.
21 The Asilomar principles call inter alia for “race avoidance” (teams developing AI systems should actively
cooperate to avoid corner-cutting on safety standards); “human values” (AI systems should be designed and
operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity); preventing
an “AI arms race” (an arms race in lethal autonomous weapons systems should be avoided); and AI for the
“common good” (AI should only be developed in the service of widely shared ethical ideals and for the benefit of
all humanity rather than one State or organisation). Future of Life Institute, https://futureoflife.org/ai-principles.
22 J. Fjeld et al., Principled artificial intelligence: mapping consensus in ethical and rights-based approaches to
principles for AI (Cambridge, MA: Berkman Klein Center for Internet & Society at Harvard University, Research
Publication, 15 January 2020), http://nrs.harvard.edu/urn-3:HUL.InstRepos:42160420. Also World Economic
Forum, AI governance: a holistic approach to implement ethics into AI (Geneva: White Paper, January 2019),
https://www.weforum.org/whitepapers/ai-governance-a-holistic-approach-to-implement-ethics-into-ai.
23 WEF, Shaping the future of technology governance: artificial intelligence and machine learning,
https://www.weforum.org/platforms/shaping-the-future-of-technology-governance-artificial-intelligence-and-
machine-learning.
24 The Global Data Commons aims at supporting the achievement of the Sustainable Development Goals and it has
been envisioned as a precursor for the AI for SDGs Center (AI4SDG), designed by its proponents to become “an
engine for practical experimentation and scaling up of governance models for AI.” The Future Society,
https://thefuturesociety.org/2019/05/28/ai-for-sdgs-center-ai4sdg. For the Partnership on AI see
https://www.partnershiponai.org.
25 One of their proposals is the creation of a multi-stakeholder Global Governance Network for AI (GGN-AI),
drawing inspiration from the “governance coordinating committees” first put forward by W. Wallach and G.
Marchant, “Toward the agile and comprehensive international governance of AI and robotics,” Proceedings of the
IEEE 107, no. 3 (March 2019), https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8662741.
panacea, since “the interests of companies are rarely fully aligned with the interests of society
as a whole.”26 On the positive side, even if these initiatives lack enforcement authority, new
ideas on principles, institutions, mechanisms, or informal settings may contribute to the debate
through cross-pollination and act as building blocks for more ambitious projects when the
momentum for it arises.
Better coordination can often occur among like-minded States, such as in the European
Union, and cross-regional groups or organisations with a widely shared agenda (but a restricted
membership), such as the G7, the Organisation for Economic Co-operation and Development (OECD)
or, to a lesser degree, the G20. 27 There is non-negligible activity within these groups on AI
principles and policy design. The G7 leaders, for example, agreed upon the Charlevoix Common
Vision for the Future of Artificial Intelligence, in June 2018, outlining commitments to inter alia
promote human-centric AI and its commercial adoption, generate public trust, support lifelong
learning, and respect privacy and data protection frameworks in AI design and
implementation.28
Some countries are taking action at the bilateral and regional levels, in a move that can
accelerate the transition from private stakeholder initiatives to governmental policymaking at
the global level. In December 2018, Canada and France bilaterally proposed a Global Partnership
for AI (GPAI) to “support and guide the responsible adoption of AI that is human-centric and
grounded in human rights, inclusion, diversity, innovation and economic growth.” Their
ultimate goal was to create a standing forum, involving governments, the industry, and
academia, to monitor and debate the policy implications of AI globally. 29 Other countries were
invited to join and the proposal was later discussed within the G7. In June 2020, a group of 14
States and the EU officially launched the enlarged GPAI initiative, pledging their support to
responsible AI development, in a manner consistent with “human rights, fundamental
freedoms, and our shared democratic values.”30
26 As noted by G. Marcus, In The global AI agenda (Cambridge, MA: MIT Technology Review Insights, 26 March
2020), 7, https://mittrinsights.s3.amazonaws.com/AIagenda2020/GlobalAIagenda.pdf.
27 The 2019 Osaka Declaration of the G20 embraced the principle of human-centred AI by recognising that these
technologies can help promote inclusive economic growth, bring great benefits to society and empower individuals:
“The responsible development and use of AI can be a driving force to help advance the SDGs and to realise a
sustainable and inclusive society, mitigating risks to wider societal values.” Japan’s Ministry of Foreign Affairs,
https://www.mofa.go.jp/policy/economy/g20_summit/osaka19/en/documents/final_g20_osaka_leaders_decl
aration.html.
28 G7 Summit, Canada 2018, “Charlevoix Common Vision for the Future of Artificial Intelligence,”
http://www.g7.utoronto.ca/summit/2018charlevoix/ai-commitment.html.
29 Cf. the original document, “Mandate for the International Panel on Artificial Intelligence,” Canada and France,
https://pm.gc.ca/en/news/backgrounders/2018/12/06/mandate-international-panel-artificial-intelligence.
30 GPAI’s 15 founding members are Australia, Canada, France, Germany, India, Italy, Japan, Mexico, New Zealand,
Republic of Korea, Singapore, Slovenia, United Kingdom, United States, and the European Union. Drawing from
the OECD recommendations on AI policy, four working groups will be created to address the following themes: 1)
responsible AI; 2) data governance; 3) the future of work; and 4) innovation and commercialization. GPAI will be
supported by a Secretariat, to be hosted by the OECD, as well as by two “centers of expertise” (in Montreal and
Paris). Cf. “Joint Statement from founding members of GPAI,” https://www.diplomatie.gouv.fr/en/french-
foreign-policy/digital-diplomacy/news/article/launch-of-the-global-partnership-on-artificial-intelligence-by-15-
founding.
With the net advantage of several fully functioning institutional structures, Europe has
been moving faster than other regions. The European Commission created in June 2018 a High-
Level Expert Group on Artificial Intelligence (AI HLEG) to support the implementation of the
European Strategy on AI, including recommendations on future-related policy development
and ethical, legal, and societal issues related to this technology. In addition to guidelines on AI
and data protection, the Council of Europe set up in 2019 an Ad Hoc Committee on Artificial
Intelligence (CAHAI) to conduct consultations on a legal framework for the development,
design, and application of AI, based on the Council’s standards on human rights, democracy,
and the rule of law. Also, the European Commission has been preparing to introduce AI
legislation for civilian applications and will want its new regulatory framework to have far-
reaching global effect, akin to the General Data Protection Regulation, which was adopted in
2016 to establish rules on privacy and the transfer of personal data.31
In May 2019, the OECD adopted recommendations to promote and implement far-
reaching AI principles, the first ever intergovernmental standard of its kind. Governments
signed up to actively cooperate and work together to advance these principles by encouraging
international, cross-sectoral, and open multi-stakeholder initiatives on a consensual basis. The
recommendations included two substantive sections: 1- Principles for responsible stewardship
of trustworthy AI: i) inclusive growth, sustainable development and well-being; ii) human-
centred values and fairness; iii) transparency and explainability; iv) robustness, security and
safety; and v) accountability; and 2- National policies and international cooperation for
trustworthy AI: i) investing in AI research and development; ii) fostering a digital ecosystem for
AI; iii) shaping an enabling policy environment for AI; iv) building human capacity and
preparing for labour market transformation; and v) international cooperation for trustworthy
AI.32 In February 2020, the OECD.AI Policy Observatory was launched as a platform to “shape
and share” AI policies across the globe, so as to provide policymakers with reliable guidance on
the implementation of AI principles in real-world situations.33
Thanks to the countless lists of AI principles laid out over the last few years, as Newman
argued, a sort of “normative core” is emerging, but much remains to be done in terms of
translating these guidelines into practice. Hence the importance of intergovernmental initiatives
that can function as a hub for AI governance, allow for international coordination in
implementing shared principles, and serve as a counterpoint to "AI nationalism."34 These steps,
however, have been heavily concentrated in a few States, notably Western developed
countries. Some major players, such as China and Russia, are often absent from these circles of
31 GDPR is a regulation, not a directive, directly binding and applicable, and became a model for many national
laws outside the European Union, including Chile, Japan, Brazil, South Korea, Argentina, and Kenya. First meeting
of the Ad hoc Committee on Artificial Intelligence (CAHAI), Council of Europe, Strasbourg, 15 November 2019,
https://www.coe.int/en/web/human-rights-rule-of-law. See also European Commission’s AI HLEG,
https://ec.europa.eu/digital-single-market/en/high-level-expert-group-artificial-intelligence.
32 Recommendation of the Council on Artificial Intelligence (Paris: OECD/LEGAL/0449, 21 May 2019),
https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449.
33 For further information and additional documents, visit the OECD website at https://oecd.ai.
34 J. C. Newman, Decision points in AI governance: three case studies explore efforts to operationalize AI principles
(U.C. Berkeley, Center for Long-Term Cybersecurity, White Paper Series, 2020, 30-40),
https://cltc.berkeley.edu/2020/05/05/new-cltc-report-decision-points-in-ai-governance.
like-minded States, with the notable exception of the G20, which endorsed in Japan, in June 2019,
non-binding AI principles explicitly drawing from the OECD recommendations.35 Even more
striking is the fact that the Global South is underrepresented in this debate, with many areas of
Africa, Asia, Latin America, and the Caribbean largely absent from this vital conversation.
Taking as a yardstick, for instance, the number of AI national strategies adopted at the domestic
level, the overwhelming majority comes from high-income countries.36
At the global level, there is wide convergence on general ideas, but little agreement on
the details of exactly how they should be interpreted, prioritised, or implemented. Moving from
soft law to hard regulation will be a tough nut to crack, especially when the goal is to frame a
regulatory policy before the technology is fully understood and developed. In matters
pertaining to power, wealth, and security among States, the continuation of self-help, beggar-
thy-neighbour policies may increase incentives for national trajectories that reinforce great-
power competition and thwart attempts at galvanising multilateral efforts. Greater inter-
stakeholder cooperation is sorely needed to mutually align different AI ethics agendas in key
areas, such as transparency, accountability, fairness, privacy, and responsibility. Multilateralism
cannot provide all the answers, but its long-established architecture for broad representation
and diplomatic bargaining can be a readily available alternative for working out much-needed
compromise where conflict seems to prevail. Can the UN come to the rescue?
What has the United Nations been doing, and could it do more?
In his address to the General Assembly in September 2018, UN Secretary-General
António Guterres warned that “multilateralism is under fire precisely when we need it most.”
He highlighted two “epochal challenges” in particular: climate change, which is of course of
great concern, but not the subject of this chapter; and risks associated with advances in
technology, from mass economic unemployment to cybercrime and malicious use of digital
tools.37 Guterres cautioned against the weaponisation of AI and the possibility of a dangerous
arms race, including the development of lethal autonomous weapons systems, going so far as
to contend that “the prospect of machines with the discretion and power to take human life is
morally repugnant.” Reduced oversight of these weapons, he added, could severely compromise
efforts to contain threats, prevent escalation, and ensure compliance with international
humanitarian law and human rights law. The Secretary-General urged Member States “to use
35 Emblematic of differences of opinion, the G20 did not extend its support to Section 2 recommendations on
“national policies and international cooperation,” G20 Ministerial Meeting on Trade and Digital Economy,
Tsukuba, 8–9 June 2019, https://www.mofa.go.jp/files/000486596.pdf.
36 Campbell noted that “there are few states with AI national strategies or plans or significant investments in several
geographic regions across the globe, including: South America, Central America, Eastern Europe, Central Asia,
Southeast Asia, and Africa.” T. A. Campbell, Artificial intelligence: an overview of state initiatives, UNICRI and
FutureGrasp, 2019, cf. Executive Summary, http://www.unicri.it/in_focus/files/Report_AI-
An_Overview_of_State_Initiatives_FutureGrasp_7-23-19.pdf.
37 Address of the UN Secretary-General to the 73rd General Assembly, 25 September 2018,
https://www.un.org/sg/en/content/sg/speeches/2018-09-25/address-73rd-general-assembly.
the United Nations as a platform to draw global attention to these crucial matters and to nurture
a digital future that is safe and beneficial for all.”38
An electrical engineer by background, Guterres was the driving force behind the
High-Level Panel on Digital Cooperation, which I will discuss in the next section. Back in 2018,
he had launched his own Strategy on New Technologies, with the objective of defining how the
UN system will support the use of these technologies to accelerate the achievement of the 2030
Sustainable Development Agenda and to facilitate their alignment with the values enshrined in
the UN Charter, the Universal Declaration of Human Rights, and the norms and standards of
international law. 39 In the following year, the UN Chief Executives Board for Coordination
adopted guidelines for a system-wide strategic approach for supporting capacity development
on AI.40 By the same token, in 2020 some 50 UN entities jointly designed a Data Strategy as a
“comprehensive playbook based on global best practice” to improve coordination on building
the data, digital, technology and innovation capabilities the UN needs to succeed in the 21st
century. The announcement of the Data Strategy by the Secretary-General was somewhat
overshadowed in the news by the COVID-19 pandemic, but its long-term vision will remain a
reference for the Organisation in the years ahead.41 Guterres also gives his backing to UN Global
Pulse, a hands-on initiative established in 2009 to harness the power of big data, increasingly
working in recent years to develop AI tools for development, humanitarian action, and peace.
Its staff work through a network of innovation labs, which operate in New York, Jakarta, and
Kampala.42
The main downsides of a scattered environment of policies and programmes are
duplication of efforts, overlap, and inter-bureaucratic feuding among UN agencies to occupy
niches and exert leadership. Most of the UN initiatives on new technologies are focused upon
the implementation of the Sustainable Development Goals (SDGs), such as the annual Science,
Technology, and Innovation Forum of the Economic and Social Council (ECOSOC). On the AI
front, the flagship UN event for global dialogue with the wider public is the AI for Good Global
38 See also Current developments in science and technology and their potential impact on international security and
disarmament efforts, Report of the UN Secretary-General, General Assembly, A/73/177, 17 July 2018,
https://www.un.org/disarmament/publications/library/73-ga-sg-report.
39 Among the pledges and commitments to pursue this strategy were the following: deepening the UN’s internal
capacities and exposure to emerging technologies; increasing understanding, advocacy, and dialogue; supporting
dialogue on normative and cooperation frameworks; and enhancing UN system support to government capacity
development. Cf. UN Secretary-General’s strategy on new technologies, New York, September 2018, 3-5,
https://www.un.org/en/newtechnologies/images/pdf/SGs-Strategy-on-New-Technologies.pdf.
40 This document, “A United Nations system-wide strategic approach and road map for supporting capacity
development on artificial intelligence,” put a significant emphasis on capacity-building for developing countries.
Geneva, 17 June 2019, https://digitallibrary.un.org/record/3811676.
41 Their goal is to implement a “data action” agenda for a data-driven transformation of the Organisation engaging
the whole UN system. Data Strategy of the Secretary-General for action by everyone, everywhere, New York, May 2020,
https://www.un.org/en/content/datastrategy.
42 The UN Global Pulse seeks to utilise AI and digital data to gain a better understanding of changes in human well-
being and to “get real-time feedback on how well policy responses are working.” Cf.
https://www.unglobalpulse.org and E-analytics guide: using data and new technology for peacemaking, preventive
diplomacy, and peacebuilding, UN Global Pulse, April 2019, https://www.unglobalpulse.org/document/e-
analytics-guide-using-data-and-new-technology-for-peacemaking-preventive-diplomacy-and-peaceuilding.
Summit, hosted every year in Geneva by the International Telecommunication Union (ITU), in
partnership with other UN agencies, the XPRIZE Foundation (an organisation running incentive
prize competitions), and the Association for Computing Machinery (ACM).43
The UN Educational, Scientific, and Cultural Organisation (UNESCO) has been
promoting a humanistic approach on ethics, policy, and capacity-building in response to
emerging challenges related to AI, including philosophical reflections on what it means to be
human in the face of disruptive technologies. Playing a key role in this regard, the World
Commission on the Ethics of Scientific Knowledge and Technology (COMEST), an advisory
body created in 1998 to give policy advice for decision-makers, pioneered workshops and
roundtables leading up to the publication of a report on robotics ethics, with a view to
positioning UNESCO as a valuable instrument for intellectual exchanges on this matter. 44
Furthermore, the first International Research Centre on Artificial Intelligence (IRCAI), under the
auspices of UNESCO, will have its seat in Slovenia, aiming to provide a coordination point,
funding route, and exploitation accelerator for approaches to the SDGs that make use of AI. 45
More recently, an Ad Hoc Expert Group of 24 members was tasked by UNESCO to
produce, in a two-year process, the first draft of a global standard-setting instrument on ethics
of AI. A preliminary study released in July 2019 stressed that AI is not confined to a tangible
location, which makes regulating AI technology more challenging both nationally and
internationally. Given the transnational character of these technologies, durable solutions need
to be found at the global level. A normative instrument on the ethics of AI should serve as a means of
mainstreaming universal values into AI systems, which must be compatible with internationally
agreed human rights and standards, and be aligned with a human-centred vision.46 Some experts
believe that UNESCO’s contribution, within its mandate, could be complementary to other
initiatives under way, such as by the OECD, but with a focus upon aspects that are generally
neglected: culture, education, science, and communication. The definitive format of this
normative document (either a declaration, recommendation, or a convention to be approved by
Member States) will be decided by UNESCO’s General Conference by the end of 2021. However,
a non-binding recommendation on basic principles has so far been considered more flexible and
better suited to the complexity of the ethical questions raised by AI.
Following internal consultations and virtual meetings, the Ad Hoc Expert Group made
available online, in May 2020, a “zero draft” of its outcome document to invite comments from
the public. In addition to values and principles drawn from or inspired by the consensus
achieved in the 80+ published frameworks related to AI ethics, areas of policy action were given
special attention: promoting diversity and inclusiveness; addressing labour market changes and
43 The AI for Good Global Summit brings together speakers from governments, the industry, academia, and civil
society to discuss how AI can be used to achieve inter alia results in ending poverty, alleviating hunger, promoting
health, and identifying development solutions. AI for Good Global Summit 2019 (Summit Insights, Geneva, 28-31
May 2019, International Telecommunication Union and XPRIZE Foundation),
https://itu.foleon.com/itu/aiforgood2019/home.
44 See in particular Report of COMEST on robotics ethics (Paris: UNESCO, 2017),
https://unesdoc.unesco.org/ark:/48223/pf0000253952.
45 International Research Centre on Artificial Intelligence (IRCAI), https://ircai.org.
46 Preliminary study on a possible standard-setting instrument on the ethics of artificial intelligence, UNESCO General
Conference, 40 C/67, 30 July 2019, https://unesdoc.unesco.org/ark:/48223/pf0000369455.
the social, economic, cultural, and environmental impact of AI; fostering education, awareness,
international cooperation, governance mechanisms, and AI ethics research and development;
and ensuring trustworthiness of AI systems, responsibility, accountability, and privacy. 47 The
experts of the Group are especially mindful of the need for a multidisciplinary, holistic
approach, with due regard for human rights. They mostly agree that the UN has a significant role
to play, akin to a “beacon,” empowering people and increasing participation by all sectors of
society.48
Also important, the UN Interregional Crime and Justice Research Institute (UNICRI)
established in 2017 a Centre for Artificial Intelligence and Robotics in The Hague, with the aim
of disseminating information, undertaking training activities, and promoting public awareness.
Directed by Irakli Beridze, the Centre has been active in cybercrime, law enforcement (in
partnership with Interpol), criminal justice, counterterrorism, and malicious use of AI from the
perspective of its mandate. In 2018, the “AI & Global Governance” website of the Centre for
Policy Research of the United Nations University (UNU) began to publish cross-disciplinary
insights online to “inform existing debates from the lens of multilateralism,” as a tool for
Member States, multilateral agencies, funds, programmes, and other partners.49 Other noteworthy
activities include the following: research on AI-related technologies conducted by the UN
Institute for Disarmament Research (UNIDIR); the UN Innovation Network, connecting a
collaborative community within the UN system; the Innovation Cell of the UN Department of
Political and Peacebuilding Affairs (DPPA), aimed at incubating and leveraging use cases in
peace and security; as well as a myriad of projects developed by UN agencies to apply AI in
their daily practice in the field.50
At this stage, a distinction must be made between civilian AI applications and their
deployment for military purposes. Civilian and military uses will presumably follow different
multilateral tracks on the road to future international norms. Finding common ground on AI-
enabled military capabilities will be no easy task. Amid growing polarisation and frictions over
various hotspots, the international security environment does not seem conducive to sweeping
global agreements in a very short time.51 In disarmament and arms control, unilateralism and
dissent among major players render political commitments more troublesome and undermine
47 Outcome document: first version of a draft text of a recommendation on the ethics of artificial intelligence. Ad Hoc
Expert Group (AHEG) for the preparation of a draft text of a recommendation on the ethics of artificial intelligence,
UNESCO, SHS/BIO/AHEG-AI/2020/4 Rev, 15 May 2020,
https://unesdoc.unesco.org/ark:/48223/pf0000373434.
48 Interview with Professor Edson Prestes, member of UNESCO’s Ad Hoc Expert Group and member of the UN
High-Level Panel on Digital Cooperation, 2 June 2020. The work of the IEEE is an important reference in this area:
Ethically aligned design: a vision for prioritising human well-being with autonomous and intelligent systems (New
York: Institute of Electrical and Electronics Engineers, 2018), https://ethicsinaction.ieee.org.
49AI & Global Governance, Centre for Policy Research, UNU, https://cpr.unu.edu/tag/artificial-intelligence.
50 United Nations activities on artificial intelligence (Geneva: International Telecommunication Union, 2019),
https://www.itu.int/dms_pub/itu-s/opb/gen/S-GEN-UNACT-2019-1-PDF-E.pdf.
51 R. Gowan, “Muddling through to 2030: the long decline of international security cooperation,” Articles & insights,
Centre for Policy Research, UNU, 24 October 2018, https://cpr.unu.edu/muddling-through-to-2030-the-long-
decline-of-international-security-cooperation.html.
attempts at negotiating multilateral solutions. UN-sponsored processes on security and stability
in cyberspace are another example of remarkable difficulties in making meaningful headway.52
In international security, one of the most pivotal AI-related multilateral discussions has
been taking place in the open-ended Group of Governmental Experts (GGE) on emerging
technologies in the area of lethal autonomous weapons systems, under the UN Convention on Certain
Conventional Weapons (CCW). These painstaking deliberations in Geneva are revealing of
opportunities and predicaments encountered along the way. As decisions must be made by
consensus, diverging views hamper swift progress. Meetings revolve around lengthy debates on
methodology, definitions, and whether States should negotiate principles or constraints (if any)
on fully autonomous weapons. Military powers that are actively pursuing ways to mobilise AI
capabilities have been opposing the introduction of any restrictions on these technologies. Other
countries support a pre-emptive ban on the grounds that “killer robots” will not be able to
comply with international humanitarian law and maintain human responsibility for the use of
force.53 No agreement has been reached so far on proposals for a legally binding instrument to
ensure “meaningful human control” over such weapons. The GGE will continue to hold
meetings until 2021.54
Admittedly, such diplomatic talks are sensitive to political susceptibilities and need to
receive continued reassurance to prevent pushback. The UN Secretary-General has a principled
position against lethal autonomous weapons systems, but creating another track at this moment
will probably meet resistance from some Member States. In the 11 guiding principles agreed by
the GGE in 2019, the delegations reaffirmed that “the CCW offers an appropriate framework for
dealing with the issue,” within the context of the objectives and purposes of the CCW, “which
seeks to strike a balance between military necessity and humanitarian considerations.” 55
Alternatively, the UN leadership can push for international cooperation for the peaceful uses of
responsible AI or some other language discouraging its militarisation. Yet, if these principles
were openly championed by the Organisation, Member States could read them in terms of their
political implications and likely impact on governmental AI policymaking, a scenario which
may end up raising questions about their advisability and suitability. All in all, any
outcome on autonomous weapons is presently dependent upon the timeframe of the GGE.
In the meantime, there would still be political space for States to address international
regulation of civilian AI applications and how to ensure that guardrails and governance tools will
be in place sooner rather than later. To begin with, ongoing initiatives, such as those mentioned
52 C. Ruhl et al., Cyberspace and geopolitics: assessing global cybersecurity norm processes at a crossroads
(Washington, DC: Carnegie Endowment for International Peace, Working Paper, February 2020),
https://carnegieendowment.org/2020/02/26/cyberspace-and-geopolitics-assessing-global-cybersecurity-norm-
processes-at-crossroads-pub-81110.
53 Critics of the ban claimed that this alternative would be “impractical.” F. Slijper et al., State of AI: artificial
intelligence, the military and increasingly autonomous weapons (Utrecht: PAX, April 2019),
https://www.paxvoorvrede.nl/media/files/state-of-artificial-intelligence--pax-report.pdf.
54 Apart from a few more active delegations, the number of experts from the Global South taking the floor and
making proposals in the GGE is remarkably low. Report of the 2019 session of the Group of Governmental Experts
on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems, Geneva, 21 August 2019,
CCW/GGE.1/2019/CRP.1/Rev.2, https://www.unog.ch.
55 Report of the 2019 session of the GGE.
in the previous section, could be utilised as stepping-stones to persuade leaders in the field to
move forward and create the conditions for a credible UN multilateral process in the long run.
Accordingly, one way for the UN to have a more prominent role is to seek ways to make the
transition from civil society and private sector initiatives to governmental policymaking at two
levels of implementation: domestic and international.
At the domestic level, one should expect more calls for national legislation on civilian and
commercial AI applications in many countries. Hard regulation on standards and certifications
may face opposition in some quarters. In the end, if and when such norms are ever adopted, in
most cases national governments will have the power to enforce them. The UN can facilitate the
debate by sharing knowledge and best practices whenever possible, but national choices will
ultimately belong to each State concerned.
At the international level, it is still too early to predict whether there will be room to move
from voluntary or confidence-building measures to legally binding instruments anytime soon.
Higher stakes, economic interests, and political considerations increase the probability that
diverging views and other factors will come into play to stall breakthroughs in the
international governance of AI. Yet, the contributions of the OECD, the European Union, and other
groups could perhaps give the UN leverage to call for discussions that are truly global in scope
and representation. The UN would be in a privileged position to secure an inclusive platform
for cooperation with the participation of multiple stakeholders from the AI community across
business, research, and development. The next section will assess current prospects for concrete action
in this domain.
The High-Level Panel on Digital Cooperation and the way ahead
The High-Level Panel (HLP) on Digital Cooperation was established by the UN
Secretary-General in 2018 to look into proposals to build trust and cooperation between Member
States and other stakeholders, including the private sector, research centres, civil society, and academia. In
its report, released in June 2019, the HLP envisaged potential roles for the UN to add value in
the digital transformation: as a convener; providing a space for debating values and norms;
generating standard setting; holding multi-stakeholder or bilateral initiatives on specific issues;
developing the capacity of Member States; ranking, mapping, and measuring cybersecurity; and
making available arbitration and dispute-resolution mechanisms.56
The HLP report put forward several recommendations, such as inviting stakeholders to
commit to a “Declaration of Digital Interdependence,” creating regional and global digital help
desks, and adopting in 2020 a “Global Commitment for Digital Cooperation” to consolidate in
a single political document shared values, principles, understandings, and objectives regarding
the governance of cyberspace. Also, in recommendation 3C, it did not shy away from making
principled statements of ethical significance, the stipulation that “life and death decisions should
not be delegated to machines” being a case in point. 57
56 The age of digital interdependence: Report of the UN Secretary-General’s High-Level Panel on Digital Cooperation,
New York, 10 June 2019, 4-5, https://digitalcooperation.org/report.
57 Recommendation 3C reads as follows: “We believe that autonomous intelligent systems should be designed in
ways that enable their decisions to be explained and humans to be accountable for their use. Audit and certification
Recommendation 3C, the most relevant guidance for AI governance contained in the HLP
report, called for engineering and ethical standards to be developed through multi-stakeholder
and multilateral approaches. No indication was given, however, on how multilateral
mechanisms could be put in motion to implement this recommendation. Risks can be mitigated
by means of norm-setting, agreed rules and standards, and responsible governance strategies to
promote safe and beneficial AI. But for the UN to effectively encourage implementation, a
strategic approach is required to avoid the expected pitfalls when powerful technologies meet
international politics.
One of the submissions to the HLP, presented by experts from Oxford and Cambridge,
maintained that the international governance of AI should be anchored in a regime under the
UN meeting the following criteria: being “inclusive (of multiple stakeholders), anticipatory (of
fast-progressing AI technologies and impacts), responsive (to the rapidly evolving technology
and its uses), and reflexive (critically reviews and updates its policy principles).” It could be
centred on a dedicated, legitimate, and well-resourced regime, possibly taking various
forms: a UN specialised agency (such as the World Health Organisation), an organisation related
to the UN (such as the World Trade Organisation), or a subsidiary body of the General Assembly
(such as the UN Environment Programme). This regime should fulfil the objectives of
coordination, comprehensive coverage, cooperation over competition, and collective benefit.
They recommended, among its key components, the creation of a coordinator to develop a
system-wide AI engagement strategy to catalyse efforts, multilateral treaties, and arrangements
to govern AI; an Intergovernmental Panel to provide an authoritative voice on the state and
trends of AI technologies; and a “UN AI Research Organisation (UNAIRO)” to focus upon
building AI tools “in the public interest,” as well as conducting basic research on improving
these techniques in a safe, careful, and responsible environment.58
For these ideas to become operational, Member States must be on board. Within an
adversarial context permeated by political quarrels, clashing views, and other stumbling blocks,
adding layers and institutions tends to raise costs and slow coordination in an already
fragmented ecosystem. Even if new structures could be brought together under an umbrella
body, budgetary constraints, which have been causing chronic financial stress to the UN, would
trigger opposition by many Member States. Streamlined, low-cost arrangements would be
preferable to perform a coordinating role, building upon existing institutions and initiatives in
the UN system.
Digitalisation and AI will definitely stay on the international agenda for the foreseeable
future. The HLP report endorsed “multi-stakeholderism” to get a more diverse spectrum of
schemes should monitor compliance of AI systems with engineering and ethical standards, which should be
developed using multi-stakeholder and multilateral approaches. Life and death decisions should not be delegated
to machines. We call for enhanced digital cooperation with multiple stakeholders to think through the design and
application of these standards and principles such as transparency and non-bias in autonomous intelligent systems
in different social settings.” Ibid. 38.
58 They suggested an “innovative model of multipartite representation and voting,” mirrored in the International
Labour Organisation. L. Kemp et al., “UN High-level Panel on Digital Cooperation: a proposal for international AI
governance,” Submission by Centre for the Study of Existential Risk, University of Cambridge, and Centre for the
Governance of AI, Future of Humanity Institute, Oxford University, 2019,
https://digitalcooperation.org/responses.
participants involved in governance in the digital sphere. One of the options was to revamp the
Internet Governance Forum, in order to advance the concept of an IGF+ model, open to all
stakeholders and institutionally connected to the UN system.59 As Kaljurand pointed out, the
IGF+ would comprise five bodies: 1) an advisory group appointed by the Secretary-General; 2) a
cooperation accelerator; 3) a policy incubator to propose norms and policies; 4) an observatory
and help desk to give advice on drafting legislation or tackling crisis situations; and 5) a trust
fund linked to the Executive Office of the Secretary-General “to reflect its interdisciplinary and
system-wide approach.”60
Cross-references with the cyber domain are indeed relevant for AI governance, to the
extent that digital technologies increasingly intersect with machine learning and other AI-
enabled techniques.61 Moving from “Internet governance” to “digital cooperation” has already
been, in a way, a conceptual enlargement of the terrain the UN would seek to chart. Could this
broadening of the intended scope backfire? It is clear by now that the IGF+ model is the option
of choice on the table, since the IGF “already has a UN mandate, an institutional form of sorts,
and governmental and stakeholder support.”62 Some questions remain though: would the UN
ultimately embrace AI and cyberspace in a single undertaking as far as governance is concerned?
Would this avoid duplication, institutional fatigue, or more bureaucratic layers that can prove
inefficient? Or, contrariwise, are these two domains of interest utterly different or too complex
to be treated as a unified field?
Against this background, the 75th anniversary of the UN in 2020 was expected to be a key
milestone for a relaunching of the Organisation. The theme chosen for the commemorations
(“The future we want, the UN we need: reaffirming our collective commitment to
multilateralism”) set the tone and invited Member States to acknowledge and seriously examine
those technologies most likely to have a decisive impact in our lives, linking these concerns with
the need for a revitalised multilateral system. It was a timely opportunity to counter frequent
criticism of flaws attributed to the UN, but preparations were severely affected by the COVID-
19 pandemic and the ensuing interruption of the regular day-to-day work at the UN
Headquarters in New York. Consequently, in-person meetings were cancelled, others
postponed, and projects put on hold. With a staggering death toll, the pandemic demonstrated
tragically that cross-border issues respect no political boundaries and call for collective action to
manage their most dramatic consequences.63
59 The HLP report put forward three potential models for global digital cooperation: a strengthened and enhanced
IGF+; a distributed co-governance architecture; and a digital commons architecture.
60 M. Kaljurand, “From IGF to IGF+,” In W. Kleinwächter et al., Towards a global framework for cyber peace and digital
cooperation: an agenda for the 2020s (Berlin: Federal Ministry of Economics and Technology, 2019), 54-56,
https://www.hans-bredow-institut.de.
61 See, for instance, Y. Lannquist et al., “The intersection and governance of artificial intelligence and cybersecurity,”
The Future Society, 21 May 2020, https://www.researchgate.net.
62 W. J. Drake, Considerations on the High-Level Panel’s “Internet Governance Forum Plus” model, CircleID, 4
November 2019, http://www.circleid.com/posts.
63 Turner cogently explained this point: “Given the arbitrary nature of international borders, there is no reason why
AI’s impacts should be self-contained within the country in which it originates. Instead, much like a wildfire,
tsunami or virus, AI’s impacts will cross man-made boundaries with impunity. The danger of a country being cross-
Despite the adversities, after consultations with champions and key constituents,64 the
Secretary-General released a roadmap in June 2020, as a follow-up to the HLP on Digital
Cooperation, reiterating the UN’s willingness to serve as a convener and platform for
multi-stakeholder policy dialogue. A number of recommendations were outlined on universal
connectivity, digital public goods, human rights, digital inclusion, surveillance technologies
(facial recognition), online harassment and violence, trust, security and stability, and for
strengthening the IGF. Guterres will appoint an Envoy on Technology in 2021 to advise the senior
leadership of the Organisation on key technological trends and serve as an advocate and focal
point for digital cooperation (a “first port of call” for the broader UN system). 65
Concerning recommendation 3C specifically, the roadmap highlighted three outstanding
challenges: a) lack of representation and inclusiveness in AI global discussions; b) inadequate
overall coordination of AI-related initiatives, “in a way that is easily accessible to other countries
outside the existing groupings;” and c) the need to build capacity and AI expertise, particularly
in the public sector, “to bring national oversight or governance to the use of such technologies.”
On the way forward, in order to address issues raised around “inclusion, coordination, and
capacity-building,” the Secretary-General declared his intention to establish a multi-stakeholder
advisory body on global AI cooperation to provide guidance that is “trustworthy, human-rights
based, safe and sustainable, and promotes peace.”66
The roadmap echoed two common-sense perceptions: a) that the UN is willing to play a
more pro-active role in AI governance; and b) that it has also been cautious, refraining from
premature measures that lack substantial support from the UN membership.67 As
an intergovernmental organisation relying upon decisions by Member States, the UN itself has
limited power, considering above all the fact that “national governments are reluctant to impede
innovation in an emerging technology by pre-emptory regulation in an era of intense
international competition,” as Marchant noted.68
A desirable prerequisite is to keep the major players engaged, so that AI governance can
be instrumental in providing safety, security, and stability to safeguard workable international
infected ought to encourage its national leaders to promote international standards as a matter of national self-
preservation as much as anything else.” Turner, Robot rules, 244.
64 The author of this chapter had the honour to join some of these consultations and contribute with inputs in the
preparations for the Secretary-General’s roadmap. A full list of participants in the round-table discussions is
available at www.un.org/en/digital-cooperation-panel.
65 Roadmap for digital cooperation: implementation of the recommendations of the independent High-Level Panel on
Digital Cooperation, Report of the Secretary-General, New York, 11 June 2020, 15,
https://www.un.org/en/content/digital-cooperation-roadmap.
66 According to the roadmap, this advisory body will comprise Member States, relevant UN entities, interested
companies, academic institutions, and civil society groups. Ibid. 17-18.
67 As a commentator put it, “much like its role during nuclear non-proliferation discussions, the UN must be able
to navigate the social disruptions resulting from ubiquitous AI adoption with finesse.” Nicholas Wright, “AI &
global governance: three distinct AI challenges for the UN,” Articles & insights, Centre for Policy Research, UNU,
7 December 2018, https://cpr.unu.edu/ai-global-governance-three-distinct-ai-challenges-for-the-un.html.
68 For these reasons, he added, “it is safe to say there will be no comprehensive traditional regulation of AI for some
time, except perhaps if some disaster occurs that triggers a drastic and no doubt poorly-matched regulatory
response.” G. Marchant, “Soft law” governance of artificial intelligence, AI Pulse, UCLA School of Law, 25 January
2019, https://aipulse.org/soft-law-governance-of-artificial-intelligence.
regulatory regimes. Governments pursuing nationalist agendas and large corporations
concerned with profitability may resist overly invasive multilateral processes.69 Many States,
nonetheless, attach great value to the UN as a broker to bridge differences among the
membership. The informal core group on exponential technological change has brought together
more than 50 Member States since 2017 to address technology issues at the UN and their links
to the SDGs. Following
the HLP report, the group decided to transition to a new expanded configuration to go beyond
the 2030 Agenda and include in the discussion all three pillars of the UN: peace and security,
human rights, and development. As a result, the first meeting of the new Group of Friends on
Digital Technologies took place in November 2019, in New York, co-chaired by the Permanent
Representatives of Finland, Mexico, and Singapore, to exchange views on its programme of
work for 2020 and explore the interlinkages between digital technologies and the three pillars,
including cooperation and governance in cross-cutting issues. Again, COVID-19 disrupted the
original plans, and the effectiveness of the Group of Friends remains to be tested.70
At this juncture, even though some proposals may require more time to mature, others
could be implemented in a more straightforward manner, without requiring a fully-fledged prior
commitment from States on the preconditions for establishing new criteria, benchmarks, and
rules at the global level. Consultative and non-binding international settings,
strictly on a voluntary basis, mindful of the principle of “do no harm,” could be feasible if
initially focused upon information gathering, independent analyses, and recommendations
geared at prevention instead of regulation per se.
Traditional, institutionally based intergovernmental diplomacy seems too slow and time-
consuming compared with the astounding pace of technological innovation. There is currently
a “governance gap,” as AI technology has been evolving much faster than international law’s
ability to keep up. This quandary partially explains why “an amorphous and constantly
evolving set of informal soft law governance mechanisms” is coming in to fill the void.71
Informal, ad-hoc, plurilateral initiatives spurred by like-minded countries (“coalitions of the
willing”) may at times add value to governance, but they usually lack universal appeal and
raise suspicion about their agendas in the eyes of States left outside these groups.
Trying to be too ambitious from the very beginning may lead to a dead-end and risk
backlash. A better solution to escape short-term paralysis is for the UN, with the authority and
legitimacy conferred to it by Member States, to offer a collective space, open to all, to encourage
69 In the age of global politics, normative guidance for the safe deployment of AI systems would benefit from
insights brought by tech-leaders and tech-takers alike. E. Pauwels, “How can multilateralism survive the era of
artificial intelligence?”, UN Chronicle LV, no. 3-4 (December 2018), https://unchronicle.un.org/article/how-can-
multilateralism-survive-era-artificial-intelligence.
70 During the first meeting, many delegations pledged their support to the Group of Friends, but a few Member
States warned that “duplication” should be avoided and that, for this reason, the Group should wait for the results
of ongoing processes in the UN General Assembly, such as the Open-Ended Working Group (OEWG) and the
Group of Governmental Experts (GGE) on developments in the field of information and telecommunications in the
context of international security. For more information on these two processes, see UNODA’s website:
https://www.un.org/disarmament/ict-security. On multilateral frameworks in cybersecurity, see UNIDIR’s
portal: https://cyberpolicyportal.org/en/multilateral-legislation.
71 R. Hagemann et al., “Soft law for hard problems: the governance of emerging technologies in an uncertain future,”
Colorado Technology Law Journal, 5 February 2018, https://ssrn.com/abstract=3118539.
cooperation and advance recommendations.72 Besides mobilising leading experts and
exchanging information, the new AI advisory body announced in the Secretary-General’s
roadmap can collect evidence to reduce public misinformation and fear about AI risks, so that
myth, hype, and misunderstanding are dispelled and duly separated from the body of technical
knowledge available so far.73 Although not mandated to engage in normative deliberations, the
invited experts could promote best practices and start discussing standardisation and
compliance in search of commonalities, which can over time generate the critical mass to
expand their work to other areas as appropriate.
There is a long way to go before the conditions are ripe for the adoption of international
rules on audit and certification schemes applicable to AI technologies. Advisory bodies, such as
the one the Secretary-General decided to establish, can be conceived as a first step in that
direction. Taking advantage of the expertise and knowledge produced in the UN system, priority
should be given to close coordination with UNESCO’s Ad Hoc Expert Group. An informal
coalition could be built to bring together the more forthcoming Member States in a building-block
strategy towards a multilateral process, under the auspices of the UN, in a place and timeframe yet to be
determined. If successful, this incremental approach could hopefully enjoy increasing support
and become a precursor for advancing institution-building in this area. It may eventually give
rise to a comprehensive, IPCC-modelled “Intergovernmental Panel on Artificial Intelligence,”
or another permanent structure that can respond to the demands and expectations of our time.
Conclusion
On closer inspection, AI governance is not just about regulation or imposing restrictions,
but also about encouraging prevention, horizon scanning, and foresight. It may take more time than
expected for the international community to start moving in earnest towards consequential
measures on AI policymaking. Effective multilateralism essentially means that international
issues should ideally be addressed in good faith by all interested parties, following procedures
commonly agreed upon, upholding the rule of law, fairness, and both geographical and gender
balance, in order to reach political solutions that can accommodate all views and concerns as
much as possible. Against all the odds, the UN is well positioned to offer a structured machinery
for cooperation among Member States and other stakeholders to address the impact of emerging
technologies.74
72 An example along these lines would be the creation of a “Global Foresight Observatory” on the convergence of
AI with other emerging technologies, i.e. a multi-stakeholder platform to foster cooperation in technological and
political preparedness for responsive innovation. This sort of initiative fits well with what the UN can do even in
hard-knock situations. E. Pauwels, The new geopolitics of converging risks: the UN and prevention in the era of AI,
Centre for Policy Research, UNU, 2 May 2019, 53, https://cpr.unu.edu/the-new-geopolitics-of-converging-risks-
the-un-and-prevention-in-the-era-of-ai.html.
73 Many private companies may welcome this approach, since too much emphasis on the “dark side” of the
technology does not do justice to the resources the industry is investing in AI for the public good.
74 Concerning the internal, bureaucratic machinery of the Organisation: although not historically recognized as
an entity known for innovation, the UN has been taking some steps to incorporate new approaches to technology
into its toolkit, as previously noted in this chapter. A sustained effort will be needed to continue and expand this
trajectory going forward.
Again, these all-important issues concern all societies, and their consideration should not be
confined to a few influential actors. The larger AI debate would benefit from the participation
of more scholars, politicians, and policymakers of the Global South as a means of “bringing the
Rest in.”75 When stakes are too high and entail worldwide externalities, all countries are likely
to be affected in one way or another. Dealing with AI risks in the near- and long-term will
require political will, capacity-building, inclusiveness, and more diverse representation in a
plurality of settings. With the right amount of support from Member States, the UN can help
bridge the gap at the global level and provide a legitimate, representative, and policy-oriented
locus for deeper international cooperation on AI matters in the years ahead.
***
75 A. Acharya and B. Buzan, The making of global international relations: origins and evolution of IR at its centenary
(Cambridge: Cambridge University Press, 2019), 302.