Did UK's AI Summit Meet Expectations or Fall Short? A Closer Look

Updated: Nov 9, 2023



BLETCHLEY PARK, England — The UK government hosted the AI Safety Summit on 1-2 November 2023 at Bletchley Park, the once top-secret home of the World War Two codebreakers.


The US International Trade Administration values the UK AI market at $21 billion, making it the third largest AI market in the world after the US and China.

This pivotal global summit on artificial intelligence (AI) showcased the recognition of AI's potential transformative impact on the economy, society, and international affairs. The summit was distinctively focused on the 'frontier risks' associated with AI, which emanate from the training and development of advanced AI models, as opposed to risks from specific applications. Over 350 industry leaders had previously voiced their concerns through an open letter, warning about the existential threats AI could pose to humanity. The summit sought to navigate the divergent global approaches to AI regulation and governance, ranging from the self-regulatory models of the US and UK to the prescriptive approach of the EU and the state-led model of China. Here are some of the key expectations the summit was meant to address:


1. International Cooperation and Roadmap Development: The summit should result in a clear and detailed roadmap for international cooperation on AI safety.

Professor Robert F. Trager highlighted the lack of details on cooperation, stating,

The declaration states 'we resolve to work together' to ensure safe AI, but is short on details of how countries will cooperate on these issues.

2. Regulatory Frameworks Beyond Voluntary Compliance: The summit should move towards establishing binding regulatory frameworks rather than relying solely on voluntary regulation.

Professor Robert F. Trager also noted the summit's lean towards voluntary regulation, which he believes is likely insufficient, especially when compared to the US executive order that creates binding requirements.

"actors developing frontier AI capabilities, in particular those AI systems which are unusually powerful and potentially harmful, have a particularly strong responsibility for ensuring the safety of these AI systems."

The Bletchley Declaration, quoted by Professor Robert F. Trager


3. Understanding and Steering AI Development: The summit should emphasize the development of AI in ways that are understandable and controllable, addressing both safety and ethical concerns. Professor Vincent Conitzer expressed concern about the current trajectory of AI development, stating,

"Unfortunately, much of this technical progress has come along a branch of AI that makes it very difficult for us to understand or carefully steer what exactly the AI is doing."

4. Interdisciplinary Expertise and Comprehensive Risk Assessment: The summit should recognize the need for interdisciplinary expertise to understand and mitigate the vast array of AI risks, many of which are not yet fully understood. Professor Vincent Conitzer also mentioned the need for interdisciplinary expertise, emphasizing that

"the variety of concerns raised by AI, across both AI safety and AI ethics, is enormous."

5. Balancing Innovation with Democratic Values: The summit should aim to create a regulatory regime that supports open development of AI while ensuring alignment with democratic values and norms. Professor Keegan McBride stressed the importance of regulation that does not inhibit innovation, stating,

"To ensure that AI remains aligned with democratic values and norms it is essential that fears over the perceived risks of AI do not lead to policies which inhibit innovation."

On the first day of the UK AI Summit, leading AI nations convened to establish a shared understanding of the opportunities and risks posed by frontier AI, a significant stride towards international cooperation. Despite this progress, experts from the University of Oxford noted the declaration's lack of a detailed roadmap for such cooperation, hinting at a continued preference for voluntary regulation — a stance seen as potentially insufficient. The Summit's declaration stresses the onus on developers of frontier AI capabilities to ensure safety, echoing sentiments from the recent US executive order that advocates for binding requirements.


The Summit marks a crucial platform for global dialogue on AI safety and ethics amidst rapid technological advancements, underscoring the need for a multidisciplinary approach to understand and mitigate AI risks. As governments grapple with regulatory frameworks for AI, the decisions made now are poised to have lasting geopolitical ramifications, emphasizing the necessity for regulatory regimes that foster open development of innovative AI systems while aligning with democratic values and norms. Here are some of the highlights according to major news agencies.







Politico


The Summit was a significant achievement for British Prime Minister Rishi Sunak. Despite initial criticism and doubts, Sunak managed to gather nearly 30 countries, including the United States and China, to sign a shared communiqué addressing AI risks. The summit also led to the establishment of a global network of AI researchers and a groundbreaking agreement allowing governments to delve into advanced AI technologies.


Marion Messmer from Chatham House emphasized the importance of international cooperation in tackling AI challenges.
French Finance Minister Bruno Le Maire hailed it as a key milestone in regulating artificial intelligence effectively.

The announcement that South Korea and France would host future AI safety summits further underscored the success of the Bletchley Park summit, vindicating Sunak's decision to convene it.

Guardian

Despite differing opinions on the existential risks of AI, there was consensus on the immediate fears of disinformation, especially concerning upcoming elections in various countries. The varying pace of AI regulation across countries was evident, underscoring the importance of international summits to foster a shared understanding and approach towards AI governance and safety.


Reuters


At the recent AI summit held at Bletchley Park, AI developers and government leaders reached a significant milestone by agreeing to collaborate on testing new frontier AI models before their release, aiming to address the risks associated with rapidly advancing AI technology. This achievement was described as a "landmark achievement" and included key figures from the United States, the European Union, and China, who jointly committed to identifying and mitigating AI risks.

The event gathered politicians, academics, and tech executives to shape the future of AI and explore the possibility of establishing an independent global oversight body.
While China initially participated, it did not sign the agreement on testing, highlighting the complexities of international AI cooperation.

Notably, entrepreneurs like Elon Musk cautioned against rushing AI legislation and suggested that companies could play a crucial role in uncovering and addressing AI-related issues, promoting a balanced approach to AI's future.

Financial Times


Underlying tensions emerged concerning AI development. US Vice-President Kamala Harris asserted America's leadership in AI innovation and signaled an intention to set its own rules for AI. She said:

“Let us be clear: when it comes to AI, America is a global leader. It is American companies that lead the world in AI innovation. It is America that can catalyse global action and build global consensus in a way that no other country can.”

The debate also revolved around whether AI models should be "open" or "closed," with some advocating open-source models for broader access and others favoring closed models for better security and control. While debates between open and closed AI models continued, the focus shifted to the need for equitable access to AI advancements especially to ensure developing nations would not be left behind in the digital economy. Rajeev Chandrasekhar, the Indian minister of electronics and IT, emphasized that access to technology should be available to every country.


The future summits in South Korea and France will delve into concrete regulation and evaluation of AI models, indicating a commitment to address these global challenges comprehensively.


Silicon


The UK government's AI summit also showcased significant achievements in AI infrastructure development, with the announcement of a £225 million investment in the Isambard-AI supercomputer, designed to enhance AI and simulation compute capacity for research and industrial applications, including healthcare, green fusion energy development, and climate modelling.


The supercomputer will be linked to the University of Cambridge's Dawn supercomputer, making British AI supercomputing 30 times more powerful when combined.

This investment also supports the work of the Frontier AI Taskforce and the AI Safety Institute, focusing on analyzing advanced AI models for safety features and supporting government policy. The AI Research Resource, backed by a tripled investment of £300 million, aims to boost UK AI capabilities, positioning Isambard-AI as Britain's most advanced computer, using 5,000 advanced AI chips from Nvidia. This initiative reflects the UK's commitment to lead in adopting AI technology safely.



AI in Africa


According to Ehia Erhaboh, a UK-based AI researcher and Co-Convener of AI in Nigeria, AI safety is particularly important for Africa, considering one of the most potent risks that AI presents: disinformation. Given the significantly lower literacy rates in many sub-Saharan African countries compared to the global average, misinformation and disinformation could more easily steer African societies in directions that could be harmful. Furthermore, he said:

African governments who have demonstrated interest in this general-purpose technology should ‘walk the talk’ and take actions to build the capabilities for effective regulations that underpin AI innovation that leads to the wellbeing, peace, and prosperity of their citizens.

Eliud Owalo, Kenya’s Cabinet Secretary for Information Communications and Digital Economy, said:

Through this broad coalition of partners, AI's potential benefits will open opportunities and preparedness for its risks will be broadened. This partnership will benefit all countries and ensure that developing countries are not left behind in the AI revolution.

Paula Ingabire, Rwanda’s Minister of Information Communication Technology and Innovation, said:

Africa has historically lagged behind in previous technological revolutions due to a lack of local production and value addition capacity. Rwanda is fully committed to harnessing the transformative power of AI to drive our nation’s and continent’s social and economic development agenda by becoming the proof-of-concept hub that Africans produce from, for the continent.

We must take necessary steps to prevent the disenfranchisement of people due to unpleasant experiences of abuse and also misuse. This risk can further exacerbate existing social economic inequalities that limit the significant impact that AI brings to our world.

We must therefore ensure the benefit of AI is accessible to all without discrimination or exclusion.

By doing this we will create a world that presents opportunity as a collective birthright and not as a privilege for a selected few. - Dr. Bosun Tijani, Nigeria’s Honourable Minister for Communications, Innovation and Digital Economy.

“The benefits of tech should be available to . . . every country in the world.” said Rajeev Chandrasekhar, Indian minister of electronics and IT.

The UK and its partners announced an £80 million collaboration at the AI Safety Summit to fund safe and responsible AI projects for development around the world, beginning in Africa.



Did the Summit meet expectations?


While the summit made significant strides in fostering international cooperation and highlighting the importance of AI safety, it also exposed certain areas of concern.


One of the key achievements of the summit was the signing of the Bletchley Declaration, in which nearly 30 countries committed to addressing AI risks and collaborating on the testing of new frontier AI models. This represents a notable step towards international consensus on AI safety. It is worth noting, however, that while China participated in the discussions, it did not sign the agreement on testing.


Additionally, the establishment of the AI Safety Institute and the appointment of Prof. Yoshua Bengio to lead a 'State of the Science' consensus report are significant contributions to advancing AI safety research and regulation. Furthermore, the inclusion of China in the discussions is crucial for meaningful global AI regulation.


However, the summit also revealed some challenges. There was a lack of detailed guidance on how international cooperation would be implemented, leaving questions about the effectiveness of voluntary regulation. The focus on long-term existential risks, while important, should not overshadow the pressing issues of AI-induced harms, such as disinformation and surveillance. It is essential to strike a balance between fostering innovation and ensuring democratic values in AI development.


In conclusion, Sunak deserves credit for taking on the role of first mover, and his legacy continues with South Korea and France confirmed as hosts of future AI safety summits. The UK's AI Safety Summit made considerable progress in addressing AI safety and ethics on a global scale. It emphasized the need for cooperation, interdisciplinary expertise, and regulatory frameworks to navigate the complexities of frontier AI. While challenges remain, the summit laid the groundwork for future discussions and actions to make AI safe and beneficial for all.



References


  1. Expert Comment: Leading AI nations convene for day one of the UK AI Summit. https://www.ox.ac.uk/news/2023-11-01-expert-comment-leading-ai-nations-convene-day-one-uk-ai-summit

  2. Sunak’s AI summit scores ‘diplomatic coup’ but exposes global tensions https://www.ft.com/content/1719ec70-e183-4d83-8491-f5a76d9f5a78

  3. Five takeaways from UK’s AI safety summit at Bletchley Park https://www.theguardian.com/technology/2023/nov/02/five-takeaways-uk-ai-safety-summit-bletchley-park-rishi-sunak

  4. Sunak the influencer: How the UK’s AI summit surprised the skeptics. https://www.politico.eu/article/sunak-the-influencer-how-the-uks-ai-summit-surprised-the-skeptics/

  5. At UK's AI Summit developers and govts agree on testing to help manage risks. https://www.reuters.com/world/uk/uk-pm-sunak-lead-ai-summit-talks-before-musk-meeting-2023-11-02/

  6. AI Safety Summit 2023: UK To Invest £225m For AI Supercomputer. https://www.silicon.co.uk/cloud/server/ai-safety-summit-2023-uk-to-invest-225m-for-ai-supercomputer-537233

  7. United Kingdom Artificial Intelligence Market 2023. https://www.trade.gov/market-intelligence/united-kingdom-artificial-intelligence-market-2023

  8. Expert comment: Oxford AI experts comment on the outcomes of the UK AI Safety Summit. https://www.ox.ac.uk/news/2023-11-03-expert-comment-oxford-ai-experts-comment-outcomes-uk-ai-safety-summit

  9. The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023 - GOV.UK

  10. UK unites with global partners to accelerate development using AI https://www.gov.uk/government/news/uk-unites-with-global-partners-to-accelerate-development-using-ai


