Future of Life Institute Advocates for AI Safety Policies Ahead of the Global AI Safety Summit

Intellectual Anarchy

The UK will host the world's first Global AI Safety Summit in November 2023.

As Artificial Intelligence’s (AI) adoption by businesses and society has proliferated, concerns about its risks have escalated. Notably, in March 2023, more than 1,000 AI researchers and tech leaders signed an open letter to all AI labs highlighting the potential dangers. They urged firms such as OpenAI, Google, and Microsoft to halt further development of advanced AI models “more powerful than GPT-4” for six months, until global safety protocols could be established.

Now, six months later, a global protocol has still not been implemented, and heightened discussion surrounding AI has prompted the UK to hold the world’s first AI Safety Summit in late 2023. In anticipation of this summit, the Future of Life Institute (FLI), a non-profit dedicated to mitigating AI risks, has taken proactive steps: they have submitted a letter to the UK government offering AI safety recommendations to consider during the summit.

UK Global AI Safety Summit

Scheduled for November 1-2, 2023, the Summit will take place at Bletchley Park, a site chosen for its historic significance. Bletchley Park is the birthplace of one of the world’s first programmable digital computers, Colossus. It is also associated with Alan Turing, a founding father of artificial intelligence and modern cognitive science. The UK has invited key nations, including China, and prominent AI organizations to foster internationally coordinated action on AI safety. The UK has outlined five summit objectives:

  • A shared understanding of the risks posed by frontier AI and the need for action.
  • A forward process for international collaboration on frontier AI safety, including how best to support national and international frameworks.
  • Appropriate measures which individual organizations should take to increase frontier AI safety.
  • Areas for potential collaboration on AI safety research, including evaluating model capabilities and the development of new standards to support governance.
  • Showcase how ensuring the safe development of AI will enable AI to be used for good globally.

Within the letter sent to the UK Prime Minister, Rishi Sunak, FLI shared additional recommendations emphasizing the need for a post-summit framework on AI governance.

Future of Life Institute

Established in 2014, FLI aspires to harness powerful technologies, like AI, for humanity’s benefit while avoiding potentially catastrophic outcomes. It advocates for AI governance to mitigate harm from accidental or intentional misuse of AI.

To address AI risks, FLI focuses their efforts on policy advocacy, public outreach, grantmaking, and event organization. Contributions such as the Asilomar AI Principles have solidified their voice in AI governance. In 2017, FLI organized an invite-only conference, Beneficial AI, with leading AI experts to improve collective understanding of AI and guide its responsible future development. From that conference, FLI compiled 23 principles that have garnered widespread endorsement from tech giants and governments, with California adopting them in 2018.

FLI’s policy team is currently focused on recommendations for the upcoming summit. FLI has three objectives that closely align with the UK’s objectives for the summit:

  1. Highlight the critical and immediate nature of AI risks.
  2. Acknowledge that AI challenges are global, necessitating collective global action.
  3. Stress the importance of timely government intervention, including enacting stringent regulations when needed.

Furthermore, FLI has offered detailed suggestions for pre-summit preparations, encouraging governments not to shy away from AI’s complexities and to reconsider their national AI safety strategies. They also offered country-specific recommendations:

For the UK, they emphasized global inclusivity, suggesting a focus on civilian AI safety that moves beyond U.S. AI norms and avoids discussions of autonomous weapons. For China and the U.S., FLI encouraged both countries to maintain an open dialogue on AI safety with one another. For the E.U., FLI recommended that the E.U. AI Act be well-informed and flexible enough to accommodate future adaptations.

FLI also expresses concerns about potential disparities in summit participants’ understanding of AI. To bridge this gap, they recommend involving independent experts to elucidate global risks. Topics should address AI’s large-scale harms and potentially catastrophic outcomes from misuse and accidents.

Post-Summit Roadmap

FLI has outlined post-summit commitment recommendations for participating nations. These include biannual follow-up meetings and a draft blueprint for a new global AI agency, which would coordinate advanced AI governance, advise on safety standards, and ensure worldwide compliance.

Furthermore, they envision national AI strategies that prioritize safety alongside economic interests. They recommend establishing safety standards and licenses for large-scale AI training runs, among other measures.

Through their recommendations, FLI aspires to lead the way to shared safety protocols that have eluded the AI sector. As Professors Max Tegmark and Anthony Aguirre state, “The need for public sector involvement has never been clearer.”

The world stands at the precipice of a new technological era, and the decisions made now will shape AI’s safe and beneficial integration into our global society. The UK’s initiative in hosting this summit, complemented by the forward-thinking insights from organizations like FLI, signifies a hopeful step towards a harmonized and responsible future with artificial intelligence.

To read about Oceanit’s stance on AI safety, see the article by Jeffrey Watumull.