Safeguarded AI is a UK government-funded project to build safety measures into AI systems. Its goal is to use AI monitoring systems to assess the safety of other AI systems deployed in critical UK sectors. The effort is backed by the UK’s Advanced Research and Invention Agency (ARIA), which is providing £59 million over four years for potentially transformative scientific research. Safeguarded AI aims to develop AI systems that can offer quantitative assurances, such as a risk score, about their impact on the real world, and to create a “gatekeeper” AI that understands and reduces the safety risks of other AI agents operating in high-stakes sectors.
Additionally, ARIA is offering funding to individuals and organizations working in high-risk UK sectors such as transportation, telecommunications, supply chains, and medical research to develop applications that could benefit from AI safety mechanisms. The Safeguarded AI project is part of the UK’s broader effort to position itself as a leader in AI safety. Although the funding program favors UK-based applicants, an ARIA spokesperson expressed a desire for international collaboration on AI safety and for a global conversation about AI risks and solutions.
This whytry.ai article is a brief synopsis; the original article can be found here: Read the Full Article…