UK government launches AI assurance platform for enterprises
The platform is designed to drive demand for the UK’s artificial intelligence assurance sector and build greater trust in the technology by helping businesses identify and mitigate a range of AI-related risks
The UK government is launching an artificial intelligence (AI) assurance platform to help businesses across the country identify and mitigate the potential risks and harms posed by the technology, as part of a wider push to bolster the UK’s burgeoning AI assurance sector.
Noting that 524 firms currently make up the UK’s AI assurance market – employing more than 12,000 people and worth more than £1bn – the government said the platform would help raise awareness of and drive demand for the sector, which it believes could grow sixfold to around £6.5bn by 2035.
Launched on 6 November 2024, the platform is intended to act as a one-stop shop for AI assurance, bringing together existing assurance tools, services, frameworks and practices in one place, including the Introduction to AI assurance and the Portfolio of AI assurance techniques guidance previously created by the Department for Science, Innovation and Technology (DSIT).
The platform will also set out “clear steps” for businesses on how to carry out impact assessments and evaluations, as well as how to review data used in AI systems for bias, so as to generate trust in the technology’s day-to-day operations.
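To make the idea of a data bias review concrete, the sketch below shows one simple check of the kind such guidance typically covers: comparing how often an AI system produces a favourable outcome for different groups. It is a minimal illustration with hypothetical data and group names, not a method published by DSIT or the platform.

```python
# Minimal sketch of a simple bias review: the "demographic parity gap",
# i.e. the spread in favourable-outcome rates across groups.
# All data, group names and the review threshold are hypothetical.
from collections import defaultdict

def demographic_parity_gap(records):
    """records: iterable of (group, outcome) pairs, outcome 1 = favourable.
    Returns (largest rate difference between groups, per-group rates)."""
    totals = defaultdict(int)
    favourable = defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        favourable[group] += outcome
    rates = {g: favourable[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical model decisions (e.g. loan approvals) split by group
sample = [("group_a", 1), ("group_a", 1), ("group_a", 0),
          ("group_b", 1), ("group_b", 0), ("group_b", 0)]
gap, rates = demographic_parity_gap(sample)
print(f"Favourable-outcome rates: {rates}")
print(f"Demographic parity gap: {gap:.2f}")  # large gaps warrant review
```

A real review would go further – covering data provenance, proxy variables and model evaluations – but even a check this simple can surface imbalances worth investigating.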
Digital secretary Peter Kyle said while AI has “incredible potential” to improve public services, boost productivity and rebuild the economy, “to take full advantage, we need to build trust in these systems which are increasingly part of our day-to-day lives”.
“The steps I’m announcing today will help to deliver exactly that – giving businesses the support and clarity they need to use AI safely and responsibly while also making the UK a true hub of AI assurance expertise.”
While DSIT plans to develop new resources for the platform over time – including an “AI Essentials toolkit” to distil key tenets of relevant governance frameworks and standards so they are “comprehensible for industry” – the department has already launched an open consultation for a new AI assurance self-assessment tool.
“AI Management Essentials [AIME] will provide a simple, free baseline of organisational good practice, supporting private sector organisations to engage in the development of ethical, robust and responsible AI,” said a DSIT report on the future of AI assurance in the UK.
“The self-assessment tool will be accessible for a broad range of organisations, including SMEs. In the medium term, we are looking to embed this in government procurement policy and frameworks to drive the adoption of assurance techniques and standards in the private sector.”
It added that insights gathered from the AIME self-assessment tool would also help public sector buyers make better and more informed procurement decisions involving AI, and that the general suite of products on offer through the platform would further “help support organisations to begin engaging with AI assurance” and “establish the building blocks for a more robust ecosystem”.
The development of safe and responsible AI systems is central to the UK government’s vision for the technology, which it sees as an area where the country can carve out a competitive advantage for itself.
According to DSIT’s AI assurance market report, the department will also seek to support this goal by increasing the supply of third-party AI assurance, which it will do in part by developing a “roadmap to trusted third-party AI assurance” with industry, and by enabling the interoperability of assurance through a “terminology tool for responsible AI”, which it said would help assurance providers navigate the international governance ecosystem.
In further support of the government’s vision, the UK’s AI Safety Institute (AISI) – launched by former prime minister Rishi Sunak in the run-up to his government’s AI Safety Summit in November 2023 – will be running the Systemic AI Safety Grants programme, which will make up to £200,000 of funding available to researchers working to make the technology safer.
Shared AI policies, standards and guidance
On the same day as the assurance platform launch, the AISI announced it had signed a partnership agreement with Singapore, which will see both countries’ AI safety institutes collaborate to drive forward research and work towards a shared set of policies, standards and guidance.
“We are committed to realising our vision of AI for the Public Good for Singapore, and the world. The signing of this Memorandum of Cooperation with an important partner, the United Kingdom, builds on existing areas of common interest and extends them to new opportunities in AI,” said Singapore’s minister for digital development and information, Josephine Teo.
“Of particular significance is our joint support of the international network of AI Safety Institutes (AISI). Through strengthening the capabilities of our AISI, we seek to enhance AI safety so that our people and businesses can confidently harness AI and benefit from its widespread adoption.”
Ian Hogarth, chair of the UK AISI, added: “An effective approach to AI safety requires global collaboration. That’s why we’re putting such an emphasis on the international network of AI Safety Institutes, while also strengthening our own research partnerships.
“Our agreement with Singapore is the first step in a long-term ambition for both our countries to work closely together to advance the science of AI safety, support best practices and norms to promote the safe development and responsible use of AI systems.”