Proactive Investors - Britain and America have formally agreed to collaborate in developing safety tests for artificial intelligence (AI) models.
Under the deal, each will align scientific approaches to rapidly build ways of evaluating various forms of AI, a statement said on Tuesday.
“We have always been clear that ensuring the safe development of AI is a shared global issue,” UK science, innovation and technology secretary Michelle Donelan commented.
“Only by working together can we address the technology’s risks head-on and harness its enormous potential to help us all live easier and healthier lives.”
At least one joint test will be performed on a publicly available model under the agreement, which comes after November’s AI summit at Bletchley Park.
Adoption of AI has rapidly grown over the past year, following the release of OpenAI’s ChatGPT in late 2022.
However, a lack of regulation over the emerging technology, alongside fears over mass job cuts and its influence on the likes of elections, has prompted concern.
President Joe Biden signed an executive order aimed at clamping down on such risks in October as a result, while the Commerce Department proposed rules in January for US cloud firms to clarify whether public data was being used to train foreign models.
Meanwhile, the UK set aside some £100 million in February to develop nine research hubs focused on training regulators.
“Time is of the essence because the next set of models are about to be released, which will be much, much more capable,” Donelan added in an interview with Reuters.