
Under an agreement signed by OpenAI and Anthropic with the AI Safety Institute, housed within the National Institute of Standards and Technology (NIST), the companies will share their AI models with the US government for safety testing.
Covering collaboration on AI safety research, testing, and evaluation, the agreement gives the AI Safety Institute access to new AI models from OpenAI and Anthropic both before and after their public release.
A similar safety-assessment arrangement already exists in the UK, where AI developers make their models available for testing before they reach the market.
“With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety,” stated AI Safety Institute Director Elizabeth Kelly in a press release.
“These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI,” the press release added.
After thoroughly evaluating the companies' AI models, the AI Safety Institute will also give OpenAI and Anthropic feedback “on potential safety improvements to their models, in close collaboration with its partners at the U.K. AI Safety Institute.”
“We strongly support the US AI Safety Institute’s mission and look forward to working together to inform safety best practices and standards for AI models,” said OpenAI’s chief strategy officer Jason Kwon.
Both companies said their agreements with the AI Safety Institute demonstrate how the US is working to build a secure, well-vetted AI ecosystem.