In a significant move towards enhancing artificial intelligence safety, leading AI developers OpenAI and Anthropic have entered agreements to provide their latest AI models to the U.S. government for rigorous safety testing.
The agreements, announced on Thursday, were established with the U.S. AI Safety Institute, a division of the National Institute of Standards and Technology. The initiative marks a pivotal step in addressing the growing concerns surrounding AI regulation since the introduction of OpenAI's ChatGPT.
Elizabeth Kelly, director of the U.S. AI Safety Institute, emphasized the importance of these agreements, stating, "These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI." The agency aims to offer feedback on potential safety enhancements to both companies' models before and after their public release, working in tandem with the UK AI Safety Institute.
Jack Clark, co-founder and head of policy at Anthropic, highlighted the benefits of this partnership, saying, "Our collaboration with the U.S. AI Safety Institute leverages their wide expertise to rigorously test our models before widespread deployment. This strengthens our ability to identify and mitigate risks, advancing responsible AI development."
This initiative aligns with a 2023 White House executive order on AI, which seeks to establish a legal framework for deploying AI technologies in the United States. While the federal government has favored a lighter-touch approach to foster innovation, contrasting with the European Union's stringent AI Act, states like California are taking their own steps. On Wednesday, California lawmakers approved a state AI safety bill, which now awaits the governor's signature.
OpenAI CEO Sam Altman, in a social media post, welcomed the company's agreement with the U.S. government, emphasizing the need for national-level regulation. He subtly criticized the newly passed California law, arguing that it could hinder research and innovation by imposing penalties for violations.
As the global landscape of AI development continues to evolve, partnerships like those between OpenAI, Anthropic, and governmental bodies underscore the collaborative efforts to balance innovation with safety and ethical considerations.
Reference: "OpenAI and Anthropic to share AI models with U.S. government," cgtn.com