
Microsoft, Google and xAI to Allow US Government to Test AI Models Before Public Release

Major technology companies including Microsoft, Google, and xAI have agreed to provide the US government with access to unreleased artificial intelligence models for security testing before they are launched publicly.

The move was announced by the National Institute of Standards and Technology as part of efforts to address growing concerns about the cybersecurity risks linked to advanced AI systems.

Under the arrangement, the Center for AI Standards and Innovation (CAISI) will evaluate upcoming AI models for potential threats to national security and public safety before they become widely available.

The agency said it would also continue conducting assessments after deployment. According to officials, more than 40 AI model evaluations have already been completed.

CAISI Director Chris Fall said independent testing is necessary to better understand the implications of increasingly powerful AI systems.

“Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications,” Fall said in a statement.

The new collaboration follows heightened concern sparked by Anthropic’s latest AI model, Mythos, which the company reportedly described as significantly more advanced in cybersecurity capabilities than existing systems.

Anthropic has limited public access to the model, citing safety concerns, and has reportedly briefed senior US government officials about its capabilities. The development has raised alarm among governments, banks, and utility companies over the possibility of AI-powered cyber threats.

Last week, OpenAI also announced plans to make some of its most advanced AI models available to approved government agencies to help identify and respond to emerging AI-related security risks.

Experts say the partnership could strengthen the government’s ability to evaluate advanced AI technologies. Jessica Ji, Senior Research Analyst at Georgetown’s Center for Security and Emerging Technology, noted that government agencies often lack the same level of computing power, staffing, and technical resources available to large technology firms.

Meanwhile, the White House is reportedly considering the creation of a formal review framework for advanced AI models. According to reports, officials are consulting experts on possible oversight measures, signalling a potential shift from the administration’s previously limited approach to AI regulation.

A White House spokesperson said no final policy decision has been announced and described reports of possible executive orders as speculative.

Microsoft Chief Responsible AI Officer Natasha Crampton said collaboration with CAISI would complement Microsoft’s own internal AI testing efforts by adding scientific and national security expertise to the evaluation process.
