What Happened
Government Collaboration Essential for AI Security
Microsoft Corp. is urging developers of frontier AI models to work with governments to assess cybersecurity risks, arguing that collaborative testing is essential to national security and public safety. According to Natasha Crampton, Microsoft’s chief responsible AI officer, the company has reached agreements to have its frontier AI models tested by the National Institute of Standards and Technology’s Center for AI Standards and Innovation and the United Kingdom’s AI Security Institute. This development was reported by VitalLaw.com.