United States President Joe Biden issued an executive order on Oct. 30 establishing new standards for artificial intelligence (AI) safety and security.
Biden’s order states that it builds on previous actions, including AI safety commitments from 15 leading companies in the industry. The new standards comprise six primary points, along with plans for the ethical use of AI in government, privacy protections for citizens and steps to protect consumers.
The first standard requires developers of the most powerful AI systems to share safety test results and “critical information” with the government. Second, the National Institute of Standards and Technology will develop standardized tools and tests to ensure AI’s safety, security and trustworthiness.
The administration also aims to protect against the risk of AI being used to engineer “dangerous biological materials” by establishing new biological synthesis screening standards.
Another standard addresses protection from AI-enabled fraud and deception, establishing standards and best practices for detecting AI-generated content and authenticating official content.
The order also plans to build on the administration’s ongoing AI Cyber Challenge by advancing a cybersecurity program that develops AI tools to find and fix vulnerabilities in critical software. Finally, it orders the development of a national security memorandum to further direct actions on AI security.
The order also touched on the privacy risks of AI. To address these, the president called on Congress to pass bipartisan data privacy legislation and to prioritize federal support for the research and development of privacy-preserving techniques and technologies.