The Biden administration has secured voluntary commitments from major developers of artificial intelligence (AI) systems, including Google, Amazon, Microsoft, and OpenAI, to prioritize safe and secure development practices. In a meeting with top executives of these companies, the administration also stressed the importance of transparency. To reinforce these commitments, the White House is preparing an executive order to advance trustworthy AI and manage potential risks.
In coordination with Congress, the White House is working to establish a legal and regulatory regime for AI. Senate Majority Leader Charles E. Schumer has organized a series of briefings on AI systems for senators and intends to draft legislation in the coming months. He has welcomed the technology companies' voluntary commitments but believes legislation is necessary to harness AI's potential effectively while addressing its challenges.
Senator Mark Warner, Chair of the Senate Intelligence Committee, also supports regulating AI. He emphasized the importance of prioritizing security, combating bias, and rolling out technologies responsibly. In April, Warner wrote to the CEOs of major tech companies, urging them to prioritize security in the design and development of their technologies.
As part of their commitments, these companies will ensure the safety of their products before public release, prioritize security in system development, and provide transparency to users about AI-generated content. They will also publicly report misuse of the technology and work toward industry-wide adoption of similar practices.
To advance safety and trust, the companies are exploring the development of a watermarking system for AI-generated audio and video content that would distinguish it from content created by humans. Microsoft has gone a step further, planning to collaborate with the National Science Foundation to establish a national AI research resource for independent academic research on AI safety. The company also intends to support the creation of a national registry for high-risk AI systems.
Moreover, the companies have committed to sharing best practices and risk management information with each other, as well as with academic researchers, government agencies, and civil society groups. Third-party researchers will be encouraged to identify and report vulnerabilities in AI systems, fostering a culture of transparency and collaboration.
These voluntary commitments highlight the collective efforts of major tech companies and the Biden administration to ensure the safe and secure development of AI technologies. By prioritizing transparency and responsible practices, they aim to address the risks associated with AI while enabling the technology's continued positive development.
The whytry.ai article you just read is a brief synopsis; the original article can be found here: Read the Full Article…