Biden-Harris Administration Collaborates with Leading AI Companies to Address AI Risks


In a significant move towards responsible AI development, the Biden-Harris Administration has secured voluntary commitments from seven prominent AI companies. The commitments were made during a meeting at the White House where representatives from OpenAI, Amazon, Anthropic, Google, Inflection, Meta, and Microsoft gathered to sign the agreements.

The voluntary commitments encompass a range of measures aimed at mitigating both short- and long-term risks associated with AI models. A key provision is ensuring the safety of AI products before their public release, which involves subjecting AI systems to rigorous internal and external security testing. The companies also pledge to share information on managing AI risks.

To bolster cybersecurity, the companies commit to protecting proprietary and unreleased model weights. They also express willingness to facilitate third-party discovery and reporting of vulnerabilities in their AI systems. These initiatives seek to enhance transparency and collaborative efforts in identifying potential risks.

Furthermore, the commitments cover the development of mechanisms such as watermarking to identify AI-generated content. Public reporting of AI systems' capabilities, limitations, and areas of appropriate and inappropriate use is another important aspect of the agreements. The companies also promise to prioritize research on the societal risks of AI, including addressing bias and safeguarding privacy.

Beyond risk management, the agreements also highlight the companies' pledge to deploy advanced AI systems to help tackle some of society's most pressing challenges, including cancer prevention and combating climate change.

The response from the AI industry to the voluntary safeguards has been generally positive. Mustafa Suleyman, CEO and co-founder of Inflection AI, welcomed the announcement as a positive step. He said the journey to creating genuinely safe and trustworthy AI is still in its early phases and described these commitments as a stepping stone toward achieving more.

OpenAI, one of the key participants, published a blog post appreciating the voluntary safeguards and emphasizing their significance in advancing effective AI governance globally.

However, it is important to note that these commitments are not legally enforceable, nor do they introduce new regulations. Despite their voluntary nature, Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights, views them as an important first step. He particularly appreciates the emphasis on thorough testing before AI models are released, so that potential safety issues are addressed proactively rather than after the models are in public use.

While the voluntary commitments are valuable, experts agree that they must be complemented by concrete legislation. Suresh Venkatasubramanian, a former White House AI policy advisor to the Biden Administration, highlights how voluntary efforts work in conjunction with legislation, executive orders, and regulation. He believes such initiatives help organizations understand the significance of AI governance and its impact on innovation.

In light of the voluntary commitments, the Biden-Harris Administration is actively working on an executive order and bipartisan legislation to further promote responsible AI innovation and safeguard American citizens from potential harm and discrimination.

Looking ahead, the industry’s voluntary commitments precede Senate efforts in the coming months to address complex AI policy issues and build consensus around legislation. Senate Majority Leader Chuck Schumer announced a series of AI “Insight Forums” to be held in September and October. These forums will engage top experts in discussions on various AI-related topics, including copyright, workforce issues, national security, privacy, and transparency.

The AI industry’s proactive voluntary commitments combined with the forthcoming Senate forums and legislation pave the way for a more secure and responsible AI landscape, ensuring that the potential of AI is harnessed for societal benefits while minimizing risks.