Former White House Advisors and Tech Researchers Unite in New Statement Against AI Harms

Two former White House AI policy advisors, along with a coalition of more than 150 AI academics, researchers, and policy practitioners, have endorsed a “Statement on AI Harms and Policy” published by ACM FAccT (the ACM Conference on Fairness, Accountability, and Transparency). The endorsement coincides with the conference's ongoing annual meeting in Chicago. Alondra Nelson, former deputy assistant to President Joe Biden, and Suresh Venkatasubramanian, a former White House advisor who helped draft the “Blueprint for an AI Bill of Rights,” are among the signatories. The statement focuses on the current, concrete harms of AI systems and calls for policy grounded in existing research and tools.

In contrast to earlier petitions, the ACM FAccT statement emphasizes the real-world harms AI systems are already causing and advocates policy measures informed by extensive research. It highlights concerns such as inaccurate or biased algorithms denying people crucial healthcare and language models fueling manipulation and misinformation. The signatories, scholars and practitioners affiliated with the conference, assert that their body of work not only anticipates the risks associated with AI but also provides guidance on designing, auditing, and resisting AI systems to safeguard democracy, social justice, and human rights. The statement underscores the importance of using existing tools to shape a safer technological future and calls on policymakers to take immediate action.

Sharing the statement on Twitter, Alondra Nelson pointed to the opinion of the AI Policy and Governance Working Group at the Institute for Advanced Study, where she now holds a professorship following her departure from the Biden administration in February. Nelson stressed that addressing the many concerns raised by the expanding use of AI systems and tools is both necessary and feasible, and that present-day harms, unattended risks, and future uncertainties must be confronted urgently to ensure public safety.

The statement has garnered support from influential figures across the AI research community. Notable signatories include Timnit Gebru, founder of the Distributed AI Research Institute (DAIR), as well as researchers from Google DeepMind, Microsoft, Stanford University, and UC Berkeley. This diverse coalition underscores a broad consensus among AI researchers on the need for policy measures that address the harmful implications of AI technology and mitigate potential risks.

The collective call to action reflects growing recognition of the social, ethical, and policy dimensions of AI. As AI continues to shape more aspects of our lives, it is imperative that policy frameworks align with the research conducted in the field to ensure the responsible and beneficial deployment of AI systems. With the support of leading figures in AI research, the push for comprehensive policies that protect public interests and fundamental rights gains substantial momentum. The engagement of policymakers, industry stakeholders, and the public at large is now essential to translate these calls into effective policies that steer the future development and deployment of AI toward a safer and more equitable future.