Bodies such as the European Union (EU) are taking the lead in drafting new AI regulations that could set a global standard. Enforcement of those rules, however, is expected to take several years.
“In the absence of specific regulations, governments can only resort to the application of existing rules,” stated Massimiliano Cimnaghi, a European data governance expert at consultancy BIP, in a statement to Reuters.
As a result, regulators are turning to already-established laws, such as data protection regulations and safety measures, to tackle concerns related to personal data protection and public safety. The necessity for regulation became evident when national privacy watchdogs across Europe, including the Italian regulator Garante, took action against OpenAI’s ChatGPT, accusing the company of violating the EU’s General Data Protection Regulation (GDPR).
In response, OpenAI implemented age verification features and provided European users with the ability to block their data from being used to train the AI model.
However, this incident prompted additional data protection authorities in France and Spain to initiate investigations into OpenAI’s compliance with privacy laws.
Consequently, regulators are striving to apply existing rules covering copyright, data privacy, the data used to train AI models, and the content those models generate.
Proposals for the AI Act
In the European Union, proposals for the AI Act will require companies like OpenAI to disclose any copyrighted material used to train their models, exposing them to potential legal challenges. However, proving copyright infringement may not be straightforward, as Sergey Lagodinsky, a politician involved in drafting the EU proposals, explains.
“It’s like reading hundreds of novels before you write your own,” he said. “If you actually copy something and publish it, that’s one thing. But if you’re not directly plagiarizing someone else’s material, it doesn’t matter what you trained yourself on.”
Regulators now need to “interpret and reinterpret their mandates,” says Suresh Venkatasubramanian, a former technology advisor to the White House. For instance, the U.S. Federal Trade Commission (FTC) has used its existing regulatory powers to investigate algorithms for discriminatory practices.
Similarly, French data regulator CNIL has started exploring how existing laws might apply to AI, considering provisions of the GDPR that protect individuals from automated decision-making.
As regulators adapt to the rapid pace of technological advances, some industry insiders call for increased engagement between regulators and corporate leaders.
Harry Borovick, general counsel at Luminance, a startup that utilizes AI to process legal documents, expresses concern over the limited dialogue between regulators and companies.
He argues that regulators should adopt approaches that strike the right balance between consumer protection and business growth, since the industry’s future hinges on such cooperation.
While the development of regulations to govern generative AI is a complex task, regulators worldwide are taking steps to ensure the responsible use of this transformative technology.