OpenAI, the creator of ChatGPT, has announced ten $100,000 grants for anyone with good ideas on how artificial intelligence (AI) can be governed to help address bias and other concerns. The grants will be awarded to recipients who present the most compelling answers to some of the most pressing questions around AI governance, such as whether AI should be allowed to express opinions about public figures.
This comes amid arguments over whether AI systems such as ChatGPT carry built-in prejudice because of the data they are trained on (not to mention the opinions of the human programmers behind the scenes). Reports have documented instances of discriminatory or biased output generated by AI technology. There is also growing apprehension that AI, when integrated into search engines like Google and Bing, might deliver misleading information with great conviction.
OpenAI, backed by a $10 billion investment from Microsoft, has long been a proponent of responsible AI regulation. However, the organization recently expressed apprehension about proposed rules in the European Union (EU) and even hinted at the possibility of pulling out of the region. OpenAI’s CEO, Sam Altman, said the current draft of the EU AI Act appears overly restrictive, although there are indications that it may be revised. “They are still discussing it,” Altman told Reuters.
Reuters noted that the $100,000 grants offered by OpenAI may not go far in the current AI market, where most AI engineers earn salaries exceeding $100,000 and exceptional talent can command compensation surpassing $300,000. Nevertheless, OpenAI emphasized the importance of ensuring that AI systems benefit humanity as a whole and are designed to be inclusive. “To take an initial step in this direction,” OpenAI stated in a blog post, “we are launching this grant program.”
Altman, a prominent advocate for AI regulation, has continued to oversee updates to ChatGPT and the image generator DALL-E. However, he recently raised concerns about the potential risks of AI technology during an appearance before a U.S. Senate subcommittee, emphasizing that if something were to go wrong, the consequences could be significant.
Microsoft recently joined the call for comprehensive regulation of AI. At the same time, the company remains committed to integrating the technology into its products and, together with its partner OpenAI, is competing with Google and various startups to deliver AI solutions to consumers and businesses.
AI’s potential to enhance efficiency and reduce labor costs has piqued the interest of almost every sector. However, there are also concerns that AI might spread misinformation or factual inaccuracies, which industry experts call “hallucinations.”
AI has also been used to create viral hoaxes. For example, a recent fake image of an explosion near the Pentagon briefly rattled the stock market. Despite numerous calls for stricter regulation, Congress has not enacted new laws that significantly limit the power of Big Tech.