Very Powerful AI May Be Banned, Warns UK Govt Adviser

As the development of artificial intelligence (AI) continues at an accelerated pace, experts and prominent figures in the field, often referred to as AI ‘godfathers,’ are raising concerns about the potential risks it poses to privacy, human rights, and overall safety.

In response to these concerns, the United Kingdom government, in collaboration with the European Union and the United States, is taking steps to regulate this transformative technology. A member of the UK government’s non-statutory AI Council, Marc Warner, CEO of AI company Faculty, has expressed the view that highly powerful artificial general intelligence (AGI) systems may ultimately need to be banned.

Warner, a member of the AI Council, an independent committee providing guidance to the UK Government on the AI ecosystem, discussed the concept of AGI in an interview with the BBC. AGI refers to systems capable of reasoning, planning, and learning from experience at a level equal to or exceeding human capabilities across a wide range of tasks.

Warner emphasized that AGI raises far greater concerns than today's systems and requires an entirely different set of rules. He pointed out that human intelligence underpins humanity's position of prominence on this planet, and questioned the wisdom of creating systems as intelligent as, or more intelligent than, humans without a solid scientific case that doing so is safe.

On the other hand, Warner suggested that narrow AI systems, which are designed for specific tasks like text translation or machine learning-based identification of bacteria, could be regulated similarly to existing technologies.

AGI systems, by contrast, have the potential to match or surpass human performance across a wide range of tasks. Warner called for prudent decision-making on AGI, including strong limits on the amount of computing power that can be devoted to such systems.

Warner is also a signatory to the Center for AI Safety statement, which calls for mitigating the risk of human extinction from AI to be treated as a global priority. Notable co-signatories include AI pioneer Geoffrey Hinton; Yoshua Bengio, a renowned AI scientist and professor; Sam Altman, CEO of OpenAI; Bill Gates, co-founder of Microsoft; Dario Amodei, CEO of Anthropic; and Demis Hassabis, CEO of Google DeepMind.

While the EU Artificial Intelligence Act, one of the earliest attempts to regulate AI, is still working its way through the legislative process, European Commissioner Margrethe Vestager has said it will take two to three years for the various pieces of legislation to come into effect. She stressed the urgency of responding to the rapid acceleration of AI technology in the meantime.

Europe is currently ahead of the United States in articulating rules to govern AI safely, and the CEOs of prominent AI companies have themselves called for such rules. If countries like the U.S. wish to play an active role in international AI governance discussions, it is now more crucial than ever that they step up their efforts.