UK government urged to ease up on AI regulation to avoid missing the “AI goldrush”
In a report released last week, the U.K. government was cautioned against adopting a "narrow" approach to AI safety, which could compromise its position in the fast-growing AI landscape.
The report, presented by the parliamentary House of Lords’ Communications and Digital Committee, stems from extensive consultations with stakeholders ranging from tech giants and academics to venture capitalists, media entities, and government bodies.
Central to the report is the recommendation that the government shift its focus from speculative existential threats posed by AI to more immediate concerns like copyright infringement and misinformation.
The committee argues that hyperbolic concerns about apocalyptic scenarios may distract from practical issues, urging the government to strike a balance so it does not miss out on the potential AI gold rush.
“The rapid development of AI large language models is likely to have a profound effect on society, comparable to the introduction of the internet — that makes it vital for the Government to get its approach right and not miss out on opportunities, particularly not if this is out of caution for far-off and improbable risks,” the Communications and Digital Committee’s chairman Baroness Stowell said in a statement. “We need to address risks in order to be able to take advantage of the opportunities — but we need to be proportionate and practical.”
The global surge in AI development, set off by OpenAI's ChatGPT, has sparked debates worldwide on AI governance, with a divide emerging over whether regulation should take a closed or an open approach.
Meta's chief AI scientist Yann LeCun joined a group of other experts in signing an open letter pushing for increased transparency in AI development to prevent regulatory capture by industry incumbents.
“Open models can inform an open debate and improve policymaking. If our objectives are safety, security, and accountability, then openness and transparency are essential ingredients to get us there,” the letter reads.
The issue of closed versus open AI governance was also a major point of contention in the House of Lords' report, with the committee noting that regulation must address market competition to prevent regulatory capture by current leaders such as OpenAI and Google.
For their part, the leading AI companies support openness in principle but raise concerns about security risks.
The report acknowledges the potential security risk posed by the availability of powerful AI models but rejects an outright ban, arguing that monitoring and mitigation efforts are more practical.
“It is almost certain existential risks will not manifest within three years, and highly likely not within the next decade,” the report reads. “As our understanding of this technology grows and responsible development increases, we hope concerns about existential risk will decline. The Government retains a duty to monitor all eventualities — but this must not distract it from capitalising on opportunities and addressing more limited immediate risks.”