US starts exploring “accountability measures” to regulate AI companies
A few weeks after Italy banned ChatGPT over privacy concerns, the US has started developing a framework to hold AI companies accountable for how their software is used.
The process is set to start with a public inquiry into AI products and services by the National Telecommunications and Information Administration (NTIA), the agency charged with advising the government on technology policy.
NTIA says the information gathered from this inquiry will guide the government in the creation of a “cohesive and comprehensive federal government approach to AI-related risks and opportunities.”
“NTIA’s ‘AI Accountability Policy Request for Comment’ seeks feedback on what policies can support the development of AI audits, assessments, certifications, and other mechanisms to create earned trust in AI systems that they work as claimed,” the agency said in a statement. It added that such audits could work much like financial audits, serving to prove that an AI system is trustworthy.
The move comes as experts worldwide, including Elon Musk, continue to raise concerns over the economic and societal impacts of generative AI systems like ChatGPT.
ChatGPT has already been banned in Italy, where the data protection authority voiced its concerns and said it would investigate the AI startup with “immediate effect.”
However, some experts believe banning the technology outright is an overreaction, one that hurts companies in the country looking to leverage it.
Other European governments, starting with Germany, are also considering cracking down on AI systems.