Microsoft Calls for a New US Agency and Licensing for AI

(Bloomberg) — Microsoft Corp. is calling for a new US agency to regulate artificial intelligence and licensing requirements to operate the most powerful AI tools, company President Brad Smith said Thursday. 

Smith compared AI to the printing press, elevators and food safety, citing both the transformative power of a new technology and the regulatory need to protect against its greatest potential harms. His call for a new agency echoes proposals from OpenAI, the startup behind the wildly popular ChatGPT, which received a $10 billion investment from Microsoft.

“We would benefit from a new agency,” Smith said in a speech in Washington. “That is how we will ensure that humanity remains in control of technology.”

The idea for a government agency with responsibility to set the ground rules for AI gained attention last week in a Senate hearing with Sam Altman, chief executive officer of OpenAI. Altman and many of the senators questioning him agreed that the legislative process is too slow and partisan to keep pace with AI capabilities and potential applications, and an agency would be better-positioned to set rules to protect users.

Although that proposal has sparked conversations on Capitol Hill, it is still far from being turned into legislation. Calls in recent years to regulate social media went nowhere in Congress.

Read more: When Altman Went to Washington and Asked for AI Rules

Critical Infrastructure

Smith also said that rapidly developing AI technology must be transparent, with developers partnering with government and academic researchers to address societal challenges that will emerge. He proposed “safety brakes” for AI technology used in high-risk applications such as critical infrastructure. 

“New laws would require operators of these systems to build safety brakes into high-risk AI systems by design,” Smith said in a blog post accompanying his speech. “The government would then ensure that operators test high-risk systems regularly to ensure that the system safety measures are effective.”

The Biden administration has released several non-binding guides for developing and using AI products, although the US lags far behind Europe’s regulatory efforts. The EU’s AI Act was in the final stages of debate when the release of ChatGPT and other generative AI applications cast doubt on rules that focus on how the technology is used, rather than how it is initially developed.

European Conflict

Altman told reporters in London Wednesday that OpenAI could pull its products from Europe if it can’t comply with new rules that have been proposed for general-purpose AI. In a tweet Thursday, EU Commissioner Thierry Breton accused Altman of “attempting blackmail.”

Read more: OpenAI’s Altman Clashes With EU Commissioner Over AI Regulation

Asked about Altman’s threat, Smith said it’s important for the tech industry to explain how proposed regulation would work in practice. He said he’s optimistic that “reason will prevail” in the final version of Europe’s AI Act.

“The legislative process in every democratic country inevitably has its twists and turns,” Smith said in Washington after his speech. “There are days when those of us who might know more about a technical field get up and see something that we quite rightly would want to point out is not likely to work the way that people who wrote it actually intended.”

US tech companies have praised a framework released in January by the National Institute of Standards and Technology, which is focused on how AI technology is used — and the risk level of that application — rather than how it’s developed. Smith held that model up in his speech as a “new intellectual discipline for artificial intelligence” to help measure and manage this technology.

Smith’s speech was attended by several members of Congress. When Democratic Representative Ritchie Torres of New York asked how Congress should balance regulation with innovation to keep ahead of China, Smith urged Western democracies to stick together to set a global standard for AI regulation.

“I do share the concern that there may be other parts of the world that don’t adopt the same kinds of guardrails that we do,” Smith said. “It’s so important to bring the European Union and the United Kingdom and the United States and other countries together to say, here is a model, here is a model that not only promotes innovation but protects people, protects humanity, preserves fundamental rights.”

Microsoft’s push into artificial intelligence, including its support for OpenAI, has pressured competitors such as Alphabet Inc.’s Google to more quickly release their own AI applications and integrate the technology into existing products. Last week, Google published its own policy recommendations for responsible development of AI that it said would take advantage of its economic potential while curbing some of the risks to society.

©2023 Bloomberg L.P.