The Biden administration has imposed tough export controls to restrict China’s access to the semiconductors and hardware needed to train powerful AI systems. US officials are now preparing a similar approach to AI models.
It is a controversial move. Leading AI firms, civil society groups, and researchers argue that the plan is both impractical and counterproductive. Export controls could undermine US leadership in AI innovation. US courts could revive debates about the constitutionality of certain export controls given First Amendment protections for computer source code.
The key question is how such controls could ever be enforced. The US and Europe are finding it difficult enough to halt the flow of physical products — a plethora of export controls are not stopping Russia from using Western chips to aim its missiles and drones at Ukrainian civilians. How can the US restrict the movement of software bits and bytes, particularly if they are open source?
Despite these concerns, key Biden administration figures privately emphasize that AI’s security risks outweigh the costs. AI enhances military equipment, intelligence collection, and cyber capabilities. It enables non-state actors to develop deepfakes and biological weapons. The European Commission agrees that AI poses “sensitive and immediate risks” and is finalizing a risk assessment to determine whether to impose export controls.
What would the new US AI-focused export controls look like? Early signs point to US officials basing restrictions on models’ computing power. President Biden’s late 2023 executive order tasked the US Department of Commerce and other executive agencies with creating a process to require firms building AI models above a certain computing power level to report the models’ details.
In September, the Bureau of Industry and Security — the agency within Commerce that leads US export control policies — published its reporting requirements for leading AI developers and cloud providers. Any AI model trained using more than 10²⁶ integer or floating-point operations triggers the reporting requirements. The agency estimates that only a small number of firms possess such sophisticated AI models. Public data suggest no commercially available models meet this threshold, though proprietary models from US firms such as Google, Meta, OpenAI, NVIDIA, Anthropic, Amazon, and Inflection AI may meet the standard.
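To give a sense of scale, a rough sketch of how that threshold works in practice, using the widely cited rule of thumb that training compute is roughly 6 × parameters × training tokens. The model sizes below are hypothetical and chosen purely for illustration; actual reportability depends on the Bureau's final rule, not this approximation.

```python
# Illustrative only: estimate whether a training run crosses the 10^26-operation
# reporting threshold, using the common "6 * N * D" compute approximation.
# The parameter and token counts below are hypothetical examples.

THRESHOLD_OPS = 1e26  # the reporting trigger described in the September rule

def training_ops(parameters: float, tokens: float) -> float:
    """Approximate total training operations via the 6 * N * D rule of thumb."""
    return 6 * parameters * tokens

# A hypothetical 70-billion-parameter model trained on 15 trillion tokens:
ops_small = training_ops(70e9, 15e12)   # about 6.3e24 operations
print(f"{ops_small:.2e} ops -> reportable: {ops_small > THRESHOLD_OPS}")

# A hypothetical 2-trillion-parameter model trained on 15 trillion tokens:
ops_large = training_ops(2e12, 15e12)   # about 1.8e26 operations
print(f"{ops_large:.2e} ops -> reportable: {ops_large > THRESHOLD_OPS}")
```

The gap between the two examples illustrates why the agency expects only a handful of frontier-scale training runs to trip the threshold today.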
While not export controls, the proposed reporting rules would give the US government data needed to determine which AI models to restrict. A Reuters report in May suggested the US will first target closed-source AI models. But the Bureau of Industry and Security did not distinguish between open or closed AI models in its September rule, leaving the door open for restrictions on both.
A stalled bipartisan bill in Congress that would bolster controls on AI models, if enacted, also does not specify which models should be restricted. The Administration may not need the bill — existing US export control rules can block the sale of AI models that assist in “the development, production, or use of WMD or conventional weapons.” But the bill adds to a flood of Congressional pressure on the Biden administration to stymie the flow of US tech to China regardless of outcry from US allies or the global AI industry — or proper analysis of the risks and benefits of new controls.
Microsoft and Google argue that restricting open models would damage global collaboration on talent, research, and safety. Meta claims controls on AI would cause the US to cede an opportunity to embed US “values in the fabric of the AI revolution” by setting global standards. The DC-based Center for Data Innovation points to US restrictions on encryption software in the 1990s as a cautionary tale for any export controls on open-source technology like AI or RISC-V.
The Clinton administration loosened US encryption controls in 1996 after it was clear the controls “created a double standard that left innocent Internet users abroad less secure” and “undermined US economic competitiveness […] with little evidence that they were actually achieving their stated goals.”
Some cracks within the US government are emerging. In a July report, another Department of Commerce agency recommended not restricting the export of currently available dual-use foundation models with “open weights.” But the agency also argued the US should preserve the ability to do so in case “countries of concern” leverage the tech for strategic gains.
China is already doing just that. China’s artificial intelligence industry is betting big on open-source AI technology to build more efficient AI models that do not rely on barred US chips. Beijing’s gamble on open-source, paired with massive state-directed investment in AI, appears to be working. Multiple Chinese AI models are outperforming their US peers and boast hundreds of millions of users.
Some argue that Chinese success cuts both ways. The more Chinese firms use Western open AI models, the less they invest in developing domestic alternatives. Yet China’s AI industry may be proving it can walk and chew gum at the same time: in June, researchers at Stanford admitted they used Chinese open-source AI tech to develop a powerful model.
US officials faced similar trade-offs with semiconductors, ultimately deciding to deprive China of access to Western chip tech rather than foster Beijing’s dependency on it. The chip controls traded short-term pain, at great cost to US diplomacy and industry, for long-term leverage that has yet to materialize — and they appear to be backfiring.
The White House must answer key questions before it can levy export controls on AI models: Should it control open or closed models? Should controls be based on models’ compute power or their capabilities? But as the US lays the foundation for future AI controls, President Biden — or any future US president — will have to decide whether fostering the world’s dependency on US AI models is worth more than depriving China, Russia, Iran, and others of access to the technology.
Matthew Eitel is Special Assistant to the President & CEO at the Center for European Policy Analysis.
Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions are those of the author and do not necessarily represent the position or views of the institutions they represent or the Center for European Policy Analysis.