The split-screen events on either side of the Atlantic underscore the challenges of regulating artificial intelligence, a rising priority for governments around the world in the year since the release of the AI-powered chatbot ChatGPT sparked a global frenzy.
Congress lags far behind its counterparts in Brussels, where a framework to regulate AI was first proposed in 2021. But after years of work, the future of the EU’s AI legislation remains uncertain amid a lobbying blitz and opposition from the EU’s largest nations — France, Germany and Italy.
After more than half a year of work on AI policy, Senate Majority Leader Charles E. Schumer (D-N.Y.) told reporters that the bipartisan group was “starting to really begin to work on legislation,” though he offered few specifics about what such a bill would include.
The comments came during Congress’s last two AI forums of 2023, where lawmakers huddled with top tech executives, including former Google CEO Eric Schmidt, to better understand topics including the risks of an AI doomsday and national security.
Sen. Mike Rounds (R-S.D.), a member of the bipartisan working group that Schumer assembled to craft AI policy, said the senators are pursuing an “incentive-based” approach in an effort to retain AI developers in the United States.
“If [European policymakers] look at this as a regulatory activity, they will chase AI development to the United States,” he told reporters after the pair of forums. “What we don’t want to do is chase AI development to our adversaries.”
Meanwhile, officials in the European Union sought a late-stage breakthrough on the EU AI Act, which would largely take a “risk-based” approach to limiting the uses of AI applications based on how dangerous lawmakers predict they could be.
Representatives of the European Parliament are scrambling to counter attempts by the largest nations in the 27-member bloc to water down the historic bill. In recent weeks, the so-called trilogues between the EU Council, the European Parliament and the European Commission have become plagued by divisions that have jeopardized an act years in the making. Officials went into the negotiations optimistic that a compromise could be reached, and negotiations were still ongoing as it approached midnight in Brussels.
If no deal is reached in marathon talks expected to drag into Thursday morning Brussels time, negotiations would probably move to a last-ditch effort in January, after which experts say it may be difficult to get any bill passed in Parliament ahead of legislative elections in June.
“If we go beyond January, I think we are lost,” said Brando Benifei, one of two lawmakers running lead on the act in the European Parliament. “It will be at least another nine months before we could have the AI Act.”
The EU’s largest nations have sought to remove a part of the bill that would impose binding regulations and transparency rules on foundation models, like the technology underlying ChatGPT, which generates answers based on data scraped from across the internet. Arguing those rules could stifle innovation and put Europe further behind the United States in the race to develop such models, those countries were instead pushing for industry self-regulation.
People familiar with the talks, who spoke on the condition of anonymity to describe delicate negotiations, said France appeared to be the strongest obstacle to a deal, based in part on its desire to protect a burgeoning company developing AI foundation models, Paris-based Mistral, as well as other French AI firms. A bid to limit AI in police work, meanwhile, comes as France is set to deploy AI-powered smart cameras for policing and security at the 2024 Summer Olympics, and as French cities have already entered legal gray areas by deploying or testing such technology.
Asked about French opposition, France’s digital minister, Jean-Noël Barrot, said that European governments broadly opposed restrictions on AI use for policing and national security, and that onerous regulations on foundation model developers could seriously hinder European innovation.
“There is a unanimous consensus within the council that the use of AI for national security purposes should not be included in the regulation,” he said.
He added, “The [AI] industry in Europe has expressed its concerns that adding too much of a burden on the shoulders of foundational model developers was equivalent to not having those models developed in Europe.”
Barrot insisted that even the kind of compromise being sought by the French would still result in the world’s strongest law governing AI. He called the bill a beginning, as opposed to an end, of European regulations on the technology.
“I dare anyone to present me with a piece of regulation that is as tough as the EU AI Act around the world,” he said.
Going into Wednesday’s negotiating session, Benifei said the push by France and other countries to allow industry to self-regulate would nix one of the most important elements of the bill, arguing that a compromise imposing real restrictions must be found.
“The most powerful models will become the basis of all AI,” he said. “If we regulate their security and their transparency on how they work and data … used to train them, then we will make it safer for all AI systems down the chain.”