On Wednesday morning, Li was seated on a small stage in a stately dining hall on Stanford’s serene Palo Alto campus, next to Condoleezza Rice, the director of Stanford University’s Hoover Institution, a conservative think tank. The women were discussing AI’s impact on democracy, the final panel in a three-day boot camp on the technology.
In front of them, a bipartisan audience of more than two dozen D.C. policy analysts, lawyers and chiefs of staff sat in their assigned seats, cutting into their individual fruit tarts.
Hosted by Stanford’s Institute for Human-Centered AI (HAI), where Li serves as co-director, the event offered a crash course on AI’s benefits and risks for information-starved staffers staring down the possibility of legislating a fast-moving technology in the middle of a gold rush.
Hundreds of Capitol Hill denizens applied for the camp’s 28 slots, a 40 percent increase from 2022. Attendees included aides for Rep. Ted Lieu (D-Calif.) and Sen. Rick Scott (R-Fla.), as well as policy analysts and lawyers for House and Senate committees on commerce, foreign affairs, strategic trade with China and more.
Stanford’s boot camp for legislators began in 2014 with a focus on cybersecurity. As the race to build generative AI sped up, the camp pivoted exclusively to AI last year.
The curriculum covered AI’s potential to reshape education and health care, included a primer on deepfakes, and featured a crisis simulation in which participants had to use AI to respond to a national security threat involving Taiwan.
“We’re not here to tell them how they should legislate,” said HAI’s director of policy, Russell Wald. “We’re simply here to just give them the information.” Faculty members disagreed with one another and directly challenged corporations, said Wald, pointing to a session on tech addiction and another on the perils of collecting the data necessary to fuel AI.
But for an academic event, the camp was also inextricably tied to industry. Li has done stints at Google Cloud and as a Twitter board member. Google’s AI ambassador, James Manyika, spoke at a fireside chat. Executives from Meta and Anthropic spoke to the audience Wednesday afternoon for the camp’s final session, discussing the role industry can play in shaping AI policy. HAI’s donors include LinkedIn founder Reid Hoffman, a Democratic megadonor whose start-up, Inflection AI, released a personalized chatbot in May.
The boot camp was primarily paid for by the Patrick J. McGovern Foundation, said Wald, who added that his division of HAI does not take corporate funding.
Reporters were allowed to attend only the closing festivities, on the condition that they neither name nor quote congressional aides, so that the aides could speak freely.
The boot camp is one of many behind-the-scenes efforts to educate Congress since ChatGPT launched in November. Chastened by years of inaction on social media, regulators are trying to get up to speed on generative AI. These all-purpose systems, trained on large amounts of internet-scraped data, can be used to spin up computer code, designer proteins, college essays or short films based on users’ commands.
Back in D.C., legislators are crafting guardrails around this technology. The White House is preparing an AI-related executive order and has introduced a voluntary pledge instructing AI companies to identify manipulated media, while Senate Majority Leader Charles E. Schumer (D-N.Y.) is leading an “all hands on deck” effort to write new rules for AI.
Even among experts, however, there is little consensus around the limitations and social impact of the latest AI models, which raise concerns ranging from the exploitation of artists to child safety and disinformation campaigns.
Tech companies, billionaire tech philanthropists and other special interest groups have seized on this uncertainty — hoping to shape federal policies and priorities by shifting the way lawmakers understand AI’s true potential.
Civil society groups, which also want to present lawmakers with their perspective, don’t have access to the same resources, said Suresh Venkatasubramanian, a former adviser to the White House Office of Science and Technology Policy and a professor at Brown University, who engages on these issues alongside the nonprofit Algorithmic Justice League.
“One thing we have learned over the years is that we honestly do not know about the harms — about impacts of technology — until we talk to the people who experience those harms,” Venkatasubramanian said. “This is what civil society tries to do, bring the harms front and center,” as well as the benefits, when appropriate, he said.
During a Q&A with Meta and Anthropic, a legislative director for a House Republican said the group had seen a presentation on how effective AI could be at pushing misinformation and disinformation. In light of that, he asked the panel, what should AI companies do before the 2024 election?
Anthropic co-founder Jack Clark said it would be helpful if AI companies received FBI briefings or other intel on election-rigging efforts so that companies know what terms to look for.
“You’re in this cat-and-mouse game with people trying to subvert your platform,” Clark said.
During the panel on AI and democracy, Li said her hope when co-founding HAI was to work closely with Stanford’s policy centers, such as the Hoover Institution, adding that she and Rice discuss the implications of AI in the hands of authoritarian regimes when they have drinks. “Wine time,” Rice said, clarifying.
By the end of their talk, Stanford’s ability to sway Washington sounded almost as powerful as that of any tech giant. After Rice commented that “a lot of the world feels like this is being done to them,” Li shared that she had visited the State Department a couple of months earlier and tried to emphasize the boon this technology could be to the health-care and agriculture sectors. It was important to communicate those benefits to the global population, Li said.
This story has been updated to reflect that the White House is preparing an AI-related executive order and has already introduced a voluntary company pledge.