As increasingly sophisticated artificial intelligence systems with the potential to reshape society come online, many experts, lawmakers and even executives of top A.I. companies want the U.S. government to regulate the technology, and fast.
“We should move quickly,” Brad Smith, the president of Microsoft, which launched an A.I.-powered version of its search engine this year, said in May. “There’s no time for waste or delay,” Chuck Schumer, the Senate majority leader, has said. “Let’s get ahead of this,” said Senator Mike Rounds, a South Dakota Republican.
Yet history suggests that comprehensive federal regulation of advanced A.I. systems probably won’t happen soon. Congress and federal agencies have often taken decades to enact rules governing revolutionary technologies, from electricity to cars. “The general pattern is it takes a while,” said Matthew Mittelsteadt, a technologist who studies A.I. at George Mason University’s Mercatus Center.
In the 1800s, it took Congress more than half a century after the introduction of the first public, steam-powered train to give the government the power to set price rules for railroads, the first U.S. industry subject to federal regulation. In the 20th century, the bureaucracy slowly expanded to regulate radio, television and other technologies. And in the 21st century, lawmakers have struggled to safeguard digital data privacy.
It’s possible that policymakers will defy history. Members of Congress have worked furiously in recent months to understand and imagine ways to regulate A.I., holding hearings and meeting privately with industry leaders and experts. Last month, President Biden announced voluntary safeguards agreed to by seven leading A.I. companies.
But A.I. also presents challenges that could make it even harder — and slower — to regulate than past technologies.
The hurdles
To regulate a new technology, Washington first has to try to understand it. “We need to get up to speed very quickly,” Senator Martin Heinrich, a New Mexico Democrat who is part of a bipartisan working group on A.I., said in a statement.
That typically happens faster when new technologies resemble older ones. Congress created the Federal Communications Commission in 1934, when television was still a nascent industry, and the F.C.C. regulated it based on earlier rules for radio and telephones.
But A.I., some advocates for regulation argue, combines the potential for privacy invasion, misinformation, hiring discrimination, labor disruptions, copyright infringement, electoral manipulation and weaponization by unfriendly governments in ways that have little precedent. That’s on top of some A.I. experts’ fears that a superintelligent machine might one day end humanity.
While many want fast action, it’s hard to regulate technology that’s evolving as quickly as A.I. “I have no idea where we’ll be in two years,” said Dewey Murdick, who leads Georgetown University’s Center for Security and Emerging Technology.
Regulation also means minimizing potential risks while harnessing potential benefits, which for A.I. can range from drafting emails to advancing medicine. That’s a tricky balance to strike with a new technology. “Often the benefits are just unanticipated,” said Susan Dudley, who directs George Washington University’s Regulatory Studies Center. “And, of course, risks also can be unanticipated.”
Overregulation can quash innovation, Professor Dudley added, driving industries overseas. It can also become a means for larger companies with the resources to lobby Congress to squeeze out less established competitors.
Historically, regulation often happens gradually as a technology improves or an industry grows, as with cars and television. Sometimes it happens only after tragedy. When Congress passed, in 1906, the law that led to the creation of the Food and Drug Administration, it didn’t require safety studies before companies marketed new drugs. In 1937, an untested and poisonous liquid version of sulfanilamide, meant to treat bacterial infections, killed more than 100 people across 15 states. Congress strengthened the F.D.A.’s regulatory powers the following year.
“Generally speaking, Congress is a more reactive institution,” said Jonathan Lewallen, a University of Tampa political scientist. The counterexamples tend to involve technologies that the government effectively built itself, like nuclear power development, which Congress regulated in 1946, one year after the first atomic bombs were detonated.
“Before we seek to regulate, we have to understand why we are regulating,” said Representative Jay Obernolte, a California Republican who has a master’s degree in A.I. “Only when you understand that purpose can you craft a regulatory framework that achieves that purpose.”
Brain drain
Even so, lawmakers say they’re making strides. “I actually have been very impressed with my colleagues’ efforts to educate themselves,” Mr. Obernolte said. “Things are moving, by congressional standards, extremely quickly.”
Regulation advocates broadly agree. “Congress is taking the issue really seriously,” said Camille Carlton of the Center for Humane Technology, a nonprofit that regularly meets with lawmakers.
But in recent decades, Congress has changed in ways that could impede translating studiousness into legislation. For much of the 20th century, the leadership and staff of congressional committees dedicated to specific policy areas — from agriculture to veterans’ affairs — served as a kind of institutional brain trust, shepherding legislation and often becoming policy experts in their own right. That started to change in 1995, when Republicans led by Newt Gingrich took control of the House and slashed government budgets. Committee staffs stagnated and some of the committees’ power to shape policy devolved to party leaders.
“Congress doesn’t have the kind of analytic tools that it used to,” said Daniel Carpenter, a Harvard professor who studies regulation.
For now, A.I. policy remains notably bipartisan. “These regulatory issues we’re grappling with are not partisan issues, by and large,” said Mr. Obernolte, who helped draft a bipartisan bill that would give researchers tools to experiment with A.I. technologies.
But partisan infighting has already helped snarl regulation of social media, an effort that also began with bipartisan support. And even if lawmakers agreed on a comprehensive A.I. bill tomorrow, next year’s elections and competing legislative priorities — like funding the government and, perhaps, impeaching Mr. Biden — could consume their time and attention.
A Department of Information?
If federal regulation of A.I. did emerge, what might it look like?
Some experts say a range of federal agencies already have regulatory powers that cover aspects of A.I. The Federal Trade Commission could use its existing antitrust powers to prevent larger A.I. companies from dominating smaller ones. The F.D.A. has already authorized hundreds of A.I.-enabled medical devices. And piecemeal, A.I.-specific regulations could trickle out from such agencies within a year or two, experts said.
Still, drawing up rules agency by agency has downsides. Mr. Mittelsteadt called it “the too-many-cooks-in-the-kitchen problem, where every regulator is trying to regulate the same thing.” Similarly, state and local governments sometimes regulate technologies before the federal government, such as with cars and digital privacy. The result can be contradictions for companies and headaches for courts.
But some aspects of A.I. may not fall under any existing federal agency’s jurisdiction — so some advocates want Congress to create a new one. One possibility is an F.D.A.-like agency: Outside experts would test A.I. models under development, and companies would need federal approval before releasing them. Call it a “Department of Information,” Mr. Murdick said.
But creating a new agency would take time — perhaps a decade or more, experts guessed. And there’s no guarantee it would work. Miserly funding could render it toothless. A.I. companies could claim its powers were unconstitutionally overbroad, or consumer advocates could deem them insufficient. The result could be a prolonged court fight or even a push to deregulate the industry.
Rather than a one-agency-fits-all approach, Mr. Obernolte envisions rules that accrete as Congress enacts successive laws in coming years. “It would be naïve to believe that Congress is going to be able to pass one bill — the A.I. Act, or whatever you want to call it — and have the problem be completely solved,” he said.
Mr. Heinrich said in his statement, “This will need to be a continuous process as these technologies evolve.” Last month, the House and Senate separately passed several provisions about how the Defense Department should approach A.I. technology. But it is not yet clear which provisions will become law, and none would regulate the industry itself.
Some experts aren’t opposed to regulating A.I. one bill at a time. But they’re anxious about any delays in passing them. “There is, I think, a greater hurdle the longer that we wait,” Ms. Carlton said. “We’re concerned that the momentum might fizzle.”