
The age of AI is here — how should it be regulated?

Big Tech leaders have increasingly aligned themselves with the government on adopting a centralized framework to regulate artificial intelligence, even as experts remain divided on whether consolidating federal power would minimize or accelerate the technology’s risks.

Some industry specialists believe expanding federal power could lead to dystopian outcomes. Others tout common law as the best way to incentivize AI innovation while penalizing adverse consequences. Still others told the Washington Examiner that more government oversight is the only way to keep powerful corporations from exploiting the public.

Meta, Microsoft, OpenAI, and Google are among the companies that supported President Donald Trump’s efforts this year to pass a 10-year moratorium barring individual states from regulating AI. The legislative provision, which sought a single, streamlined federal standard overriding varying regulations across the 50 states, failed to cross the finish line despite efforts to brand it as a safeguard for innovation.

Divides over the issue were further exposed after Trump responded to the congressional gridlock by signing an executive order seeking to discourage states from regulating AI, prompting pushback from deep-blue California to bright-red Florida.

While political figures such as Govs. Gavin Newsom (D-CA) and Ron DeSantis (R-FL) expressed hesitation about Washington playing the dominant role in AI regulation, Kevin Frazier, an AI innovation and law fellow at the University of Texas School of Law, argued that “50 different regulatory battles at the state levels … typically don’t lend themselves to civil discourse or productive discourse.”

“If we’re going to realize those beneficial outcomes of AI, a harmonized approach at the federal level is the sort of clear regulatory approach that will ensure that labs don’t have to deal with 50 different versions of complex and perhaps vague, and even contradictory state laws,” he said. 

Daniel Cochrane, a senior research associate at the Heritage Foundation’s Center for Technology and the Human Person, agreed that a more centralized model for AI regulation would likely benefit society.

He rebuffed the argument that doing so erodes the free market system undergirding the United States, contending that the tech industry has already erased true consumer freedom and manipulated market forces. Powerful corporations have been allowed to create addictive and otherwise harmful products while fostering a culture in which the “choice” to reject those products is no longer feasible, because doing so carries “social penalties” in a society oriented around technology, Cochrane insisted.

“What we have right now is a market that is fundamentally misaligned with the values and the interests of the end users,” he said. “That’s not a market that benefits our society. … So it’s not about so much intervention. It’s about making sure the market is actually working in the way that it should be.” 

Cochrane rejected the suggestion that his position amounts to endorsing government intervention whenever it is “for the social good.”

“I’m saying we need to respect a free market, but we need market conditions that reward companies for benefiting people, not harming them,” he said. 

Other industry experts, including Peter Thiel, have pushed back. While AI could pose dangers, the true threat could come from a government granted sweeping authority to regulate it, the Silicon Valley entrepreneur has argued.

“There’s something about AI that’s very centralized,” Thiel mused during an interview with podcaster Joe Rogan last year, noting the “natural scale” of the industry. “I had this one-liner years ago where it was ‘if we say that crypto is libertarian, can we also say that AI is communist?’”

“A government that’s powerful enough to stop something like AI … has to have some sort of global totalitarian character,” he added in another speech at Cambridge University that appeared to put him at odds with Washington’s regulatory-friendly approach, despite his close ties to the Trump camp. 

Mark Jamison, a nonresident senior fellow at the American Enterprise Institute, took the opposite view, saying he believes AI “really diminishes somebody’s ability to be an authoritarian.” Focusing the conversation about AI safety on regulation misses the point, he suggested in early December, because “AI is not the problem, misusing AI is.”

“What AI really does? It puts a lot of power, if you will, abilities, in a lot of people’s hands. So it doesn’t get centralized,” he said during an interview with the Washington Examiner. “If you go back a couple of decades ago, if you wanted to send out a lot of things to a lot of people, you pretty much had to be a broadcaster or a newspaper, but then the digital technology said, no, anybody can do this, and AI helps a lot of people do even more things … so many people can now do so many things that are very hard to track.” 

Jamison wants a federal framework for AI, arguing that it would let “the small guys” compete on a more level playing field because they would not have to grapple with varying state regulations. But he believes the national policy should be one “that says we’re not going to regulate AI.” Such a system would allow employees, investors, customers, business designers, and entrepreneurs to “make the decisions about what’s done,” he said.

“Me, sitting in Florida, I can use AI and do something that is valuable and useful for people in Missouri and Kansas and South Dakota, in the Caribbean, and if I have to adapt what I do to every jurisdiction, then I have to be pretty big and make that work,” Jamison said. “But if we have state-by-state regulation or country-by-country regulation, even in some instances, you push some of the small players out.” 

A regulatory framework for AI that responds to problems as they arise, rather than imposing rules against speculative harms, will emerge organically if the private sector is allowed to problem-solve and the courts to set policy, he argued.

“Back when we first started getting automobiles, one of the consequences of this fast, cheap transportation was an increase in bank robberies, but we didn’t regulate cars to try and deal with that issue. Instead, we changed our policing methods. We let the courts deal with the change in liability people would have, or what might happen during a bank robbery, and so we evolved with it that way,” Jamison reflected. “I think about that with artificial intelligence. If we try to regulate it, we can stop a lot of good things from happening, and we don’t want to do that. We instead want to adapt to dealing with the bad things when they come about.” 

The proposal embodies the “common law” approach endorsed by Dean Woodley Ball, a senior fellow at the Foundation for American Innovation.


Common law is typically triggered by lawsuits and develops through court decisions, which create precedents relied on by future judges. Over time, a body of law emerges. Advocates such as Jamison and Ball argue that this system allows the private sector to innovate with fewer constraints while giving injured parties a forum to address real harms in court.

“American history shows that technological progress and common law liability have often worked in tandem,” Ball wrote in November. “Administrative regulation too often functions like a one-way ratchet; common law is uniquely suited to adjustments up or down. … It would probably be unwise for us to codify too many of our concerns into statute before we really understand what kind of legal response AI merits.”

