
AI vs. Congress: Debates on Regulating a Technology of Great Potential and Great Concern

The CREATE AI Act, the AI Accountability Act, NAIRR (the National Artificial Intelligence Research Resource), and more...

Image from “The Congress,” a 2013 sci-fi drama motion picture

AI regulation in the U.S. is a subject of debate among lawmakers, with representatives and the public expressing concern about whether current legislators, who may not fully grasp AI technology, can craft effective rules. The U.S. House of Representatives recently advanced the AI Accountability Act, which calls for a study of AI accountability with a report due in 2025. However, lawmakers disagree: some, like Rep. Tim Burchett of Tennessee, argue against regulation, fearing it could stifle AI's growth.

Others, including Rep. Nancy Mace of South Carolina, acknowledge the need for regulation but warn against overregulation, which could hinder innovation and put the U.S. at a disadvantage against countries like China, which has already drafted its own AI regulations. The rapid pace of AI development makes effective rulemaking difficult, as lawmakers such as Sen. Richard Blumenthal and Rep. Jim Himes note, arguing that Congress lacks sufficient understanding of AI.

“Right now, we’re in the Wild West,” Connecticut Democrat Sen. Richard Blumenthal told Fox News. “AI enables, not only in effect, appropriation of creative products … but also impersonation, deepfakes, a lot of bad stuff. We need to invest in the kinds of restraints and controls if there’s a danger of AI becoming autonomous.”

“The problem with AI is that it’s advancing so fast,” Republican Rep. Nancy Mace of South Carolina said. “It’s very difficult to regulate because you don’t know what the next thing is going to be.”

MIT Technology Review article on how the U.S. Congress might regulate AI: https://www.technologyreview.com/2023/07/03/1075807/three-things-to-know-about-how-the-us-congress-might-regulate-ai/

Beyond the regulatory debate, AI's growth carries real risks, including job displacement. Studies suggest AI advancements could eliminate millions of jobs globally, with lower-wage workers and women particularly vulnerable. Still, AI is widely seen as a powerful tool with great potential benefits, prompting calls for cautious regulation that balances innovation against societal impact.

The Biden administration and Congress are currently examining how to regulate AI as technology continues to develop rapidly. With up to 30% of U.S. workforce hours expected to become automated by 2030, lawmakers are grappling with the implications and potential challenges arising from this transformation.

Rep. Jim Himes, D-Conn., said that “Congress doesn’t understand AI well enough right now to be promulgating regulation.”

While Congress grapples with the issue at its typically deliberate pace, some leading companies have been creating their own guidelines and self-imposed rules for their AI products.

Leading AI companies including Google, OpenAI, Microsoft, and Anthropic have unveiled plans to form the Frontier Model Forum, an industry-led body aiming to develop safety standards for rapidly advancing AI technology. The forum will focus on AI safety research, technical evaluations, and information sharing about AI risks among companies and governments. This move comes as Washington policymakers still debate the need for a government AI regulator.

The companies’ initiative builds on voluntary promises made to the White House to submit their AI systems to independent tests and develop tools for public alerts when AI-generated images or videos are detected. While some argue that industry-led efforts present risks due to past privacy lapses and abuses, government-led regulations are not imminent. Policymakers in Congress are in the early stages of crafting AI frameworks, and European AI legislation is still years away from coming into force.

The Frontier Model Forum, which is open to companies developing powerful AI models, aims to address the capabilities of future AI systems that exceed existing ones. That forward-looking focus aligns with OpenAI's stance that current models do not require regulation, but that more powerful future systems might. The forum will pursue joint initiatives, though some members have raised concerns about open-source releases, such as Meta's decision to make its Llama 2 model widely available for research, fearing potential catastrophic dangers from advanced AI.

Despite criticisms from consumer advocates about self-regulation, industry-led initiatives are moving faster than government efforts to regulate AI development and deployment. The Frontier Model Forum aims to guide AI safety practices and could complement existing government initiatives.

U.S. policymakers have indicated a growing interest in artificial intelligence. (Image: Reuters)

Another concern is that AI development rests mostly in the hands of a few large technology companies. In response, lawmakers from both parties in the Artificial Intelligence Caucus are proposing the “Creating Resources for Every American To Experiment with Artificial Intelligence Act” (CREATE AI Act). The bill aims to establish the National Artificial Intelligence Research Resource (NAIRR), a public research center that would grant access to AI tools for individuals and organizations lacking the vast research funding available to billion-dollar AI developers.

Rep. Anna Eshoo and Sen. Martin Heinrich are leading the bipartisan effort. They emphasize AI's potential for the country but recognize that access to high-powered computational tools is currently limited to a handful of companies. Establishing NAIRR would give researchers from universities, nonprofits, and government the resources needed to develop cutting-edge AI systems that are safe, ethical, transparent, and inclusive.

The proposed bill, backed by Republicans in the AI Caucus as well, seeks to make datasets used to train AI and other essential tools accessible to students, entrepreneurs, and others. It envisions NAIRR gaining access to datasets and tools developed by major AI researchers without prescribing specific acquisition methods.

Concerns in Congress over the high costs of AI development being confined to large corporations motivated the creation of NAIRR. While some in Congress have advocated for greater AI regulation, the bill does not set up NAIRR as a regulatory body. Instead, it primarily aims to support AI researchers as a technical resource. However, there is acknowledgment that NAIRR might eventually assist policymakers in developing trustworthy AI best practices.

The cost estimate of $440 million per year, suggested by a federal task force for NAIRR’s establishment, is not explicitly authorized in the CREATE AI Act. Actual funding levels will likely be determined through the regular appropriations process. The bill is currently in progress, and efforts are being made to advance it with the support of the House Science, Space, and Technology Committee, though no timeline for consideration has been set before the August break. Senate Majority Leader Chuck Schumer has also hinted at an AI regulatory bill forthcoming in the future.

