Senate Leadership Releases Sweeping AI Policy Agenda Calling for $32 Billion in R&D Funding — AI: The Washington Report
- The Bipartisan Senate AI Working Group, headed by Majority Leader Chuck Schumer (D-NY), Senator Mike Rounds (R-SD), Senator Martin Heinrich (D-NM), and Senator Todd Young (R-IN), released a roadmap for congressional AI policy.
- The roadmap calls on Congress to allocate $32 billion a year for non-defense-related AI R&D, invest in workforce modernization, protect individuals against discriminatory uses of AI, pass comprehensive data privacy legislation, establish AI transparency requirements, further protect against the acquisition of sensitive technologies by foreign adversaries, and more.
- While it remains unlikely that comprehensive AI legislation will be forthcoming during this Congress, the release of the roadmap indicates that AI is still a top priority for congressional leadership.
On May 15, 2024, the Bipartisan Senate AI Working Group released “Driving U.S. Innovation in Artificial Intelligence: A Roadmap for Artificial Intelligence Policy in the United States Senate.”
The report seeks to distill the information gathered from last year’s AI Insight Forums into a “policy roadmap” that “identifies areas of consensus” the senators believe “merit bipartisan consideration in the Senate in the 118th Congress and beyond.”
The report is divided into eight sections, each of which corresponds to an AI Insight Forum. Each section contains recommendations for congressional action on AI.
Supporting US Innovation in AI
Following a recommendation made by the now-disbanded National Security Commission on Artificial Intelligence, the report calls for the allocation of “at least $32 billion per year for (non-defense) AI innovation.” To this end, the report calls on the Senate Appropriations Committee to “develop emergency appropriations language to fill the gap” between current spending and the $32 billion target with a focus on a range of non-defense priorities, including:
- Funding a cross-governmental AI R&D effort involving all “relevant agencies and departments.”
- Funding outstanding CHIPS and Science Act accounts not yet fully funded, with a focus on those relating to AI.
- Funding efforts by the relevant agencies to spur the design and manufacture of next-generation chips known as “high-end AI chips.”
- Authorizing the National Artificial Intelligence Research Resource (NAIRR) by passing the CREATE AI Act.
- Funding a series of “AI Grand Challenge” programs “with a focus on technical innovation challenges in applications of AI.”
- Funding further AI work at the National Institute of Standards and Technology (NIST).
- Providing funding for the modernization of federal government IT infrastructure.
The report also outlines a range of defense-related AI priorities, including:
- Mitigation of chemical, biological, radiological, and nuclear AI-enhanced threats.
- Research in and safeguards aimed at reducing the risk of AI-augmented chemical and biological synthesis.
- Increased funding for the Defense Advanced Research Projects Agency’s AI work.
- Developing the Department of Defense’s (DOD) “in-house supercomputing and AI capacity.”
AI and the Workforce
A 2023 report on the impacts of generative AI predicts that by 2030, “activities that account for up to 30 percent of hours currently worked across the US economy could be automated,” a dynamic that will likely lead to profound disruptions to the labor market. The working group calls on Congress to take proactive measures to ensure that as AI advances, “American workers are not left behind,” including “legislation related to training, retraining, and upskilling the private sector workforce to successfully participate in an AI-enabled economy.”
Additionally, the working group encourages AI developers to ensure that as AI systems are developed and deployed, they consult a diverse array of stakeholders, including “civil society, unions, and other workforce perspectives.” Finally, the report calls on relevant congressional committees to “consider legislation to improve the US immigration system for high-skilled STEM workers in support of national security and to foster advances in AI across the whole of society.”
High-Impact Uses of AI
Since Senator Schumer announced the SAFE innovation framework in June 2023, the Majority Leader’s approach to AI policy has featured a dual emphasis on encouraging R&D while also putting in place safeguards to prevent and offset the harm that AI may inflict on individuals in contravention of consumer protection and civil rights law.
To this end, the report calls on law enforcement agencies to ensure that existing laws “consistently and effectively apply to AI systems and their developers, deployers, and users.” To ensure that law enforcement agencies have the authority needed to regulate the conduct of actors in the AI market, the report encourages “relevant committees to consider identifying any gaps in the application of existing law to AI systems and, as needed, develop legislative language to address such gaps.”
Because of the potential for AI systems to perpetuate bias, the report recommends that congressional committees evaluating “the impact of AI or considering legislation in the AI space” should “explore how AI may affect some parts of our population differently, both positively and negatively.”
Specific types of legislation that the report encourages Congress to consider include policies that would:
- Address the proliferation of AI-generated child sexual abuse material.
- Develop mechanisms to deter the use of AI to perpetuate fraud and deception.
- Create a federal framework for the testing and deployment of autonomous vehicles.
- Ban social scoring, which is the electronically mediated tracking and scoring of individuals’ behavior.
The working group also encourages Congress to consider legislation and support policies that would guide the integration of AI into the provision of health care services, such as:
- Legislation that would “provide transparency for providers and the public about the use of AI in medical products and clinical support services, including the data used to train the AI models.”
- Legislation that would strengthen patient data protections.
- Support for AI research at the National Institutes of Health.
Elections and Democracy
As we have discussed in previous newsletters, generative AI technologies have made it significantly easier to create deepfakes, or doctored images, videos, or recordings that make it appear as though an individual is saying or doing something that they did not actually say or do.
In the months after generative AI tools became commercially available at a significant scale, experts worried that AI-generated deepfakes could become a vector of election-related misinformation. These fears turned out to be well-founded, as campaigns have already shared deepfakes through official communication channels.
To address this issue, the report “encourages the relevant committees and AI developers and deployers to advance effective watermarking and digital content provenance as it relates to AI-generated or AI-augmented election content.”
Additionally, the report calls on AI developers to “implement robust protections in advance of the upcoming election to mitigate AI-generated content that is objectively false while still protecting First Amendment rights.”
Privacy and Liability
The development of complex AI systems often involves the ingestion of large quantities of data in a process called “training.” The data-intensive nature of the AI training process raises concerns regarding consumer data privacy.
As it stands, the United States does not have a comprehensive data privacy law. Rather, the nation’s data privacy enforcement is patchwork and provisional, with states enforcing an array of laws, and the Federal Trade Commission pursuing a subset of data privacy harms through its authority to punish unfair or deceptive acts and practices.
Against this backdrop, the report “supports a strong comprehensive federal data privacy law” that would “address issues related to data minimization, data security, consumer data rights, consent and disclosure, and data brokers.”
In addition to comprehensive data privacy legislation, the report encourages relevant congressional committees to “consider whether there is a need for additional standards, or clarity around existing standards, to hold AI developers and deployers accountable if their products or actions cause harm to consumers, or to hold end users accountable if their actions cause harm, as well as how to enforce any such liability standards.”
Finally, the report recommends that Congress consider “providing appropriate incentives for research and development of privacy-enhancing technologies.”
Transparency, Explainability, Intellectual Property, and Copyright
Along with engendering data privacy concerns, the training process for AI models also raises questions surrounding copyright. Certain creators have expressed concern that their copyrighted works have been used to train AI models, enriching those with access to such tools at the creators’ expense. A separate but related issue is the question of the copyrightability of works created in part or wholly by AI.
To begin to address some of the issues regarding copyright caused by AI, the report encourages Congress to consider the following measures:
- Legislation “that protects against the unauthorized use of one’s name, image, likeness, and voice, consistent with First Amendment principles, as it relates to AI.” The report adds that this legislation should also “consider the impacts of novel synthetic content on professional content creators of digital media.”
- A review of “the results of existing and forthcoming reports from the U.S. Copyright Office and the U.S. Patent and Trademark Office on how AI impacts copyright and intellectual property law” and take action on the basis of these reports.
Protections for creators should, according to the report, be paired with safeguards for those consuming AI-generated content or subject to decisions made with the assistance of AI. The measures suggested by the report involving transparency and explainability include:
- Legislation “to establish a coherent approach to public-facing transparency requirements for AI systems, while allowing use case specific requirements where necessary and beneficial.”
- A review of the degree to which “federal agencies are required to provide transparency to their employees about the development and deployment of new technology like AI in the workplace.”
- Development of “best practices for the level of automation that is appropriate for a given type of task, considering the need to have a human in the loop at certain stages for some high impact tasks.”
Safeguarding Against AI Risks
Harms to civil liberties are not the only risks posed by AI systems. As AI technologies are integrated into an increasingly wide array of domains, the risks related to erroneous outputs, cybersecurity, and critical infrastructure dependence only become more significant.
To address these concerns, the report calls on companies “to perform detailed testing and evaluation to understand the landscape of potential harms and not to release AI systems that cannot meet industry standards.” In support of these corporate efforts, the report encourages Congress to:
- Develop legislation “aimed at advancing R&D efforts that address the risks posed by various AI system capabilities.”
- Support the widespread adoption of risk management strategies, including “red-teaming, sandboxes and testbeds, commercial AI auditing standards, bug bounty programs, as well as physical and cyber security standards.”
- Investigate how to integrate AI risk mitigation standards into the federal procurement system.
National Security
Given the potential for AI systems to add trillions of dollars in value to the global economy, spur significant advances in weapons systems, and increase espionage capabilities, AI is a matter of acute national security concern. The report focuses on two main aspects of AI and national security: the development of US cyber capabilities and the surveillance of adversaries’ adoption of AI systems.
To develop US cyber capabilities in the field of AI, the report encourages:
- Congress to develop legislation that would “expand the AI talent pathway into the military.”
- The DOD to further establish “career pathways and training programs for digital engineering, specifically in AI.”
- Defense agencies to work with AI developers to “prevent large language models, and other frontier AI models, from inadvertently leaking or reconstructing sensitive or classified information.”
- Congressional committees to collaborate with private sector partners to “address, and mitigate where possible, the rising energy demand of AI systems.”
In order to deepen collaboration with international partners and remain competitive against adversaries in the domain of AI, the report calls on:
- Congress to ensure that federal agencies have sufficient authority to “advance bilateral and multilateral agreements on AI” with international partners.
- Congress to pass legislation that would create AI-centered research partnerships with “like-minded international allies and partners.”
- The executive branch and Congress to institute policies that “support the free flow of information across borders, protect against the forced transfer of American technology, and promote open markets for digital goods exported by American creators and businesses.”
- Congress to create “a framework for determining when, or if, export controls should be placed on powerful AI systems.”
Conclusion
Since the promulgation of President Biden’s October 2023 executive order on AI and the conclusion of the Senate’s AI Insight Forums in December, activity in federal AI regulation has largely been confined to the executive branch. While it is still unlikely that comprehensive AI legislation will be forthcoming in this Congress, the release of the roadmap demonstrates that AI continues to be a priority for congressional leaders. That said, when compared to the new EU AI law — and the AI executive order — this “roadmap is not a step toward a warp speed solution, and it may reinforce the perception that the speed of AI is moving too fast for the legislative and political process [to] keep up.”
Whether comprehensive AI legislation will be implemented in a matter of years, if at all, is anybody’s guess. What can be gleaned from the roadmap is a telling glimpse into the current perspective of congressional leaders on AI. Political, economic, or technological developments may shift these priorities, so it is important for interested stakeholders to continue to closely follow AI-related developments coming out of the federal government.
We will continue to monitor, analyze, and issue reports on these developments. Please feel free to contact us if you have questions as to current practices or how to proceed.