WASHINGTON, D.C. (April 24, 2026) — Today, Congresswoman Valerie Foushee (NC-04), Congressman Don Beyer (VA-08), and Congressman James Moylan (GU-AL) introduced the bipartisan Protecting Consumers From Deceptive AI Act to establish technical standards and guidelines for generative AI content and ensure that the use of this technology is disclosed when it is used to create or modify audio and visual content.
“Deepfakes and AI-generated audio and visual content pose major risks to consumers, our elections, and public trust. Clear labeling and transparency of this content must be required so Americans can distinguish what images, audio, and videos are artificially generated,” said Congresswoman Valerie Foushee. “As the spread of deceptive AI content fuels misinformation and raises serious civil rights concerns, particularly for those who are disproportionately targeted online, this bill is an important step towards protecting creators and their work and ensuring generative AI systems are not used to hurt our communities. I am proud to introduce this bipartisan legislation alongside Congressman Beyer and Congressman Moylan to help defend the American people from the harms of deceptive AI.”
“The rapid pace and scale at which AI-generated content is flooding social media, news feeds, and political advertising demands urgent transparency to protect American consumers. Our legislation would require clear, consistent labeling of this AI-generated content to ensure informed decision-making and preserve public trust,” said Congressman Don Beyer.
“Artificial intelligence is advancing rapidly, and while innovation brings tremendous opportunity, it also comes with real responsibility. Americans deserve transparency when AI-generated content is being used to influence, persuade, or deceive. This legislation is about protecting consumers, strengthening trust in digital information, and ensuring people can better distinguish between authentic content and AI-generated manipulation,” said Congressman James Moylan.
Specifically, the Protecting Consumers From Deceptive AI Act would:
- Facilitate the development of guidelines by the National Institute of Standards and Technology (NIST) to ensure proper watermarking, digital fingerprinting, and metadata are required for audio and visual content generated by artificial intelligence.
- Direct NIST to assist content providers in labeling audio and visual content modified by generative AI, including standards for identifying this content on social media platforms.
- Develop technical standards and guidelines to identify and label text-based content created or substantially modified by AI.
The Protecting Consumers From Deceptive AI Act is endorsed by Adobe, the American Society for Collective Rights Licensing, the Authors Guild, Encode AI, IEEE-USA, and the Society of Composers & Lyricists.
“Adobe commends Congresswoman Foushee, Congressman Beyer, and Congressman Moylan for their leadership in advancing legislation to strengthen transparency and trust online,” said Jace Johnson, Vice President of Global Public Policy at Adobe. “As longtime advocates for content provenance, Adobe supports this bill’s balanced approach that recognizes the shared responsibility of generative AI developers and digital platforms to provide people with meaningful context about the content they encounter. This legislation represents an important step forward in getting AI labeling right, as we see this issue emerging globally. It is an opportunity for the United States to help set the standard and ensure that digital provenance technology is implemented thoughtfully and effectively. We look forward to continuing to work with Congress to advance smart, responsible AI policy.”
“The American Society for Collective Rights Licensing (ASCRL), the largest not-for-profit photography and illustrator association in the United States, is proud to support the Protecting Consumers from Deceptive AI Act again in the 119th Congress. ASCRL believes the bill would give platforms an incentive to license visual author materials, either to avoid or to help them comply with the identification and provenance requirements. It is a helpful step towards the broader task of ensuring that Congress pass legislation providing for a collective licensing system to secure compensation for visual authors for the ingestion of their work for machine learning by generative AI platforms. ASCRL looks forward to working with Congresswoman Foushee, and other members of Congress, to advance this legislation,” said James Silverberg, CEO of the American Society for Collective Rights Licensing.
“The Authors Guild strongly supports Representative Foushee’s Protecting Consumers from Deceptive AI Act. While we’ve developed a Human Authored certification mark, clear labeling of AI-generated content is the more effective solution. This bill requires technologies to identify AI-generated content and trace its origin, starting with audio and video, and lays the groundwork for standards to label AI-generated text. We thank Rep. Foushee and urge swift Committee action,” said Mary Rasenberger, CEO of The Authors Guild.
“We appreciate Rep. Foushee’s thoughtful bill to address the concerning and pervasive problem of AI deepfakes. The Protecting Consumers from Deceptive AI Act would require clear labeling of AI-generated content so people can tell what’s real and what isn’t. We welcome NIST’s collaboration with industry and experts to develop technical standards that identify AI-generated content, and look forward to continued collaboration with all stakeholders to ensure we get this right,” said Adam Billen, Co-Executive Director at Encode AI.
“As a global standards developer, IEEE offers standards that complement governance tools in critical areas such as AI and networking. These voluntary standards help ensure system interoperability and can facilitate privacy protections and transparency for autonomous systems. IEEE-USA welcomes this legislation which thoughtfully addresses the need to protect consumers from the possible negative aspects of generative AI, including how a combination of technical standards and requirements can inform how companies may disclose that their systems are using generative AI,” said Nils Smith, Vice President of IEEE-USA.
“The Society of Composers & Lyricists (SCL) thanks Congresswoman Foushee for reintroducing the Protecting Consumers from Deceptive AI Act. Composers and lyricists are already seeing their styles and compositions replicated by generative AI systems without authorization. A federal framework that requires disclosure of AI-generated audio content, backed by NIST technical standards and FTC enforcement, is an essential first step toward protecting the human creators whose work these systems are trained on and imitating. SCL is proud to endorse this bill again in the 119th Congress,” said Ashley Irwin, President of the Society of Composers and Lyricists (SCL).
“Generative AI can be used to create highly realistic content that is difficult or impossible to distinguish from authentic material, including deepfakes. We are not interested in being deceived, and we are not interested in our children being deceived either. This is a matter of personal safety, national security, and maintaining a properly informed electorate. That is why labeling AI-generated and manipulated content is so important. I support the Protecting Consumers from Deceptive AI Act, introduced by my Congressional representative, Valerie Foushee,” said Dr. Cynthia Rudin, Duke University’s Gilbert, Louis, and Edward Lehrman Distinguished Professor of Computer Science.
The full bill text can be found here.