WASHINGTON, DC (October 18, 2023) – Today, the House Committee on Science, Space, and Technology's Subcommittees on Investigations & Oversight and Research & Technology are holding a joint hearing titled, "Balancing Knowledge and Governance: Foundations for Effective Risk Management of Artificial Intelligence."
Subcommittee on Investigations and Oversight Ranking Member Valerie Foushee's (D-NC) opening statement as prepared for the record is below.
Chairman Obernolte and Chairman Collins, thank you for holding today’s hearing on this important and timely subject. Artificial Intelligence is an emerging technology that holds the potential to transform many aspects of human society. And I am incredibly impressed with the quality of scientific research currently being conducted in the realm of AI.
In my district, North Carolina’s Fourth, Duke University is leading the Athena Institute, an NSF-funded multidisciplinary research institute focused on AI for edge computing and next-generation communications networks. And UNC Chapel Hill is participating in the Artificial Intelligence Institute for Engaged Learning, another NSF-funded research institute developing AI tools to enhance learning opportunities in education. I have met with these researchers. Their work is exciting, and their brilliance, ingenuity, and commitment to developing AI applications that serve the public good are second to none.
At the same time, there is no escaping the reality that certain AI systems and their applications could pose grave risks in the absence of clear, rigorous oversight rooted in practical and ethical considerations. AI risk management is a complex subject that defies easy answers. But there are some guiding principles that should be at the front of our minds as we think about these issues.
First, there is a pressing need to develop effective scientific tools and test methods that can support researchers and policymakers in evaluating the benefits and risks of different AI systems. This is an area where federal agencies such as NIST can play a leading role. Second, it is crucial for the federal government to prioritize research into the benefits and risks of different AI systems and applications, and to fund that research accordingly. AI risk management must be a federal research priority because it is essential for understanding AI’s broader impact on society. Finally, as we race to keep up with the breakneck pace of AI research and the accompanying risks, our discussion must be centered on the real-world risks and safety concerns that arise from AI systems.
The debate over AI risk management has an unfortunate tendency to become fixated on dramatic existential risks, like the end of human civilization, that remain speculative. Those risks are legitimate subjects for research inquiry. But they should not be elevated over concrete, tangible concerns in areas such as equity, bias, privacy, cybersecurity, and disinformation. These are the existing, fact-based risks to real people in the real world, not distractions pulled from the world of science fiction. And they deserve to be prioritized in the AI risk management framework to come.
The Science Committee is the right place for this kind of discussion. We pride ourselves on our responsible and bipartisan approach to complex questions like this one, with deliberations grounded in scientific fact and informed by a broad range of perspectives. I’m confident that today’s hearing will continue in that tradition.
As a member of the New Democrat Coalition’s Artificial Intelligence Working Group, I believe this is exactly the kind of issue that deserves more of our attention in Congress. Chairman Obernolte, I have great respect for your leadership on AI issues, and I’m eager to use today’s hearing to explore areas where we may be able to work together on AI-related oversight in the future.
I want to thank our expert witnesses for appearing before the committee today and for their thoughtful testimony on this topic. I look forward to a vigorous and engaging discussion.
Thank you, and I yield back.