A focus on risk and harm mitigation tools mirrors recent efforts to establish guardrails for responsible AI development
On May 23, the Biden Administration announced several new initiatives to support the development of a National Artificial Intelligence (AI) Strategy. The initiatives focus on: (1) outlining a plan to increase federal investment in AI research and development; (2) gathering information about mitigating risks and responding to the latest challenges posed by AI; and (3) assessing the risks and opportunities of using AI in education. This flows from the administration's recognition that "American leadership in science and engineering research and innovation is rooted in the U.S. government-university-industry R&D ecosystem."
These initiatives join a growing contingent of similar federal efforts to gather information about AI and formulate regulations. For example, on April 11, the Department of Commerce, through the National Telecommunications and Information Administration (NTIA), issued a request for comments (RFC) on AI system accountability measures and policies; on April 25, four federal agencies issued a joint statement highlighting their commitment to use their existing regulatory powers to oversee the use of AI; and on May 3, the White House released a set of initiatives after meeting with CEOs of four companies actively developing AI to "emphasize the importance of driving responsible, trustworthy, and ethical innovation with safeguards that mitigate risks and potential harms to individuals and our society."
This latest announcement leverages two documents issued by the White House Office of Science and Technology Policy (OSTP), the office within the Administration leading the development of AI policy and strategies.
First, the OSTP released a National AI R&D Strategic Plan that is described as a roadmap outlining key priorities and goals for federal investments in AI R&D. The Plan characterizes this investment philosophy as one where the federal government will invest in R&D that promotes responsible American innovation and "serves the public good, protects people's rights and safety, and advances democratic values." For example, as part of its strategy of investing in long-term, foundational technology, the OSTP notes that current foundational models, including large language models like GPT-4, are "prone to 'hallucinate' and recapitulate biases derived from unfiltered data from the internet used to train them." The Plan suggests that "further research is needed to enhance the validity and reliability as well as security and resilience of these large language models."
Second, OSTP issued a Request for Information (RFI) to seek input on national priorities for mitigating AI risks, protecting individuals' rights and safety, and harnessing AI to improve lives. This RFI is similar to the NTIA Request for Comment in its focus on exploring appropriate safeguards for potential AI applications and use cases. It contains 29 questions organized into five areas: (i) protecting rights, safety, and national security; (ii) advancing equity and strengthening civil rights; (iii) bolstering democracy and civic participation; (iv) promoting economic growth and good jobs; and (v) innovating in public services. Comments on this RFI are due July 7, 2023.
The U.S. Department of Education's Office of Educational Technology (OET) also released a new report, Artificial Intelligence (AI) and the Future of Teaching and Learning: Insights and Recommendations, summarizing the risks and opportunities related to the use of AI edtech platform technologies in teaching, learning, research, and assessment. The report is intended to "guide and empower local and individual decisions about which technologies to adopt and use in schools and classrooms." In a notable highlighted and indented section of the report, OET addresses tech platform accountability, suggesting "limits on targeted advertising" and putting the onus on tech platforms to minimize how much information they collect, "rather than burdening Americans with reading fine print."
AI governance has become an issue of national and international concern, and the activities within the administration reflect a growing sense of urgency to adopt some guardrails for the responsible development and deployment of AI. In addition to the administration's efforts, legislators, notably Senate Majority Leader Schumer, have taken notice and appear to be actively investigating approaches to regulating AI.
It is important to remember that the OSTP and NTIA do not have rulemaking authority, so these recent information-gathering efforts are not necessarily an indicator of near-term AI regulation. However, any risk mitigation standards that OSTP may endorse could be leveraged by other regulators or legislators in future initiatives.