On October 30, 2023, President Biden issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the “Executive Order”). The Executive Order outlines several steps that the Biden Administration will take with respect to the regulation of artificial intelligence (“AI”) technologies and tools across various industry sectors and previews a more heavily regulated landscape for developers and users of AI-enabled technologies and tools in the health care industry.
Guiding Principles for Governing the Development and Use of AI
The Executive Order outlines the following eight guiding principles that executive departments and agencies should adhere to when making decisions and implementing policies regarding AI.
- AI must be safe and secure.
- Promoting responsible innovation, competition and collaboration will allow the United States to lead in AI and unlock the technology’s potential to solve some of society’s most difficult challenges.
- The responsible development and use of AI require a commitment to supporting American workers.
- AI policies must be consistent with the Biden Administration’s dedication to advancing equity and civil rights.
- The interests of Americans who increasingly use, interact with or purchase AI and AI‑enabled products in their daily lives must be protected.
- Americans’ privacy and civil liberties must be protected as AI continues advancing.
- It is important to manage the risks from the federal government’s own use of AI and increase its internal capacity to regulate, govern and support responsible use of AI to deliver better results for Americans.
- The federal government should lead the way to global societal, economic and technological progress, as the United States has in previous eras of disruptive innovation and change.
Directives to HHS for Regulating AI in Health Care
The Executive Order charges the U.S. Department of Health and Human Services (“HHS”) (in conjunction with other agencies when appropriate) with establishing guidelines, regulations and best practices for the development, commercialization, deployment and use of AI in the health care sector. These directives to HHS are outlined below.
Establishment of an HHS AI Task Force
Within 90 days, the Secretary of HHS will, in consultation with the U.S. Secretary of Defense and the U.S. Secretary of Veterans Affairs, establish an HHS AI Task Force (the “Task Force”). The Task Force will, no later than one year after its establishment, be responsible for developing a strategic plan for policies and regulatory action with respect to the responsible deployment of AI and AI-enabled technologies in the health care sector (including research and discovery, drug and device safety, health care delivery and financing and public health). The Task Force’s recommended guidance will address:
- Development, maintenance and use of predictive and generative AI-enabled technologies in health care delivery and financing;
- Long-term safety and real-world performance monitoring of AI-enabled technologies in the health care sector;
- Incorporation of equity principles in AI-enabled technologies used in the health care sector, including by using disaggregated data on affected populations and representative population data sets when developing new models, monitoring algorithmic performance against discrimination and bias in existing models, and helping to identify and mitigate discrimination and bias in current systems;
- Incorporation of safety, privacy and security standards into the software development life cycle for protection of personally identifiable information;
- Development, maintenance and availability of documentation to help users determine appropriate and safe uses of AI in local health care settings;
- Collaboration with state, local, Tribal and territorial health and human services agencies to advance positive use cases and best practices for AI use in local settings; and
- Identification of AI uses to promote workplace efficiency and satisfaction in the health care sector, including reduction of administrative burdens.
Development of Strategies for AI Quality Assurance and Use of AI in Drug Development
Within 180 days, the Secretary of HHS must direct HHS components to develop a strategy to determine whether AI-enabled technologies in the health care sector maintain appropriate levels of quality. This strategy includes developing an AI assurance policy to evaluate important aspects of AI-enabled health care tools and addressing the infrastructure needs for enabling both pre-market assessment and post-market oversight of AI-enabled technologies’ algorithmic performance against real-world data.
Advancement of Federal Non-Discrimination Laws as Related to AI
Within 180 days, the Secretary of HHS will take appropriate actions to advance understanding of, and compliance with, federal nondiscrimination laws by health care providers that receive federal financial assistance, including how those laws relate to AI technologies. Such actions may include:
- Convening and providing technical assistance to health care providers and payers about their obligations under federal nondiscrimination and privacy laws as they relate to AI and potential consequences of noncompliance; and
- Issuing guidance in response to complaints or other reports of noncompliance with federal nondiscrimination and privacy laws as they relate to AI.
Establishment of an AI Safety Program
Within one year, the Secretary of HHS will also, in partnership with voluntary federally listed patient safety organizations, establish an AI safety program that:
- Establishes a framework for identifying and capturing clinical errors resulting from AI deployed in health care settings;
- Establishes specifications for a central tracking repository for associated incidents that cause harm, including through bias or discrimination against patients, caregivers or other parties;
- Analyzes captured data and generated evidence to develop recommendations, best practices or other informal guidelines to prevent clinical errors and associated harms attributable to deployed AI; and
- Disseminates recommendations, best practices or other informal guidance to appropriate stakeholders, including health care providers.
Stakeholders in the health care sector planning to use, develop, commercialize or deploy AI‑enabled technologies and tools should prepare to face closer regulation and monitoring from HHS and other federal agencies. As the Biden Administration increases its emphasis on AI, organizations should assess their current practices and policies regarding AI and keep in mind the unique quality, safety, privacy and security considerations surrounding AI in the health care setting.