The Food and Drug Administration’s recent discussion paper suggests a new regulatory approach for evaluating postmarket changes to artificial intelligence and machine learning software devices, but further clarity is needed on when such devices are subject to FDA regulation. Recognizing that AI/ML is intended to constantly evolve, FDA proposes a plan to help streamline the requirements for postmarket changes, and also outlines the agency’s expectations for AI/ML developers to conform to certain practices and principles.

The Food and Drug Administration (FDA) has published a discussion paper, Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD), in which the agency proposes a new regulatory approach for evaluating postmarket changes to AI/ML-based software devices. Although the April 2 proposal has the potential to reduce postmarket burdens for FDA-cleared or FDA-approved AI/ML technologies, the agency has yet to offer clear guidance on when such technologies would be subject to FDA regulation.

Key takeaways from the discussion paper are described below.

Further Clarity Needed on When AI/ML Software Is Subject to FDA Regulation

The discussion paper does not provide any guidance on how FDA determines when AI/ML technologies are within its regulatory purview, but it does give some limited insight into FDA’s thinking on this point. For example, FDA appears to presume that AI/ML software would be regulated when it is intended to “drive clinical management,” “inform clinical management,” or be used as “an aid in diagnosis.” In addition, the discussion paper provides specific examples of FDA-regulated AI/ML software, including:

  • AI/ML that processes/analyzes physiological signals to detect patterns that occur at the onset of physiological instability and generates alarms,
  • AI/ML that uses images taken by a smartphone camera to provide detailed information to dermatologists on physical characteristics of a skin lesion, and
  • AI/ML that analyzes chest X-rays to evaluate feeding tube placement, detect incorrect placement, and perform triage for radiologists.

Given the lack of clear guidance on where the line is between regulated and unregulated AI/ML technologies, FDA’s proposal to address postmarket changes for such technologies seems premature.

FDA’s Proposal Would Allow Certain Postmarket Changes to Forgo FDA Review

Because AI/ML technologies are by their very nature intended to continuously change and evolve, FDA recognizes that its existing regulatory structure for evaluating postmarket changes is likely unworkable for AI/ML-based SaMD. Currently, software device manufacturers must assess each change made after clearance or approval to determine whether the change impacts safety or effectiveness such that a new FDA submission is required. In its discussion paper, FDA proposes that AI/ML developers would include in their premarket submissions a “predetermined change control plan,” which would include two primary components:

  • SaMD pre-specifications (SPS), which define the types of software algorithm changes that are covered/permitted under the plan, and
  • an algorithm change protocol (ACP), which defines methods to control risks for the permitted changes and how the changes may occur.

Once the AI/ML product is cleared/approved, along with its predetermined change control plan, any postmarket change that falls within the scope of the plan could be documented to file, without any FDA review. A change that falls outside the scope of the plan may be eligible for a “focused FDA review,” as long as the change does not lead to a new intended use. While FDA does not provide any details on what the “focused review” may entail, the agency requests feedback from stakeholders on this point.

In the discussion paper, FDA also acknowledges that statutory changes may be required to fully implement this proposed framework.

FDA’s Expectations for Conformance to Certain Principles and Practices

FDA states in the discussion paper that, with respect to AI/ML SaMD, it “expect[s] that SaMD developers [will] embrace the excellence principles of culture of quality and organizational excellence” set forth in FDA’s Software Precertification Program, Working Model v1.0. The Software Precertification Program is currently a voluntary pilot program, but FDA’s statements in the discussion paper suggest that the agency may eventually seek to establish the program’s organizational “excellence principles” as mandatory requirements. Making the Software Precertification Program mandatory would significantly change the FDA regulatory obligations of software developers, and could increase burdens both pre- and postmarket.

In addition, FDA proposes establishing Good Machine Learning Practices (GMLP), which it describes as “those AI/ML best practices (e.g., data management, feature extraction, training, and evaluation) that are akin to good software engineering practices or quality system practices.” FDA provides a few limited examples of what GMLP may include (e.g., data acquired in a consistent, clinically relevant, and generalizable manner), but requests feedback from stakeholders on how to support the development of GMLP.

Comments on the FDA discussion paper are due by June 3, 2019.