FDA’S PLAN FOR AI/ML-BASED SOFTWARE AS MEDICAL DEVICES: PROGRESS AND CONCERNS

The U.S. Food and Drug Administration (FDA) has acknowledged the growing prevalence of Artificial Intelligence/Machine Learning (AI/ML)-Based Software as Medical Devices (SaMDs) and has been taking steps to advance its regulatory oversight. The FDA recently published an AI/ML SaMD action plan, developed in direct response to stakeholder feedback. One month later, the FDA has yet to implement any of the steps outlined in that action plan, all the while authorizing more and more AI products. This post discusses the progress the FDA has made thus far in regulating AI/ML-based SaMDs and the concerns that remain.

FDA’s Progress for Regulatory Oversight

Artificial intelligence is growing rapidly in the field of healthcare. The FDA has acknowledged its importance and its impact on health care, but has not been able to keep pace with manufacturers.

In April 2019, the FDA published “Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) – Discussion Paper and Request for Feedback,” describing clearly, for the first time, the FDA’s potential approach to premarket review of AI/ML SaMDs. The proposed approach involved a “Predetermined Change Control Plan” for premarket submissions, including “SaMD Pre‑Specifications” (SPS) and an “Algorithm Change Protocol” (ACP), to account for the iterative nature of AI/ML-based SaMDs.

In February 2020, the FDA announced a marketing authorization via the De Novo pathway for the first cardiac ultrasonic software that uses AI to guide users. The manufacturer used a Predetermined Change Control Plan in its application to obtain authorization.

In the same month, the FDA held a public workshop on the “Evolving Role of Artificial Intelligence in Radiological Imaging.” The FDA and the public stakeholders discussed best practices for validation of AI‑automated radiological imaging software and image acquisition devices.

In September 2020, the FDA launched the Digital Health Center of Excellence within the Center for Devices and Radiological Health. According to the FDA, the focus of the Digital Health Center of Excellence is “helping both internal and external stakeholders achieve their goals of getting high quality digital health technologies to patients by providing technological advice, coordinating and supporting work being done across the FDA, advancing best practices, and reimagining digital health device oversight.”

In January 2021, the FDA published “Artificial Intelligence/Machine Learning (AI/ML)‑Based Software as a Medical Device (SaMD) Action Plan,” outlining its plans for advancing regulatory oversight. The Action Plan identified five focus areas for the FDA:

  1. Tailored regulatory framework for AI/ML-based SaMDs;
  2. Good machine learning practices (GMLPs);
  3. Patient-centered approach incorporating transparency to users;
  4. Regulatory science methods related to algorithm bias and robustness; and
  5. Real-world performance (RWP). 

The FDA stated that it will update its regulatory framework and publish draft guidance in 2021, actively engage in efforts to harmonize GMLPs, hold public workshops on device labeling to improve transparency to users, develop methods for evaluating and improving machine learning algorithms (including identifying and eliminating bias), and develop a framework for gathering and validating RWP. Although the AI/ML Action Plan is recent, the FDA has not yet taken steps to implement any of these focus areas through policy development or clarification.

The FDA’s Approval of AI/ML-Based SaMDs

Since 2012, the FDA has approved over 160 AI/ML-based SaMDs, the majority in 2019 and 2020. Some investigative reports have found that the requirements the FDA has imposed on device submissions have been inconsistent. For example, some submission sponsors disclosed the amount of patient data used to validate the performance of their devices, while others did not. And, according to one investigative report, when disclosed, the amount of data varied widely, from 100 patients at one end of the spectrum to 15,000 patients at the other.

Only a handful of manufacturers reported the racial makeup of their study populations, and only a few provided a gender breakdown. This creates uncertainty about how well the AI/ML-based SaMDs will perform across diverse patient populations and how susceptible they are to algorithmic bias. Additionally, the FDA has not made the training and testing data for these approved AI/ML-based SaMDs publicly available, potentially undermining trust in these technologies or, at the very least, making it difficult for purchasers of the SaMDs to make informed decisions based on comparable evidence criteria. Perhaps most importantly, however, the FDA has yet to determine how to handle real-world learning and adaptation within its regulatory framework.

Although the FDA clearly aims to increase its regulation and monitoring of AI/ML-based SaMDs, it has not provided a specific plan for any of the five focus areas identified in its AI/ML Action Plan. The FDA may need to act much more quickly in developing consistent standards, as the number of AI/ML-based SaMD submissions continues to increase (from two products in 2012 to 70 products in 2019). The FDA should address stakeholder concerns about inconsistencies in data requirements and the approval process, the lack of well-established standards, transparency, labeling challenges, trustworthiness, algorithmic bias, and RWP.

Conclusion

The FDA is slowly taking steps toward its commitment to develop a regulatory framework for AI/ML-based SaMDs. For now, however, it has not provided any consistent standards. Without such widely applicable guidance, manufacturers should expect a longer application timeline and extensive discussions with the FDA, in effect creating a customized plan for data and evidence generation for each and every AI/ML SaMD. It is hard to predict at this stage what the FDA will require in these negotiations, although prior submissions and authorizations serve, to some extent, as a baseline for consideration. As artificial intelligence and machine learning technologies grow in availability and applicability, organizations and device sponsors will continue to build their own databases and define their own standards, which the FDA may be strategically waiting for.