Because No One is Immune to the Law
January 21, 2020 - AI + Robotics, United States, FDA, Medical Devices + Diagnostics, Healthcare

Digital Health in 2019

The year 2019 saw relatively few developments in the digital health space. Although progress has been slow, FDA did make headway on two initiatives: (1) addressing the unique challenges of regulating artificial intelligence and machine learning (AI/ML) in medical applications, and (2) implementing changes from the 21st Century Cures Act, which was enacted at the end of 2016.

Artificial Intelligence in Digital Health

In April, FDA proposed a foundational framework for regulating the emerging field of AI/ML in software as a medical device (SaMD) applications. The framework responds to the unique regulatory challenges presented by AI/ML-based devices. While a device has traditionally required additional regulatory approval if its behavior changes after its initial approval, changing behavior is generally the goal of an AI/ML-based device: such devices promise to improve continuously throughout their lifetimes, often yielding more accurate results than existing methodologies. This continuously learning aspect of AI/ML necessitates a rethinking of how FDA approaches regulating such devices.

FDA’s proposed framework outlines scenarios where changes to an AI/ML-based SaMD may not require additional approval after an initial premarket approval. In the proposal, FDA conceptualizes AI/ML-based SaMD along two main axes. First, FDA proposes to evaluate AI/ML-based SaMD on risk according to the International Medical Device Regulators Forum (IMDRF) framework. On one end, a device may be considered high-risk if it is used to treat or diagnose a critical condition. On the other end, a device may be considered low-risk if it is used simply to inform clinical management of a non-serious condition. Second, FDA proposes to evaluate AI/ML-based SaMD on where it falls along the “locked” to “continuously learning” spectrum. Although FDA has already approved a number of AI/ML-based devices (and continues to do so at an increasing pace), these devices generally use “locked” algorithms that provide the same result each time the same input is provided. Under the current regulatory framework, device manufacturers can perform additional training on approved devices and submit algorithm updates for additional approval.
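The “locked” versus “continuously learning” distinction can be sketched in a few lines of code. This is a purely illustrative toy example; the classes, thresholds, and update rule below are invented for this post and are not drawn from any approved device:

```python
class LockedClassifier:
    """A "locked" algorithm: the model is frozen at approval time, so the
    same input always yields the same output. Retraining would require a
    new regulatory submission."""

    def __init__(self, threshold):
        self.threshold = threshold  # fixed at "approval"

    def predict(self, value):
        return value >= self.threshold


class ContinuouslyLearningClassifier:
    """A "continuously learning" algorithm: the model updates its own
    parameters from field data, so its output for the same input can
    change after deployment."""

    def __init__(self, threshold, learning_rate=0.1):
        self.threshold = threshold
        self.learning_rate = learning_rate

    def predict(self, value):
        return value >= self.threshold

    def update(self, value, label):
        # Nudge the decision threshold in response to new field data.
        if label and not self.predict(value):
            self.threshold -= self.learning_rate  # lower the bar
        elif not label and self.predict(value):
            self.threshold += self.learning_rate  # raise the bar


locked = LockedClassifier(threshold=0.5)
learning = ContinuouslyLearningClassifier(threshold=0.5)

before = learning.predict(0.45)    # False: 0.45 is below the 0.5 threshold
learning.update(0.45, label=True)  # field data shifts the threshold to 0.4
after = learning.predict(0.45)     # True: same input, different output

print(locked.predict(0.45), before, after)  # → False False True
```

The locked model answers identically forever, which is what the traditional approval framework assumes; the learning model’s behavior for an identical input drifts as it trains on field data, which is exactly the scenario the proposed framework is meant to accommodate.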

Under the new framework, FDA proposes a total product lifecycle approach in which the agency “will assess the culture of quality and organizational excellence of a particular company.” For premarket submissions, FDA intends to assess whether organizations exercise good machine-learning practices and maintain sufficient risk management controls to protect patients throughout the product lifecycle. A key feature of the framework is the introduction of a predetermined change control plan to cover anticipated changes to an AI/ML-based SaMD. If proposed changes fall within the scope of the agreed-upon change control plan, the changes may be documented and implemented without further regulatory submissions. If proposed changes fall outside the scope of the plan but do not lead to a new intended use, FDA may conduct a “focused review” of the proposed changes. Under this framework, only proposed changes that fall outside the plan and lead to a new intended use may require a full, new premarket submission.
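The change control plan paragraph above describes a three-tier decision tree, which can be summarized as a short function. This is one hypothetical reading of the proposal; the function name and return strings are invented for illustration:

```python
def review_pathway(within_change_control_plan: bool,
                   new_intended_use: bool) -> str:
    """Map a proposed modification to an AI/ML-based SaMD onto the review
    pathway suggested by FDA's proposed framework (illustrative only)."""
    if new_intended_use:
        # A new intended use necessarily falls outside the agreed-upon
        # plan and triggers a full, new premarket submission.
        return "new premarket submission"
    if within_change_control_plan:
        # Anticipated change: document and implement, no new submission.
        return "document and implement"
    # Outside the plan, but same intended use: FDA may conduct a
    # "focused review" of the proposed change.
    return "focused review"
```

For example, under this reading, an anticipated retraining covered by the plan would simply be documented (`review_pathway(True, False)`), while an unanticipated change that keeps the same intended use would draw a focused review (`review_pathway(False, False)`).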

FDA solicited feedback on its proposed framework, and comments were generally positive. The industry welcomed FDA’s efforts to provide clarity in this fast-moving space. Many comments requested that FDA provide clear and specific guidance (with detailed examples) on what FDA considers good machine-learning and risk management practices. Many comments also requested robust transparency requirements to ensure the continued clinical reliability of AI/ML-based SaMD. 

Although this framework focuses on SaMD, AI/ML has also grown significantly in adjacent spaces, such as drug development. In 2019, FDA cleared or approved an accelerating number of AI/ML submissions. One example is CureMetrix’s algorithm, which applies machine learning to mammogram images to identify instances of breast cancer. Another is the Biovitals Analytics Engine, which uses machine learning to identify correlations between vital signs and daily activities for patients with chronic conditions. Canon Medical also received FDA clearance for using machine learning to reduce noise and increase spatial resolution in CT scan image reconstruction. FDA will likely continue to monitor AI/ML applications in a variety of medical contexts. However, it is not clear whether FDA currently possesses sufficient authority to implement its proposed AI framework or will instead require additional legislative authorization.

Digital Health Innovation Action Plan

FDA’s AI regulatory proposal is part of a larger push to stay abreast of technology innovation in the digital space. In 2017, FDA unveiled a Digital Health Innovation Action Plan with three major prongs: (1) implementing new legislation through updated guidance, (2) implementing a pre-certification program for digital products, and (3) adding additional digital expertise to FDA staff. 

On the first prong, FDA implemented several legislative changes from the 21st Century Cures Act (Cures Act) in September 2019. These changes took the form of a finalized Mobile Medical Applications guidance, a finalized Off-the-Shelf Software in Medical Devices guidance, a finalized Medical Device Data Systems guidance, a finalized General Wellness guidance, and a second-draft Clinical Decision Support (CDS) guidance. The finalized guidances generally identify categories of software that, under the Cures Act, FDA no longer considers medical devices—and therefore no longer plans to regulate.

The second-draft CDS guidance follows a first-draft CDS guidance issued in December 2017. Both drafts set out to categorize devices that do not meet the definition of a medical device set forth in the Cures Act, devices that do meet the definition but will be subject to enforcement discretion, and devices over which FDA intends to exercise regulatory oversight. Compared to the first draft, the second draft adopts the IMDRF framework for a risk-based approach to device regulation. The second draft also extends the scope of CDS to include Patient Decision Support devices, which previously fell under their own, separate guidance. Given the extent of the changes between the two drafts, FDA may need considerable time before it finalizes its CDS guidance to comply with the Cures Act.

On the second prong, the pre-certification program continues to move slowly. FDA introduced the pilot pre-certification program in 2017 with the goal of streamlining the regulatory approval pathway. Under the program, FDA would validate software design, software maintenance, and quality standards at the company level, rather than at the device level. Although the pilot launched in 2017 with nine companies, it is still running with those same nine companies (perhaps progressing at a slower rate than initially contemplated). One major impediment has been congressional discomfort with the program. Members of Congress have issued two letters to FDA requesting legal justification for the pilot and for the eventual full-fledged program, as well as implementation details to assure that public safety will not be compromised.

On the third prong, FDA has stated that it intends to hire software engineers, health scientists, and clinicians to bolster its expertise and response times. However, it is not clear how successful FDA has been in strengthening its ranks. When asked in April whether the agency was equipped to handle a total product lifecycle approach to regulation, the agency declined to comment.

On the cybersecurity front, FDA continues to drag its feet and has not finalized the cybersecurity draft guidance it issued in 2018. FDA does list a cybersecurity draft guidance as a CDRH priority target for 2020, but the draft guidance was also listed as a priority target for 2019, and a final version never materialized.