May 06, 2021 - AI + Robotics, FDA

CTA Publishes New Standard for Healthcare AI

In February, the Consumer Technology Association (CTA) announced a voluntary standard, ANSI/CTA-2090, for healthcare products that use Artificial Intelligence (AI). The standard follows the Food and Drug Administration's (FDA) recently published action plan for regulatory oversight of AI and machine learning (ML)-based medical software. FDA ultimately determines the regulatory process and requirements to legally market AI healthcare products, generally reviewing them to make sure they are safe and effective for the consumer market. The agency has spent years working toward a new regulatory framework for AI/ML-based software medical devices. But as ANSI/CTA-2090 suggests, regulatory approval is just one consideration for this novel product category.

CTA’s standard focuses on the trustworthiness of healthcare AI products. While this factor is not typically a regulatory requirement, CTA claims that trust is critical to the acceptance and successful implementation of novel AI technologies, especially those in the healthcare setting. The publication represents consumer and manufacturer perspectives on the key features that AI‑based medical software should offer to build trust. The standard focuses on three main trustworthiness areas—Human, Technical, and Regulatory Trust—and outlines baseline features and requirements in each category for products to incorporate.

Three Trustworthiness Areas of Healthcare AI Products

Human Trust

The Human Trust factors focus on facilitating transparent and smooth human interaction with the product. The standard encourages developers to give users clear descriptions of what the AI predicts, along with its clinical parameters, limitations, and performance characteristics. Users should be able to understand what the system can do and how it may make mistakes. The product should present information that is contextually relevant and consistent with social norms, and programs should incorporate a fault-tolerant user interface appropriate for the target audience. The standard notes that the degree of human trust required increases with the level of AI autonomy, so products should clearly explain how autonomous the AI is and what role human input plays.
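The standard does not prescribe any particular format for these disclosures, but to make the idea concrete, here is a minimal sketch of what a user-facing disclosure object might look like in Python. The ModelDisclosure class, its field names, and the example values are all illustrative assumptions, not anything drawn from ANSI/CTA-2090 itself.

```python
# Illustrative sketch only: a "model card"-style disclosure object.
# Nothing here is prescribed by ANSI/CTA-2090; all names and values are hypothetical.
from dataclasses import dataclass, field


@dataclass
class ModelDisclosure:
    """User-facing summary of what an AI product predicts and how well."""
    predicts: str                       # what the system outputs
    intended_population: str            # clinical scope of use
    limitations: list[str] = field(default_factory=list)
    performance: dict[str, float] = field(default_factory=dict)
    autonomy_level: str = "assistive"   # assistive vs. autonomous operation

    def summary(self) -> str:
        perf = ", ".join(f"{k}={v:.2f}" for k, v in self.performance.items())
        return (
            f"Predicts: {self.predicts}\n"
            f"Intended population: {self.intended_population}\n"
            f"Autonomy: {self.autonomy_level}\n"
            f"Known limitations: {'; '.join(self.limitations)}\n"
            f"Performance: {perf}"
        )


# Hypothetical example of the kind of summary a product might surface to users.
card = ModelDisclosure(
    predicts="risk of diabetic retinopathy from retinal images",
    intended_population="adults with type 2 diabetes",
    limitations=["not validated for pediatric patients"],
    performance={"sensitivity": 0.87, "specificity": 0.90},
)
print(card.summary())
```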

Technical Trust

To build trust in the product's ability to perform as technically expected, developers should understand potential bias in the program's data set and mitigate it to promote system fairness. Developers should also follow applicable data privacy and security requirements and be transparent about what information is collected and why. Disclosing data sources, and any merging or processing of the data used to train the program, further builds trust.
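One common way developers surface this kind of bias, offered here purely as an illustration rather than as anything ANSI/CTA-2090 requires, is to compare model performance across subgroups of the validation data. In the Python sketch below, the accuracy_by_subgroup function, the subgroup names, and the validation records are all hypothetical.

```python
# Illustrative subgroup performance audit; data and names are hypothetical.
from collections import defaultdict


def accuracy_by_subgroup(records):
    """records: iterable of (subgroup, predicted_label, true_label) tuples."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for subgroup, predicted, actual in records:
        totals[subgroup] += 1
        hits[subgroup] += int(predicted == actual)
    # Per-subgroup accuracy; large gaps between groups may signal bias to mitigate.
    return {group: hits[group] / totals[group] for group in totals}


# Hypothetical validation results split by an illustrative demographic field.
validation = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0),
    ("group_b", 1, 1), ("group_b", 0, 1), ("group_b", 0, 0),
]
for group, acc in accuracy_by_subgroup(validation).items():
    print(f"{group}: accuracy {acc:.2f}")
```

A gap between subgroups would not by itself establish unfairness, but it flags where developers should look more closely at data sources and sampling.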

Regulatory Trust

The final component addresses how the product is accepted within the highly regulated healthcare industry. Various federal and state agencies oversee different parts of that system, from medical devices to provider licensure and standards of care.

Follow our MoFo Life Sciences blog to stay up to date with the latest developments in healthcare AI/ML policy.