
New FDA Guidance on AI-Enabled Device Software Functions
Overview
In early January 2025, the FDA released long-awaited draft guidance addressing the regulatory expectations for artificial intelligence (AI)-enabled medical device software functions: “Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations.” This document provides manufacturers with specific recommendations on how to document AI-powered devices in their marketing submissions. Spanning nearly 70 pages, the guidance is extensive and covers a broad range of regulatory considerations.
Scope of the Guidance
The document primarily applies to medical devices incorporating AI, with a specific focus on AI systems that utilize machine learning (ML). While AI technologies vary, ML-based systems dominate current applications, making them the primary subject of this regulatory framework.
Although not explicitly stated, the guidance is directed not only at manufacturers but also at the FDA personnel responsible for reviewing medical devices. The extensive collaboration evident in the document suggests input from multiple divisions within the FDA, including those overseeing submissions, usability, cybersecurity, and post-market considerations. This level of cross-departmental engagement is noteworthy and distinguishes this guidance from others, likely reflecting an exceptionally high resource investment from the FDA.
Given the depth of detail provided, the guidance is not only relevant for regulatory authorities but also crucial for all stakeholders involved in the medical device lifecycle—including product development, design, verification and validation (V&V), clinical evaluation, post-market surveillance, quality management, and regulatory compliance. Simply put, anyone working with AI-driven medical devices at any stage of their lifecycle should familiarize themselves with this document.
AI/ML as an Evolution, Not a Revolution
While AI/ML introduces groundbreaking possibilities for developing sophisticated algorithms capable of diagnostic and therapeutic functions, from a regulatory perspective, it does not represent a fundamental departure from existing medical device software oversight. Many of the requirements outlined in the guidance align with those already applicable to complex software-based medical devices, often referred to as “deterministic” systems. These traditional algorithms, though intricate, remain easier to analyze than ML models, whose internal workings may be more opaque.
The regulatory scrutiny applied to AI-enabled medical devices mirrors that imposed on conventional software-based systems. For example, whether a diagnostic imaging platform employs traditional algorithms or AI, the level of performance validation required in a marketing submission remains the same. Similarly, AI-driven insulin dose prediction software is held to the same safety and efficacy standards as its non-AI counterpart. The document reaffirms that AI does not alter fundamental regulatory expectations regarding device description, user interface design, risk assessment, cybersecurity considerations, or data management—though it introduces additional layers of complexity.
Key Distinctions Introduced by AI
The primary differentiator of AI-driven medical devices is the need to train models on extensive data in order to develop and refine them. Unlike conventional software, which operates on predefined logic, ML systems rely on data to “learn” and improve performance. As a result, the guidance places significant emphasis on AI model training, validation, and ongoing performance monitoring.
Specific AI-related risks addressed in the document include:
- Model opacity (lack of transparency)
- Potential biases introduced by training data
- Challenges in generalizing performance across different populations
- Performance degradation (or “drift”) over time
To mitigate these risks, the FDA expects comprehensive documentation of AI model development and validation, ensuring that manufacturers proactively address these challenges before submitting devices for marketing authorization.
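As one illustration of how a manufacturer might probe the bias and generalization risks above, the sketch below computes sensitivity and specificity per data-collection site. It is a minimal example under assumed record fields ("prediction", "label", "site"), not a method prescribed by the guidance.

```python
from collections import defaultdict

def subgroup_metrics(records, group_key="site"):
    """Compute sensitivity and specificity per subgroup to surface
    generalization gaps across populations or collection sites."""
    counts = defaultdict(lambda: {"tp": 0, "fn": 0, "tn": 0, "fp": 0})
    for r in records:
        c = counts[r[group_key]]
        if r["label"] == 1:
            c["tp" if r["prediction"] == 1 else "fn"] += 1
        else:
            c["tn" if r["prediction"] == 0 else "fp"] += 1
    results = {}
    for group, c in counts.items():
        pos, neg = c["tp"] + c["fn"], c["tn"] + c["fp"]
        results[group] = {
            "sensitivity": c["tp"] / pos if pos else None,
            "specificity": c["tn"] / neg if neg else None,
            "n": pos + neg,
        }
    return results

# Hypothetical validation records: model output, ground truth, collection site.
records = [
    {"prediction": 1, "label": 1, "site": "site_a"},
    {"prediction": 0, "label": 1, "site": "site_b"},
    {"prediction": 0, "label": 0, "site": "site_b"},
]
for group, metrics in subgroup_metrics(records).items():
    print(group, metrics)
```

Large gaps between subgroups would signal exactly the generalization problem the FDA expects manufacturers to address before submission.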
Key Recommendations
Terminology Standardization
The guidance emphasizes the importance of aligning AI terminology with established FDA definitions. A notable example is the term “validation,” which has a precise meaning in medical device regulation. AI developers, particularly data scientists, must adapt their understanding of this term to ensure compliance with regulatory expectations.
Device Description Requirements
Transparency is a major focus of the guidance, and many of the requirements listed for AI-enabled devices also apply to conventional software-based systems. However, certain elements—such as how AI contributes to achieving the device’s intended use and its role in the clinical workflow—are given particular emphasis due to their heightened significance in AI-driven applications. Misrepresenting the integration of an AI-powered device into clinical practice can have severe implications for real-world performance.
User Interface and Labeling
User interface design has always been an important consideration for medical device software. However, in the context of AI, real-time detection of performance degradation (drift) is a critical risk mitigation strategy. The guidance underscores the need for labeling and user-interface elements that enhance transparency, ensuring that users understand the device’s functionality and potential limitations.
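The guidance does not prescribe how drift should be detected. One common approach, shown as a rough sketch below, compares the live input distribution against the training baseline using the Population Stability Index (PSI); the bin edges and the 0.2 alert threshold are illustrative assumptions.

```python
import math

def psi(expected, actual, bin_edges):
    """Population Stability Index: compare the live feature distribution
    (actual) against the training baseline (expected); larger means more shift."""
    def proportions(values):
        counts = [0] * (len(bin_edges) - 1)
        for v in values:
            for i in range(len(bin_edges) - 1):
                if bin_edges[i] <= v < bin_edges[i + 1]:
                    counts[i] += 1
                    break
        total = max(sum(counts), 1)
        # Floor at a tiny value so empty bins do not produce log(0).
        return [max(c / total, 1e-6) for c in counts]
    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.20, 0.40, 0.50, 0.60, 0.80]  # feature values seen during training
live = [0.70, 0.80, 0.90, 0.90, 0.95]      # feature values seen in the field
edges = [0.0, 0.25, 0.5, 0.75, 1.01]
score = psi(baseline, live, edges)
if score > 0.2:  # a conventional "significant shift" cutoff
    print(f"PSI = {score:.2f}: input drift detected, warn the user")
```

A device could surface such an alert directly in the user interface, which is precisely the kind of transparency mechanism the guidance encourages.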
Risk Assessment Considerations
For AI risk management, the FDA recommends adherence to AAMI CR 34971, which is currently regarded as the industry standard. Notably, the agency highlights human factors risks specific to AI, emphasizing that opaque or overly complex ML models can lead to misinterpretations, erroneous decisions, and potential patient harm.
Data Management Requirements
Data management is arguably the most AI-specific component of the guidance. Unlike traditional software, where test datasets suffice, AI model development requires training, tuning, verification, and validation datasets. The FDA warns against using poor-quality or biased data, which can compromise model performance and generalizability.
A core principle emphasized in the guidance is the strict separation (“sequestration”) of validation data from the teams responsible for model development. This concept was previously outlined in the “Good Machine Learning Practice” (GMLP) framework co-authored by the FDA, Health Canada, and the UK’s MHRA. Additionally, the FDA stresses the importance of sourcing data from multiple sites to ensure broad representativeness, a recommendation aligned with best practices in clinical data collection.
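A minimal sketch of what sequestration can look like in practice: a deterministic, patient-level split that keeps every record of a given patient in exactly one partition, so validation data never leaks into training or tuning. The field names and the 70/15/15 split are hypothetical choices, not requirements from the guidance.

```python
import hashlib

def partition(patient_id: str) -> str:
    """Assign a patient to train/tune/validation by hashing the stable ID,
    so the split is reproducible and independent of record order."""
    bucket = int(hashlib.sha256(patient_id.encode()).hexdigest(), 16) % 100
    if bucket < 70:
        return "train"
    if bucket < 85:
        return "tune"
    return "validation"  # held by a separate team, never seen during development

records = [
    {"patient_id": "P001", "site": "clinic_a"},
    {"patient_id": "P002", "site": "clinic_b"},
    {"patient_id": "P003", "site": "clinic_c"},
]
for r in records:
    print(r["patient_id"], r["site"], "->", partition(r["patient_id"]))
```

Splitting at the patient level rather than the record level is what prevents the subtle leakage that inflates reported performance.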
Model Development and Validation
Beyond describing AI models, manufacturers must also document their development processes, including training methodologies and tuning parameters. While no universally accepted standard yet exists for AI lifecycle management in medical devices, ISO/IEC 5338 provides some foundational guidance. Validation considerations mirror those for traditional software, covering performance testing, human factors evaluation, and clinical validation, but they are particularly critical for ML-based models.
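To make the documentation requirement concrete, here is a minimal sketch of a machine-readable training record that could be filed alongside the frozen model; every field name and value is illustrative, not a prescribed schema.

```python
import datetime
import json

# Hypothetical record of how a frozen model version was produced.
training_record = {
    "model_version": "1.3.0",
    "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    "architecture": "resnet18",
    "hyperparameters": {"learning_rate": 1e-4, "batch_size": 32, "epochs": 40},
    "training_dataset": {"id": "DS-TRAIN-007", "n_patients": 4210, "sites": 6},
    "tuning_dataset": {"id": "DS-TUNE-007", "n_patients": 910, "sites": 6},
    "random_seed": 17,
}

# Persist next to the model artifact so V&V can trace exactly how it was built.
with open("training_record_v1.3.0.json", "w") as f:
    json.dump(training_record, f, indent=2)
```

Capturing seeds, dataset identifiers, and tuning parameters at training time is far cheaper than reconstructing them during a submission review.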
Post-Market Performance Monitoring
To address the risk of performance drift caused by real-world data shifts, the FDA strongly recommends continuous performance monitoring. While this concept parallels post-market clinical follow-up (PMCF) requirements under the EU MDR, the FDA’s approach allows manufacturers greater flexibility in defining suitable metrics and data collection methods.
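As a sketch of what such monitoring might look like, the class below tracks accuracy over a sliding window of adjudicated post-market cases and flags when it drops below a pre-specified acceptance threshold. The window size and threshold are illustrative assumptions; the FDA leaves the choice of metrics to the manufacturer.

```python
from collections import deque

class PerformanceMonitor:
    def __init__(self, window: int = 100, threshold: float = 0.90):
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect
        self.threshold = threshold

    def record(self, prediction, ground_truth) -> None:
        self.outcomes.append(1 if prediction == ground_truth else 0)

    def within_acceptance(self) -> bool:
        """True while rolling accuracy stays at or above the threshold."""
        if len(self.outcomes) < self.outcomes.maxlen:
            return True  # not enough post-market data yet
        return sum(self.outcomes) / len(self.outcomes) >= self.threshold

monitor = PerformanceMonitor()
# Simulated stream: performance degrades after the first 95 cases.
for pred, truth in [(1, 1)] * 95 + [(1, 0)] * 15:
    monitor.record(pred, truth)
    if not monitor.within_acceptance():
        print("Rolling accuracy below threshold: trigger an investigation")
        break
```

Tying the alert to a pre-specified threshold turns drift from a vague concern into an auditable quality signal.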
Cybersecurity Considerations
Cybersecurity risks associated with AI-enabled devices include traditional software vulnerabilities as well as threats unique to ML models, such as data poisoning. The guidance suggests that AI models may introduce new attack vectors not previously encountered in conventional software-based medical devices, warranting specialized security assessments.
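The guidance does not prescribe specific controls, but one basic and widely used safeguard is verifying the deployed model artifact against a digest recorded at release, so post-release tampering with the model file is caught before loading. The sketch below assumes a hypothetical ONNX artifact; it complements, rather than replaces, a threat assessment covering risks such as training-data poisoning.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream a file and return its SHA-256 digest as hex."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# At release time, record the digest of the frozen model artifact.
# At startup, refuse to load a model whose digest no longer matches, e.g.:
#
#   if sha256_of("model_v1.3.0.onnx") != EXPECTED_DIGEST:
#       raise RuntimeError("Model artifact failed integrity check")
```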
Transparency and Public Disclosure
A notable aspect of the guidance is the FDA’s strong emphasis on transparency. This applies both to regulatory submissions, where manufacturers must provide extensive documentation, and to public-facing disclosures such as model cards that accompany the device. While the guidance does not mandate AI interpretability techniques, improving the explainability of ML outputs could contribute to greater user confidence.
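As one way to make such disclosures concrete, the sketch below renders a machine-readable model card. The device, field names, and performance figures are entirely hypothetical and loosely follow published model-card proposals rather than any FDA-mandated format.

```python
import json

# Hypothetical model card for public disclosure alongside the device.
model_card = {
    "device_name": "Example Imaging Triage Assistant",
    "model_version": "1.3.0",
    "intended_use": "Flag suspected findings on chest X-rays for reader prioritization",
    "input": "Frontal chest X-ray, DICOM",
    "output": "Binary triage flag with confidence score",
    "training_data": "Multi-site retrospective dataset, 6 sites",
    "performance": {
        "sensitivity": 0.93,
        "specificity": 0.88,
        "dataset": "sequestered validation set",
    },
    "limitations": [
        "Not evaluated on pediatric patients",
        "Performance may degrade on portable X-ray systems",
    ],
}
print(json.dumps(model_card, indent=2))
```

A structured card like this gives clinicians and patients a consistent, reviewable summary of what the model was trained on and where it may fall short.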
Quality System Considerations
Although the guidance references existing Quality System Regulation (QSR) requirements, it offers little practical advice on how to tailor a quality management system (QMS) for AI models. The emerging ISO/IEC 42001 standard may eventually address this gap, though its applicability to medical device software remains uncertain.
Final Takeaways
To successfully navigate the FDA premarket submission process for an AI-powered medical device, manufacturers should:
- Systematically follow the guidance’s recommendations, translating them into an actionable compliance roadmap.
- Engage with the FDA early through a Q-Submission (Q-Sub) to confirm that the AI model design and validation plans align with FDA expectations.
Finally, while FDA compliance is essential, manufacturers targeting global markets should be aware that the EU MDR imposes distinct requirements for clinical validation and data collection, which may not always align with FDA expectations.
Topics: #healthcare #lifeSciences #medicaldevices #medtech #medicaltechnology #MedSysCon #AI #FDA #Guidance
For further information please get in touch with us:
+49-176-57694801
