The U.S. Food and Drug Administration’s recent release of two draft guidance documents on the use of artificial intelligence in drug development, biologics and medical devices has sparked both excitement and skepticism. As AI increasingly permeates these fields, the regulatory landscape is just beginning to take shape, and the proposed guidelines take a first step by raising important questions about the future of AI innovation in life sciences. For therapeutic, medical device and diagnostics companies, whether already implementing AI or just beginning to explore its potential, the message is clear: The landscape is evolving, and future success will require thoughtful consideration of compliance, patient safety and privacy protection from the earliest stages of AI adoption.
At its core, AI, and specifically machine learning, involves systems that learn from large datasets to identify patterns and make predictions. These AI models are trained on historical data, such as molecular structures, clinical trial results or device performance metrics, and can then analyze new data to generate insights. However, these AI-based technologies present unique regulatory challenges: Modern AI models are generally difficult to interpret and are subject to significant uncertainty, so their outputs can be unpredictable or inexplicable. The rapid pace at which AI models advance adds further to the challenge.
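For readers less familiar with this train-then-predict pattern, the short Python sketch below illustrates it. The data, features and model are entirely hypothetical and chosen for simplicity; this is a minimal illustration of how a model learns from historical examples and scores new ones, not a depiction of any model the FDA guidance contemplates.

```python
# Minimal sketch of the train-then-predict pattern, using hypothetical data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)

# Hypothetical historical data: each row describes a drug candidate by a few
# numeric molecular descriptors; the label marks whether efficacy was observed.
X_historical = rng.normal(size=(500, 4))
weights = np.array([0.8, -0.5, 0.3, 0.1])
y_historical = (X_historical @ weights + rng.normal(scale=0.5, size=500)) > 0

# "Training" fits the model to patterns in the historical data.
model = LogisticRegression().fit(X_historical, y_historical)

# The trained model then analyzes new, unseen candidates. Because it only
# generalizes patterns present in its training data, the quality of and any
# bias in that data directly shape these outputs.
X_new = rng.normal(size=(3, 4))
print(model.predict_proba(X_new)[:, 1])  # predicted probability of efficacy
```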
The FDA’s guidance documents represent an initial attempt to address these novel challenges, while highlighting the delicate balance between fostering innovation and ensuring public safety. Both guidance documents take similarly risk-based approaches, prioritizing the safety and efficacy of products. They call for thorough validation and documentation to reduce bias, increase transparency and address other obstacles that AI technologies pose to meeting these safety and efficacy standards.
AI in Drug Development and Biologics: A Credibility Framework
For pharmaceuticals and biologics—spanning new drug applications, abbreviated new drug applications and biologics license applications—the FDA in the first guidance document, entitled “Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products,” introduces a credibility assessment framework for AI use across a drug product’s life cycle. This framework emphasizes contextual risk evaluation for decision-making and outlines a seven-step process. It begins with defining the fundamental question an AI model aims to address and establishing its specific context of use, such as predicting drug efficacy, optimizing manufacturing processes, assessing pharmacokinetic profiles, identifying potential adverse effects or supporting regulatory decision-making on drug quality and safety. From there, it moves through assessing the AI model risk (based primarily on the training data and model training process) and establishing the AI model’s credibility (in terms of its output data) relative to that risk within the context of use, culminating in a final determination of the model’s adequacy for its intended purpose.
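As a thought experiment, the staged assessment described above could be captured as a structured record, as in the hypothetical Python sketch below. The field names paraphrase the article, not the FDA’s official wording, and the simple adequacy rule (credibility must meet or exceed risk) is a placeholder for the guidance’s more nuanced determination.

```python
# Hypothetical sketch of the staged credibility assessment. Names and the
# adequacy rule are illustrative placeholders, not FDA terminology.
from dataclasses import dataclass, field
from enum import Enum

class Level(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class CredibilityAssessment:
    question: str            # fundamental question the AI model aims to address
    context_of_use: str      # e.g., predicting efficacy, optimizing manufacturing
    model_risk: Level        # judged from the training data and training process
    credibility: Level       # judged from evidence about the model's outputs
    notes: list = field(default_factory=list)

    def adequate_for_intended_use(self) -> bool:
        # Final determination: demonstrated credibility should be commensurate
        # with the model risk within the stated context of use.
        return self.credibility.value >= self.model_risk.value

assessment = CredibilityAssessment(
    question="Does the model reliably flag potential adverse effects?",
    context_of_use="supporting regulatory decision-making on drug safety",
    model_risk=Level.HIGH,
    credibility=Level.HIGH,
)
print(assessment.adequate_for_intended_use())  # True
```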
The FDA’s framework provides a valuable starting point for integrating AI into the rigorous world of drug development and biologics manufacturing, where precision and reproducibility are critical. While acknowledging fundamental concerns like data quality and bias, these initial guidelines establish key principles that can be expanded as AI adoption deepens and regulatory efforts mature. For example, how risk and credibility can be evaluated unambiguously and consistently remains to be seen. In addition, given its limited scope, this first guidance document does not address considerations around data governance and privacy compliance—areas that will require more guidance as AI collaborations grow more complicated and involve multiple stakeholders.
AI in Medical Devices: A Management and Monitoring Approach
For medical device manufacturers, including those developing diagnostic products, the FDA in the second guidance document, entitled “Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations,” provides recommendations on life cycle management and marketing submissions specific to AI-enabled medical devices. The FDA’s approach applies across various regulatory submissions, including 510(k)s, de novo classifications, premarket approval applications and humanitarian device exemptions. It also addresses AI-enabled device software functions, encompassing both software-as-a-medical-device and software-in-a-medical-device implementations.
This second guidance document builds on prior FDA guidance on digital health, device software functions, cybersecurity and other relevant areas. On the one hand, it generally describes the information that would be generated and documented during software development and verification for device software functions using AI. In the context of supporting market authorization, it presents labeling considerations for a marketing submission for an AI-enabled device, including descriptions of the architecture, inputs, outputs and other aspects of the AI models. Such labeling considerations help elucidate otherwise opaque AI processes and their impact as applied to the medical device.
On the other hand, the guidance document discusses incorporating cybersecurity measures to ensure that the vast volumes of data processed by AI models embedded in medical devices, much of which is private and confidential, remain unaltered and secure. Such measures can help reduce the risk of data poisoning, model evasion and similar attacks that could lead to AI bias or hallucination. Notably, to minimize AI bias, companies under this guidance would be expected to demonstrate that their AI models perform consistently across diverse patient populations, with particular attention to demographic subgroup analysis.
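To make the subgroup-analysis expectation concrete, the hypothetical Python sketch below compares a model’s accuracy across demographic subgroups on a held-out evaluation set. The column names, data values and 0.10 tolerance are invented for illustration; they are not drawn from the guidance, which does not prescribe a specific metric or threshold.

```python
# Illustrative demographic subgroup analysis on hypothetical evaluation data.
import pandas as pd

# One row per patient: demographic subgroup, true label, model prediction.
results = pd.DataFrame({
    "subgroup":  ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "label":     [1, 0, 1, 1, 1, 0, 0, 1, 1],
    "predicted": [1, 0, 1, 1, 0, 0, 0, 0, 1],
})

results["correct"] = results["label"] == results["predicted"]
per_group = results.groupby("subgroup")["correct"].mean()
print(per_group)  # accuracy per demographic subgroup

# Flag subgroups whose accuracy falls well below the overall rate; the 0.10
# tolerance is an arbitrary placeholder, not a regulatory threshold.
overall = results["correct"].mean()
print(per_group[per_group < overall - 0.10])
```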
FDA and Further Guidance
The FDA’s guidance documents demonstrate the agency’s evolving understanding of how AI is transforming healthcare. By issuing them, the FDA is actively shaping the future of AI in the field. This proactive approach helps product developers better understand regulatory expectations, offering a road map for innovation that balances cutting-edge advancements with the need to maintain high standards for patient safety and efficacy. The FDA’s role in regulating AI technologies could also help build confidence among patients and healthcare providers, as rigorous evaluation and validation of AI-driven technologies are often seen as critical steps toward fostering trust and supporting their adoption in healthcare. The guidance seeks to promote a structured approach to integrating AI in the life sciences industries, with a focus on aligning innovation with public health goals and advancing medical science. However, companies will need to navigate compliance with these evolving standards while managing the costs and operational hurdles associated with meeting regulatory expectations.
While these guidance documents establish foundational principles for risk assessment, users currently need to consult additional resources for implementation details, and the documents leave some areas largely open. For instance, key questions remain around validating AI models against limited datasets for rare conditions or underrepresented patient populations, building robust methodologies for ongoing monitoring of AI performance in clinical applications, implementing security measures tailored to AI-driven healthcare systems for compliance with the Health Insurance Portability and Accountability Act and other privacy regulations, and developing clear approaches for communicating complex AI-driven decisions to diverse stakeholders, including clinicians, regulators and patients. As the regulatory landscape continues to evolve, addressing these practical challenges will be essential for advancing AI adoption in life sciences.
Looking ahead, companies implementing or planning to implement AI in medical devices and drug development should recognize that while these guidance documents lay important groundwork, engagement with the FDA will be crucial for successful implementation. The guidance outlines several engagement pathways that organizations should consider, including the Center for Clinical Trial Innovation program, for discussing AI use in clinical trial designs, and the Drug Development Tools program, for qualifying AI-based development tools. Companies can also proactively monitor the activities of domestic and international agencies that set AI technology standards, as well as global AI initiatives, in anticipation of further regulatory efforts in life sciences to formulate AI best practices or harmonize expectations for AI use across jurisdictions.
Key Takeaways for Biotech and Diagnostics Companies
As companies evaluate these draft guidance documents and as the documents move toward finalization, several areas warrant attention. First, for companies not yet using AI, the guidance documents offer valuable insight into the FDA’s emerging regulatory framework, guiding strategic planning for future adoption. Second, the proposed risk-based approaches, while still in draft form, highlight major factors that companies should consider when developing or implementing AI systems in regulated contexts.
It is important to note that stakeholders have the opportunity to provide feedback on these draft guidance documents. For both documents, comments must be submitted by April 7, 2025. Although comments can be submitted at any time under 21 CFR 10.115(g)(5), submitting feedback before the close date ensures the FDA considers it during the finalization process.
Should these guidance documents be finalized, companies would need to implement robust processes for design control, testing protocols and comprehensive documentation of AI development and validation to reduce risk and harm under the proposed risk-based approaches. The draft guidance documents emphasize the importance of early engagement with the FDA, both in providing feedback on the current proposals and in discussing implementation approaches. While the guidance documents are being finalized, companies are encouraged to begin studying how the proposed risk frameworks apply to their AI systems and evaluating their readiness to address these potential requirements.
For More Information
If you have any questions about this Alert, please contact Agatha H. Liu, Vicki G. Norton, Ph.D., any of the attorneys in our Life Sciences and Medical Technologies Industry Group, any of the attorneys in our Artificial Intelligence Group or the attorney in the firm with whom you are regularly in contact.
Disclaimer: This Alert has been prepared and published for informational purposes only and is not offered, nor should be construed, as legal advice. For more information, please see the firm's full disclaimer.