At a landmark FDA workshop on artificial intelligence (AI) in drug development, regulators and industry leaders outlined a vision that balances innovation with risk-based governance. The event reflected an unprecedented level of collaboration among the U.S. Food and Drug Administration (FDA), the Clinical Trials Transformation Initiative (CTTI), and experts from Bristol Myers Squibb (BMS), Microsoft, and patient-led research organizations. Discussions focused on regulatory clarity, data governance, patient trust, and the transformative potential of AI to reshape drug discovery and clinical trials.
AI Adoption: Promise Meets Hesitation
Opening remarks highlighted the uneven maturity of AI implementation across the biopharma ecosystem. A Tufts Center for the Study of Drug Development survey of over 300 industry respondents revealed that two-thirds of organizations are still in early adoption phases, with only 11% achieving full implementation across most clinical programs. Parallel CTTI research found strong alignment with these findings, citing data quality, governance, compliance, and expertise gaps as key barriers. Notably, respondents perceived resistance to AI adoption as evenly distributed among regulators, industry, and patients—underscoring that cultural and trust-related challenges remain as pressing as technical ones.
Redefining Urgency: FDA’s Dual Mandate to Promote and Protect
FDA advisor Dr. Shantanu Nundy framed the discussion around the agency’s twin responsibilities—to promote and protect health—and emphasized that AI has the rare potential to fulfill both. He cited the staggering loss of “16 million birthdays” annually in the United States due to preventable or untreatable diseases, noting that 95% of rare diseases lack approved therapies. Nundy argued that AI can accelerate discovery by creating synthetic disease registries, integrating disparate electronic health records (EHRs), and applying causal inference to answer clinical questions in months instead of years.
He illustrated this with a project analyzing diabetes remission using Kaiser Permanente’s longitudinal EHR data—a study that took two years despite clean datasets and robust resources. “AI could compress that cycle dramatically,” he noted. Similar applications, he said, could revolutionize toxicology modeling, detect structural heart disease from electrocardiograms, and identify trial-eligible patients from clinical records using multilingual, conversational AI. “We don’t just want a faster horse,” Nundy said. “We need to build the car.”
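To make that concrete, here is a minimal sketch of the kind of causal-inference workflow Nundy described: estimating a treatment effect on remission from observational EHR records using propensity-score weighting. Everything below, from the cohort to the column names, is synthetic and illustrative; it is not the Kaiser Permanente study itself.

```python
# Minimal sketch of causal inference on observational EHR data, in the
# spirit of the workflow Nundy described. All column names and the
# synthetic data are hypothetical; a real study would need far more
# rigorous confounder selection, diagnostics, and clinical review.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Synthetic cohort: age and BMI confound both treatment choice and remission.
ehr = pd.DataFrame({
    "age": rng.normal(55, 10, n),
    "bmi": rng.normal(32, 5, n),
})
p_treat = 1 / (1 + np.exp(-(0.03 * (ehr.age - 55) + 0.08 * (ehr.bmi - 32))))
ehr["treated"] = rng.random(n) < p_treat
p_remit = 1 / (1 + np.exp(-(-1.0 + 0.9 * ehr.treated - 0.05 * (ehr.bmi - 32))))
ehr["remission"] = rng.random(n) < p_remit

# 1) Model the propensity to receive treatment from measured confounders.
ps_model = LogisticRegression().fit(ehr[["age", "bmi"]], ehr.treated)
ps = ps_model.predict_proba(ehr[["age", "bmi"]])[:, 1]

# 2) Inverse-probability weighting: reweight each arm to the full cohort,
#    then compare weighted remission rates.
w = np.where(ehr.treated, 1 / ps, 1 / (1 - ps))
treated, control = ehr.treated.values, ~ehr.treated.values
ate = (np.average(ehr.remission[treated], weights=w[treated])
       - np.average(ehr.remission[control], weights=w[control]))
print(f"Estimated effect of treatment on remission rate: {ate:+.3f}")
```

The point of the sketch is the shape of the workflow: once records are integrated and confounders measured, the estimation step itself runs in seconds, which is why Nundy argues the two-year cycle can be compressed.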
FDA’s Policy Framework: Risk-Based, Transparent, and Global
Another FDA policy official outlined the agency’s evolving regulatory framework for AI in drug and biologic development, emphasizing that predictability, transparency, and engagement are its central pillars. “Innovation does not automatically equal risk,” she explained, describing a shift toward evaluating AI within its context of use. The framework, she said, recognizes that model risk, defined by a model’s influence on decisions and the consequences of its errors, determines the level of regulatory scrutiny required.
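That two-factor definition of model risk can be pictured as a simple matrix of influence against consequence. The sketch below is one hypothetical way to encode it; the tier names, cutoffs, and scrutiny levels are illustrative, not FDA definitions.

```python
# Illustrative only: a two-factor risk matrix in the spirit of the framework
# described above (model influence x decision consequence). The tier names
# and cutoffs are hypothetical, not FDA definitions.
from enum import IntEnum

class Level(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

def model_risk(influence: Level, consequence: Level) -> str:
    """Combine how much the model drives a decision with how bad an
    erroneous decision would be; scrutiny scales with the product."""
    score = influence * consequence
    if score >= 6:
        return "high risk: comprehensive validation and documentation"
    if score >= 3:
        return "medium risk: targeted credibility evidence"
    return "low risk: lighter-touch review"

# A model that fully determines post-dose inpatient monitoring would sit
# at the top of both axes:
print(model_risk(Level.HIGH, Level.HIGH))
```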
The official noted that the FDA is coordinating a cross-center approach to ensure consistent AI review practices across the drug and device divisions, forming multidisciplinary “rapid response teams” to accelerate regulatory feedback. Key priorities include harmonizing standards with international counterparts such as the European Medicines Agency (EMA) and the UK’s MHRA, expanding internal upskilling programs for FDA reviewers, and strengthening infrastructure for large-scale data sharing. “Our current policy cycle of three to four years to publish guidance will not work for AI,” she cautioned. “We must engage faster, publish iteratively, and communicate directly.”
The Draft Guidance: A Seven-Step Framework for Model Credibility
FDA official Gabriel Innes provided an in-depth walkthrough of the agency’s draft guidance on establishing AI model credibility—a first for the Center for Drug Evaluation and Research (CDER). The guidance defines a seven-step risk-based assessment process focused on the model’s intended context of use, lifecycle management, and documentation standards. For high-risk models—such as those determining whether a patient requires inpatient monitoring after dosing—the FDA expects comprehensive validation, transparency in data lineage, and extensive documentation of performance metrics.
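In code, the lifecycle the guidance describes resembles a checklist that must be completed for a specific context of use. The step wording below paraphrases the draft guidance’s seven-step process as presented; the record structure itself is hypothetical scaffolding for illustration.

```python
# Sketch of the seven-step credibility assessment as a checklist record.
# Step wording paraphrases the draft guidance discussed above; the dataclass
# and its field names are illustrative scaffolding, not an FDA artifact.
from dataclasses import dataclass, field

CREDIBILITY_STEPS = [
    "Define the question of interest the model addresses",
    "Define the model's context of use (COU)",
    "Assess model risk (influence x decision consequence)",
    "Develop a credibility assessment plan for the COU",
    "Execute the plan",
    "Document results, including deviations from the plan",
    "Determine whether the model is adequate for the COU",
]

@dataclass
class CredibilityRecord:
    model_name: str
    context_of_use: str
    completed: dict = field(default_factory=dict)  # step -> evidence summary

    def complete(self, step: str, evidence: str) -> None:
        assert step in CREDIBILITY_STEPS, f"unknown step: {step}"
        self.completed[step] = evidence

    def ready_for_submission(self) -> bool:
        # All seven steps must carry documented evidence.
        return all(s in self.completed for s in CREDIBILITY_STEPS)

record = CredibilityRecord(
    model_name="post_dose_monitoring_model",
    context_of_use="flag patients needing inpatient monitoring after dosing",
)
record.complete(CREDIBILITY_STEPS[0], "Which dosed patients need monitoring?")
print(record.ready_for_submission())  # False until all seven steps documented
```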
Innes reported that 98 organizations submitted nearly 1,500 public comments on the draft, led by regulated industry but also including academia, patient advocacy groups, and clinical researchers. Major themes included calls for clearer definitions of model scope (e.g., generative AI versus traditional machine learning), harmonized terminology, additional case examples, and more precise regulatory engagement pathways. “Stakeholders want to know not only how to validate models but when and how to engage with us,” Innes said, emphasizing the FDA’s intent to expand illustrative use cases and post-deployment management guidance in the final version.
From Discovery to Deployment: Industry Demonstrates AI’s Impact
Bristol Myers Squibb’s Chief Digital and Technology Officer Greg Meyers outlined how the company views biology as a “computational problem.” The human body contains some 40 trillion cells and seven octillion atoms, he noted, making AI indispensable for understanding and manipulating that complexity. At BMS, AI now drives every stage of drug development, from molecular design to manufacturing optimization.
Nearly all of BMS’s small-molecule programs now begin with AI-predicted experiments before any wet-lab work, and machine learning models have reduced biologics production waste by 40%, increasing usable product yield without additional raw materials. Meyers highlighted AI’s role in identifying new drug targets, simulating organoid responses, optimizing trial protocols, and predicting screen failure rates at specific sites, cutting months from enrollment timelines. “It’s the same activity,” he said, “just done in a smarter way.”
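As an illustration of the screen-failure idea, the sketch below trains a toy classifier to rank screening candidates by predicted failure risk. The features and data are invented for the example; BMS’s production models are proprietary and certainly richer.

```python
# Illustrative sketch of site-level screen-failure prediction, one of the
# applications Meyers described. Features and data are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 2_000

# Per-candidate features a sponsor might hold before screening:
# site's historical screen-failure rate, candidate age, number of
# concomitant medications, and distance to the site (km).
X = np.column_stack([
    rng.uniform(0.1, 0.6, n),      # site_hist_fail_rate
    rng.normal(60, 12, n),         # age
    rng.poisson(4, n),             # n_comeds
    rng.exponential(25, n),        # distance_km
])
# Synthetic ground truth: failure odds rise with each feature.
logit = -2 + 4 * X[:, 0] + 0.02 * (X[:, 1] - 60) + 0.15 * X[:, 2] + 0.01 * X[:, 3]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)

# Rank candidates by predicted failure risk to focus recruitment effort.
risk = model.predict_proba(X_te)[:, 1]
print(f"held-out accuracy: {model.score(X_te, y_te):.2f}")
print(f"top-decile mean predicted risk: {np.sort(risk)[-len(risk)//10:].mean():.2f}")
```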
Patient-Driven Data: Bridging the Real-World Evidence Gap
Independent researcher Dana Lewis—known for founding OpenAPS, the first open-source artificial pancreas—brought a patient-led lens to the discussion. She argued that while AI has immense potential, current systems often fail to capture the lived realities of patients. Citing her own experience with exocrine pancreatic insufficiency, Lewis explained how the absence of dietary and dosing data in studies left both patients and clinicians unable to optimize treatment. To address this, she built a patient-generated data app and validated a 15-symptom scoring system across 324 participants, demonstrating AI’s ability to structure real-world inputs into clinically meaningful evidence.
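A minimal sketch of what such structuring might look like: converting patient-reported severities into a composite score. The symptom names, the 0-3 scale, and the scoring rule here are hypothetical stand-ins, not Lewis’s validated instrument.

```python
# Hypothetical sketch of structuring patient-reported data into a composite
# symptom score, in the spirit of Lewis's system. The symptom list, 0-3
# severity scale, and averaging rule are invented for illustration; the
# real instrument was validated across 324 participants.
HYPOTHETICAL_SYMPTOMS = [
    "abdominal_pain", "bloating", "nausea", "fatigue", "urgency",
    # ...a validated instrument would enumerate all 15 items
]

def composite_score(report: dict[str, int]) -> float:
    """Average severity (0-3) across answered items, ignoring gaps."""
    answered = [v for k, v in report.items() if k in HYPOTHETICAL_SYMPTOMS]
    if not answered:
        raise ValueError("no recognized symptom entries")
    return sum(answered) / len(answered)

print(composite_score({"bloating": 2, "nausea": 1, "fatigue": 3}))  # 2.0
```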
Lewis urged regulators and sponsors to establish pathways for patient-initiated studies, enable the inclusion of wearable and app-generated data, and recognize negative results and outliers as critical to improving AI’s learning cycles. “Patients are ready to participate,” she said. “The system isn’t ready to receive.”
Cross-Sector Innovation: Microsoft’s Perspective
Microsoft’s Chief Medical Officer Dr. Thomas Osborne described AI as a “digital crystal ball,” capable of predicting disease onset, revealing new indications for existing therapies, and optimizing treatment combinations. He stressed that the greatest gains will come not from isolated breakthroughs but from cross-disciplinary collaboration and data sharing. “We’re missing opportunities because insights are trapped in silos,” Osborne warned. “Aggregating small, statistically insignificant datasets across domains could unlock personalized medicine at scale.”
Osborne advocated for harmonized data standards, federated learning to preserve intellectual property, and government incentives requiring publicly funded research to share anonymized datasets. “Cost and speed are the twin challenges,” he said, “and both can be addressed by shared knowledge and aligned incentives.”
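Federated learning, the technique Osborne pointed to, can be sketched in a few lines: each organization trains on its own data and shares only model weights, never patient-level records. The example below is a bare-bones federated-averaging loop on synthetic data.

```python
# Minimal federated-averaging sketch of the approach Osborne described:
# each organization fits a model locally and shares only weights. Synthetic
# data; a real deployment would add secure aggregation, differential
# privacy, and many more training rounds.
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([0.5, -1.2, 2.0])

def local_update(w, X, y, lr=0.1, steps=50):
    """Plain gradient descent on a local least-squares objective."""
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Three "organizations" with private datasets of different sizes.
sites = []
for n in (200, 500, 120):
    X = rng.normal(size=(n, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    sites.append((X, y))

w_global = np.zeros(3)
for _ in range(10):
    # Each site refines the shared weights on its own data...
    local_ws = [local_update(w_global, X, y) for X, y in sites]
    # ...and the aggregator averages them, weighted by dataset size.
    sizes = np.array([len(y) for _, y in sites])
    w_global = np.average(local_ws, axis=0, weights=sizes)

print("recovered weights:", np.round(w_global, 2))  # approx [0.5, -1.2, 2.0]
```

The design choice worth noting is that raw records never leave a site; only parameter vectors cross organizational boundaries, which is what makes the approach attractive for preserving intellectual property and patient privacy.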
Toward a Culture of Transparency and Continuous Learning
Throughout the discussion, participants converged on one message: the future of AI in drug development depends on trust built through transparency, negative data sharing, and education. Panelists agreed that automation bias—trusting or distrusting AI simply because it is AI—must be replaced with evidence-based confidence derived from rigorous validation. As Nundy observed, “AI is now finding the mistakes humans make,” citing examples where models identified inconsistencies in FDA submissions before reviewers did.
The FDA acknowledged that it, too, must evolve. “Our challenge,” said Makowski, “is to move from episodic guidance to a continuous learning model.” As the agency and industry embrace iterative collaboration, AI is poised not only to accelerate approvals but also to redefine the very framework of clinical science.
Looking Ahead: A Shared Responsibility
The workshop concluded with a forward-looking consensus: AI is no longer experimental—it is infrastructural. The next frontier lies in governance, education, and inclusivity. As one participant noted, “The risk isn’t in using AI—it’s in not using it soon enough.” With 16 million birthdays at stake, the call for urgency was unmistakable.
Moe Alsumidaie is Chief Editor of The Clinical Trial Vanguard. Moe holds decades of experience in the clinical trials industry. Moe also serves as Head of Research at CliniBiz and Chief Data Scientist at Annex Clinical Corporation.