The Clinical Trials Transformation Initiative (CTTI) and the FDA hosted a workshop on Artificial Intelligence in Drug & Biological Product Development, focusing on innovations and best practices in clinical trials. In her opening remarks, Patrizia Cavazzoni, Director of the FDA's Center for Drug Evaluation and Research (CDER), noted that over 300 submissions with AI elements have been made to the FDA. She shared that the FDA is advancing AI regulatory science to accelerate the adoption of AI models and to encourage early engagement through Model-Informed Drug Development (MIDD). Cavazzoni indicated that clinical trials are at a turning point in drug development: AI can optimize trial design and enrichment strategies, but data models must be “explainable” to generate high-quality evidence for submission. The goal is to foster innovation through regulatory clarity and risk-based approaches that make clinical trials more agile and inclusive. In silico trials were a further topic of discussion at the event.

The event featured presentations and panel discussions on optimizing AI model design, addressing data-related challenges, and identifying strategies to overcome barriers in AI applications. In a session focused on “Model Performance, Explainability, and Transparency,” Luca Emili, CEO of InSilicoTrials, highlighted the integration of AI with traditional modeling and virtual patients to enhance efficiency, reduce costs, and address ethical concerns with in silico trials.

The session then concluded with a panel moderated by Hussein Ezzeldin from the FDA and Artem Trotsyuk from Stanford University. Panelists included Emili; Michael Lingzhi Li, Assistant Professor at Harvard Business School; and Subha Madhavan, Head of Clinical AI/ML & Digital Sciences at Pfizer. The discussion emphasized the importance of balancing AI model performance, explainability, and transparency to ensure the efficacy and trustworthiness of AI applications in in silico trials.

Luca Emili, CEO of InSilicoTrials, speaking at the FDA workshop

InSilicoTrials and Virtual Patients

Luca Emili, CEO of InSilicoTrials, provided insights into the use of in silico trials as alternatives to traditional in vivo and in vitro approaches. He explained that the term “in silico” refers to the use of computer simulations to model biological processes, offering an alternative to in vivo (animal) and in vitro (test tube) methods. This approach combines AI, traditional modeling techniques such as pharmacokinetics (PK), pharmacodynamics (PD), and quantitative systems pharmacology (QSP), and the concept of virtual patients.

Emili elaborated on the concept of virtual patients, which are digital representations of real individuals. Virtual patients can be generated using generative AI, allowing researchers to create diverse datasets that comply with privacy regulations such as HIPAA and GDPR. This reduces the need for real patient data and enables more extensive and varied simulations. Integrating AI, traditional modeling, and virtual patients, Emili indicated, offers several advantages: reduced costs, improved time efficiency, and fewer ethical concerns.
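To make the virtual-patient idea concrete, here is a minimal sketch of synthetic cohort generation. All parameters (age and weight distributions, clipping ranges) are hypothetical, and the simple correlated-sampling model stands in for the far richer generative AI models that real virtual-patient platforms train on de-identified cohorts:

```python
import random

def generate_virtual_patients(n, seed=42):
    """Sample a simplified virtual-patient cohort.

    Each record draws age from a normal distribution and makes weight
    loosely correlated with age, then clips both to plausible adult
    ranges. The parameters are illustrative, not fitted to real data.
    """
    rng = random.Random(seed)  # seeded for reproducible cohorts
    cohort = []
    for _ in range(n):
        age = min(max(rng.gauss(55, 12), 18), 90)                 # years
        weight = min(max(60 + 0.2 * age + rng.gauss(0, 10), 40), 150)  # kg
        cohort.append({"age": round(age, 1), "weight_kg": round(weight, 1)})
    return cohort

cohort = generate_virtual_patients(1000)
```

Because no record corresponds to a real individual, a cohort like this sidesteps the privacy constraints that apply to real patient data, which is the property the article attributes to generative virtual patients.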

Regulatory Endorsement and Compliance in In Silico Trials

Luca Emili emphasized the importance of regulatory endorsement and compliance in adopting AI-driven methods. He highlighted the Good Simulation Practice (GSP) framework, set out in Towards Good Simulation Practice, published open access by Springer, which involved 138 experts from academia, industry, and regulatory agencies, including 13 modeling and simulation (M&S) experts from the FDA's ModSimWG. The book provides guidelines for using traditional modeling and simulation in drug development and has been downloaded 28,000 times since its publication, indicating strong interest and support from the scientific community. The GSP framework aims to standardize simulation practices and ensure that models are credible and trustworthy.

Emili also discussed collaboration with the FDA and other partners to build credibility and trust in AI-driven methods, which has been crucial in gaining regulatory acceptance for in silico trials. By adhering to the GSP framework and working closely with regulators, developers can ensure that in silico trials meet the highest standards of quality and reliability, paving the way for broader adoption in the pharmaceutical industry.

AI-Driven Efficiency and Acceleration in In Silico Trials

Emili provided several examples of how AI tools accelerate data strategies and literature reviews. One notable example was using large language models to analyze the literature and identify disease prevalence in underrepresented populations: research that typically takes three to four months can now be completed in minutes. These systems use retrieval-augmented generation (RAG) and chunking strategies to avoid missing relevant information and to reduce hallucinations. The approach aligns with the FDA's Diversity Action Plan guidance for clinical trials, issued in June 2024, which aims to ensure that trials represent the populations affected by a disease.
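The chunking-plus-retrieval step of such a pipeline can be sketched as follows. This is a toy illustration: word-overlap scoring stands in for the embedding-based similarity search real RAG systems use, and the sample text and parameters are invented for the example:

```python
def chunk(text, size=40, overlap=10):
    """Split text into overlapping word windows; the overlap keeps
    context from being cut at arbitrary chunk boundaries."""
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]

def retrieve(query, chunks, k=3):
    """Rank chunks by word overlap with the query (a toy stand-in for
    embedding similarity) and return the top-k as retrieved context."""
    q = set(query.lower().split())
    scored = sorted(chunks,
                    key=lambda c: len(q & set(c.lower().split())),
                    reverse=True)
    return scored[:k]

# Illustrative corpus: in practice this would be full-text literature.
corpus = ("malaria prevalence is higher in tropical regions while "
          "influenza prevalence peaks in winter months across "
          "temperate populations")
chunks = chunk(corpus, size=8, overlap=2)
context = retrieve("influenza prevalence winter", chunks, k=1)
```

In a full pipeline, the retrieved `context` would be passed to a language model along with the question, grounding its answer in the source literature rather than its parametric memory, which is how RAG reduces hallucinations.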

The technological infrastructure supporting these efficiencies includes advanced AI algorithms and computer simulations. Emili explained that the integration of AI, traditional modeling, and virtual patients allows for more extensive and varied simulations, leading to faster and more accurate results.

Economic and Ethical Benefits of In Silico Trials

Emili highlighted the economic and ethical benefits of using AI and in silico trials in drug development. By using virtual patients in in silico models, biopharmaceutical enterprises can reduce the number of real patients exposed to experimental compounds, thereby reducing potential risks and ethical concerns. This approach also minimizes the need for animal testing, which has long been a contentious issue in the pharmaceutical industry.

Emili also emphasized the accessibility and affordability of the InSilicoTrials platform, particularly for small and medium-sized enterprises (SMEs), allowing them to leverage advanced AI and simulation technologies without significant investment. This democratizes access to cutting-edge tools and enables smaller companies to compete with larger pharmaceutical firms. By reducing costs and improving efficiency, the InSilicoTrials platform has the potential to transform the clinical trials industry and make drug development more accessible and ethical.

Panel Discussion: AI Credibility and Trust

The session concluded with a panel discussion moderated by Hussein Ezzeldin from the FDA, focusing on how enterprises can enhance the credibility and trust of AI models in clinical trials. Michael Lingzhi Li from Harvard elaborated on the distinction between interpretability and explainability: explainability describes what is happening, while interpretability addresses why it is happening. Li argued that interpretability is crucial for gaining decision-makers' trust, as it allows them to understand the reasoning behind an AI model's recommendations; explainability alone is rarely enough to provide satisfactory answers. Decision-makers often need to understand that reasoning to feel comfortable using AI-generated results in critical decisions.

Luca Emili also contributed to the discussion, highlighting the importance of regulatory endorsement and compliance to support the credibility and trust of simulation models. He pointed out that collaborating with regulators, such as the FDA, to build AI-based frameworks establishes a strong foundation of credibility and trust.

Subha Madhavan from Pfizer emphasized the necessity of reproducibility for maintaining credibility and trust in AI tools. She highlighted the use of generative AI for productivity tasks, such as drafting regulatory documents, where consistency is crucial for building trust among teams: the same data should produce identical outcomes. Madhavan also stressed the importance of interpretability, particularly in high-stakes situations. More complex models do not necessarily produce better results; models need to be interpretable and explainable to all stakeholders involved. She shared an example of modeling COVID-19 severity by integrating real-world data, clinical trial data, and scientific literature. A late fusion approach was used, analyzing each data domain separately before combining the results, ensuring that the integrated data is relevant and accurate.
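The late fusion pattern Madhavan describes can be sketched in a few lines. The domain names, scores, and the weighted-average combiner below are illustrative assumptions; the point is only the structure, in which each domain is modeled independently and only the resulting scores are merged:

```python
def late_fusion(domain_scores, weights=None):
    """Combine per-domain predictions after each domain model has run
    independently (late fusion). Here the combiner is a weighted
    average; real pipelines might instead train a meta-model on the
    per-domain outputs."""
    if weights is None:
        weights = {d: 1.0 for d in domain_scores}  # equal weighting
    total = sum(weights[d] for d in domain_scores)
    return sum(domain_scores[d] * weights[d] for d in domain_scores) / total

# Hypothetical per-domain severity scores for one patient, each produced
# by a separate model over its own data source.
fused = late_fusion({"real_world": 0.8, "clinical_trial": 0.6, "literature": 0.7})
```

Because each domain is analyzed with methods suited to its own data before fusion, a noisy or irrelevant source can be down-weighted or audited in isolation, which supports the relevance and accuracy property the example highlights.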

Summary

The session provided a comprehensive overview of the challenges and opportunities in using AI for drug development, emphasizing the need for a balanced approach to model performance, explainability, and transparency. The insights shared by the presenters highlighted the potential of AI to revolutionize clinical trials while emphasizing the need for robust, interpretable, and transparent models to ensure trust and efficacy.

This article is sponsored by InSilicoTrials


Moe Alsumidaie is Chief Editor of The Clinical Trial Vanguard. He has decades of experience in the clinical trials industry and also serves as Head of Research at CliniBiz and Chief Data Scientist at Annex Clinical Corporation.