The DPHARM conference brought together a panel of experts to discuss the transformative potential of Artificial Intelligence (AI) and Machine Learning (ML) in drug development. Moderated by Henry Wei, MD, Head of Development Innovation at Regeneron, the panel included Charles Fisher, PhD, CEO and Founder of Unlearn.AI; Sid Jain, VP of Global Development Data Science & Digital Health at Janssen Research & Development; Nareen Katta, MBA, Head of Data Science & Analytics, Global Therapeutics – Clinical Development at AbbVie; Prasanna Rao, Senior Director, Global Head of AI/ML, Clinical Data Sciences, GPD at Pfizer; and Marsha E. Samson, PhD, MPH, MSHSA, Senior Analyst at the FDA, CDER/Office of Medical Policy. The discussion covered a wide range of topics, from regulatory perspectives to practical applications in clinical trials, offering a comprehensive look at how AI and ML are shaping the future of drug development.

FDA’s Perspective on AI and ML

Marsha Samson from the FDA highlighted the agency’s recent discussion paper on AI and ML in drug and biological product development. She noted that the FDA analyzed more than 800 stakeholder comments, submitted by regulated industry, scientific and academic experts, and private citizens, to identify areas needing greater regulatory clarity. Key themes emerging from these comments include human-led governance, accountability, transparency, data quality, and model validation. Samson elaborated that the FDA’s primary objective is to ensure that AI and ML applications in drug development maintain rigorous evidentiary standards. She explained that the agency is focused on the context of use and a risk-based approach, in which oversight measures are commensurate with the level of risk posed by that context. For instance, the FDA is interested in how AI and ML can be used to predict clinical outcomes, including disease prognosis and treatment response, for both efficacy and safety.

Samson also pointed out that the FDA has already started incorporating AI and ML into its own processes. For example, in 2022 the agency used AI to develop a scoring rule for identifying adult patients who fell within the population authorized under an Emergency Use Authorization (EUA) for a COVID-19 treatment. This example underscores the agency’s commitment to integrating advanced technologies while maintaining high standards for patient safety and efficacy. She also mentioned that the FDA is keen to understand how AI and ML are being used in operational areas such as data management and manufacturing, and to ensure that these applications do not compromise data quality or patient safety. Samson encouraged companies to engage with the FDA early in the process to support compliance and smooth regulatory review, emphasizing that transparency and explainability are crucial for the successful integration of AI and ML in drug development.

AI in Clinical Trials: Unlearn.AI’s Approach

Charles Fisher of Unlearn.AI discussed the potential applications of AI and ML in prognostic modeling. He explained that Unlearn.AI focuses on simulating patient outcomes using data from registries, electronic health records (EHRs), and previously completed trials. This approach allows for the creation of “digital twins” of patients, which can be used to predict outcomes under different treatment scenarios. Fisher said this method can reduce the number of participants needed in a clinical trial by 25% to 30%, increasing efficiency while adhering to existing regulatory guidelines. He emphasized that Unlearn.AI aims to marry modern machine learning methods with traditional biostatistical techniques to leverage the vast data already available. This approach enhances the power of clinical trials while remaining within current regulatory guidance, such as the FDA’s guidance on covariate adjustment.
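The statistical idea behind this kind of design can be illustrated with a small sketch. Below is a minimal, hypothetical Python example of covariate adjustment with a prognostic score: each participant’s predicted outcome (the “digital twin” forecast) enters the analysis model as a baseline covariate, which can tighten the treatment-effect estimate without altering the randomized comparison. The column names and simulated data are illustrative only, not Unlearn.AI’s actual implementation.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated trial data (illustrative only): 200 randomized participants.
rng = np.random.default_rng(0)
n = 200
baseline_severity = rng.normal(50, 10, n)   # observed baseline covariate
treatment = rng.integers(0, 2, n)           # 1 = active arm, 0 = control

# Hypothetical "digital twin" prognostic score: a model trained on external
# data (registries, EHRs, past trials) predicts each patient's outcome
# under control. Here it is faked as a noisy function of baseline severity.
prognostic_score = baseline_severity * 0.8 + rng.normal(0, 3, n)

# Observed outcome: driven by prognosis plus a true treatment effect of -5.
outcome = prognostic_score + treatment * -5 + rng.normal(0, 5, n)

df = pd.DataFrame({
    "outcome": outcome,
    "treatment": treatment,
    "prognostic_score": prognostic_score,
})

# Unadjusted analysis vs. analysis adjusted for the prognostic score.
unadjusted = smf.ols("outcome ~ treatment", data=df).fit()
adjusted = smf.ols("outcome ~ treatment + prognostic_score", data=df).fit()

print("Unadjusted effect: %.2f (SE %.2f)"
      % (unadjusted.params["treatment"], unadjusted.bse["treatment"]))
print("Adjusted effect:   %.2f (SE %.2f)"
      % (adjusted.params["treatment"], adjusted.bse["treatment"]))
```

The smaller standard error in the adjusted model is what translates into fewer participants for the same statistical power, which is the mechanism behind the 25% to 30% reduction Fisher described.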

Fisher also highlighted that this method has been well-received by regulatory bodies. He shared that their interactions with European regulators were particularly positive, with regulatory consultants describing their meetings as some of the best they had experienced in decades. This indicates a growing acceptance and understanding of AI and ML applications in clinical trials among regulatory agencies. Fisher noted that the risk-based approach adopted by both the FDA and EMA allows for deploying sophisticated technologies in applications where, even if they don’t work well, they won’t significantly impact the outcome. This approach ensures that AI and ML can be integrated into clinical trials in a way that maintains high standards for patient safety and efficacy while leveraging the benefits of advanced technologies.

Large Language Models: Janssen’s Insights

Sid Jain from Janssen Research & Development discussed the potential productivity gains from large language models like ChatGPT. He noted that while the potential is significant, expectations must be balanced against what the technology can reliably deliver today. Jain suggested these models could serve as “co-pilots” across the clinical development cycle, from protocol design to site selection, democratizing access to data and driving efficiencies. He gave specific examples of how large language models can streamline clinical trial processes. For instance, they can assist in protocol design by drafting inclusion and exclusion criteria, a task that is often time-consuming and complex. They can also help define patient cohorts by quickly analyzing large datasets to identify suitable candidates, reducing the burden on clinicians and data scientists.
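As a rough illustration of the cohort-definition step, the sketch below applies structured inclusion and exclusion criteria to a patient table with pandas. The column names, thresholds, and data are hypothetical; in practice the criteria might be drafted by a language model and would be confirmed by clinicians before any patient is pre-screened.

```python
import pandas as pd

# Hypothetical pre-screening table; real data would come from an EHR feed.
patients = pd.DataFrame({
    "patient_id": [101, 102, 103, 104],
    "age": [54, 71, 39, 62],
    "hba1c": [8.1, 7.2, 9.4, 6.3],
    "egfr": [75, 42, 88, 90],
    "on_insulin": [False, True, False, False],
})

# Structured criteria, e.g. drafted by an LLM and reviewed by a clinician.
inclusion = (
    patients["age"].between(18, 75)
    & (patients["hba1c"] >= 7.0)      # inadequately controlled diabetes
)
exclusion = (
    (patients["egfr"] < 45)           # impaired renal function
    | patients["on_insulin"]          # already on insulin therapy
)

eligible = patients[inclusion & ~exclusion]
print(eligible[["patient_id", "age", "hba1c"]])
```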

Moreover, Jain mentioned that large language models could enhance site selection by analyzing historical data to identify sites with the highest potential for successful patient recruitment. This can lead to more efficient and effective clinical trials, ultimately speeding up drug development. However, he cautioned that while the potential is enormous, managing expectations and ensuring that these models are used responsibly and ethically is essential. Jain emphasized that the key to successfully integrating large language models in clinical trials is to maintain rigorous data quality checks and involve humans in the loop to verify the outputs. This approach ensures that the benefits of advanced technologies are realized without compromising the integrity of the clinical trial process.
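A simplified version of the site-selection idea might look like the sketch below: historical enrollment data are aggregated per site, and sites are ranked by recruitment rate with screen-failure rate as a tie-breaker. The metrics and data are invented for illustration; a production model would weigh many more signals, such as startup times, protocol complexity, and data quality history.

```python
import pandas as pd

# Hypothetical historical performance from previously completed trials.
history = pd.DataFrame({
    "site_id": ["S01", "S01", "S02", "S03", "S03", "S03"],
    "enrolled": [12, 9, 4, 20, 18, 15],
    "screened": [20, 15, 12, 26, 24, 21],
    "months_active": [6, 5, 6, 8, 7, 6],
})

site_stats = history.groupby("site_id").agg(
    total_enrolled=("enrolled", "sum"),
    total_screened=("screened", "sum"),
    total_months=("months_active", "sum"),
)
site_stats["enrollment_rate"] = (
    site_stats["total_enrolled"] / site_stats["total_months"]
)
site_stats["screen_fail_rate"] = 1 - (
    site_stats["total_enrolled"] / site_stats["total_screened"]
)

# Rank sites: faster enrollment first, fewer screen failures as tie-breaker.
ranked = site_stats.sort_values(
    ["enrollment_rate", "screen_fail_rate"], ascending=[False, True]
)
print(ranked)
```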

Pfizer’s Take on GPT-4 and Beyond

Prasanna Rao from Pfizer elaborated on the advancements in large language models, particularly the leap from GPT-3 to GPT-4. He said that GPT-4, which he noted is reported to have roughly a trillion parameters, has shown remarkable capabilities, including scoring above the 90th percentile on a US medical licensing exam. Rao emphasized that while these models are impressive, they still require human oversight to ensure accuracy and reliability, especially in clinical trials. He contrasted this with his experience with earlier systems like IBM Watson, which relied heavily on supervised learning and required extensive expert data annotation. GPT-4, by comparison, is built on largely self-supervised pre-training, allowing it to learn from vast amounts of unstructured data available on the internet. This shift has enabled a step change in the model’s accuracy and general capability.

However, Rao also highlighted the challenges associated with these models, such as the risk of “hallucinations,” where the model generates incorrect or misleading information. He stressed the importance of having a human in the loop to verify the accuracy of the model’s outputs. Rao also mentioned that while GPT-4 is a significant advancement, the industry should focus on optimizing its use for specific applications in clinical development rather than rushing to adopt even more advanced models like GPT-5. He pointed out that the current capabilities of GPT-4 are sufficient for many applications in clinical trials, and the focus should be on ensuring that these models are used responsibly and effectively to enhance the drug development process.
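The human-in-the-loop principle Rao describes can be made concrete with a small workflow sketch: model output is never accepted automatically, it is queued for review and only released once a named reviewer signs off. The structure below is purely illustrative and not tied to any particular vendor tooling.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewItem:
    # A single piece of model-generated content awaiting human verification.
    content: str
    source_model: str
    approved: bool = False
    reviewer: str | None = None
    notes: list[str] = field(default_factory=list)

def review(item: ReviewItem, reviewer: str, approve: bool, note: str = "") -> ReviewItem:
    """Record a human decision; nothing is used downstream until approved."""
    item.reviewer = reviewer
    item.approved = approve
    if note:
        item.notes.append(note)
    return item

# Example: an LLM-drafted eligibility criterion flagged for clinical review.
draft = ReviewItem(
    content="Exclude patients with eGFR < 45 mL/min/1.73 m2",
    source_model="gpt-4",
)
draft = review(draft, reviewer="clinical_lead", approve=True,
               note="Consistent with prior renal safety criteria.")

usable = [item for item in [draft] if item.approved]
print(f"{len(usable)} of 1 drafted items cleared human review")
```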

AbbVie’s Focus on Site Productivity

Nareen Katta from AbbVie discussed how AI is being leveraged to improve site productivity and patient recruitment. He explained that AI techniques, including Natural Language Processing (NLP), are used to mine insights from the notes and interactions of clinical research associates and medical science liaisons. This helps the company engage sites better and improve their productivity, addressing one of the significant challenges in clinical trials. Katta gave examples of how AI can harness institutional knowledge and drive better decision-making. For instance, AI can analyze historical data to identify patterns and trends that inform site selection and patient recruitment strategies, leading to more targeted and effective interactions with sites and ultimately improving their productivity and contribution to clinical trials.
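One way to picture the NLP step Katta mentions is a simple term-weighting pass over free-text field notes, sketched below with scikit-learn’s TF-IDF vectorizer. The notes are fabricated examples; real pipelines would layer de-identification, entity extraction, and topic modeling on top of something like this.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Fabricated free-text notes from site visits (illustrative only).
notes = [
    "Site struggling with screen failures due to strict renal criteria",
    "Coordinator turnover delaying query resolution and data entry",
    "Strong investigator engagement, recruitment ahead of target",
    "Renal criteria again cited as main barrier to enrollment",
]

vectorizer = TfidfVectorizer(stop_words="english", ngram_range=(1, 2))
tfidf = vectorizer.fit_transform(notes)

# Surface the highest-weighted terms across all notes as candidate themes.
scores = tfidf.sum(axis=0).A1
terms = vectorizer.get_feature_names_out()
top = sorted(zip(terms, scores), key=lambda pair: pair[1], reverse=True)[:5]
for term, score in top:
    print(f"{term:25s} {score:.2f}")
```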

He also mentioned that AbbVie is using pre-trained large language models to optimize insights further and enhance the effectiveness of their interactions with sites. By leveraging these advanced technologies, AbbVie aims to streamline the clinical trial process and reduce the burden on sites, thereby accelerating the development of new therapies. Katta emphasized that while AI is a powerful tool, it is essential to focus on specific use cases where it can provide the most value. He noted that the goal is to use AI to simplify and enhance the clinical trial process, ultimately reducing the cost and time required for drug development while maintaining high patient safety and efficacy standards.

Regulatory Considerations and Risk-Based Approaches

The panel also discussed the regulatory considerations and risk-based approaches to AI and ML. Marsha Samson emphasized that the FDA is interested in understanding how AI and ML are used and ensuring that these technologies do not compromise patient safety or data quality. She encouraged companies to engage with the FDA early to ensure compliance and smooth regulatory approval. Samson explained that the FDA’s risk-based approach involves evaluating the regulatory impact of AI and ML applications based on their specific use cases. For instance, if a large language model is used for systematic literature reviews, the FDA would be interested in knowing how the data quality is maintained and whether there are any risks of “hallucinations.” She stressed the importance of transparency and explainability in AI and ML applications and the need for human oversight to ensure accuracy.

Industry Conservatism and Regulatory Interactions

Charles Fisher noted that biopharma companies tend to be more conservative than regulatory agencies when adopting new technologies, a caution often driven by concerns about meeting regulatory standards. He suggested that companies could benefit from engaging with regulators early in the process to gain clarity and guidance on using novel methods, pointing again to the constructive reception his team has received from European regulators. Marsha Samson echoed this sentiment, encouraging companies to contact the FDA for guidance and support, particularly when using digital health technologies (DHTs) and other innovative approaches.

Sid Jain added that while biopharma companies may be cautious, there are significant opportunities to leverage AI and ML for more data-driven clinical trial designs. He mentioned that Janssen is already using these technologies to optimize trial designs and improve patient recruitment but emphasized the importance of balancing innovation with regulatory compliance. Jain provided examples of how AI can optimize inclusion and exclusion criteria but stressed the importance of assessing the potential impact on patient safety and efficacy. He noted that while data-driven approaches can offer significant benefits, they must be carefully evaluated to avoid compromising trial outcomes.

Conclusion

The DPHARM panel provided a comprehensive overview of the current and future potential of AI and ML in drug development. While the technology offers significant promise, it also requires careful consideration and rigorous validation to ensure it meets the high standards required in the medical field. The panelists agreed that collaboration between industry and regulatory bodies is crucial for successfully integrating these technologies into the drug development lifecycle. They emphasized the importance of maintaining rigorous data quality checks, involving humans in the loop, and engaging with regulatory agencies early to ensure compliance and smooth regulatory approval.


Moe Alsumidaie is Chief Editor of The Clinical Trial Vanguard. Moe has decades of experience in the clinical trials industry and also serves as Head of Research at CliniBiz and Chief Data Scientist at Annex Clinical Corporation.