The Office for Human Research Protections (OHRP) recently hosted its 7th exploratory workshop, focused on the increasingly pervasive role of AI in clinical trials. The event brought together experts to examine the ethical, legal, and practical implications of AI technologies in human subjects research, aiming to integrate ethical considerations into AI research and its applications so that the rights and welfare of human subjects remain protected.

Keynote by Jeff Smith: The AI Lifecycle

Jeff Smith, Deputy Division Director within the Certification and Testing Division at the Office of the National Coordinator for Health Information Technology (ONC), delivered the keynote address. Smith proposed that the AI lifecycle has “fractal-like properties,” making it a versatile framework for product development, institutional strategy, and public policy. He elaborated on ONC’s regulatory role, particularly in the pre-deployment phase of AI technologies, noting that ONC regulates the electronic health records (EHRs) used by 96% of hospitals and 80% of office-based physicians, a reach that lets the agency influence a significant portion of the healthcare industry.

Smith discussed a recently finalized ONC rule that establishes first-of-its-kind regulations for AI and predictive algorithms in certified health IT. The regulations require developers to provide information about how their algorithms are designed, developed, tested, and evaluated, ensuring transparency and accountability. He also described a recent reorganization within ONC, which now carries responsibilities for AI strategy and policy across the Department of Health and Human Services (HHS); the goal is a coordinated set of activities promoting the fair, appropriate, valid, effective, and safe (FAVES) use of AI in health. Smith stressed the importance of a regulatory mosaic, in which agencies within HHS and other governance bodies work together to address the many facets of AI deployment and use.

Approaching AI with Ethical Conduct

Smith posed three critical questions for the research community to consider: how to prioritize areas of ethical concern at the intersection of AI and human subjects research, which priorities are more tractable and can be tackled in the near term, and what steps can be taken to establish practical guidance and policy. He emphasized the importance of prioritization, noting that when everything is a priority, nothing is a priority. Smith suggested that the research community identify which ethical concerns are most pressing and which can be addressed more efficiently within a specific timeframe, such as 12 to 18 months.

He also highlighted the need for practical guidance and policy to help practitioners navigate the ethical landscape of AI in human research. Smith pointed out that the task ahead, while challenging, might not be as daunting as translating the Belmont Report into the Common Rule, though it would still require careful consideration and collaboration. He encouraged the research community to leverage existing policies and frameworks while remaining open to the modifications needed to address the unique challenges posed by AI. Smith’s questions set the stage for a focused and actionable discussion on the ethical implications of AI in human research.

Panel Discussions: Diverse Perspectives on AI

The first panel, moderated by Jessica Vitak, a professor at the University of Maryland, featured five speakers who outlined AI’s role in human research. Kevin McKee, a Staff Research Scientist at Google DeepMind, aimed to demystify AI, defining it as a human-made process that makes decisions or solves problems. He traced the diverse forms AI can take, from rule-based systems to machine learning, and the limitations and biases inherent in each, citing ELIZA, a rule-based chatbot from the 1960s, alongside modern machine learning systems such as chatbots that learn their behavior from training data. AI, he emphasized, is not a monolith: different systems carry different advantages and risks.
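To make McKee’s distinction concrete, here is a minimal, illustrative sketch (not from the workshop) of the rule-based category he described: every behavior comes from a hand-written pattern, in contrast to machine learning systems, whose behavior is induced from training data. The patterns and responses below are invented for illustration.

```python
import re

# Minimal ELIZA-style, rule-based responder: all behavior is hand-coded
# as pattern/response pairs, so the system can only ever do what its
# authors explicitly wrote down -- unlike an ML chatbot, whose behavior
# is learned from training data (and inherits that data's biases).
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
]
FALLBACK = "Please tell me more."

def respond(utterance: str) -> str:
    """Return the response for the first matching rule, else a fallback."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1))
    return FALLBACK

print(respond("I feel anxious about enrolling"))  # Why do you feel anxious about enrolling?
print(respond("Tell me about the study"))         # Please tell me more.
```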

Craig Lipset, Co-Chair of the Decentralized Trials & Research Alliance, focused on how AI is being used to improve trial access and the ethical considerations that come with it. He discussed AI’s potential to decentralize clinical trials, making them more accessible to diverse populations, and pointed to examples such as predictive algorithms that identify suitable trial participants and tools that monitor patient adherence to treatment protocols. Lipset emphasized the need for transparency and accountability when using AI in clinical trials, so that these technologies do not exacerbate existing disparities in healthcare access.
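As a hypothetical illustration of the participant-identification use case Lipset mentioned, the sketch below ranks candidates by a suitability score. The patient fields, criteria, and scoring function are all invented stand-ins for what would in practice be a validated model run under appropriate oversight.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    patient_id: str
    age: int
    hba1c: float            # invented example biomarker
    on_excluded_med: bool

def suitability_score(c: Candidate) -> float:
    """Toy stand-in for a trained model's eligibility/suitability score."""
    score = 1.0
    score *= 1.0 if 18 <= c.age <= 75 else 0.0   # hard inclusion criterion
    score *= 0.0 if c.on_excluded_med else 1.0   # hard exclusion criterion
    score *= min(c.hba1c / 7.0, 1.0)             # soft criterion, capped at 1
    return score

candidates = [
    Candidate("P001", 54, 8.2, False),
    Candidate("P002", 80, 7.9, False),   # fails the age criterion
    Candidate("P003", 61, 6.1, True),    # on an excluded medication
]
ranked = sorted(candidates, key=suitability_score, reverse=True)
print([(c.patient_id, suitability_score(c)) for c in ranked])
# [('P001', 1.0), ('P002', 0.0), ('P003', 0.0)]
```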

Reid Blackman, Founder and CEO of Virtue Consultants, emphasized the importance of creating responsible and responsive research programs for AI, drawing on his experience in ethical risk consultancy. He discussed the ethical risks associated with AI, such as bias and discrimination, and the robust governance frameworks needed to mitigate them, offering examples like biased training data that can lead to discriminatory outcomes. Blackman highlighted the importance of stakeholder engagement, the need for researchers to consider the broader social implications of their work, and the role of ethical guidelines and standards in promoting responsible AI in clinical trials.

Stephanie Batalis, a Research Fellow at Georgetown University, examined the intersection of AI and life sciences, discussing how emerging technologies impact biomedical innovation and biosecurity. She highlighted the potential of AI to accelerate biomedical research, such as using machine learning algorithms to analyze large datasets and identify new drug targets. Batalis also discussed AI’s ethical and security implications in the life sciences, such as the potential for dual-use research that could be misused for harmful purposes. She emphasized the need for robust governance frameworks to ensure that AI is used responsibly in biomedical research.

Michael Pencina, Chief Data Scientist at Duke Health, discussed creating trustworthy health AI ecosystems that bridge data science, healthcare, and AI, emphasizing transparency, accountability, and stakeholder engagement in developing and deploying AI in healthcare. He pointed to applications such as predictive algorithms that identify patients at risk of adverse outcomes and tools that support clinical decision-making, and stressed that AI systems need rigorous validation and evaluation to ensure their safety and effectiveness. Pencina also underscored the role of interdisciplinary collaboration in building such ecosystems.
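As a rough sketch of the kind of held-out validation Pencina called for (illustrative only; synthetic data stands in for patient records), one might check a risk model’s discrimination and a crude calibration signal on data the model never saw:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for patient features and adverse-outcome labels.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Fit a simple risk model on the training split only.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
risk = model.predict_proba(X_test)[:, 1]

# Discrimination: can the model separate events from non-events?
print(f"held-out AUROC: {roc_auc_score(y_test, risk):.3f}")

# Crude calibration check: mean predicted risk vs. observed event rate.
print(f"mean predicted risk {risk.mean():.3f} vs. event rate {y_test.mean():.3f}")
```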

Summary

OHRP’s 7th exploratory workshop brought together experts from various fields to discuss AI’s ethical, legal, and practical implications in human research. The event highlighted the importance of a shared understanding of AI, the need for coordinated regulatory efforts, and the critical role of ethical considerations in shaping the future of AI in clinical trials. Such discussions will help protect the rights and welfare of human subjects as AI evolves, and the workshop underscored the need for ongoing dialogue and collaboration to navigate this complex landscape.

Moe Alsumidaie is Chief Editor of The Clinical Trial Vanguard. He has decades of experience in the clinical trials industry and also serves as Head of Research at CliniBiz and Chief Data Scientist at Annex Clinical Corporation.