The Blueprint for an AI Bill of Rights, released by the White House Office of Science and Technology Policy, offers a comprehensive guide to the ethical and responsible use of AI across sectors, including clinical trials. This article explores how its principles can be integrated into clinical trial best practices, emphasizing safety, non-discrimination, data privacy, transparency, and human-centric approaches.
Safe and Effective Systems
AI systems in clinical trials should be developed with extensive consultation from various communities, stakeholders, and domain experts. This collaborative approach is crucial to identifying the AI system's potential risks and impacts. Pre-deployment testing, risk identification and mitigation, and ongoing monitoring are essential to ensure the systems are safe and effective for their intended use. Additionally, these systems should be proactively designed to protect against harm from unintended yet foreseeable uses. Independent evaluations that confirm the system's safety and effectiveness, and that report the steps taken to mitigate potential harms, are vital for maintaining trust and credibility.
In a diabetes management trial, for instance, AI algorithms could be developed with inputs from endocrinologists, patients, and data scientists. Pre-deployment testing could involve simulations with diverse patient data to ensure the AI accurately predicts blood sugar trends across various demographics. Ongoing monitoring would then track the algorithm’s performance in real-world clinical trial settings, adjusting for unforeseen risks or inefficiencies.
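The subgroup testing described above can be sketched in a few lines of Python. This is a minimal illustration, not a real trial pipeline: `predict_glucose` is a hypothetical stand-in for the trained model, and the records and the 5 mg/dL tolerance are invented for demonstration. The idea is simply to break prediction error out by demographic group and flag any group whose error is materially worse than the best-performing one.

```python
from statistics import mean

def predict_glucose(record):
    """Hypothetical stand-in for the trial's trained AI model (mg/dL)."""
    return record["baseline"] + 0.5 * record["carb_intake"]

def subgroup_mae(records, group_key):
    """Mean absolute prediction error, broken out by demographic group."""
    errors = {}
    for r in records:
        err = abs(predict_glucose(r) - r["observed"])
        errors.setdefault(r[group_key], []).append(err)
    return {group: mean(errs) for group, errs in errors.items()}

def flag_disparities(mae_by_group, tolerance=5.0):
    """Flag groups whose error exceeds the best group's by more than `tolerance`."""
    best = min(mae_by_group.values())
    return [g for g, mae in mae_by_group.items() if mae - best > tolerance]

# Simulated pre-deployment test data (illustrative values only)
records = [
    {"age_band": "18-40", "baseline": 100, "carb_intake": 40, "observed": 118},
    {"age_band": "18-40", "baseline": 110, "carb_intake": 30, "observed": 127},
    {"age_band": "65+",   "baseline": 105, "carb_intake": 35, "observed": 140},
    {"age_band": "65+",   "baseline": 120, "carb_intake": 25, "observed": 150},
]

mae = subgroup_mae(records, "age_band")
flagged = flag_disparities(mae)  # any age band with clearly worse accuracy
```

In a real trial, the flagged groups would trigger remediation (retraining, recalibration, or exclusion of the model for that subgroup) before deployment, and the same check would run continuously during monitoring.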
Algorithmic Discrimination Protections
To ensure equitable AI application in clinical trials, proactive and continuous measures must be taken to protect individuals and communities from algorithmic discrimination. This involves using representative data in AI system design, ensuring accessibility in design and development, and conducting pre-deployment and ongoing disparity testing and mitigation. Transparent organizational oversight and independent evaluations, including algorithmic impact assessments, are crucial to confirm these protections and should be reported to appropriate trial stakeholders whenever feasible.
To elaborate, in a clinical trial for a new cardiovascular drug, the AI model must be trained on data representing a wide range of ethnicities, ages, and genders. Regular disparity testing ensures the AI doesn’t inadvertently favor one group over another in its efficacy predictions. An independent audit could assess and report these aspects to confirm non-discrimination.
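One common form of the disparity testing mentioned above is a demographic parity check: compare the rate at which the model predicts a favorable outcome (here, "likely responder") across groups. The sketch below uses invented predictions and group labels purely to show the mechanics; real audits would use established fairness tooling and multiple metrics, not this one gap alone.

```python
def selection_rates(predictions, groups):
    """Fraction predicted as 'likely responder', per demographic group."""
    totals, positives = {}, {}
    for pred, g in zip(predictions, groups):
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + pred
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(rates):
    """Largest difference in selection rate between any two groups."""
    return max(rates.values()) - min(rates.values())

# Illustrative binary predictions (1 = likely responder) and group labels
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = selection_rates(preds, groups)
gap = demographic_parity_gap(rates)
```

A large gap does not prove discrimination on its own, but it is exactly the kind of signal an independent audit would investigate and report.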
Data Privacy
AI in clinical trials must adhere to strict data privacy standards. This includes built-in protections to prevent abusive data practices and to ensure data collection meets reasonable expectations and is limited to what is necessary for the specific context. AI system designers and deployers should respect patients' decisions about the collection, use, access, transfer, and deletion of their data. Enhanced protections and restrictions are crucial. Systems should also avoid privacy-invasive defaults, and any surveillance technologies used should be subject to heightened oversight to protect privacy and civil liberties.
Consider, for example, a mental health study using AI to analyze patient speech patterns. Participants would be informed about what data is collected and how it is used. Data collection would be strictly limited to what is necessary for the study, and any surveillance AI that might incidentally capture personal conversations (e.g., at work or with friends or relatives) would undergo rigorous privacy impact assessments to ensure patients' conversations remain private.
Notice and Explanation
Transparency is critical in AI-driven clinical trials. Participants should be informed about the use of automated systems, including their role and impact on trial outcomes. Plain language documentation, up-to-date notices, and clear explanations of outcomes are essential. The automated systems should provide participants and operators with technically valid, meaningful, and contextually calibrated explanations. Patient and investigator reporting enhances the clarity and quality of AI applications in clinical trials.
For instance, in an oncology trial employing AI for tumor imaging analysis, patients would receive easy-to-understand information about how the AI works, its role in their treatment plan, and how it affects clinical decisions. Any changes in the AI system’s functionality would be promptly communicated to participants, preferably via an eConsent.
Human Alternatives, Consideration, and Fallback
Human alternatives and fallback options are essential in AI-driven clinical trials, especially in cases of system failure or error. Participants should be able to opt out of automated systems in favor of human alternatives when appropriate. This approach ensures broad accessibility and protects patients from especially harmful impacts. AI systems used in sensitive domains should include human consideration for adverse or high-risk decisions. Reporting on human governance processes, timeliness, accessibility, and effectiveness should be available to clinical trial stakeholders whenever possible.
In a trial using AI for automated drug dosing, patients could have the option to consult a human pharmacist or doctor if they have concerns about the AI’s recommendations. In cases where the AI system fails or produces an error, a robust process would be in place for immediate human review and remedy.
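The routing logic behind such a fallback can be sketched as a simple rule: escalate to a human whenever the patient has opted out or the model's confidence falls below a floor. The 0.9 threshold, the `DosingDecision` type, and the function names here are all hypothetical; a real system would also log every escalation for the governance reporting described above.

```python
from dataclasses import dataclass

@dataclass
class DosingDecision:
    dose_mg: float
    confidence: float
    source: str  # "ai" or "human_review"

def recommend_dose(model_dose_mg, model_confidence,
                   patient_opted_out=False, confidence_floor=0.9):
    """Route opted-out or low-confidence cases to a human clinician."""
    if patient_opted_out or model_confidence < confidence_floor:
        return DosingDecision(model_dose_mg, model_confidence, "human_review")
    return DosingDecision(model_dose_mg, model_confidence, "ai")

auto      = recommend_dose(50.0, 0.97)                          # AI proceeds
escalated = recommend_dose(50.0, 0.62)                          # low confidence
opted     = recommend_dose(50.0, 0.97, patient_opted_out=True)  # patient choice
```

Note that even in the automated path the AI only recommends; the design keeps a human remedy available at every step, which is the point of the principle.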
Integrating the AI Bill of Rights principles into clinical trial best practices ensures the ethical use of AI and enhances the trustworthiness and reliability of clinical research. By prioritizing safety, non-discrimination, data privacy, transparency, and human-centric approaches, AI can be harnessed effectively to advance medical research while safeguarding participant rights and well-being.
Moe Alsumidaie is Chief Editor of The Clinical Trial Vanguard. Moe holds decades of experience in the clinical trials industry. Moe also serves as Head of Research at CliniBiz and Chief Data Scientist at Annex Clinical Corporation.