Over the past 30 years, there have been significant advancements in therapies and in the clinical acquisition and measurement of medical images. However, the tools and infrastructure used in clinical trial imaging workflows have largely remained unchanged [1].

Various human, operational, and technical factors can hinder the accuracy and consistency of radiological assessments in clinical trials. These issues critically affect the reliability of patient screening and eligibility assessments as well as longitudinal tracking, where even minor inaccuracies can compromise patient inclusion and progression decisions. Ultimately, they undermine the integrity of clinical trial data for sponsors and the ability of patients to participate in trials and receive potentially life-extending investigational treatments [2].

Historically, the fragmented array of clinical trial imaging services kept medical imaging out of the spotlight. However, the post-COVID era has dramatically altered the landscape. Regulators are now intentionally broadening recruitment requirements to include diversity and ensure meaningful community access [3]. Concurrently, trial volumes and complexity have increased, with more complicated inclusion and exclusion criteria, particularly regarding clinical trial imaging. Research sites also face significant staffing and compliance challenges, especially among clinical research coordinators and radiologists [4].

Imaging plays a crucial role in cancer clinical trials, where imaging-based measures of therapeutic efficacy serve as key surrogate clinical endpoints in most cancer trials [5]. The increasing use of imaging in oncology trials is driven by targeted treatments that have become more personalized to patient needs [1]. Imaging is also vital in determining patients’ eligibility for trials and whether they continue receiving treatments based on tumor response.

Even when standard-of-care imaging is collected and stored for future use, there is a lack of solutions that keep sponsors, investigators, and imaging sites appropriately connected. Approximately half of the clinical trial investments made by biopharma sponsors focus on cancer treatments, which are typically intensive in medical imaging [2]. Other research areas, such as cardiology, pulmonology, central nervous system, and gastrointestinal studies, face similar challenges regarding imaging efficiency and accuracy.

Current Imaging Review Processes

Research imaging review in clinical trials can be performed at the site, a centralized location, or both. In oncology clinical trials, this review requires specific quantitative measurement and longitudinal tracking of tumor burden based on the guidelines stated in the clinical trial protocol, which vary for each trial [6]. The goal is to achieve minimal variability in both image acquisition and quantification.

Despite potential biases due to the availability of clinical information, site research reads are essential for determining patient eligibility, confirming responses, and assessing progression. Early-phase trials, particularly those sponsored by small and emerging biopharma and medical device companies, as well as investigator-initiated trials, are more likely to rely heavily on site imaging assessments due to the high cost and delays associated with Blinded Independent Central Review (BICR) [5]. In fact, the majority of clinical trials rely on site imaging assessments rather than BICR, so ignoring the inefficiencies and errors of site reads and expecting them to be addressed by expensive and delayed central review is not a viable solution.

Challenges with Site Reads: Imaging Assessment Read Inaccuracy

Early signals of therapeutic efficacy often come from site data and inform research investments regardless of trial phase or sponsor type. Technological advances have enabled unprecedented levels of remote work and collaboration during clinical trials, making it logical to perform site reads remotely and be blinded from patient information when required [7]. This can ensure timely data delivery to both sites and sponsors.

However, site reads have not been considered trustworthy due to unacceptably high error rates and observed non-conformance to clinical trial imaging protocols. These issues stem from trial complexity, staffing shortages, inconsistent workflows, and ineffective data management systems [8]. There is a pressing need for accessible, collaborative clinical trial imaging informatics platforms to help sites deliver timely and accurate site reads.

Mistakes in imaging assessments at sites occur when imaging teams do not know, or fail to follow, the required protocol-specific measurement criteria. This task is daunting for radiologist readers: across the more than 30 primary imaging criteria, there are hundreds of modified ways to measure images and quantify findings, and the requirements vary by clinical trial [2]. Unfortunately, widely available clinical imaging interpretation systems cannot adequately support even a fraction of these requirements, nor do they offer a means for collaboration among the internal and external stakeholders working on each trial. Consequently, the work has largely reverted to manual methods involving pen and paper, calculators, spreadsheets, and email. These manual methods remain the most prevalent system in use worldwide for clinical trial site imaging assessments [9].

Error Rates in Imaging Assessments

National Cancer Institute (NCI) designated Comprehensive Cancer Centers are leading the charge by shedding light on these challenges. They have measured the burdens placed upon them by broken trial imaging workflows [10]. Studies examining compliance with clinical trial imaging protocols at three major NCI-designated Comprehensive Cancer Centers found 25%, 30%, and 50% error rates prior to implementing a comprehensive clinical trials imaging informatics platform to minimize these errors [10]. These were measured on a per-imaging time point basis, checking lesion-level compliance and overall response calculations across all active oncology clinical trials at each institution. For example, a typical patient in a single trial who has been imaged at four time points was found to have one to two inaccurate imaging assessment results on average.
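The quoted one-to-two inaccurate assessments per patient follows directly from the measured per-time-point error rates. As a minimal back-of-envelope sketch (assuming, for illustration only, that errors occur independently at each time point), the expected error counts can be checked:

```python
# Back-of-envelope check: expected number of inaccurate imaging assessments
# for one patient, given a per-time-point error rate and a number of imaging
# time points. Independence across time points is an illustrative assumption.

def expected_errors(error_rate_per_timepoint: float, num_timepooints_or_visits: int = 4) -> float:
    """Expected count of erroneous assessments across all imaging time points."""
    return error_rate_per_timepoint * num_timepooints_or_visits

# Per-time-point error rates reported at the three centers [10].
for rate in (0.25, 0.30, 0.50):
    print(f"{rate:.0%} error rate over 4 time points -> "
          f"{expected_errors(rate, 4):.1f} expected inaccurate assessments")
```

Running this gives roughly 1.0 to 2.0 expected inaccurate assessments for a patient imaged at four time points, matching the "one to two" figure cited above.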

Another study performed a root-cause analysis of site imaging read errors across 627 trials conducted from 2014 to 2017 [11]:
  • Discrepancies in Follow-Up Imaging: 29% of errors can skew Progression-Free Survival (PFS) and Overall Response Rate (ORR) endpoints.
  • Missing or Inaccurate Measurements: 24% of errors threaten trial integrity by potentially misrepresenting treatment efficacy.
  • Incorrect Baseline Measurements: 18% of errors complicate disease progression assessments and eligibility determinations.
  • Application of Incorrect Response Criteria: 16% of errors violate the imaging protocol by applying the wrong imaging criteria.
  • Errors in Baseline Comparisons: 8% of errors.
  • Overlooking Critical Baseline and Nadir Data: 5% of errors.

These issues emphasize the need for enhanced precision and standardization in site imaging assessments to ensure reliable trial results. Without protocol-compliant, criteria-specific research imaging assessment tools at the site level, reliance on inaccurate and inconsistent site reads can significantly increase censoring rates and introduce inaccuracies into trial endpoints such as PFS, which is sensitive to censoring [12]. A recent study found that censoring rates of 10-12% are detected upon central review of patients enrolled at sites [17]. Beyond the ethical and medicolegal issues of treating patients with experimental investigational therapeutics who do not meet trial-specific eligibility criteria, the cost to sponsors of enrolling and treating patients based on site assessments, when 10% of enrolled patients are later censored, greatly exceeds the entire cost of imaging assessments across all patients on the trial.

Challenges Radiologists Face

These findings, while critically important, are not surprising. Radiologists performing site reads in clinical trials often grapple with complex protocols that lead to inconsistent interpretations, especially when different trials demand unique and multiple imaging criteria. Although RECIST is the most commonly used criterion, there are over 30 primary oncology criteria, and RECIST alone accounts for only about half of trial assessments. Sponsor-required modifications often make even RECIST non-standard or necessitate its combination with other criteria, rendering systems that support only unmodified RECIST insufficient [7].

Already burdened with high clinical volumes, local radiologists must perform research reads using systems that do not comply with protocols or support the necessary calculations. To cope, some radiology departments distribute research reads evenly among all radiologists, which creates training issues. Others mark target lesions and rely on treating investigators or oncology data teams to extract the relevant metrics and perform the required calculations, frequently resulting in errors and protocol violations due to incorrect lesion selection or transcription errors [2].

Additionally, due to clinical demands, radiologists often become backlogged and unable to respond promptly to initial and re-review trial assessment requests, causing delays of several weeks to months. This sometimes leads to patients being dropped from the trial or to healthcare institutions discontinuing participation in complex clinical trials, especially in community healthcare settings [4]. In other cases, treating investigators perform measurements and calculations to accommodate trial subjects’ treatment regimens, introducing known biases as they balance investigational treatments with patient care [2].

Each trial’s unique modifications highlight the need for efficient clinical trial workflows that robustly incorporate accurate protocol imaging assessments with minimal errors. Technology that detects these errors can reduce the workload of radiologists and study staff, promote decentralization, and improve data quality in both site and central imaging environments by systematically avoiding mistakes, re-reviews, and protocol deviations [9].

Addressing Common Clinical Trial Imaging Misconceptions

There are several misconceptions about site imaging assessments:

  • Site Radiologists Are Available: There is a difference between radiologists who can perform research imaging assessments in a clinical setting and those who can provide timely support remotely. External radiologists trained in imaging assessments may be more available to support site reads remotely than local radiologists are [2].
  • AI Can Automate the Imaging Process: While AI holds great potential, total automation is not yet feasible, and lesion measurement accounts for only a small fraction of the time radiologists and study staff lose to broken workflows [13]. Furthermore, in the current environment, study teams spend as much as eight hours managing imaging assessments, data transcription, data management, and audits for every hour of radiologist time.
  • EHR and EDC Integration Can Improve Imaging Efficiency and Accuracy: Integrations between Electronic Health Records (EHR) and Electronic Data Capture (EDC) systems have limited potential to enhance accuracy in clinical trial imaging research.
  • Training and Communication Addresses Systemic Issues in Imaging: Better site training and improved CRO-to-site communication alone cannot address systemic workflow issues [14].

The Conundrum for Early-Phase Trial Sponsors

Sponsors, particularly those involved in early-phase trials, face a difficult choice between the high costs and delays associated with BICRs and the potential inaccuracies of site assessments. While a five-day turnaround time is considered fast by clinical trial imaging CROs, it rarely aligns with the trial and treatment decisions needed at follow-up visits [9]. The reliance on site reads due to cost and time constraints can result in inconsistent data quality, affecting the trial’s outcomes and overall integrity [6].

Additionally, sponsors face significant challenges with enrollment, diversity, and inclusion due to imaging issues. Again, they must choose between manual, geographically limited site assessments and the slow, costly BICRs provided by CROs. Neither option effectively addresses the need for increased patient access and enrollment, nor does either fulfill diversity and inclusion mandates, as both fail to extend patient access and ensure accuracy at sites [15].

Increasing patient catchment areas and bringing more trial access to patients requires decentralizing trial imaging workflows with better technology. Unless timely, compliant, and accurate imaging assessments can be brought to the sites where patients receive care, decentralized clinical trials requiring imaging will not become a reality. Such an approach requires a collaborative clinical trials imaging informatics platform that can standardize and harmonize imaging assessments according to trial-specific protocols across all participating trial sites, including community care facilities. This necessitates a system that supports consistency, speed, accuracy, and remote collaboration. Often, patients are imaged, have their images read locally for clinical and research purposes, and are treated during the same visit, which creates a significant barrier to extending trials to underserved populations [7]. Decentralizing imaging access and workflows with a cloud-based platform shows great promise for bringing appropriate levels of diversity to trials.

The Potential and Limitations of AI in Research Imaging

While Artificial Intelligence (AI) holds great potential to enhance the radiologist reading process, few data scientists are addressing the challenge of selecting appropriate lesions for serial tracking. Most efforts have focused on automated lesion detection and segmentation rather than target lesion selection, which is the first step in applying any of the 30 major oncologic tumor response criteria commonly recognized by the FDA.

A more immediate application of AI in research imaging is generating opportunistic findings through radiomic feature extraction. These features, derived from images, can be investigated for their prognostic value in future trial designs. Radiomic features can themselves be the subject of clinical trials to establish their capabilities, serving as companion processors in current trials before becoming replacements or alternatives to established response criteria. The first step in advancing AI for clinical trials is to provide a framework for the systematic and prospective inclusion of opportunistic radiomics and feature extractors [16].

Limitations of EHR to EDC Integration

EHR-to-EDC capabilities have limited potential in imaging research compared with other forms of data extraction. The imaging data stored in EHRs is frequently limited to standard-of-care imaging. Even when research imaging assessment results are included for clinical trials, they are often inaccurate due to the limitations of clinical systems. Discordance between clinical and research reads is expected because the definitions of clinical progression and clinical trial protocol progression differ. This discrepancy explains why efforts to extract clinical trial imaging data from large amounts of uncurated provider data have largely failed for most Real World Data (RWD) companies.

Systemic Issues in Clinical Trial Imaging Workflow

There is a fundamental dichotomy between clinically focused imaging technologies at sites and the clinical trial research imaging services commonly contracted by sponsors. Current clinical trial imaging workflows are too labor-intensive, anti-collaborative, and error-prone to unlock the full potential of today’s high-cost, high-impact imaging examinations. According to 2023 data, staffing and retention are top concerns at research sites, with 63% of respondents highlighting this issue [4]. Staffing problems often result from significant operational breakdowns, amplifying burnout and human error. Without clinical trial imaging assessment workflow software that meets the needs of all stakeholders and moves beyond paper-based workflows, radiologists and study staff in clinical trials remain poorly supported by inefficient, error-prone processes and hindered by miscommunication among imaging teams, study staff, and investigators.

The Need for Collaborative and Technology-First Approaches in Clinical Trial Imaging

Adopting a collaborative and technology-first approach is crucial to address the challenges in clinical trial imaging. Collaborative clinical trial imaging platforms enable research institutions to support each other during temporary staffing shortages and cover needed subspecialization. By eliminating geographic boundaries, these systems help mitigate shortages, making resources more accessible [8].

Enhanced communication between the site, CRO, and sponsor on a harmonized technology platform can yield positive results. Effective communication between readers, study staff, and investigators within sites should occur, with real-time oversight by CROs and sponsors. This ensures that all stakeholders are aligned and can respond quickly to issues as they arise [14].

In addition to collaboration, embracing technology-first approaches is essential. Sponsors must adopt advanced imaging assessment tools and integrated workflows that ensure compliance with site and central review standards while addressing immediate needs for improved accuracy, efficiency, and data management. Implementing these technologies can reduce errors, promote decentralization, and enhance data quality [13].

By reducing the burden on clinical research sites and supporting accurate and efficient imaging assessments, technology-enabled and workflow-savvy strategies can provide reliable imaging data consistently and promptly. This will lead to better clinical trial outcomes and improved patient care [16].

This article is sponsored by Yunu


References:

  1. Zanon C, Crimì A, Quaia E, Crimì F. New Frontiers in Oncological Imaging. Tomography. 2023; 9(4):1329-1331. https://doi.org/10.3390/tomography9040105
  2. Schmid AM, Raunig DL, Miller CG, et al. Radiologists and Clinical Trials: Part 1 The Truth About Reader Disagreements. Ther Innov Regul Sci 2021; 55: 1111–1121. https://doi.org/10.1007/s43441-021-00316-6
  3. https://www.fda.gov/regulatory-information/search-fda-guidance-documents/clinical-trial-imaging-endpoint-process-standards-guidance-industry
  4. https://acrpnet.org/2023/08/top-site-challenges-of-2023-data-and-insights-on-site-burden-and-trial-efficiency-2/
  5. Beaumont H, Iannessi A, Wang Y, Voyton CM, Cillario J, Liu Y. Blinded Independent Central Review (BICR) in New Therapeutic Lung Cancer Trials. Cancers. 2021; 13(18):4533. https://doi.org/10.3390/cancers13184533
  6. Amit O, Bushnell W, Dodd L, Roach N, Sargent D. Blinded independent central review of the progression-free survival endpoint. Oncologist. 2010; 15(5):492-5. https://doi.org/10.1634/theoncologist.2009-0261
  7. Beaumont H, Evans TL, Klifa C, et al. Discrepancies of assessments in a RECIST 1.1 phase II clinical trial – association between adjudication rate and variability in images and tumors selection. Cancer Imaging. 2018; 18:50. https://doi.org/10.1186/s40644-018-0186-0
  8. Artesani A, Bruno A, Gelardi F, et al. Empowering PET: harnessing deep learning for improved clinical insight. Eur Radiol Exp. 2024; 8:17. https://doi.org/10.1186/s41747-023-00413-1
  9. Bronen RA, Urban T, Hall K, Hanlon WB, Van den Abbeele AD, Harris GJ. Tumor Imaging Metrics Manager: The complete workflow solution for quantitative imaging assessment of tumor response for oncology clinical trials. Presented at AACI-CRI Conference, Chicago, IL, 2014
  10. Cruz A, Lankhorst B, McDaniels H, Weihe E, Correa E, Nacamuli D, Somarouthu B, Harris GJ. The complete workflow solution for quantitative imaging assessment of tumor response for oncology clinical trials. Presented at AACI-CRI Conference, Chicago, IL, 2024.
  11. Urban T, Ziegler E, Leary M, Somarouthu B, Correa E, Basinsky G, Nacamuli D, Sadow CA, O’Malley R, Wang C, Van den Abbeele AD, Harris GJ. Precision Imaging Metrics: Changing the way clinical trial imaging assessment is managed. Presented at AACI-CRI Conference, Chicago, IL, 2018.
  12. Lesan V, Olivier T, Prasad V. Progression-free survival estimates are shaped by specific censoring rules: Implications for PFS as an endpoint in cancer randomized trials. European Journal of Cancer. 2024; 202;114022. https://doi.org/10.1016/j.ejca.2024.114022
  13. Vollmuth P, Foltyn M, Huang RY, et al. Artificial intelligence (AI)-based decision support improves reproducibility of tumor response assessment in neuro-oncology: An international multi-reader study. Neuro-Oncology. 2023; 25(3):533–543. https://doi.org/10.1093/neuonc/noac189
  14. Beaumont H, Iannessi A. Can we predict discordant RECIST 1.1 evaluations in double read clinical trials? Front Oncol. 2023;13:1239570. doi:10.3389/fonc.2023.1239570. Available at: https://www.frontiersin.org/journals/oncology/articles/10.3389/fonc.2023.1239570/full
  15. https://www.fda.gov/regulatory-information/search-fda-guidance-documents/clinical-trial-imaging-endpoint-process-standards-guidance-industry
  16. Sundar LKS, Gutschmayer S, Maenle M, et al. Extracting value from total-body PET/CT image data – the emerging role of artificial intelligence. Cancer Imaging. 2024; 24,51. https://doi.org/10.1186/s40644-024-00684-w
  17. Borys L, Marini M, Lu E, Ford R. The influence of blinded independent central review on subject eligibility in oncology studies. J Clin Oncol. 2022; 40(16_suppl):e13603. https://ascopubs.org/doi/pdfdirect/10.1200/JCO.2022.40.16_suppl.e13603

Moe Alsumidaie is Chief Editor of The Clinical Trial Vanguard. Moe holds decades of experience in the clinical trials industry. Moe also serves as Head of Research at CliniBiz and Chief Data Scientist at Annex Clinical Corporation.