In this interview on integrating innovative digital technologies into central nervous system (CNS) trials, I spoke with Matthew Stork, CEO of Cambridge Cognition, to explore the impact of these advancements on clinical research. Our conversation covered several crucial areas, including the application of speech analysis technology, the role of automated quality assurance systems, and the evolution of electronic clinical outcome assessments (eCOAs). Matthew offered insightful perspectives on how these technologies are transforming the landscape of CNS trials, enhancing both the precision and efficiency of the data collection and analysis processes critical to the success of these studies.

Moe: What are the advantages of integrating speech analysis technology into CNS trial protocols?

Matthew Stork: Digital methods are set to revolutionize CNS trials by significantly enhancing data integrity and reducing errors in CNS scales. Our system already includes 11 widely recognized verbal assessments of brain health, some of which are available in 8 languages so that they can be used in clinical trials worldwide. Our most popular voice-based assessment for patients is the picture description task, in which a patient describes a picture and our solution automatically scores the result.


We also have a brand-new voice-based solution that provides advanced quality assurance (QA) features, with capabilities that automate the processing and analysis of trial data and achieve error detection with a precision that could match human experts at significantly lower cost. By ensuring high accuracy and reliability in data collection, these QA systems play a crucial role in minimizing risks associated with data-driven errors. This is especially important in CNS trials, where minor inaccuracies in battery assessments can introduce variability that leads to significant setbacks in drug development and adversely affects patient outcomes.
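
As an illustration of the idea rather than Cambridge Cognition's actual system, a minimal automated QA check might look like the sketch below; the field names, allowed score range, and example record are assumptions made purely for the example.

```python
# Hypothetical QA sketch: flag common scoring errors in a rater-administered
# CNS scale record. Field names and the allowed range are illustrative only.
from typing import Dict, List

ITEM_RANGE = (0, 6)  # assumed allowed range for each item score


def qa_check(record: Dict) -> List[str]:
    """Return human-readable QA flags for one assessment record."""
    flags = []
    items = record.get("item_scores", [])

    # Flag item scores outside the allowed range.
    for i, score in enumerate(items, start=1):
        if not ITEM_RANGE[0] <= score <= ITEM_RANGE[1]:
            flags.append(f"item {i}: score {score} outside range {ITEM_RANGE}")

    # Flag a reported total that disagrees with the sum of item scores.
    if "reported_total" in record and sum(items) != record["reported_total"]:
        flags.append(f"reported total {record['reported_total']} != item sum {sum(items)}")

    return flags


print(qa_check({"item_scores": [3, 7, 2], "reported_total": 11}))
# -> ['item 2: score 7 outside range (0, 6)', 'reported total 11 != item sum 12']
```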

Moe: With the critical importance of data integrity, how have eCOAs evolved to meet stringent trial requirements?

Matthew Stork: CNS-tailored eCOAs are evolving to meet the stringent demands of CNS trials, mainly because these trials often involve complex and detailed questionnaires essential for assessing neurological functions. To address these complexities, CNS eCOA platforms are now equipped with advanced features to capture data accurately. This includes sophisticated transcription capabilities that minimize errors in capturing verbal responses and algorithms that ensure scoring is consistent with the clinical scales used. Automating these processes enhances the reliability of the data collected, which is crucial for evaluating the effectiveness of the treatments being studied.
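
To make the scoring step concrete, here is a minimal, hypothetical sketch of how a transcribed verbal response might be scored against a word-recall list; the target words and the simple match-counting rule are assumptions for illustration, not any specific clinical scale's algorithm.

```python
# Hypothetical sketch: score a transcribed verbal recall response against a
# target word list. The word list and scoring rule are illustrative only.
import re

TARGET_WORDS = {"apple", "table", "penny", "river", "candle"}  # assumed list


def score_recall(transcript: str) -> int:
    """Count how many target words appear in the transcribed response."""
    spoken = set(re.findall(r"[a-z']+", transcript.lower()))
    return len(TARGET_WORDS & spoken)


print(score_recall("um, apple... river, and I think candle"))  # -> 3
```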

Furthermore, CNS eCOA solutions now incorporate stringent security measures to protect the integrity and confidentiality of data within CNS trials, a critical concern in clinical research. This includes employing encrypted data transmission and adhering to global regulatory standards for data protection. We also use real-time data monitoring and validation techniques to quickly identify and correct anomalies, ensuring the data remains pristine throughout the trial period. These advancements meet the rigorous standards required for CNS trials and provide a scalable and efficient method for managing data across multiple trial sites, which is often a logistical challenge in extensive studies.
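
A simple flavor of such real-time validation can be sketched as follows; the thresholds, field names, and duplicate check are illustrative assumptions, not a description of any vendor's production pipeline.

```python
# Hypothetical sketch: basic real-time validation of incoming eCOA records.
# Thresholds and field names are illustrative assumptions.
from datetime import datetime, timedelta

MIN_DURATION = timedelta(seconds=30)  # implausibly fast completion
MAX_DURATION = timedelta(hours=2)     # implausibly slow completion
seen_ids = set()


def validate(record: dict) -> list:
    """Return anomaly flags for a single incoming assessment record."""
    flags = []
    duration = (datetime.fromisoformat(record["finished_at"])
                - datetime.fromisoformat(record["started_at"]))

    if not MIN_DURATION <= duration <= MAX_DURATION:
        flags.append(f"suspicious completion time: {duration}")
    if record["record_id"] in seen_ids:
        flags.append("duplicate submission")
    seen_ids.add(record["record_id"])
    return flags


print(validate({"record_id": "A-001",
                "started_at": "2024-05-01T10:00:00",
                "finished_at": "2024-05-01T10:00:10"}))
# -> ['suspicious completion time: 0:00:10']
```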

Moe: What technological advancements do you foresee in CNS trials?

Matthew Stork: In the coming years, I envision significant technological advancements in CNS trials, driven mainly by the adoption of fully automated digital assessment systems. These systems will be highly adaptable and specifically tailored to the nuanced requirements of various CNS conditions such as Alzheimer’s, Parkinson’s, and schizophrenia. This customization will allow for more precise and personalized assessments, which are crucial for effectively tracking disease progression and response to treatment. Additionally, the automation of these systems will streamline the assessment process, reducing human error and increasing the efficiency of data collection and analysis. They could measure patients daily, as we are now doing with our ultra-quick cognitive assessments, which can be administered by touch screen or voice. These can show how patients change day by day on a new drug, which could be a game changer for pharmaceutical companies that want to show the rapid impact of their new treatment.
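
As a toy illustration of what daily measurement enables, the sketch below fits a simple least-squares slope to two weeks of made-up daily scores to estimate a per-patient day-by-day trend; the data and the "points per day" framing are assumptions for the example.

```python
# Hypothetical sketch: estimate a per-patient day-by-day trend from brief
# daily cognitive scores using an ordinary least-squares slope. Data are made up.

def daily_slope(scores):
    """Slope of score versus day index (points per day)."""
    n = len(scores)
    mean_x = (n - 1) / 2
    mean_y = sum(scores) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(scores))
    var = sum((x - mean_x) ** 2 for x in range(n))
    return cov / var


# Two weeks of invented daily scores after starting a treatment.
scores = [21, 22, 22, 24, 23, 25, 26, 25, 27, 28, 27, 29, 30, 30]
print(f"{daily_slope(scores):+.2f} points/day")  # -> +0.70 points/day
```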

Moreover, the integration of multimodal data will play a pivotal role in enhancing the accuracy of these digital assessment tools. By combining data from various sources—like actigraphy to monitor physical activity, voice analysis to detect emotional and cognitive changes, and touchscreen inputs to assess memory, motor skills, and reaction times—we can achieve a more holistic view of a patient’s condition. This comprehensive approach will improve the sensitivity and specificity of assessments and pave the way for developing new biomarkers for early detection and monitoring of CNS diseases.
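
One simple way to picture multimodal integration is a composite built from per-modality z-scores, as in the hedged sketch below; the reference means, standard deviations, feature names, and equal weighting are all illustrative assumptions rather than a validated biomarker.

```python
# Hypothetical sketch: fuse actigraphy, voice, and touchscreen features into a
# single composite by z-scoring each against reference values and averaging.
# Reference values, feature names, and weights are illustrative assumptions.

REFERENCE = {
    "daily_steps":      (7000.0, 2500.0),  # actigraphy: (mean, sd)
    "speech_rate_wpm":  (150.0, 25.0),     # voice: words per minute
    "reaction_time_ms": (450.0, 80.0),     # touchscreen: lower is better
}
LOWER_IS_BETTER = {"reaction_time_ms"}


def composite_score(features: dict) -> float:
    """Average of per-feature z-scores, oriented so that higher means better."""
    zs = []
    for name, value in features.items():
        mean, sd = REFERENCE[name]
        z = (value - mean) / sd
        zs.append(-z if name in LOWER_IS_BETTER else z)
    return sum(zs) / len(zs)


print(round(composite_score({"daily_steps": 5500,
                             "speech_rate_wpm": 135,
                             "reaction_time_ms": 520}), 2))
# -> -0.69, i.e. below the reference average across all three modalities
```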

Moe: What challenges remain in implementing these advanced digital solutions across different settings?

Matthew Stork: One of the main challenges in implementing advanced digital solutions across different settings is the varying levels of technology acceptance among patients and healthcare professionals. This can stem from a lack of familiarity with digital tools, concerns about data security, or skepticism about the efficacy of digital versus traditional assessments. We have found that extensive educational outreach and training programs help a lot; we collaborate closely with healthcare providers and trial sponsors to demonstrate our technologies’ robustness, security, and clinical relevance. This hands-on approach helps to build trust and confidence in digital assessments, facilitating smoother integration into clinical practice.

Furthermore, real-world validation of these technologies is crucial for their acceptance. In our experience, conducting pilot studies and phased rollouts to gather feedback and make iterative improvements has been beneficial. This strategy ensures that our digital tools are scientifically valid, practical, and user-friendly in everyday clinical and research settings. By continuously monitoring and addressing different user groups’ specific needs and concerns, sponsors can ensure that digital solutions are versatile and adaptable, promoting broader adoption and ultimately enhancing patient outcomes in CNS trials.

Moe: How do you ensure the validity and reliability of novel digital tools compared to traditional cognitive assessments?

Matthew Stork: Our cognitive assessments have been developed and used in clinical trials for over three decades, and many published studies have been conducted using our solutions. Our widely known touch-screen assessment, CANTAB, has been used in nearly 3,500 published studies. The widespread application of our solutions across many conditions and patient groups assures scientists that the assessment has already been validated in the population they are investigating. We are now focused on swiftly expanding the data for our newer solutions, which is a major priority for our company.

We are also working with the University of Essex to collect normative data from latency-based tasks, such as Reaction Time (RTI) and Verbal Recognition Memory (VRM). This will further enrich our testing capabilities and provide more robust benchmarks to increase endpoint sensitivity.
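
In practice, normative data of this kind let a raw latency be expressed relative to a reference group. The sketch below shows the general idea with invented age-band norms; the bands, means, and standard deviations are assumptions, not the Essex dataset.

```python
# Hypothetical sketch: express a raw reaction time as a z-score against
# age-stratified normative data. All normative values here are invented.

NORMS_MS = {            # age band -> (mean reaction time, sd)
    (18, 39): (380.0, 60.0),
    (40, 59): (420.0, 70.0),
    (60, 89): (480.0, 85.0),
}


def latency_z(reaction_time_ms: float, age: int) -> float:
    """Z-score relative to the age band; faster-than-average gives a positive z."""
    for (lo, hi), (mean, sd) in NORMS_MS.items():
        if lo <= age <= hi:
            return (mean - reaction_time_ms) / sd
    raise ValueError(f"no normative band for age {age}")


print(round(latency_z(545.0, 67), 2))  # -> -0.76, slower than the age-band mean
```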

Considering validation from an internal view, we have implemented a multifaceted approach with rigorous validation processes to ensure the validity and reliability of digital tools compared to traditional cognitive assessments. First, each digital tool undergoes comprehensive verification and validation of its software code to confirm that it performs according to predetermined specifications and standards. Second, we conduct extensive testing with a diverse demographic to collect normative data, ensuring our tools perform consistently across different populations and generating essential evidence of their reliability.

Third, we actively engage in collaborative research to evaluate our tools in patient populations and to compare our digital assessments with other cognitive or biological data. This involves direct comparisons in clinical trials and publishing our findings for peer review, thus contributing to the broader scientific dialogue on cognitive assessment and allowing us to refine our tools based on real-world feedback and scientific advances. Finally, our ongoing efforts to integrate feedback from users and cognitive science experts allow us to enhance our solutions continually. These rigorous validation efforts are crucial because they enable us to deliver reliable and valid digital tools that can potentially surpass traditional methods in detecting and monitoring subtle changes in function.
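
One common ingredient of the collaborative comparisons described above is a convergent-validity check between a digital tool and an established assessment in the same participants. The sketch below computes a Pearson correlation on invented paired scores; the data and the choice of statistic are assumptions for illustration only.

```python
# Hypothetical sketch: convergent validity as the Pearson correlation between
# a digital tool's scores and a traditional assessment in the same people.
# The paired scores below are invented for illustration.
import math


def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)


digital = [12, 15, 9, 20, 17, 11, 14, 18]   # digital tool scores
paper   = [25, 31, 20, 36, 30, 24, 26, 35]  # traditional scale scores

print(round(pearson_r(digital, paper), 2))  # -> 0.97; high r suggests convergence
```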

Moe Alsumidaie, Chief Editor
Moe Alsumidaie is Chief Editor of The Clinical Trial Vanguard. Moe holds decades of experience in the clinical trials industry. Moe also serves as Head of Research at CliniBiz and Chief Data Scientist at Annex Clinical Corporation.