Facing the Challenge: Remote Assessments for Clinical Trials During the COVID-19 Pandemic
The COVID-19 pandemic and associated social distancing measures are creating unprecedented challenges for everyone working in clinical trials and drug development. We wanted to share with you how VeraSci is confronting some of these challenges.
The first topic we want to discuss is transitioning to remote assessments for ongoing studies. With many subjects isolating themselves at home and many sites concerned about seeing subjects in person for routine study visits, ongoing clinical trials will need to make difficult decisions about whether to skip scheduled visits, try to conduct visits remotely, or in some instances, delay or cancel the trial. Many of our trials use a combination of patient-reported outcomes and clinician-administered assessments designed to be administered on site. At present, we are tackling these issues on a study-by-study basis, and while there are no easy answers, there are some common themes that have been bolstering our contingency plans. Here are some general tips:
- Read the regulatory guidance. FDA, EMA, and MHRA have all recently issued guidance for clinical trials to address the challenges of the day. We recommend that all contingency plans start, as ours do, by carefully considering the advice in these documents.
- Patients aren’t the only ones staying home. Depending on the region and institution, raters, investigators, and other site staff may also be working from home. What do we need to do to support them? They may need additional equipment, training, and technical support to succeed. We can deliver standardized remote rater training and have an extensive training help desk that can support assessment administration.
- Remote assessment is possible! As of March 17, 2020, the Centers for Medicare and Medicaid Services (CMS) have agreed to pay for various forms of telemedicine, including tele-neuropsychology. Keep an eye out for a future post from us that addresses specific scales and assessments that can be delivered remotely. We found that in some instances, even when we haven’t used a particular assessment in a remote setting, someone else has. For example, several dementia studies are using the Alzheimer’s Disease Assessment Scale-Cognitive Subscale (ADAS-Cog), and there is literature supporting its use as a remote tool. At this point we do not know whether regulatory agencies will require validation for online administration of otherwise validated tools. We think some tools, like the Montgomery-Åsberg Depression Rating Scale (MADRS) and the Positive and Negative Syndrome Scale (PANSS), will be straightforward to implement remotely, as they simply require that a rater conduct an interview. Performance-based assessments, like the ADAS-Cog, will be more challenging, but we have worked out strategies for most components of the tool such that proxy scores can be calculated.
- Prepare to provide increased tech support. Many of our contingency plans will include deploying new technologies, extending existing technologies to new locations and environments, or a combination of both. We have invested in excess support capacity and are prepared for a significant increase in the volume of support calls. We are training our support staff to handle the new types of questions that will arise. With all of the new challenges site staff are facing, the last thing they need is a frustrating tech support encounter. We will have humans with broad expertise making sure that trials can stay active; we resolve 98% of questions on the first phone call.
- Consider the challenges posed by individual assessments. In some instances, translating an assessment to an electronic format is pretty straightforward. For other assessments, the translations aren’t so obvious or may not be possible. For example, some assessments require subjects to physically manipulate objects. Is there an electronic equivalent? Do you need to send some sort of kit and then observe by video? After adapting over a hundred assessments to an electronic format, including the Brief Assessment of Cognition (BAC), our scientific and technology teams understand what needs to be done to collect valid data. We have been applying innovative operational strategies developed by neuropsychologists to meet the current need.
- Hybrid approaches to remote assessments may be needed. Because some assessments can already be administered remotely (or easily converted to remote administration) while others cannot, you may need to decide whether a hybrid solution is feasible. For example, one of our trials uses the MATRICS Consensus Cognitive Battery (MCCB) and an interview-based assessment, the Schizophrenia Cognition Rating Scale (SCoRS). The SCoRS can be easily administered over the phone. Some of the MCCB tests can be administered remotely with audio only, while others require video input.
- Start thinking about data quality and consistency issues. Making changes to the way assessments are administered mid-study (and in some cases making changes to the assessments themselves) will undoubtedly create issues with the consistency of the data. While this is something we would never do under normal circumstances, under current conditions we all have to find ways to adapt, and regulatory guidance permits us to be creative. Even doing the best we can, we must consider the impact this will have on the data. Consulting with experts experienced in the complex analysis and management of data is a must. Handling missing data and defining intercurrent events may need to account for COVID-19-related switches in the method of assessment, which we will address based on forthcoming guidance from regulatory agencies.
We will continue to post about how we are facing these challenges in the coming weeks as the situation develops and as we learn more. We hope this is useful and look forward to hearing from you about the issues you face and the approaches you are taking.