The COVID-19 pandemic and associated social distancing measures are creating unprecedented challenges for everyone working in clinical trials and drug development. In this series, we’re sharing some of the ways VeraSci is addressing these challenges.
With many subjects isolating themselves at home and many sites concerned about seeing subjects in person for routine study visits, we’re seeing an increased interest from sponsors in remote assessments. For some ongoing trials, conducting remote assessments may be a better choice than not collecting any data because of missed visits. In other cases, sponsors had planned to start studies soon and are now considering site-less or virtual trials as a way to keep their development programs on track.
The good news is that in many cases there are good options that do not require cancelling or delaying trials. We’re actively working with a number of sponsors to recommend ways to keep their trials and development programs moving. Each trial is unique and will call for its own detailed plan. Here are some examples of the questions we are hearing and some of the key considerations for getting these plans to work well.
Cognitive assessments are one of the most frequently requested types of assessments, and traditionally the majority of these have been administered by a clinician during an on-site visit. This is a complex topic, but we wanted to share some of what we’re considering when it comes to some of the most popular cognitive assessments, like the Alzheimer’s Disease Assessment Scale–Cognitive Subscale (ADAS-Cog), the Brief Assessment of Cognition (BAC), and the MATRICS Consensus Cognitive Battery (MCCB), as well as specific cognitive tests like the Digit Symbol Substitution Test (DSST). While EMA and FDA have indicated a willingness to be flexible, this is an emerging situation, and this blog reflects our most recent thinking on the topic.
One of the most significant hurdles in remote cognitive assessment is related to patient populations and their ability to access and use the technology involved. The trials where these assessments are used may include geriatric patients who are not comfortable with technology, cognitively impaired subjects who will have difficulty following and remembering directions related to technology, and subjects whose mental illness may mean that they don’t have access to technology (for example, patients with schizophrenia). The level of caregiver support that subjects have is a significant factor to consider. Caregivers already play an essential role in the ability of many of these subjects to participate in a trial. In some cases, we may need to look at whether caregivers can assist subjects with the set-up of remote assessments.
Many assessments include multiple subtests from which a composite score is created. For example, there are 10 tests in the MCCB. Some of the tests can be more easily administered remotely than others. For example, a test where the subject is asked to name all of the words they can think of that start with a particular letter could easily be done over the phone or videoconference. On the other hand, a test where the subject needs to manipulate physical items may not be easy to replicate in a remote setting. In these cases, kits can be delivered to subjects. Some tests require drawing. The rater can observe the subject drawing, capture an image, and have the original mailed to the site.
For each alternative assessment option, we have considered the perspectives of patients, raters, sponsors, and regulatory agencies to determine whether the alternative is feasible and whether the modifications would render it invalid. Additionally, when appropriate, we’re creating alternative composite scores for cases where one or two of the tests cannot be administered. In some cases, test developers provide information about using the assessments in a remote setting. For example, the Montreal Cognitive Assessment (MoCA) has been validated for use in two formats: an abbreviated version can be administered over the phone, while the full version can be administered over video conference. In most cases, we’re finding that there are paths forward that will produce valid, meaningful data.
Technology is also an important consideration. In some cases, we can extend existing technologies that are already in use in a study. We may also need to acquire additional tools, such as telemedicine videoconferencing systems that meet the necessary regulatory requirements for use in clinical development. We need to determine how subjects will access technology at home. A Bring Your Own Device (BYOD) approach will be faster and less expensive to implement, but it rests on a lot of assumptions and introduces a lot of variance. Subjects don’t necessarily have appropriate hardware and may not have sufficient internet access. Our tech support staff will need to support BYOD devices that they don’t have access to or experience using. Leveraging subjects’ personal devices also means we won’t have control over what else is done on the device.
Provisioning devices eliminates many of these issues but will be costly and time-consuming. Provisioned devices allow control over screen size, what other software is installed and in use, and will enable us to provide internet access through a cellular connection for subjects that need it. It also means that our tech support staff will know a lot more about the devices in use and how they are configured, allowing them to provide a more seamless support experience. In some instances, we may end up with a hybrid solution where some subjects and sites go with a BYOD model, and we provision some additional devices to individual sites or subjects. Making the best decision for each study requires close communication and coordination with sponsors and sites. While the technology issues are complex, they are also solvable. In our experience, technology alone hasn’t been a reason to halt or delay trials.
Training raters is also a consideration. Delivering rater training remotely isn’t a significant issue; it’s something we already do. However, if any modifications are being made in order to administer assessments remotely, raters may need supplemental training. When it comes to assessments that were not originally designed for remote administration, new or updated manuals are needed. It also makes sense to add audio or video recording for studies that weren’t using it previously, so that centralized reviewers can confirm that remote assessments are being conducted consistently and properly.
Do you have questions about how to make remote cognitive assessments a reality for your trial? Contact us for more information.