Social distancing requirements due to COVID-19 have left many sponsors looking for alternatives to in-person study visits, both for ongoing clinical trials and for trials planning to start soon. As we've been having conversations with sponsors about their options, we've noticed some confusion around three related but distinct terms: centralized review, centralized scoring, and remote assessment.
In centralized review, a rater at the site administers and scores the assessment. The assessment could be delivered through eCOA or on paper, during an on-site visit (most common) or via phone or videoconference. Following the administration and scoring of the assessment by the site's rater, a third party (in our case VeraSci) reviews the assessment. This review could include data checks based on expected values, review of audio or video recordings, and checks for completeness. The reviewer is a certified rater.
In centralized scoring, a rater at the site administers the assessment. The assessment can be delivered through eCOA or on paper, during an on-site visit (most common) or via phone or videoconference. Following the administration of the assessment, an experienced clinician or data monitor (depending on the assessment in question) scores it. Centralized scoring can increase data quality for assessments with complex scoring rules, like the Brief Visuospatial Memory Test (BVMT).
Remote assessment, sometimes referred to as remote administration or centralized assessment, involves a rater administering an assessment from a different location than the subject being assessed. The subject could be at a clinical site, at home, or in a healthcare setting like a hospital or nursing home. Depending on the assessment, the mode of administration could be phone, videoconference, email, or some combination. Remote assessments make sense not only during social distancing but also in cases where sites may not have qualified raters available, which is common in some rare disease trials. Using a smaller pool of remote raters can also increase data consistency and reduce variability between sites. Additionally, some evidence suggests that using remote raters can mitigate the placebo effect. Finally, rater training costs can be reduced, or in some cases eliminated, when the centralized raters are already trained on the assessments in use.