The Alzheimer’s Disease Assessment Scale–Cognitive Subscale (ADAS-Cog) is the primary neurocognitive outcome measure in many clinical trials for mild-to-moderate Alzheimer’s disease (AD). Despite its wide use, variations in administration and scoring are well documented and may reduce the measure’s reliability and sensitivity.
Variations in administration practices, misinterpretation of scoring rules, and intra-rater drift are among the greatest threats to psychometric reliability. Addressing these issues through rater education and centralized surveillance may improve reliability and sensitivity to treatment effects. Our goal was to quantify and compare variability in ADAS-Cog12 scores in the Critical Path Online Data Repository (CODR) and in the run-in phase of a recent multicenter, placebo-controlled treatment trial (NCT01852110) that incorporated centralized data surveillance, in order to assess the impact of centralized video/audio review on reliability across visits.