MC10
Validating Unsupervised Digital Cognitive Evaluations and Training Tools in Multiple Sclerosis

Thursday, May 31, 2018
Exhibit Hall A (Nashville Music City Center)
Priya Garcha, BSc, UCSF Neurology, Weill Institute for Neurosciences, San Francisco, CA
Gillian Rush, BSc, UCSF Neurology, Weill Institute for Neurosciences, San Francisco, CA
John Morrissey, BSc, UCSF Neurology, Weill Institute for Neurosciences, San Francisco, CA
Chao Zhao, MSc, UCSF Neurology, Weill Institute for Neurosciences, San Francisco, CA
Katherine Possin, PhD, UCSF Neurology, Weill Institute for Neurosciences, San Francisco, CA
Anthony Feinstein, PhD, MD, Psychiatry, University of Toronto, Toronto, ON, Canada
Joaquin Anguera, PhD, UCSF Neurology, Weill Institute for Neurosciences, San Francisco, CA
Riley Bove, MD, MMSc, University of California San Francisco, Weill Institute for Neurosciences, San Francisco, CA



Background:

Significant cognitive impairment (CI) is present in more than half of people with multiple sclerosis (MS). In recent years, cognitive rehabilitation has been shown to improve several domains of CI, but access to it remains scarce. Unsupervised, in-home digital therapeutics could expand access to cognitive rehabilitation; however, their feasibility in an MS population and their correlation with standard clinical measures must first be clarified.

Objectives:

1) To assess the feasibility of treating MS-related CI with a videogame-based digital therapeutic. 2) To identify tests of processing speed that would allow the tool's effectiveness to be evaluated in an unsupervised setting.

Methods:

In this pilot study, 50 participants with MS and 25 without MS completed a baseline neurological evaluation (EDSS; MSFC components). Cognitive testing included paper-and-pencil tests (BICAMS) and unsupervised, tablet-based tests (including Match, a test of executive function and processing speed developed at UCSF). Then, 21 of the MS participants were assigned to an in-home, tablet-based cognitive digital therapeutic (Akili Evo) for 25 minutes daily, 5 days weekly, for 8 weeks, and to a repeat in-clinic evaluation.

Results:

There was high enthusiasm for this first-of-its-kind in-home digital therapeutic tool, and study enrollment was completed within 3.5 months. At baseline, participants with and without MS did not differ in age, sex, education, ancestry, or performance on BICAMS tests (P>0.05 for each). Raw SDMT scores correlated significantly with 4/5 unsupervised computerized tests, including Match (Pearson r=0.71; p<0.05 for each). Overall, Match showed slightly stronger correlations than the SDMT with age, clinical measures, and the other cognitive tests.

Of the 21 MS participants enrolled (mean (SD) SDMT z-score: -0.92 (1.16)), 20 completed 8 weeks of Akili Evo treatment, averaging 20 sessions per month (median (SD): 22.5 (8.0)); 3 participants completed an average of only 7.4 sessions per month (median (SD): 7.5 (0.2)) due to vertigo or impaired vision. Over the 8-week period, scores improved significantly on the SDMT (paired t-test, p=0.006) as well as on 2/5 unsupervised tests, including Match (p=0.010 for each).

Conclusions:

Deploying an in-home digital tool to improve processing speed in MS is feasible and shows preliminary efficacy, and its effects on cognition may be captured using unsupervised in-home evaluations. A larger, controlled clinical trial is ongoing.