Prior evidence has demonstrated subtle cognitive changes in the earliest stages of Alzheimer’s disease, in the domains of episodic memory, semantic memory and language, at the prodromal and perhaps even the preclinical stage of disease.
Traditionally, impairments in these domains have been measured with neuropsychological tests that rely on simple response indices and offer limited sensitivity to subtle cognitive change.
A new paradigm is emerging: subtle impairment in these domains can be measured in connected speech, and this measurement can be automated with software and natural language processing.
How do we elicit connected speech? Our approach has focussed on story recall (retelling a short story), a task widely known as a measure of episodic memory, but one that has also been shown to be the optimal elicitation protocol for connected speech.
Skirrow et al. 2022 JMIR Aging – task design and psychometric properties
The Automated Story Recall Task (ASRT) is a story recall task designed to elicit connected speech, with more natural sentence structure and balancing of key linguistic and discourse variables. The task is administered automatically via a web app on a smartphone, tablet or computer. Spoken responses are captured and then analysed automatically with a robust NLP pipeline. In Skirrow et al. 2022, we describe the task design and demonstrate excellent psychometric properties, including test-retest reliability, parallel-forms reliability and high convergent validity with the CDR and PACC5.
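To make the idea of automated recall scoring concrete, here is a minimal, purely illustrative sketch (not the ASRT pipeline itself, whose models and metrics are described in the papers): it tokenizes the source story and a participant's retelling, then reports the fraction of the story's unique content words that the retelling reproduces. The story text, retelling, and stopword list are all hypothetical examples.

```python
import re

# Tiny stopword list for the toy example; a real pipeline would use
# linguistically informed scoring, not bare word overlap.
STOPWORDS = frozenset({"the", "a", "an", "and", "was", "to", "of"})

def tokenize(text):
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def recall_score(source_story, retelling):
    """Fraction of the source story's unique content words
    that appear anywhere in the retelling."""
    source = set(tokenize(source_story)) - STOPWORDS
    retold = set(tokenize(retelling)) - STOPWORDS
    if not source:
        return 0.0
    return len(source & retold) / len(source)

# Hypothetical story and retelling, for illustration only.
story = "Anna took the early train to the city and bought flowers for her mother"
retelling = "Anna took a train and bought flowers for her mother"
print(round(recall_score(story, retelling), 2))  # → 0.8
```

In practice, surface word overlap is a crude proxy; the published approach relies on trained language models rather than set intersection, but the input/output shape (story in, retelling in, score out) is the same.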
Weston et al. 2022 ACL – advanced AI models for better linguistic biomarkers
Our approach diverges from prior work focussed on feature engineering with signal processing and statistical parsers. Instead, we develop clinically informed custom Transformer models, pre-trained on large non-clinical datasets. One of these models is described in Weston et al. 2022.
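The shift from hand-engineered features to pretrained representations can be sketched schematically. The snippet below is a toy illustration, not the model from Weston et al. 2022: a frozen "pretrained" encoder (here just a deterministic hash-based bag-of-words embedding standing in for a Transformer) produces fixed vectors, and a small logistic-regression probe is trained on top of them from a handful of hypothetical labelled transcripts.

```python
import math
import random

DIM = 16  # embedding dimensionality for the toy encoder

def encode(text):
    """Stand-in for a frozen pretrained encoder: each token gets a
    deterministic pseudo-random vector (random.seed on a string is
    reproducible), averaged over the transcript. A real system would
    use a Transformer pre-trained on large non-clinical data."""
    vec = [0.0] * DIM
    tokens = text.lower().split()
    for tok in tokens:
        random.seed(tok)
        for i in range(DIM):
            vec[i] += random.uniform(-1, 1)
    n = max(1, len(tokens))
    return [v / n for v in vec]

def train_probe(texts, labels, epochs=200, lr=0.5):
    """Logistic-regression probe trained by SGD on frozen embeddings;
    only the probe's weights are updated, never the encoder."""
    w, b = [0.0] * DIM, 0.0
    feats = [encode(t) for t in texts]
    for _ in range(epochs):
        for x, y in zip(feats, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1 / (1 + math.exp(-z))
            g = p - y  # gradient of log-loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, text):
    """Probability of the positive class for a new transcript."""
    z = sum(wi * xi for wi, xi in zip(w, encode(text))) + b
    return 1 / (1 + math.exp(-z))

# Hypothetical toy transcripts; labels are illustrative only.
texts = [
    "anna took the early train",
    "um the lady went somewhere I think",
    "she bought flowers for her mother",
    "there was a a person and things",
]
labels = [0, 1, 0, 1]
w, b = train_probe(texts, labels)
```

The design point the snippet captures is the division of labour: representation comes from pre-training on large general-purpose corpora, so the clinical dataset only has to support fitting a very small predictive head.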
Fristed et al. 2022a/b Brain Communications and A&D DADM – predicting MCI and amyloid from speech with AI
Applying this class of models has enabled a number of breakthroughs, including predicting MCI, preclinical AD and amyloid PET positivity from speech. We have done this across different settings and study designs, indicating the robustness of the approach (Fristed et al. 2022a, Fristed et al. 2022b).
Moving into the real world
Based on this work, we developed an abbreviated AI speech test for scalable testing online. This is now used in the largest cohorts in early AD globally, including ADNI4, where the digital screener will support recruitment of underrepresented groups at scale (Weiner et al. 2022).
Weston et al. 2021 ICML: Learning De-identified Representations of Prosody from Raw Audio
Lenain et al. 2020 INTERSPEECH: Surfboard: Audio Feature Extraction for Modern Machine Learning
Shivkumar et al. 2020 INTERSPEECH: BlaBla: Linguistic Feature Extraction for Clinical Analysis in Multiple Languages