Boston University researchers have developed an AI system that can predict with nearly 80% accuracy whether someone with mild cognitive impairment will develop Alzheimer’s disease within six years, by analyzing speech patterns. The study, published in the journal Alzheimer’s & Dementia, uses AI to extract diagnostic information from cognitive assessments, which could expedite Alzheimer’s diagnosis and treatment.
The AI model relies on data obtained from transcribed speech gathered during cognitive assessments and basic demographic information. Cognitive assessments such as the Boston Naming Test involve a clinician conversing with the patient, and these sessions are typically recorded.
The research team first collected audio recordings of cognitive examinations from 166 participants diagnosed with mild cognitive impairment (MCI). These individuals were then tracked over six years to determine who progressed to Alzheimer’s disease and who remained stable. Automatic speech recognition was used to transcribe the audio recordings into text for analysis. The team then applied natural language processing techniques to extract a variety of linguistic features and patterns, from which they trained several machine learning models to predict the likelihood of an individual progressing to Alzheimer’s.
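To make the pipeline concrete, the sketch below shows how simple linguistic features might be extracted from a transcribed assessment. This is an illustration only: the study’s actual feature set and models are not detailed here, and the features shown (lexical diversity, filler-word rate, mean word length) are common hypothetical examples of speech markers, not the researchers’ confirmed inputs.

```python
import re

def linguistic_features(transcript: str) -> dict:
    """Extract a few simple linguistic features from a transcribed
    cognitive assessment. Illustrative sketch only -- not the
    feature set used in the Boston University study."""
    words = re.findall(r"[a-zA-Z']+", transcript.lower())
    n = len(words)
    # Filler words are a rough proxy for hesitation in speech.
    fillers = sum(1 for w in words if w in {"um", "uh", "er", "hmm"})
    return {
        "word_count": n,
        # Type-token ratio: unique words / total words (lexical diversity).
        "type_token_ratio": len(set(words)) / n if n else 0.0,
        "mean_word_length": sum(map(len, words)) / n if n else 0.0,
        "filler_rate": fillers / n if n else 0.0,
    }

sample = "Um the the picture shows a uh a boy reaching for a cookie"
feats = linguistic_features(sample)
```

Feature vectors like these, combined with demographics such as age and education, could then be fed to a standard classifier (e.g., logistic regression) to estimate progression risk.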
Some cognitive tests, such as the Boston Naming Test, similarity tests, and the Wechsler Adult Intelligence Scale, were found to have a higher predictive power for Alzheimer’s risk. However, the researchers highlighted the need for further validation in larger and more diverse populations.
The results of the study underline the potential of speech analysis in predicting diseases such as Alzheimer’s. In a similar study in 2020, University of Sheffield researchers revealed their AI’s ability to differentiate between participants with Alzheimer’s disease or mild cognitive impairment and those without, with an accuracy rate of 86.7%.
Furthermore, Klick Labs developed an AI model that can identify type 2 diabetes by analyzing brief voice recordings of just 6 to 10 seconds. The study analyzed 18,000 recordings and found subtle differences in acoustic characteristics between diabetic and non-diabetic individuals. When combined with factors such as age and BMI, the model achieved a maximum test accuracy of 89% for women and 86% for men. Taken together, the findings from these studies suggest that AI-supported, non-invasive diagnostic methods could lead to quicker and more effective treatment of disease.