AL TAMIMI, Abdel-Karim, ANDREWS, Jacob, BENFIELD, Jacqueline, SWEBY, Cath, GILMARTIN, Chris, LINDLEY, Rebecca, TRUSSON, Diane, DZIUNKA, Molly, WEBSTER, Dee and RADFORD, Kathryn (2026). Development and Qualitative Evaluation of R-Speak: Acceptability and Usability of a Smartphone App System Using AI to Enhance Communication in People With Expressive Aphasia. [Pre-print] (Unpublished)
Preprints have not been peer-reviewed. They should not be relied on to guide clinical practice or health related behaviour and should not be regarded as conclusive or be reported in news media as established information.
Documents
Development and Qualitative Evaluation of R-Speak.pdf - Pre-print
Restricted to Repository staff only
Available under License All rights reserved.
Download (1MB)
Abstract
Background:
Aphasia, an acquired language disorder affecting the ability to understand and produce language, greatly impairs effective communication. Large language models (LLMs) like GPT-5 offer potential to support communication by generating human-like sentences and coherent speech, and subsequently to enhance functional communication for individuals with aphasia.
Objective:
To co-produce a system using LLMs to support communication and explore its potential utility and acceptability in people with mild-to-moderate aphasia.
Methods:
Using the Double Diamond approach. Phase 1: Discover and define; a stroke survivor PPI group (n=5) and the research team used MoSCoW prioritisation to develop and prioritise ideas and to co-design a software solution (R-SPEAK) to augment verbal communication. Phase 2: Develop and demonstrate; eight LLMs were evaluated for interpretation using existing datasets from AphasiaBank, ratified by team members. The best-performing model was used for prototype development. Prototype testing was undertaken with 4 people with aphasia (PwA) and 1 carer using semi-structured interviews. A healthcare professional (HCP) focus group (n=6) evaluated the concept and prototype. The topic guide was informed by the Technology Acceptance Model (TAM), and themes from thematic analysis were mapped onto it. Participants rated usability with the System Usability Scale (SUS). Phase 3: Refine and redesign; to increase processing speed, we systematically evaluated 12 lightweight open-weight LLMs (0.5B–3.8B) on interpreting real aphasic speech, using clinician-curated dialogues and an LLM-as-a-judge framework assessing relevance, faithfulness, and completeness.
Results:
Initially, Mixtral (8x7b) was the best-performing LLM for aphasic utterances and was used for the prototype. PwA rated R-SPEAK as good on the SUS (mean 75). Themes extracted from the qualitative data mapped across all three TAM constructs. Attitude towards using: PwA had high hopes, whilst clinicians were more cautious about its benefits. Perceived ease of use: participants found it easy to use, but it may be more challenging for those with other post-stroke impairments or more severe aphasia, and training might be needed. Perceived usefulness: R-SPEAK could be useful in many scenarios and has potential to improve independence for PwA. Recommendations for development included improved accuracy, speed, and modifications to the interface according to the individual's needs. Further refinement demonstrated that Qwen (2.5:3b) achieved the strongest overall performance, with high faithfulness and sub-second latency, while models under 1.5B parameters showed pronounced hallucination issues, indicating a lower bound on model capacity for reliable clinical speech interpretation.
Conclusions:
Our co-designed R-SPEAK prototype was considered acceptable to patients. Next steps involve ongoing refinement and development of a phone-based app for feasibility testing in a larger and broader cohort of people with mild-to-moderate aphasia.
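The SUS ratings reported above follow the scale's standard scoring: ten Likert items rated 1–5, where odd-numbered (positively worded) items contribute rating − 1, even-numbered (negatively worded) items contribute 5 − rating, and the summed contributions are multiplied by 2.5 to yield a 0–100 score. A minimal sketch of that calculation (the example responses are hypothetical, not study data):

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert ratings."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires ten ratings, each between 1 and 5")
    # Odd-numbered items (index 0, 2, ...) are positively worded: rating - 1.
    # Even-numbered items (index 1, 3, ...) are negatively worded: 5 - rating.
    contributions = [(r - 1) if i % 2 == 0 else (5 - r)
                     for i, r in enumerate(responses)]
    return sum(contributions) * 2.5  # rescale 0-40 raw sum to 0-100

# Hypothetical participant with moderately positive ratings
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # -> 75.0
```

A score of roughly 68 is commonly treated as the average SUS benchmark, which is why a mean of 75 is read as "good" usability.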