
The Facts

SPOKEN DISCOURSE IS INCREASINGLY USED IN THE ASSESSMENT AND TREATMENT OF APHASIA.

Analyzing spoken discourse yields microstructural (e.g. syntax, lexical-semantics) and macrostructural (e.g. cohesion, coherence) information that is comparatively naturalistic in contrast to information collected from other spoken language tasks, such as confrontation naming and repetition. To collect such connected speech samples, structured and semi-structured prompts are frequently used (e.g. single picture description, story retell), with the language elicited varying by prompt type. Given the inconsistency in discourse measurement and analysis procedures across aphasia studies, experts have agreed that research in this area has reached a tipping point where a more systematic approach is necessary. The recent establishment of a Core Outcome Set (COS) for aphasia research demonstrates the concerted effort made by the aphasia community to move toward systematic assessment and reporting of aphasia outcomes, allowing for more robust data aggregation (e.g. meta-analyses) and reproducibility. Discourse is not presently included in the COS for aphasia because of the scarcity of psychometric information on outcome measures and the vast heterogeneity in study findings.


Sources: Bryant et al., 2016; Dietz & Boyle, 2018; Kintz & Wright, 2017; Wallace et al., 2019; Stark, 2019


CURRENTLY, NO REPORTING STANDARDS EXIST FOR SPOKEN DISCOURSE IN APHASIA.

There is notable inconsistency in the types of information reported in studies of spoken discourse in aphasia (e.g. some studies report only inter-rater agreement for coding and not intra-rater agreement; studies use different statistical metrics to quantify reliability; studies omit methodological details needed to replicate findings). Many fields have recognized that reporting standards are key to the replication and robustness of research. COBIDAS, the Committee on Best Practices in Data Analysis and Sharing, is a working group of experts in human brain mapping convened to create standards for reporting methods and results in published work. The stated purpose of COBIDAS was to elaborate the principles of open and reproducible research and to distill these principles into specific research practices. Studies comprise many elements, not all of which can be prescribed or restricted. What COBIDAS and other initiatives like it do, however, is specify the information that must be reported in order to fully understand and potentially replicate a study. COBIDAS also identifies areas prone to specific poor practices (e.g. statistical modeling). For each of seven study areas (experimental design; acquisition of data; preprocessing; statistical modeling and inference; results; data sharing; and reproducibility), the report details good practice and reporting standards for over 100 items to help plan, execute, report and share research in a transparent fashion. Many journals strongly encourage reviewers to use the COBIDAS reporting standards when evaluating the quality of a human brain mapping manuscript. Notably, COBIDAS is a living initiative, and its report continues to be updated and improved as the field grows and changes. Other reporting-standards initiatives exist as well, such as CONSORT for clinical trial data and the EQUATOR network for health research.
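To illustrate why the choice of reliability statistic matters, below is a minimal sketch (in Python, with invented coding data, not drawn from any of the cited studies) comparing simple percent agreement with Cohen's kappa for the same pair of raters coding the same utterances. Because kappa corrects for agreement expected by chance, the two metrics can tell quite different stories about the same data, which is one reason reporting standards need to specify which statistic was used and how it was computed.

```python
# Hypothetical illustration: the same inter-rater coding data can look
# quite different depending on which reliability statistic is reported.
from collections import Counter

# Two raters code 20 utterances as 'complete' (C) or 'incomplete' (I).
rater_a = list("CCCCCCCCCCCCCCCCIICI")
rater_b = list("CCCCCCCCCCCCCCCCCICI")
n = len(rater_a)

# Percent agreement: proportion of utterances coded identically.
p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Cohen's kappa corrects for chance agreement:
#   kappa = (p_o - p_e) / (1 - p_e)
counts_a = Counter(rater_a)
counts_b = Counter(rater_b)
p_expected = sum(
    (counts_a[cat] / n) * (counts_b[cat] / n)
    for cat in set(rater_a) | set(rater_b)
)
kappa = (p_observed - p_expected) / (1 - p_expected)

print(f"Percent agreement: {p_observed:.2f}")  # 0.95 here
print(f"Cohen's kappa:     {kappa:.2f}")       # noticeably lower (~0.77)
```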


Such reporting standards do not currently exist for the field of spoken discourse analysis in aphasia. Spoken discourse analysis is frequently conducted in both clinical and research settings and involves many of the nuances inherent to clinical, behavioral research. Given the rapid growth in spoken discourse studies in aphasia and the current state of the research, the creation of reporting standards would encourage reproduction of studies (combating the replication crisis in the behavioral and social sciences); ensure consistent reporting of important study details, ranging from experimental design to data availability; recommend appropriate statistical modeling, thereby supporting sound statistical inference; and, overall, contribute to a more homogeneous, rigorous and standardized process by which spoken discourse research is evaluated and ultimately disseminated. Importantly, a more homogeneous and rigorous research standard will have a direct clinical implication: the identification of spoken discourse best practices in aphasia assessment and rehabilitation.


Sources: Pritchard et al., 2017; Nichols et al., 2017


THERE ARE FEW DATA ON THE PSYCHOMETRIC PROPERTIES OF SPOKEN DISCOURSE-DERIVED OUTCOMES.

‘Outcomes’ are the micro- or macrostructural features extracted from a discourse sample (i.e. the dependent variables). Typically, the goal of the clinician and/or researcher is to choose a discourse-derived outcome that is representative of an element of the speech-language system. For example, one can extract information related to syntactic complexity (the desired element) by evaluating outcomes such as mean length of utterance, proportion of prepositions, or proportion of complete sentences produced. Understandably, many outcomes can be and have been studied: over 536 unique spoken discourse outcomes have been reported in the aphasia literature. This heterogeneity precludes meta-analytic and systematic comparison of outcomes across studies, thus hindering the development of best practices in discourse. Because of this immense number of reported outcomes, very little is known about the psychometric properties of even the more commonly used ones (e.g. total tokens produced, words per minute, correct information units). Psychometric properties are key to understanding an outcome’s validity and reliability at the inter- and intra-subject level. Recent work has provided valuable evidence on select psychometric properties of some spoken discourse outcomes in aphasia, such as validity and rater reliability.
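As a concrete illustration of how such outcomes are derived, the following sketch computes three commonly used measures (total tokens, words per minute, and mean length of utterance) from a toy transcript. The utterances, segmentation, and timing are invented for illustration; real analyses rely on transcription and segmentation conventions (and often dedicated software) that this sketch does not attempt to reproduce.

```python
# Minimal sketch: deriving a few common discourse outcomes from a
# hypothetical, pre-segmented transcript of a picture description.
utterances = [
    "the boy is climbing on the stool",
    "he wants the cookies",
    "the stool is um tipping over",
]
sample_duration_seconds = 24.0  # invented timing for illustration

# Tokenize each utterance on whitespace (real pipelines apply
# language-specific rules, e.g. for handling fillers like "um").
tokens_per_utterance = [u.split() for u in utterances]

total_tokens = sum(len(toks) for toks in tokens_per_utterance)

# Words per minute: a rate-based microstructural outcome.
words_per_minute = total_tokens / (sample_duration_seconds / 60)

# Mean length of utterance (in words): one proxy for syntactic complexity.
mlu_words = total_tokens / len(utterances)

print(f"Total tokens:             {total_tokens}")
print(f"Words per minute:         {words_per_minute:.1f}")
print(f"Mean length of utterance: {mlu_words:.2f}")
```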


Certain psychometric properties are valuable and, one might argue, essential for good clinical research. One such property is stability: the inherent variance (due to internal or external factors) of a measure. Stability is typically measured at test and again at a closely spaced time point (‘retest’), between which no intervention takes place. Establishing a measure’s inherent variance, or range of stability, allows researchers to draw conclusions about clinically relevant improvements after therapy. Similarly, determining typical intra-subject variability patterns for outcomes at test-retest is vital for understanding the variability of spoken discourse in aphasia, given that typical speakers without acquired brain injury show variability in micro- and macrostructural outcomes between test and retest, and given that performance variability is a hallmark of aphasia. Therefore, a concerted effort must be made to collect test-retest spoken discourse data from speakers with and without aphasia, to identify intra-subject variability ranges across prompt types and for commonly employed outcome measures.
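One common way to quantify a measure’s range of stability, sketched below under the assumption of paired test-retest scores with no intervening treatment, is to estimate the standard error of measurement (SEM) from test-retest difference scores and derive a smallest detectable change (SDC). All scores here are invented, and the formulas shown are one of several defensible approaches rather than a prescribed standard.

```python
# Sketch: estimating a stability range from test-retest data, assuming
# paired scores from the same speakers with no intervening treatment.
# All scores below are invented for illustration (e.g. words per minute).
import math
from statistics import stdev

test   = [82.0, 95.5, 61.0, 70.5, 88.0, 74.5, 66.0, 91.0]
retest = [79.5, 99.0, 64.5, 68.0, 85.0, 78.5, 62.5, 94.0]

differences = [r - t for t, r in zip(test, retest)]

# Standard error of measurement from the SD of difference scores:
#   SEM = SD_diff / sqrt(2)
sem = stdev(differences) / math.sqrt(2)

# Smallest detectable change at the 95% level: the smallest post-therapy
# change that exceeds the measure's inherent test-retest variability.
#   SDC_95 = 1.96 * sqrt(2) * SEM
sdc_95 = 1.96 * math.sqrt(2) * sem

print(f"SEM:   {sem:.2f}")
print(f"SDC95: {sdc_95:.2f} (changes smaller than this may reflect "
      "normal variability rather than true improvement)")
```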


Sources: Bryant et al., 2016; Pritchard et al., 2017; Pritchard et al., 2018; Armstrong, 2002; Goodglass, 1993
