Calibration of cognitive tests to address the reliability paradox for decision-conflict tasks
Talira Kucina, Lindsay Wells, Ian J. Lewis, Kristy de Salas, Amelia T. Kohl, Matthew A. Palmer, J. Sauer, D. Matzke, E. Aidman, A. Heathcote
Published 2023 in Nature Communications

EDITOR'S SUMMARY
Typical cognitive tasks can produce robust experimental effects yet fail to measure individual differences reliably. Here the authors use hierarchical Bayesian analysis to develop and calibrate tasks that efficiently achieve good reliability.

ABSTRACT
Standard, well-established cognitive tasks that produce reliable effects in group comparisons can nevertheless yield unreliable measurement when assessing individual differences. This reliability paradox has been demonstrated in decision-conflict tasks such as the Simon, Flanker, and Stroop tasks, which measure various aspects of cognitive control. We aim to address this paradox by implementing carefully calibrated versions of the standard tests with an additional manipulation to encourage processing of conflicting information, as well as combinations of standard tasks. Across five experiments, we show that a Flanker task and a combined Simon and Stroop task with the additional manipulation produced reliable estimates of individual differences in under 100 trials per task, improving on the reliability seen in benchmark Flanker, Simon, and Stroop data. We make these tasks freely available and discuss both theoretical and applied implications for how the cognitive testing of individual differences is carried out.
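The reliability paradox turns on a simple variance decomposition: a per-subject conflict effect is only as reliable as the ratio of true between-subject variance to the trial-noise variance of the subject's mean. The sketch below is a minimal, self-contained illustration of that point, not the authors' analysis; all parameter values (subject counts, effect sizes, noise levels) are hypothetical, and the paper's actual approach fits trial-level hierarchical Bayesian models that pool information across participants rather than relying on per-subject means.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters for illustration only (not taken from the paper).
n_subj, n_trials = 60, 100        # subjects; trials contributing to each effect
true_sd = 30.0                    # ms: between-subject SD of the true conflict effect
trial_sd = 150.0                  # ms: trial-to-trial noise in the effect measure
true_effect = rng.normal(50.0, true_sd, n_subj)

# Trial-level observations of each subject's conflict effect
# (a simplification of incongruent-minus-congruent RT differences).
trials = true_effect[:, None] + rng.normal(0.0, trial_sd, (n_subj, n_trials))

# Split-half reliability: correlate odd- and even-trial means, then apply
# the Spearman-Brown correction to estimate full-length reliability.
odd = trials[:, 0::2].mean(axis=1)
even = trials[:, 1::2].mean(axis=1)
r_half = np.corrcoef(odd, even)[0, 1]
r_full = 2 * r_half / (1 + r_half)
print(f"split-half r = {r_half:.2f}, Spearman-Brown corrected r = {r_full:.2f}")

# Analytic prediction: reliability = true variance / (true variance + error
# variance of the subject mean), so it rises with trial count and with the
# ratio of between-subject variance to trial noise.
pred = true_sd**2 / (true_sd**2 + trial_sd**2 / n_trials)
print(f"predicted full-length reliability = {pred:.2f}")
```

Under these illustrative numbers, reliability reaches about 0.8 at 100 trials; with the much larger trial noise typical of standard conflict tasks, the same formula predicts the low individual-difference reliability the paradox describes. Calibrating a task to enlarge the true between-subject variance relative to trial noise is what allows good reliability in under 100 trials.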
PUBLICATION RECORD
- Publication year: 2023
- Venue: Nature Communications
- Publication date: 2023-04-19
- Fields of study: Medicine, Psychology
- Source metadata: Semantic Scholar, PubMed