Choosing the right delay-discounting task: Completion times and rates of nonsystematic data.
Jillian M. Rung, Thomas M. Argyle, Jodi L. Siri, G. Madden
Published 2018 in Behavioural Processes
ABSTRACT
A variety of delay-discounting tasks are widely used in human studies designed to quantify the degree to which individuals discount the value of delayed rewards. It is currently unknown which task(s) yield the largest proportion of valid and systematic data under standard criteria (Johnson & Bickel, 2008). The goal of this study was to compare three delay-discounting tasks on task duration and the amount of valid and systematic data produced. In Experiment 1, 180 college students completed one of three tasks online (fixed alternatives, titrating, or visual analogue scale [VAS]). Invalid and nonsystematic data, identified using standard criteria, were most prevalent with the VAS (47.3% of participants). The other tasks produced more (and similar amounts of) valid and systematic data, but required more time to complete than the VAS. Because systematic data were judged more important than completion times, Experiment 2 (n = 153 college students) sought to reduce the number of invalid datasets in the fixed-alternatives task and to compare amounts of nonsystematic data with the titrating task. Completion times were shorter in the titrating task, which also produced modestly more systematic data than the fixed-alternatives task. Causes of invalid and nonsystematic data are discussed, as are methods for reducing data exclusion.
PUBLICATION RECORD
- Publication date: 2018-06-01
- Fields of study: Medicine, Computer Science, Psychology
- Source metadata: Semantic Scholar, PubMed