Evaluation of the reliability and validity of computerized tests of attention.

Details

Resource 1: Download pone.0281196.pdf (2442.85 KB)
Status: Public
Version: Final published version
License: CC0 1.0
ID Serval
serval:BIB_CA4797451D78
Type
Article: journal or magazine article.
Collection
Publications
Institution
Title
Evaluation of the reliability and validity of computerized tests of attention.
Journal
PLOS ONE
Authors
Langner R., Scharnowski F., Ionta S., G Salmon C.E., Piper B.J., Pamplona GSP
ISSN
1932-6203 (Electronic)
ISSN-L
1932-6203
Editorial status
Published
Publication date
2023
Peer-reviewed
Yes
Volume
18
Issue
1
Pages
e0281196
Language
English
Notes
Publication types: Journal Article ; Research Support, Non-U.S. Gov't
Publication Status: epublish
Abstract
Different aspects of attention can be assessed through psychological tests to identify stable individual or group differences as well as alterations after interventions. Aiming for a wide applicability of attentional assessments, Psychology Experiment Building Language (PEBL) is an open-source software system for designing and running computerized tasks that tax various attentional functions. Here, we evaluated the reliability and validity of computerized attention tasks as provided with the PEBL package: Continuous Performance Task (CPT), Switcher task, Psychomotor Vigilance Task (PVT), Mental Rotation task, and Attentional Network Test. For all tasks, we evaluated test-retest reliability using the intraclass correlation coefficient (ICC), as well as internal consistency through within-test correlations and split-half ICC. Across tasks, response time scores showed adequate reliability, whereas scores of performance accuracy, variability, and deterioration over time did not. Stability across application sites was observed for the CPT and Switcher task, but practice effects were observed for all tasks except the PVT. We substantiate convergent and discriminant validity for several task scores using between-task correlations and provide further evidence for construct validity via associations of task scores with attentional and motivational assessments. Taken together, our results provide necessary information to help design and interpret studies involving attention assessments.
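The abstract's reliability analysis centers on the intraclass correlation coefficient for test-retest data. As a minimal illustration only (not the authors' actual analysis code; the function name and toy data are assumptions for demonstration), a two-way random-effects, absolute-agreement, single-measures ICC(2,1) for an n-subjects-by-k-sessions score matrix could be computed like this:

```python
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single measures.

    ratings: n_subjects x k_sessions matrix of task scores.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-session means

    # Partition total sum of squares into subjects, sessions, and residual.
    ss_total = ((ratings - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_error = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )

# Toy example: 5 subjects tested in 2 sessions; large between-subject
# spread with small session-to-session noise yields an ICC near 1.
scores = np.array([[1.0, 1.1], [2.0, 2.1], [3.0, 3.0], [4.0, 4.1], [5.0, 5.2]])
print(icc_2_1(scores))
```

With stable between-subject differences and little session noise, the ICC approaches 1; scores that fluctuate across sessions push it toward 0, which is the sense in which the abstract reports response-time scores as reliable and accuracy/variability scores as not.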
Keywords
Reproducibility of Results, Attention, Reaction Time, Software, Wakefulness, Neuropsychological Tests
PubMed
Web of Science
Open Access
Yes
Record created
03/03/2023 18:32
Record last modified
11/10/2023 7:02
Usage data