Methodological Problems With Online Concussion Testing
Journal contribution posted on 16.12.2020, 23:51 by Jameson Holden, E Francisco, A Tommerdahl, R Lensch, B Kirsch, L Zai, Alan Pearce, OV Favorov, RG Dennis, M Tommerdahl
© 2020 Holden, Francisco, Tommerdahl, Lensch, Kirsch, Zai, Pearce, Favorov, Dennis and Tommerdahl. Reaction time testing is widely used in online computerized concussion assessments, and most concussion studies utilizing the metric have demonstrated varying degrees of difference between concussed and non-concussed individuals. The problem with most of these online concussion assessments is that they predominantly rely on consumer-grade technology. Typical administration of these reaction time tests involves presenting a visual stimulus on a computer monitor and prompting the test subject to respond as quickly as possible via keypad or computer mouse. However, inherent delays and variabilities are introduced into the reaction time measure by both the computer and the associated operating system on which the concussion assessment tool is installed. The authors hypothesized that systems typically used to collect concussion reaction time data would demonstrate significant errors in reaction time measurements. To remove human bias, a series of experiments was conducted robotically to assess timing errors introduced by reaction time tests under four different conditions. In the first condition, a visual reaction time test was conducted by flashing a visual stimulus on a computer monitor; detection was via photodiode, and the mechanical response was delivered via computer mouse. The second condition employed a mobile device for the visual stimulus, and the mechanical response was delivered to the mobile device's touchscreen. The third condition simulated a tactile reaction time test, and the mechanical response was delivered via computer mouse. The fourth condition also simulated a tactile reaction time test, but the response was delivered to a dedicated device designed to store the interval between stimulus delivery and response, thus bypassing any problems hypothesized to be introduced by the computer and/or computer software.
There were significant differences in the range of responses recorded across the four conditions, with the reaction times collected from the visual stimulus on a mobile device being the worst and those from the dedicated hardware device designed for the task being the best. The results suggest that some of the commonly used visual tasks on consumer-grade computers could be (and have been) introducing significant errors into reaction time testing, and that dedicated hardware designed for the reaction time task is needed to minimize testing errors.
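The abstract's core argument — that system latency offset and jitter inflate both the mean and the variability of measured reaction times — can be sketched with a toy simulation. All numbers below (true RT distribution, latency offset, jitter range) are illustrative assumptions, not values reported in the paper:

```python
import random
import statistics

random.seed(42)  # reproducible illustration

def simulate_measured_rts(n, true_mean_ms=250.0, true_sd_ms=30.0,
                          latency_offset_ms=0.0, latency_jitter_ms=0.0):
    """Measured RT = true RT + system latency (fixed offset + uniform jitter)."""
    measured = []
    for _ in range(n):
        true_rt = random.gauss(true_mean_ms, true_sd_ms)
        latency = latency_offset_ms + random.uniform(0.0, latency_jitter_ms)
        measured.append(true_rt + latency)
    return measured

# Dedicated hardware condition: negligible added latency (assumption).
dedicated = simulate_measured_rts(10_000)

# Consumer-grade condition: hypothetical offset and jitter from display
# refresh, input polling, and OS scheduling (assumption).
consumer = simulate_measured_rts(10_000, latency_offset_ms=40.0,
                                 latency_jitter_ms=30.0)

print(f"dedicated: mean={statistics.mean(dedicated):.1f} ms, "
      f"sd={statistics.stdev(dedicated):.1f} ms")
print(f"consumer:  mean={statistics.mean(consumer):.1f} ms, "
      f"sd={statistics.stdev(consumer):.1f} ms")
```

The simulated consumer-grade setup shows both a higher mean and a higher standard deviation than the dedicated device, mirroring the direction of the reported findings: latency jitter adds variance on top of the subject's true intraindividual variability, so it cannot be removed by simply subtracting a constant offset.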
Partial support for this work was provided by the Office of Naval Research. A previous version of this manuscript has been released as a Pre-Print at Biology Archives (Holden et al., 2019).
Journal: Frontiers in Human Neuroscience
Pagination: 13p. (p. 1-13)
Publisher: Frontiers Research Foundation
Rights Statement: The Author reserves all moral rights over the deposited text and must be credited if any re-use occurs. Documents deposited in OPAL are the Open Access versions of outputs published elsewhere. Changes resulting from the publishing process may therefore not be reflected in this document. The final published version may be obtained via the publisher's DOI. Please note that additional copyright and access restrictions may apply to the published version.
Keywords: Science & Technology; Social Sciences; Life Sciences & Biomedicine; Neurosciences; Psychology; Neurosciences & Neurology; reaction time; reaction time variability; online cognitive testing; online concussion testing; intraindividual reaction time variability; concussion; concussion testing; TRAUMATIC BRAIN-INJURY; SIMPLE REACTION-TIME; NEUROPSYCHOLOGICAL PERFORMANCE; ATTENTION DEFICITS; SPORTS CONCUSSION; TIMING ACCURACY; LONG-TERM; MILD; VARIABILITY; MOTOR; Experimental Psychology