AFAICT, the infant reaction time alone is the quantity being measured. Everything else that adds latency is an error term. From the description provided, operator latency (the operator's own reaction time) could easily be the largest error term, with a variability that exceeds all the other latencies combined. Yet this error term does not seem to be accounted for, unless I'm missing something about the experimental setup.
If "I" is the infant reaction time to be measured, K is the sum of all electronics and computer latencies, and O is the operator reaction time, then the total time T is:
T = I + K + O
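Assuming the three terms vary independently, their variances add:

Var(T) = Var(I) + Var(K) + Var(O)

Subtracting a known mean for K removes a constant offset but does nothing to narrow that spread, so the uncertainty of any estimate of I has a floor set by the variability of O.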
If we know T (the measured result) and K (measured before starting) but not O, how is it possible to determine the unknown value I with any accuracy, given that O and I are of roughly similar magnitude? If O were much smaller than I, one could characterize a range for O and perhaps accept the resulting error. But since they are similar in magnitude, and arise from the same cause (human reaction time), it seems misguided not to take O into account. It seems even more misguided to worry about variations in K that are at least an order of magnitude smaller than the variations in O. A quick simulation makes the point (see the sketch below).
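Here is a minimal Monte Carlo sketch in Python. All the numbers are hypothetical, chosen only so that O is comparable to I in both mean and spread while K is small and well-characterized, as in the scenario described; nothing here comes from the actual experiment.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical distributions, in milliseconds (assumed values, for
# illustration only). Variable names match the post's notation.
I = rng.normal(250, 50, n)   # infant reaction time: the quantity we want
O = rng.normal(220, 60, n)   # operator reaction time: similar mean and spread
K = rng.normal(30, 2, n)     # electronics/computer latency: small and tight

T = I + K + O                # what the apparatus actually records

# Naive estimate that ignores O entirely: subtract the characterized K.
I_hat = T - K.mean()

# Even subtracting O's mean removes the bias but not the spread.
I_debiased = T - K.mean() - O.mean()

print(f"true I:    mean {I.mean():6.1f} ms, sd {I.std():5.1f} ms")
print(f"naive:     mean {I_hat.mean():6.1f} ms, sd {I_hat.std():5.1f} ms")
print(f"debiased:  mean {I_debiased.mean():6.1f} ms, sd {I_debiased.std():5.1f} ms")
```

With these assumed numbers, the naive estimate is biased by O's mean (~220 ms), and even the debiased estimate has a standard deviation of about sqrt(50^2 + 60^2) ≈ 78 ms, dominated by O. Shrinking K's 2 ms spread changes essentially nothing, which is the point: effort spent characterizing K is wasted while O goes unmeasured.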