
balamw

Moderator emeritus
Aug 16, 2005
19,366
979
New England
Could you elaborate on this? If you subtract the system latency in a stable system, what you are left with is the actual response time; if the input is sampled fast enough, you should be able to get sub-ms accuracy.

Things that you cannot control remain uncontrollable, low latency or not.

I'd agree with you if one were trying to measure the operator's response time, but that does not seem to be what the experiment is about.

If the uncontrollable operator's response time is 200 ± 50 ms, is there really any point in getting the hardware to operate in the sub-ms range?

B
 

subsonix

macrumors 68040
Feb 2, 2008
3,551
79
I'd agree with you if one were trying to measure the operator's response time, but that does not seem to be what the experiment is about.

If the uncontrollable operator's response time is 200 ± 50 ms, is there really any point in getting the hardware to operate in the sub-ms range?

B

That may very well be true for infants, but otherwise 200 ms is quite a lot; you could easily tap your finger faster than that on a tabletop, for example.

Here are two rimshot samples separated by 200 ms: http://f.cl.ly/items/1g3q0j2u1I2m3b2n173I/200ms.mp3

I get your point, and it seems it would be more important to achieve consistency than sub-ms accuracy, but it's interesting to see how it can be solved while still meeting flutillie's supervisor's demands.
 

chown33

Moderator
Staff member
Aug 9, 2009
10,750
8,422
A sea of green
That may very well be true for infants, but otherwise 200 ms is quite a lot; you could easily tap your finger faster than that on a tabletop, for example.

Here are two rimshot samples separated by 200 ms: http://f.cl.ly/items/1g3q0j2u1I2m3b2n173I/200ms.mp3

The problem is a reaction-time issue, not a rhythmic-output issue. And it's reaction-time of both the infant and the person observing the infant.

Reaction time is the latency from the presentation of a stimulus to the output that is considered recognition of the stimulus (perception time + movement time). A rimshot or repeated finger-tap has no external stimulus to be perceived.

The drummer or finger-tapper is using an internal rhythmic generator to produce output, not the perception of a randomly occurring external event.

In digital circuit terms, it's the difference between an input-to-output latency vs. a signal-generator latency. If the software reads the input and flips the output after a calculation, it's completely different than if the software simply outputs a regular series of internally generated data.
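The contrast between the two modes can be sketched in a few lines (a toy illustration only; the function names and numbers are hypothetical, not from the actual experiment):

```python
# Toy contrast between input-to-output latency and a signal generator;
# function names and numbers are hypothetical, purely for illustration.

def reactive_output(stimulus_time_ms, processing_delay_ms):
    """Input-to-output: the output fires only after an external stimulus
    is read and processed, so its timing inherits every upstream latency."""
    return stimulus_time_ms + processing_delay_ms

def generated_output(start_ms, period_ms, n):
    """Signal generator: the output follows an internal clock; no external
    stimulus is perceived, so reaction time never enters the picture."""
    return [start_ms + i * period_ms for i in range(n)]

# A stimulus at t = 0 with 15 ms of system latency is reported at 15 ms...
print(reactive_output(0.0, 15.0))        # 15.0
# ...while a generator just emits ticks, stimulus or no stimulus.
print(generated_output(0.0, 200.0, 4))   # [0.0, 200.0, 400.0, 600.0]
```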
 

subsonix

macrumors 68040
Feb 2, 2008
3,551
79
The problem is a reaction-time issue, not a rhythmic-output issue. And it's reaction-time of both the infant and the person observing the infant.

I understand this; the sample is there to give a feel for 200 ms. Without it, it's much like large numbers: it's hard to get an intuitive feel for whether it's a lot or a little from a bare figure without any frame of reference.

Reaction time is the latency from the presentation of a stimulus to the output that is considered recognition of the stimulus (perception time + movement time). A rimshot or repeated finger-tap has no external stimulus to be perceived.

The drummer or finger-tapper is using an internal rhythmic generator to produce output, not the perception of a randomly occurring external event.

In digital circuit terms, it's the difference between an input-to-output latency vs. a signal-generator latency. If the software reads the input and flips the output after a calculation, it's completely different than if the software simply outputs a regular series of internally generated data.

Of course, but what is actually being measured here? If it's reaction time, then the "human latency" is part of it and should be part of the result.

If the latency of the system is consistently, say, 10 ms, then it can be subtracted from the result. If on the other hand it varies between, say, 10 and 25 ms, you cannot do that, and you get an error margin of 15 ms.
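A minimal sketch of that correction (all numbers hypothetical): a constant latency subtracts out exactly, while a latency wandering between 10 and 25 ms leaves a residual error of up to 15 ms no matter what nominal value you subtract.

```python
import random

# Toy model of subtracting system latency; all numbers are hypothetical.
random.seed(0)

TRUE_RESPONSE_MS = 200.0

# Case 1: constant system latency -> subtracting it recovers the truth exactly.
k_const = 10.0
measured = TRUE_RESPONSE_MS + k_const
assert measured - k_const == TRUE_RESPONSE_MS

# Case 2: latency drifts between 10 and 25 ms. Subtracting the nominal
# 10 ms leaves a residual error that can reach 15 ms.
worst_error = 0.0
for _ in range(10_000):
    k = random.uniform(10.0, 25.0)
    corrected = (TRUE_RESPONSE_MS + k) - 10.0   # subtract nominal latency only
    worst_error = max(worst_error, corrected - TRUE_RESPONSE_MS)

print(f"worst-case residual error: {worst_error:.1f} ms")  # close to 15 ms
```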
 
Last edited:

chown33

Moderator
Staff member
Aug 9, 2009
10,750
8,422
A sea of green
I understand this; the sample is there to give a feel for 200 ms. Without it, it's much like large numbers: it's hard to get an intuitive feel for whether it's a lot or a little from a bare figure without any frame of reference.

Ah, now I understand your purpose in providing the audio.


Of course, but what is actually being measured here? If it's reaction time, then the "human latency" is part of it and should be part of the result.
https://forums.macrumors.com/posts/16039457/
https://forums.macrumors.com/posts/16053687/

AFAICT, the infant reaction time alone is the measured quantity. Everything else that adds latency is an error term. From the description provided, operator latency (reaction time) could easily be the largest error term, with a variability that exceeds all other latencies combined. Yet somehow this error term does not seem to be accounted for, unless I'm missing something about the experimental setup.

If "I" is the infant reaction time to be measured, K is the sum of all electronics and computer latencies, and O is the operator reaction time, then the total time T is:
T = I + K + O
If we know T (the measured result) and K (measured before starting) but not O, how is it possible to determine the unknown value I with any accuracy, considering that O and I are of approximately similar magnitudes? If O were much less than I, one could characterize a range for O and maybe accept the resulting error. Given that they're similar in magnitude and arise from the same cause (human reaction time), it seems misguided not to take O into account. It seems even more misguided to worry about variations in K that are at least an order of magnitude less than the variations in O.
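A quick simulation makes the point concrete (every number below is hypothetical, chosen only to match the magnitudes discussed in this thread): subtracting a pre-measured K from T still leaves an estimate of I that is off by roughly O, with a spread set almost entirely by O's variability rather than K's.

```python
import random
import statistics

# Hypothetical numbers for the model T = I + K + O, where I is the infant
# reaction time, K the equipment latency, and O the operator reaction time.
# We "measure" T, subtract a pre-measured K, and see where the estimate
# of I lands relative to the truth.
random.seed(42)

I_TRUE = 250.0          # ms, the quantity we want
K_NOMINAL = 10.0        # ms, equipment latency measured beforehand

errors = []
for _ in range(10_000):
    k = random.uniform(9.0, 11.0)       # small jitter in equipment latency
    o = random.gauss(200.0, 50.0)       # operator: roughly 200 +/- 50 ms
    t = I_TRUE + k + o                  # what actually gets recorded
    i_est = t - K_NOMINAL               # naive correction: subtract K only
    errors.append(i_est - I_TRUE)

print(f"mean error:  {statistics.mean(errors):6.1f} ms")   # roughly the mean of O
print(f"error stdev: {statistics.stdev(errors):6.1f} ms")  # dominated by O's spread
```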
 

subsonix

macrumors 68040
Feb 2, 2008
3,551
79
AFAICT, the infant reaction time alone is the measured quantity. Everything else that adds latency is an error term. From the description provided, operator latency (reaction time) could easily be the largest error term, with a variability that exceeds all other latencies combined. Yet somehow this error term does not seem to be accounted for, unless I'm missing something about the experimental setup.

If "I" is the infant reaction time to be measured, K is the sum of all electronics and computer latencies, and O is the operator reaction time, then the total time T is:
T = I + K + O
If we know T (the measured result) and K (measured before starting) but not O, how is it possible to determine the unknown value I with any accuracy, considering that O and I are of approximately similar magnitudes? If O were much less than I, one could characterize a range for O and maybe accept the resulting error. Given that they're similar in magnitude and arise from the same cause (human reaction time), it seems misguided not to take O into account. It seems even more misguided to worry about variations in K that are at least an order of magnitude less than the variations in O.

Yeah I agree, it wasn't clear to me how this was done exactly.
 