
flutillie

macrumors newbie
Original poster
Oct 14, 2012
6
0
Hello, I am new to these forums, and I am sorry if this is the wrong place to post.

I am trying to rewrite a piece of custom Mac software from the early 1990s that records button inputs using a digital I/O PCI card. The old system used a National Instruments card, but I am finding that such digital I/O cards don't exist anymore for modern Mac Pros.

The closest to my needs I can find is the Apogee digital I/O card intended for audio. http://apogeedigital.com/products/symphony-system-features.php

Would it be possible to connect buttons to this card, and then read in their states through Core Audio?
 

balamw

Moderator emeritus
Aug 16, 2005
19,366
979
New England
Any reason you are stuck on PCI/internal devices?

There should be plenty of USB based GPIO solutions. e.g. you could just repurpose a gamepad.

B
 

flutillie

macrumors newbie
Original poster
Oct 14, 2012
6
0
I have tried USB solutions and they are all too slow/inconsistent. I need a way to reliably always achieve <2 ms latency reading a button state then flipping an output pin.
 

chown33

Moderator
Staff member
Aug 9, 2009
10,706
8,346
A sea of green
I have tried USB solutions and they are all too slow/inconsistent. I need a way to reliably always achieve <2 ms latency reading a button state then flipping an output pin.

You should be able to program an Arduino to achieve that. Depending on your programming experience, or lack thereof, you might find Wiring a better choice than the stock Arduino dev software.

I recommend describing what you're trying to do, rather than how you're trying to do it. So far, you've posted an XY Problem.
http://www.perlmonks.org/index.pl?node_id=542341
.. You want to do X, and you think Y is the best way of doing it.
.. Instead of asking about X, you ask about Y.
In short, you're asking us to evaluate possible solutions, without any of us knowing the actual problem to be solved.

FWIW, 2 ms seems awfully short. I usually debounce my switches for longer than that.
 

flutillie

macrumors newbie
Original poster
Oct 14, 2012
6
0
Sorry for not being clearer.

I am a student working on software for a psychology department that measures infants' reaction to stimuli. The experimenter has a hidden button box, and when the infant turns his or her head a certain way, the experimenter presses or releases a button. I know there is inherent delay in the experimenter's reaction time and perception of a head turn, and this human error is probably far greater than a couple ms. However, my supervisor wants to eliminate as much computer delay as possible, and because the old system can register a button input in under a ms, she expects the new system to do the same.

I've tested an Arduino in the following way: when a button is pressed, the Arduino sends the updated byte corresponding to eight buttons back to the computer; upon receiving this, the computer responds with a byte sent back to the Arduino, which flips a pin. While this process can sometimes take under 2 ms (as measured with an oscilloscope), it fluctuates substantially, which is why it was recommended that I look into PCI cards.
 

chown33

Moderator
Staff member
Aug 9, 2009
10,706
8,346
A sea of green
First, I don't understand why the computer is involved in flipping a pin. Surely the Arduino should handle that by itself. This would completely eliminate all latencies associated with the computer response loop.

Second, I suggest putting a timebase on the Arduino, then record the entire event (button-press at current value of timebase). Send the whole event to the computer: button-press identifier + timestamp. The computer then knows exactly what time the event occurred, even if the notification of the event is delayed.

At that point it's an issue of how to sync the Arduino-resident timebase with the computer, so the computer knows how to correlate the event's timestamp to some other timebase it already knows about. That could mean the computer tells the Arduino to reset its timebase to a known value, or the computer asks the Arduino for its current timebase value. If the communication latencies are known (i.e. you measure them), you could get sub-millisecond accuracy on the time of the event.


A completely different approach is to forgo real-time measurements, and simply video the infants. Then you go over it frame by frame and you can get the most accurate numbers, by eliminating researcher reaction time.
 

balamw

Moderator emeritus
Aug 16, 2005
19,366
979
New England
This would completely eliminate all latencies associated with the computer response loop.

Clearly, if sub-ms resolution is a requirement, any non-RTOS solution is unlikely to provide consistent results. Today's complicated, multi-tasking OSes don't really do "real time" well; they look like they are multitasking beautifully to us slow humans, but it's not quite as nice at non-human-perceptible timescales.

That said ... what about the other (seemingly artificial) constraint in the problem? If National Instruments makes what you want but just doesn't support Mac OS X any more, why implement this under Mac OS X? Can't you just boot the Mac Pro into Windows and use the NI solution?

B
 

ibennetch

macrumors member
Aug 9, 2008
39
0
A completely different approach is to forgo real-time measurements, and simply video the infants. Then you go over it frame by frame and you can get the most accurate numbers, by eliminating researcher reaction time.

If you need measurements accurate to one or two ms, traditional video isn't going to cut it. Sure, there are a variety of higher framerate specialty cameras available, but that sounds a bit beyond the scope here. The rest of your advice is quite good, I just wanted to chime in on the video part...which actually is good advice too, if they have access to high framerate equipment.
 

chown33

Moderator
Staff member
Aug 9, 2009
10,706
8,346
A sea of green
If you need measurements accurate to one or two ms, traditional video isn't going to cut it. Sure, there are a variety of higher framerate specialty cameras available, but that sounds a bit beyond the scope here. The rest of your advice is quite good, I just wanted to chime in on the video part...which actually is good advice too, if they have access to high framerate equipment.

I would argue they don't really need millisecond resolution. Not when the reaction time of a researcher pressing the button is about two orders of magnitude greater. Unless there's some other factor that accounts for it.

In other words, if the computer latency varies between, say 2 ms and 20 ms, it's completely swamped by the unmeasured and unaccounted for 200 ms reaction time of the person pressing the button.
 

subsonix

macrumors 68040
Feb 2, 2008
3,551
79
Clearly, if sub-ms resolution is a requirement, any non-RTOS solution is unlikely to provide consistent results. Today's complicated, multi-tasking OSes don't really do "real time" well; they look like they are multitasking beautifully to us slow humans, but it's not quite as nice at non-human-perceptible timescales.

The use of a sound card would not rely on the OS but on the capabilities of the ADC. I have not used a sound card for this purpose, but recording at 192 kHz would give you 192 samples/ms.
 

balamw

Moderator emeritus
Aug 16, 2005
19,366
979
New England
The use of a sound card would not rely on the OS but on the capabilities of the ADC. I have not used a sound card for this purpose, but recording at 192 kHz would give you 192 samples/ms.

To record the button press? Sure, I see how that could work, but how would the computer react to a button press and/or log it if captured via an ADC (sound card or not)?

flutillie, does the output pin you want to flip serve any other purpose than to enable measurement of the latency of the code running in the computer on the oscilloscope?

Also have you considered putting at least some of your code into the hardware driver rather than code that is running at the application layer? This could also help you eliminate a lot of the latency.

B
 

subsonix

macrumors 68040
Feb 2, 2008
3,551
79
To record the button press? Sure, I see how that could work, but how would the computer react to a button press and/or log it if captured via an ADC (sound card or not)?

Since it's recorded, the recording would be the log; you would need to analyze the audio buffer to see where the button press happened. For stereo input, you could generate a reference in, let's say, the left channel, then capture the user input in the right. I have seen this done to measure MIDI jitter, by recording the MIDI signal itself. I can't see why it couldn't be used for this purpose, in principle.
 

subsonix

macrumors 68040
Feb 2, 2008
3,551
79
We still have a requirement to flip an output pin. How does this approach do that?

B

Do they need to be related? You could record the button state and flip an output pin with the same switch, depending on how many poles it has.
 

flutillie

macrumors newbie
Original poster
Oct 14, 2012
6
0
Thanks for all your suggestions and sorry for the delayed reply.

There are three components to the experiment being run.
  1. Auditory stimuli are played following the constraints of the defined experiment (sometimes a sound is played when a button is pressed, sometimes a sound keeps playing until a button is pressed).
  2. The infant is rewarded with lights and moving toys when they make the "correct" choice. This is what the outputs are for.
  3. The button inputs that the experimenter uses to control the experiment while in progress.

I have been able to get the latency of sound playback, from the press of a button connected to the Arduino to the output from the audio jack, down to 2-7 ms. I have been told that I need to at least eliminate the variability in the latency, and ideally make it consistently close to 2 ms.

Second, I suggest putting a timebase on the Arduino, then record the entire event (button-press at current value of timebase). Send the whole event to the computer: button-press identifier + timestamp. The computer then knows exactly what time the event occurred, even if the notification of the event is delayed.
I contemplated doing this, but I instead decided to stick with the old system's way of measuring time, which is to calculate it based on the CPU cycles of the computer. This also allows me to send as little information over the serial as possible.

That said ... what about the other (seemingly artificial) constraint in the problem? If National Instruments makes what you want but just doesn't support Mac OS X any more, why implement this under Mac OS X? Can't you just boot the Mac Pro into Windows and use the NI solution?
I should have considered this when I first started working on the project, but now I am under fairly tight time constraints, without enough time to port the code over to Linux or Windows and learn how to do low-latency sound playback on another OS. (I tried doing some sound work on Linux and it was painful.)

If you need measurements accurate to one or two ms, traditional video isn't going to cut it. Sure, there are a variety of higher framerate specialty cameras available, but that sounds a bit beyond the scope here. The rest of your advice is quite good, I just wanted to chime in on the video part...which actually is good advice too, if they have access to high framerate equipment.
Video also would not work because the experiment is interactive. The behavior of the infant toward the first stimuli could alter the presentation of the subsequent one.

I would argue they don't really need millisecond resolution. Not when the reaction time of a researcher pressing the button is about two orders of magnitude greater. Unless there's some other factor that accounts for it.

In other words, if the computer latency varies between, say 2 ms and 20 ms, it's completely swamped by the unmeasured and unaccounted for 200 ms reaction time of the person pressing the button.
I completely agree with this. However, my supervisor does not. She thinks that if they were able to achieve sub-ms latency in 1992, we should be able to achieve it in 2012.

flutillie, does the output pin you want to flip serve any other purpose than to enable measurement of the latency of the code running in the computer on the oscilloscope?
As I described above, the output pins will ultimately be used to turn on lights and moving toys that "reward" the baby. I've been told that timing is less crucial for these, because of the inherent delay in starting the motor moving on the toy, and the fact that no timing measurements are being taken for them. However, to avoid extra layers of complication, I was hoping to use the same system for the outputs as I do for the inputs.

Since it's recorded, the recording would be the log; you would need to analyze the audio buffer to see where the button press happened. For stereo input, you could generate a reference in, let's say, the left channel, then capture the user input in the right. I have seen this done to measure MIDI jitter, by recording the MIDI signal itself. I can't see why it couldn't be used for this purpose, in principle.
So couldn't I have really small buffers, and process each one as it comes in to check for the button press? Is this possible?
 
Last edited:

balamw

Moderator emeritus
Aug 16, 2005
19,366
979
New England
I completely agree with this. However, my supervisor does not. She thinks that if they were able to achieve sub-ms latency in 1992, we should be able to achieve it in 2012.

Part of that is that a 1992-vintage 68xxx Mac running System 7 is quite a bit closer to an Arduino or PIC of today. The OS was far less complicated and had far less to do, and many GPIO signals were almost directly connected to the CPU, unlike in a "modern" Mac running OS X. Hence all the latency.

I think an embedded solution is probably your best bet.

Is the latency you are seeing from the Arduino dominated by the USB link or something else?

Can you expand on the nature of the audio? Is it something you might be able to do by interfacing an iDevice to your Arduino, or maybe an MP3 player shield? https://www.sparkfun.com/products/10628

B
 

flutillie

macrumors newbie
Original poster
Oct 14, 2012
6
0
Part of that is that a 1992-vintage 68xxx Mac running System 7 is quite a bit closer to an Arduino or PIC of today. The OS was far less complicated and had far less to do, and many GPIO signals were almost directly connected to the CPU, unlike in a "modern" Mac running OS X. Hence all the latency.

I think an embedded solution is probably your best bet.

Is the latency you are seeing from the Arduino dominated by the USB link or something else?

Can you expand on the nature of the audio? Is it something you might be able to do by interfacing an iDevice to your Arduino, or maybe an MP3 player shield? https://www.sparkfun.com/products/10628

B

I bought and played around with the SparkFun MP3 shield, but I could not get its latency below 30 ms. I would also prefer using uncompressed sound files, but when we tried to play full stereo WAV files from the shield, the Arduino maxed out on its CPU cycles and the sound came out distorted.

An embedded solution might be possible but complicated. The software I am working on consists of a simple custom programming language in which researchers design their experiments. There is no one experiment that will be run. I don't know to what extent it would be possible to do the pre-processing of the experiment parameters file on the Arduino, or, if this processing were done on the computer, how to then pass the experiment instructions on to the Arduino.

I think the latency I am seeing is dominated by the Arduino USB connection, but I don't know how to test this.
 

subsonix

macrumors 68040
Feb 2, 2008
3,551
79
So couldn't I have really small buffers, and process each one as it comes in to check for the button press? Is this possible?

Smaller buffers would equal lower latency. On a new Mac you should be able to achieve just north of 1 ms round-trip latency at a 192 kHz sample rate with the built-in sound card or a really good 3rd-party interface. But that is under ideal conditions; it's probably possible to do much worse as well if you write the code yourself.

But the way I understood it was that you were capturing input generated elsewhere, not in the system itself, in which case the buffer size wouldn't matter. It strikes me as somewhat of a hack, however. Also, that Apogee card needs a converter as well; the card itself is just used to interface with the converters.

If audio is involved, keep in mind that sound travels 34 cm in 1 ms, so if someone is sitting 2 meters from a speaker you're adding 6 ms of latency.
 

flutillie

macrumors newbie
Original poster
Oct 14, 2012
6
0
Smaller buffers would equal lower latency. On a new Mac you should be able to achieve just north of 1 ms round-trip latency at a 192 kHz sample rate with the built-in sound card or a really good 3rd-party interface. But that is under ideal conditions; it's probably possible to do much worse as well if you write the code yourself.

But the way I understood it was that you were capturing input generated elsewhere, not in the system itself, in which case the buffer size wouldn't matter. It strikes me as somewhat of a hack, however. Also, that Apogee card needs a converter as well; the card itself is just used to interface with the converters.

If audio is involved, keep in mind that sound travels 34 cm in 1 ms, so if someone is sitting 2 meters from a speaker you're adding 6 ms of latency.

Thanks for your input. I'll see what I can come up with.

I am a bit frustrated that I am being told by my superiors that some forms of latency are okay while others are not. Presumably the distance-from-the-speakers latency is okay because the speakers are always in the same position in the sound booth, and the baby is always sitting in the same spot.
 
Last edited by a moderator:

chown33

Moderator
Staff member
Aug 9, 2009
10,706
8,346
A sea of green
Thanks for your input. I'll see what I can come up with.

I am a bit frustrated that I am being told by my superiors that some forms of latency are okay while others are not. Presumably the distance-from-the-speakers latency is okay because the speakers are always in the same position in the sound booth, and the baby is always sitting in the same spot.

I still can't tell if your superiors are accounting for operator latency, i.e. the amount of time it takes the person operating the switch to sense a response in the infant and either press or release the switch. Since that's on the order of 200 ms, even in high attention situations (e.g. a trained runner at the start of a race), I honestly don't understand how that could be left unaccounted for while worrying over things in the 2-10 ms range.

Back when I was working on video games, we ran some tests just to get an idea of what kind of switch debouncing the software should do, and the relationship of audio to video. With 1 ms resolution using a real-time dedicated microcontroller, we saw pretty high variation (often over 100 ms) between operators, depending on age, attentiveness, multiple stimuli (on-screen targets), etc. There was also moderately high variation for a single operator, around 50 ms, even with training runs and focused attention.

We also found we could "trick" the operator by triggering a sound (audio stimulus) either before or after the on-screen target's appearance (visual stimulus), and cause a lag or a lead of ~50 ms or so. That is, if the video came before the audio, we would reliably see a slower response time than if audio was concurrent with or preceded the video. With a suitable gap between video and audio, we could reliably make response time worse than if there were no audio at all.

In the end, we did two things:
1. Make sure audio started regardless of what was on-screen; leading audio was better than lagging. (We concluded that both vertical retrace and visual-sensing lag by the human were contributing factors.)
2. Sample the switches faster than vertical retrace, and register a press or release on first change, then debounce for at least 1 complete vertical retrace interval.

Vertical retrace was 60 Hz or 50 Hz (NTSC vs. PAL). I think we ended up using horizontal retrace for switch sampling, but I don't recall with certainty. There may have been another timer source we used, but it was definitely faster than vertical retrace.
 

fwhh

macrumors regular
Aug 11, 2004
122
0
Berlin, Germany
Edit: Sorry, just read that you did something like this already. (Should have read all the posts...)
Make sure that you set the data rate to 115200 baud, and don't use delay loops.


To get an idea of how long the delay between the Arduino and the software is, I would start with a simple input-to-output measurement:
Setup: an Arduino and a button. Write a piece of software in Xcode to read the serial port.
Set the Arduino to serial output at 115200 baud.
If the button is pressed, send out a "1\r\n" (we ignore debouncing and latching for now).
If the software reads a "1", send back, for example, a "2" to the Arduino. Have the Arduino switch a pin on when "2" is received.
Connect your scope's ch1 to the switch and ch2 to the pin you are switching.
Set the scope to single trigger.
You should then be able to see how long your input-to-output delay is.
One problem you may run into is that using interrupts on the Arduino is not as easy as everything else, but you could still use the Arduino hardware with avr-libc to program your software and avoid polling delays.
 
Last edited:

subsonix

macrumors 68040
Feb 2, 2008
3,551
79
Presumably the distance-from-the-speakers latency is okay because the speakers are always in the same position in the sound booth, and the baby is always sitting in the same spot.

It's an interesting point, because if your latency is consistent you can subtract it after you have captured the input. So, if you can measure the latency of the system with some kind of loopback arrangement, you should be able to adjust for latencies added by the system.

The problem then moves from achieving low latency to achieving a stable and consistent latency. The ADC that captures the input would still need to be high frequency with a hardware clock to achieve sub-ms accuracy, I think.
 
Last edited:

balamw

Moderator emeritus
Aug 16, 2005
19,366
979
New England
The problem then moves from achieving low latency to achieving a stable and consistent latency.

PEBCAB = Problem exists between child and button.

As chown33 has suggested, all of the system latencies that one could get to be stable and consistent are (generally) dwarfed by the one you can't control.


----------

Make sure that you set the data rate to 115200 baud, and don't use delay loops.

Also, use as high a crystal oscillator frequency as you can. If the Arduino is running on its 8 MHz internal clock, consider giving it a 16 MHz or 20 MHz external clock.

B
 

subsonix

macrumors 68040
Feb 2, 2008
3,551
79
PEBCAB = Problem exists between child and button.

As chown33 has suggested, all of the system latencies that one could get to be stable and consistent are (generally) dwarfed by the one you can't control.

Could you elaborate on this? If you subtract the system latency in a stable system, what you are left with is the actual response time; if the input is sampled fast enough, you should be able to get sub-ms accuracy.

Things that you cannot control remain uncontrollable, low latency or not.
 