
belboz

macrumors newbie
Original poster
Nov 28, 2010
4
0
Hi,

I remember from my good old days of programming in C that you could create a sound and send it to the speaker with millisecond-level timing. I've put an example below.

Is there a way to do this in an iPhone app, but at the MICROSECOND level?

That is, I specify in microseconds how long a certain sound should play.

Thanks for the help.

__________


Code:
#include <dos.h>   /* sound(), nosound(), delay(): Borland/Turbo C DOS library */

void play(int octave, int note, int duration)
/* play note (C=1 to B=12), in octave (1-8), and duration (msec)
   include NOTE.H for note values */
{
  int k;
  double frequency;
 
  if (note == 0) {                  /* pause */
    delay(duration);
    return;
  }
  frequency = 32.625;
  for (k = 0; k < octave; k++)      /* compute C in octave  */
    frequency *= 2;
  for (k = 0; k < note; k++)        /* frequency of note    */
    frequency *= 1.059463094;       /* twelve root of 2     */
  delay(5);                         /* delay between keys   */
  sound((int) frequency);           /* sound the note       */
  delay(duration);                  /* for correct duration */
  nosound();
}
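
For reference, here's roughly what I imagine the sampled-audio equivalent of the above looks like, just a sketch, not actual Apple API code: instead of sounding the speaker for a number of milliseconds, you fill a buffer with a number of samples (assuming a 44.1 kHz output rate here; the buffer would still have to be handed to a real audio API such as Audio Units to be heard).

Code:
#include <math.h>
#include <stdlib.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define SAMPLE_RATE 44100.0             /* assumed output rate */

/* Fill a newly allocated buffer with a sine tone of the given note/octave
   lasting duration_ms milliseconds.  Returns the number of samples written
   and stores the buffer in *out (caller frees). */
int render_note(int octave, int note, int duration_ms, float **out)
{
  double frequency = 32.625;            /* same base constant as above */
  int k, nsamples;
  float *buf;

  for (k = 0; k < octave; k++)          /* C in the requested octave   */
    frequency *= 2.0;
  for (k = 0; k < note; k++)            /* step up by semitones        */
    frequency *= 1.059463094;

  /* duration becomes a sample count; the smallest unit is one sample
     period, about 22.7 microseconds at 44.1 kHz */
  nsamples = (int)(SAMPLE_RATE * duration_ms / 1000.0);
  buf = (float *) malloc(nsamples * sizeof(float));
  if (buf == NULL)
    return 0;

  for (k = 0; k < nsamples; k++)
    buf[k] = (float) sin(2.0 * M_PI * frequency * k / SAMPLE_RATE);

  *out = buf;
  return nsamples;
}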
 

chown33

Moderator
Staff member
Aug 9, 2009
10,747
8,421
A sea of green
A microsecond is one cycle at 1 MHz. Nothing can hear that high, not even bats.

10 microseconds is one cycle at 100 kHz. That's less than one sample at a 96 kHz sample rate. The iPhone does not have a 96 kHz sample rate.

100 microseconds is one cycle at 10 kHz, or two cycles at 20 kHz. That seems like a reasonable granularity for determining the duration of a sound.

If you really want to clean up any ticks and pops, you need to start and stop the audio at an exact zero-crossing of the audio waveform samples. Even brief discontinuities are noticeable.
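
One way to do that, sketched roughly in plain C (no particular audio API): after generating the tone samples, play only up to the last zero crossing so the waveform never stops mid-cycle. A short fade-out works too.

Code:
/* Find the last zero crossing in a mono float buffer.  Playing only the
   first 'returned' samples makes the tone end cleanly instead of cutting
   off mid-cycle (which clicks). */
int trim_to_zero_crossing(const float *buf, int nsamples)
{
  int i;

  for (i = nsamples - 1; i > 0; i--) {
    /* a sign change between adjacent samples marks a zero crossing */
    if ((buf[i - 1] <= 0.0f && buf[i] >= 0.0f) ||
        (buf[i - 1] >= 0.0f && buf[i] <= 0.0f))
      return i;                         /* keep samples [0, i)         */
  }
  return nsamples;                      /* no crossing found; keep all */
}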
 

belboz

macrumors newbie
Original poster
Nov 28, 2010
4
0
Yea...

Thanks for the response...

It is not for hearing; it is for sending electrical impulses in the form of sound. Hence the need for microsecond fidelity.

It looks as though I will just have to create sound files to play. I was hoping I could write a program to produce them.
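
Maybe I can at least generate the files programmatically. A bare-bones sketch of a 16-bit mono WAV writer in plain C (little-endian machine assumed, minimal error handling):

Code:
#include <stdio.h>
#include <stdint.h>

static void put_u32(FILE *f, uint32_t v) { fwrite(&v, 4, 1, f); }
static void put_u16(FILE *f, uint16_t v) { fwrite(&v, 2, 1, f); }

/* Write 16-bit mono PCM samples out as a .wav file. */
int write_wav(const char *path, const int16_t *samples, uint32_t nsamples,
              uint32_t sample_rate)
{
  uint32_t data_bytes = nsamples * 2;   /* mono, 16 bits per sample */
  FILE *f = fopen(path, "wb");
  if (f == NULL)
    return -1;

  fwrite("RIFF", 1, 4, f);  put_u32(f, 36 + data_bytes);
  fwrite("WAVE", 1, 4, f);
  fwrite("fmt ", 1, 4, f);  put_u32(f, 16);         /* PCM fmt chunk size */
  put_u16(f, 1);                                    /* format: PCM        */
  put_u16(f, 1);                                    /* channels: mono     */
  put_u32(f, sample_rate);
  put_u32(f, sample_rate * 2);                      /* byte rate          */
  put_u16(f, 2);                                    /* block align        */
  put_u16(f, 16);                                   /* bits per sample    */
  fwrite("data", 1, 4, f);  put_u32(f, data_bytes);
  fwrite(samples, 2, nsamples, f);

  fclose(f);
  return 0;
}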
 

chown33

Moderator
Staff member
Aug 9, 2009
10,747
8,421
A sea of green
It doesn't matter whether it's for hearing or not. The output signal is produced from sampled data by a DAC. The DAC's sample rate determines the smallest possible time interval of the signal.

For example, if the DAC sample rate is 50 KHz, the smallest possible time interval in the output signal is 20 microseconds. You can't possibly resolve anything shorter than that, because that's how long one sample is, and samples are discrete (indivisible).

To resolve 1 microsecond, you would need a DAC with a sample rate of 1 MHz. And to actually reproduce a 1 MHz signal component, the DAC would have to run above 2 MHz (Nyquist criterion).
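
To put numbers on it (trivial, but it makes the granularity obvious):

Code:
#include <stdio.h>

/* smallest representable interval = one sample period = 1 / sample_rate */
int main(void)
{
  const double rates[] = { 44100.0, 48000.0, 96000.0, 1000000.0 };
  int i;

  for (i = 0; i < 4; i++)
    printf("%9.0f Hz sample rate -> one sample = %6.2f microseconds\n",
           rates[i], 1000000.0 / rates[i]);
  return 0;
}
/* prints roughly 22.68, 20.83, 10.42 and 1.00 microseconds respectively */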
 

smithrh

macrumors 68030
Feb 28, 2009
2,722
1,730
msec is milliseconds. 100 msec is 0.1 seconds.

usec is microseconds.

Producing sounds on the order of tens of milliseconds should be possible; accurately detecting them is another matter altogether.
 

belboz

macrumors newbie
Original poster
Nov 28, 2010
4
0
thanks...

thanks for the input... it helps give some direction....
 

firewood

macrumors G3
Jul 29, 2003
8,108
1,345
Silicon Valley
It is not for hearing; it is for sending electrical impulses in the form of sound. Hence the need for microsecond fidelity.

All iOS device audio channels include audio (anti-aliasing) filtering.

The filters will roll off all high-frequency content (at around or above 20 kHz), and with it all sharp impulse edges. They will likely also roll off low-frequency content (below 20 Hz), and thus any DC component.

So you can't send digital impulses unless they fit within, or are modulated to fit, a standard audio channel's bandwidth.
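
If you do want to push data through the audio path, here is a rough sketch of the modulation idea: on/off keying of a carrier tone that sits inside the passband. The carrier frequency and bit length are arbitrary example values, and a real scheme would also need filtering and synchronization on the receiving end.

Code:
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define SAMPLE_RATE 44100.0
#define CARRIER_HZ  4000.0              /* well inside the 20 Hz - 20 kHz band */

/* Render bits as bursts of carrier (1) or silence (0), bit_ms milliseconds
   each, into a caller-allocated buffer of nbits * samples_per_bit floats.
   Returns the number of samples written. */
int render_bits(const unsigned char *bits, int nbits, int bit_ms, float *out)
{
  int samples_per_bit = (int)(SAMPLE_RATE * bit_ms / 1000.0);
  int i, k, n = 0;

  for (i = 0; i < nbits; i++)
    for (k = 0; k < samples_per_bit; k++, n++)
      out[n] = bits[i]
          ? (float) sin(2.0 * M_PI * CARRIER_HZ * n / SAMPLE_RATE)
          : 0.0f;
  return n;
}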
 