# Creating Microsecond Sounds

Discussion in 'iOS Programming' started by belboz, Nov 30, 2010.

1. Nov 30, 2010
Last edited by a moderator: Dec 6, 2010

### belboz macrumors newbie

Joined:
Nov 28, 2010
#1
Hi,

I remember in my good old days of programming in C that you could create sound and send it to the speaker at the millisecond level. I've included an example below.

Is there a way to do this with an iPhone App, at the MICROSECOND level?

As in, I program how long in microseconds I want a certain sound.

Thanks for the help.


Code:
```#include <dos.h>   /* Borland/Turbo C: sound(), nosound(), delay() */

/* play note (C=1 to B=12), in octave (1-8), and duration (msec);
   include NOTE.H for note values */
void play(int octave, int note, int duration)
{
    int k;
    double frequency;

    if (note == 0) {                    /* pause                 */
        delay(duration);
        return;
    }
    frequency = 32.625;                 /* low C                 */
    for (k = 0; k < octave; k++)        /* compute C in octave   */
        frequency *= 2;
    for (k = 0; k < note; k++)          /* frequency of note     */
        frequency *= 1.059463094;       /* twelfth root of 2     */
    delay(5);                           /* delay between keys    */
    sound((int) frequency);             /* sound the note        */
    delay(duration);                    /* for correct duration  */
    nosound();
}```

2. ### chown33 macrumors 604

Joined:
Aug 9, 2009
Location:
Sailing beyond the sunset
#2
A microsecond is one cycle at 1 MHz. Nothing can hear that high, not even bats.

10 microseconds is one cycle at 100 kHz. That's less than one sample at a 96 kHz sample rate. The iPhone does not have a 96 kHz sample rate.

100 microseconds is one cycle at 10 kHz, or two cycles at 20 kHz. That seems like a reasonable granularity for determining the duration of a sound.

If you really want to clean up any ticks and pops, you need to start and stop the audio at an exact zero-crossing of the audio waveform samples. Even brief discontinuities are noticeable.
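(One way to end at a zero-crossing in plain C: scan backward from the intended end of the sample buffer for a sign change and cut there. This is an illustrative sketch, not an iOS API; the function name is made up:)

```c
#include <stddef.h>

/* Shorten a 16-bit buffer so it ends at a sample adjacent to a
   zero-crossing (or an exact zero), avoiding a click on stop.
   Returns the trimmed sample count. */
size_t trim_to_zero_crossing(const short *buf, size_t n)
{
    while (n > 1) {
        /* exact zero, or sign change between samples n-2 and n-1 */
        if (buf[n - 1] == 0 || (buf[n - 2] < 0) != (buf[n - 1] < 0))
            return n;
        n--;
    }
    return n;
}
```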

3. ### belboz thread starter macrumors newbie

#3
Yea...

Thanks for the response...

It's not for hearing; it's for sending electrical impulses in the form of sound. Hence the need for microsecond fidelity.

It looks as though I'll just have to create sound files to play. I was hoping I could write a program to generate them.

4. ### chown33 macrumors 604

#4
It doesn't matter whether it's for hearing or not. The output signal is produced from sampled data by a DAC. The DAC's sample rate determines the smallest possible time interval of the signal.

For example, if the DAC sample rate is 50 KHz, the smallest possible time interval in the output signal is 20 microseconds. You can't possibly resolve anything shorter than that, because that's how long one sample is, and samples are discrete (indivisible).

To resolve 1 microsecond, you would need DACs with a sample-rate of 1 MHz. To produce a DAC-output signal of 1 MHz, they would have to operate above 2 MHz (Nyquist criterion).
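(The quantization described above is easy to see numerically. A sketch — the 44.1 kHz rate is an assumption — converting a requested microsecond duration to whole samples and back:)

```c
/* Round a requested duration in microseconds down to whole samples;
   the sub-sample remainder is simply lost. */
long usec_to_samples(long usec, long sample_rate)
{
    return (usec * sample_rate) / 1000000L;
}

/* Duration actually produced by that many samples, in microseconds. */
long samples_to_usec(long samples, long sample_rate)
{
    return (samples * 1000000L) / sample_rate;
}
```

At 44.1 kHz one sample lasts about 22.7 microseconds, so a 50-microsecond request is quantized to 2 samples, and a 1-microsecond request vanishes entirely.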

5. ### smithrh macrumors 68020

Joined:
Feb 28, 2009
#5
msec is milliseconds. 100 msec is 0.1 seconds.

usec is microseconds.

Producing sounds on the order of tens of milliseconds should be possible; accurately detecting them is another matter altogether.

6. ### belboz thread starter macrumors newbie

#6
thanks...

thanks for the input... it helps give some direction....

7. ### firewood macrumors 604

Joined:
Jul 29, 2003
Location:
Silicon Valley
#7
All iOS device audio channels include audio (anti-aliasing) filtering.

The filters will roll off all high-frequency content (at around or above 20 kHz), and thus all sharp impulse edges. They will likely also roll off low frequencies (below 20 Hz), and thus any DC signal content.

So you can't send digital impulses unless they fit within, or are modulated to fit, the standard audio channel bandwidth.