What kind of delays were you experiencing? Could you measure it?
The delay when iOS decides to ditch the memory and reload the source sound file,
which the programmer has no control over, or you wouldn't need a function like
prepareToPlay.
You can see it happen when using a program, but no, I didn't measure it.
I could start a timer when I call the sound to play, but I don't know how to
determine when it actually starts playing; it might be possible, though.
Fortunately I will never have to use it for an interface again, so I don't care.
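If I ever wanted to, something along these lines would at least show the difference between a cold start and a prepared player, by timing how long the play call takes to return (a rough sketch only; the file name and function are made up, and it doesn't capture the true output latency):

import AVFoundation
import QuartzCore

// Rough sketch: time how long play() takes to return, with and without
// prepareToPlay(). It only measures setup cost inside the call, not the
// full output latency, but a reload penalty would still show up here.
func roughPlayDelay(prepared: Bool) throws -> CFTimeInterval {
    guard let url = Bundle.main.url(forResource: "tap", withExtension: "wav") else {
        return 0
    }
    let player = try AVAudioPlayer(contentsOf: url)
    if prepared {
        _ = player.prepareToPlay()   // preloads the buffers so play() has less to do
    }
    let start = CACurrentMediaTime()
    _ = player.play()
    return CACurrentMediaTime() - start
}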
Also, did you look into using AudioServicesPlaySystemSound?
You've asked me that question before, in a thread where I was frustrated with
AVplayer, and the answer is the same: it's fast enough, but if the user has their
ringer volume and normal volume set to different levels, a system sound plays at
the ringer volume level. The user then has to go into Settings and adjust the
ringer volume just to change that app's volume.
Apple have their guidelines; I have mine.

This breaks one of them, so it's only used to run the vibrator.
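For that, a couple of lines are all it needs (the function name is just for illustration):

import AudioToolbox

// Vibrate only; kSystemSoundID_Vibrate sidesteps the volume question entirely.
func buzz() {
    AudioServicesPlaySystemSound(kSystemSoundID_Vibrate)
}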
You won't be wasting time doing any of that, since there are system functions to do that with an audio asset. Both AIFF and WAV use PCM.
I'm not talking about wasting my time, but a processor's.
Why put that work back on a computer when I've saved that time?
It isn't free; you only have to Google "AVplayer lag",
or read the Apple Multimedia Programming Guide:
To play and record audio in the fewest lines of code, use the AV Foundation framework. See Playing Sounds Easily with the AVAudioPlayer Class and Recording with the AVAudioRecorder Class.
To provide lowest latency audio, especially when doing simultaneous input and output (such as for a VoIP application), use the I/O unit or the Voice Processing I/O unit. See Audio Unit Support in iOS.
I have since drawn the wave in the background for the speech effect, but I don't
know if that kind of access to the PCM data is possible with AVplayer, because at
a 22.5 kHz sample rate you end up with 22,500 samples per second, per sound, in
arrays that are defined in a header file.
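As far as I know AVAudioPlayer itself doesn't hand the samples back, but something along these lines could read the same asset into a float buffer for drawing the wave (a sketch only; the file name and function are assumptions):

import AVFoundation

// Sketch: read an audio asset into an AVAudioPCMBuffer, then copy the first
// channel out as floats for drawing. AVAudioFile decodes to PCM regardless of
// the on-disk format.
func loadSamples(named name: String) throws -> [Float] {
    guard let url = Bundle.main.url(forResource: name, withExtension: "wav") else {
        return []
    }
    let file = try AVAudioFile(forReading: url)
    guard let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                                        frameCapacity: AVAudioFrameCount(file.length)) else {
        return []
    }
    try file.read(into: buffer)
    guard let channel = buffer.floatChannelData?[0] else { return [] }
    return Array(UnsafeBufferPointer(start: channel, count: Int(buffer.frameLength)))
}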
At the moment the data is small, but if I were to store it as a file, it would
still be a single pure PCM stream representing all the interface samples, loaded
at run time into the same explicitly declared arrays in a header file.
They won't be purged from memory by iOS. So the only difference is where the data is stored?
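Loading that stream at run time would be little more than a byte copy, e.g. (the file name, 16-bit sample format and function are assumptions):

import Foundation

// Sketch: slurp a raw PCM file into an Int16 array at launch. The data ends up
// in the same arrays either way; only where it is stored changes.
func loadRawPCM() throws -> [Int16] {
    guard let url = Bundle.main.url(forResource: "interface", withExtension: "pcm") else {
        return []
    }
    let data = try Data(contentsOf: url)
    return data.withUnsafeBytes { (raw: UnsafeRawBufferPointer) -> [Int16] in
        Array(raw.bindMemory(to: Int16.self))
    }
}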