SynthSubs

macrumors newbie
Original poster
Apr 9, 2013
Hello,

First of all, I am new to the forum, so a quick "hi!" to everyone. Glad to be here.

I am developing an app that takes PCM data from the recording device, compresses it to iLBC, writes it to a singleton, reads it back from the singleton, decompresses it back to PCM, and writes it to the playback device. I believe the compression works, though the decompression does not. The PCM buffer starts at 1024 bytes, the iLBC buffer after compression is 38 bytes, and after the attempt to decompress, the buffer is only 2 bytes.
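
To put concrete numbers on that, here is my back-of-the-envelope math, based on the two format descriptions at the bottom of this post (this is just arithmetic written out for the post, not code from the app, so please correct me if I'm misreading how frames and packets relate):

Code:
#import <Foundation/Foundation.h>

// Rough byte math for the pipeline above, derived from the two
// AudioStreamBasicDescriptions shown at the end of this post.
static void LogExpectedSizes(void)
{
    UInt32 pcmBytesPerFrame    = 2;   // 16-bit mono PCM (audioFormat)
    UInt32 ilbcFramesPerPacket = 160; // outputDescription.mFramesPerPacket
    UInt32 ilbcBytesPerPacket  = 38;  // outputDescription.mBytesPerPacket

    UInt32 pcmInputBytes  = 1024;                             // one buffer from the recording device
    UInt32 pcmInputFrames = pcmInputBytes / pcmBytesPerFrame; // 512 frames at 44.1 kHz

    // one 38-byte iLBC packet should decode back to 160 PCM frames
    UInt32 pcmBytesPerIlbcPacket = ilbcFramesPerPacket * pcmBytesPerFrame; // 320 bytes at 8 kHz

    NSLog(@"%u PCM frames in -> %u-byte iLBC packet -> expected at least %u PCM bytes out, but I get 2",
          (unsigned int)pcmInputFrames,
          (unsigned int)ilbcBytesPerPacket,
          (unsigned int)pcmBytesPerIlbcPacket);
}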

My compression method:

Code:
-(void)convertBuffer:(unsigned char **)stream size:(int *)streamSize {
    // create an audio converter
    AudioConverterRef audioConverter;
//    outputDescription = audioFormat;

    OSStatus acCreationResult = AudioConverterNew(&audioFormat, &outputDescription, &audioConverter);
    if (!audioConverter)
    {
        // bail out
        free(*stream);
        *streamSize = 0;
        *stream = (unsigned char *)malloc(0);
        return;
    }

    // calculate number of bytes required for output of input stream.
    // allocate buffer of adequate size.
//    UInt32 outputBytes = outputDescription.mBytesPerPacket * (*streamSize / audioFormat.mBytesPerPacket); // outputDescription.mFramesPerPacket * outputDescription.mBytesPerFrame;
    UInt32 size = sizeof(UInt32);
    UInt32 outputBytes;
    AudioConverterGetProperty(audioConverter, kAudioConverterPropertyMaximumOutputPacketSize, &size, &outputBytes);

    unsigned char *outputBuffer = (unsigned char *)malloc(outputBytes);
    memset(outputBuffer, 0, outputBytes);

    // describe input data we'll pass into converter
    AudioBuffer inputBuffer;
    inputBuffer.mNumberChannels = audioFormat.mChannelsPerFrame;
    inputBuffer.mDataByteSize = *streamSize;
    inputBuffer.mData = *stream;

    // describe output data buffers into which we can receive data.
    AudioBufferList outputBufferList;
    outputBufferList.mNumberBuffers = 1;
    outputBufferList.mBuffers[0].mNumberChannels = outputDescription.mChannelsPerFrame;
    outputBufferList.mBuffers[0].mDataByteSize = outputBytes;
    outputBufferList.mBuffers[0].mData = outputBuffer;

    // set output data packet size
    UInt32 outputDataPacketSize = outputBytes; // / outputDescription.mBytesPerPacket;

    // fill class members with data that we'll pass into
    // the InputDataProc
    _converter_currentBuffer = &inputBuffer;
    _converter_currentInputDescription = audioFormat;

    // convert
    OSStatus result = AudioConverterFillComplexBuffer(audioConverter,                 /* AudioConverterRef inAudioConverter */
                                                      _converterComplexInputDataProc, /* AudioConverterComplexInputDataProc inInputDataProc */
                                                      self,                           /* void *inInputDataProcUserData */
                                                      &outputDataPacketSize,          /* UInt32 *ioOutputDataPacketSize */
                                                      &outputBufferList,              /* AudioBufferList *outOutputData */
                                                      NULL                            /* AudioStreamPacketDescription *outPacketDescription */
                                                      );

    //if (err) {NSLog(@"%s : AudioFormat Convert error %d\n",__FUNCTION__, (int)err);  }
    NSLog(@"Before %d after %d", *streamSize, (unsigned int)outputBytes);

    // change "stream" to describe our output buffer.
    // even if an error occurred, we'd rather have silence than unconverted audio.
    free(*stream);
    *stream = outputBuffer;
    *streamSize = outputBytes;

    // dispose of the audio converter
    AudioConverterDispose(audioConverter);

    Data *data = [Data sharedData];

    // copy incoming audio data to the audio buffer
    //intFromBuffer = audioBufferList->mBuffers[0].mData;
    NSMutableData *dataOut = [[NSMutableData alloc] initWithCapacity:0];
    [dataOut appendBytes:(const void *)outputBuffer length:outputBytes];

    [data setMOutput:dataOut];
}
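
In case the calling convention matters: the buffer I pass in is always a heap-allocated copy that the method is free to release and replace, roughly like this (the names below are placeholders for the post, not my real variables, and the processor/format setup is simplified):

Code:
// Hypothetical call site, only to show the in/out pointer convention.
AudioProcessor *processor = [[AudioProcessor alloc] init]; // placeholder; my real setup also configures the formats

int bufferSize = 1024;                                     // one PCM buffer from the recorder
unsigned char *buffer = (unsigned char *)malloc(bufferSize);
memset(buffer, 0, bufferSize);                             // stand-in for real 16-bit samples

[processor convertBuffer:&buffer size:&bufferSize];        // frees *buffer and replaces it

// buffer now points at the iLBC data and bufferSize is its length
NSLog(@"iLBC bytes: %d", bufferSize);
free(buffer);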

Code for decompression:

Code:
-(void)convertBufferBack:(unsigned char **)stream size:(int *)streamSize {
    // create an audio converter
    AudioConverterRef audioConverter1;
    //    outputDescription = audioFormat;

    OSStatus acCreationResult = AudioConverterNew(&outputDescription, &audioFormat, &audioConverter1);
    if (!audioConverter1)
    {
        // bail out
        free(*stream);
        *streamSize = 0;
        *stream = (unsigned char *)malloc(0);
        return;
    }

    // calculate number of bytes required for output of input stream.
    // allocate buffer of adequate size.
//    UInt32 outputBytes = audioFormat.mFramesPerPacket * audioFormat.mBytesPerFrame;
    UInt32 size = sizeof(UInt32);
    UInt32 outputBytes;
    AudioConverterGetProperty(audioConverter1, kAudioConverterPropertyMaximumOutputPacketSize, &size, &outputBytes); // this returns 2 bytes

    unsigned char *outputBuffer = (unsigned char *)malloc(outputBytes);
    memset(outputBuffer, 0, outputBytes);

    // describe input data we'll pass into converter
    AudioBuffer inputBuffer;
    inputBuffer.mNumberChannels = outputDescription.mChannelsPerFrame;
    inputBuffer.mDataByteSize = *streamSize;
    inputBuffer.mData = *stream;

    // describe output data buffers into which we can receive data.
    AudioBufferList outputBufferList;
    outputBufferList.mNumberBuffers = 1;
    outputBufferList.mBuffers[0].mNumberChannels = audioFormat.mChannelsPerFrame;
    outputBufferList.mBuffers[0].mDataByteSize = outputBytes;
    outputBufferList.mBuffers[0].mData = outputBuffer;

    // set output data packet size
    UInt32 outputDataPacketSize = outputBytes / audioFormat.mBytesPerPacket;

    // fill class members with data that we'll pass into
    // the InputDataProc
    _converter_currentBuffer = &inputBuffer;
    _converter_currentInputDescription = outputDescription;

    // convert
    OSStatus result = AudioConverterFillComplexBuffer(audioConverter1,                /* AudioConverterRef inAudioConverter */
                                                      _converterComplexInputDataProc, /* AudioConverterComplexInputDataProc inInputDataProc */
                                                      self,                           /* void *inInputDataProcUserData */
                                                      &outputDataPacketSize,          /* UInt32 *ioOutputDataPacketSize */
                                                      &outputBufferList,              /* AudioBufferList *outOutputData */
                                                      NULL                            /* AudioStreamPacketDescription *outPacketDescription */
                                                      );

    NSLog(@"Before %d after %d", *streamSize, (unsigned int)outputBytes);

    // change "stream" to describe our output buffer.
    // even if an error occurred, we'd rather have silence than unconverted audio.
    free(*stream);
    *stream = outputBuffer;
    *streamSize = outputBytes;

    // dispose of the audio converter
    AudioConverterDispose(audioConverter1);

    Data *data = [Data sharedData];

    // copy incoming audio data to the audio buffer
    //intFromBuffer = audioBufferList->mBuffers[0].mData;
    NSMutableData *dataOut = [[NSMutableData alloc] initWithCapacity:0];
    [dataOut appendBytes:(const void *)outputBuffer length:outputBytes];

    [data setMOutput:dataOut];
}
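
One thing I am not sure about in the decompression path is the output buffer sizing: for the PCM output format a packet is a single 2-byte frame, so kAudioConverterPropertyMaximumOutputPacketSize giving me 2 may simply be the size of one output packet rather than of the whole decoded buffer. Should I instead be sizing the buffer from the number of iLBC packets I feed in, something like this untested sketch?

Code:
// Untested sketch: size the PCM output from the number of input iLBC packets
// instead of from kAudioConverterPropertyMaximumOutputPacketSize.
UInt32 inputPacketCount     = *streamSize / outputDescription.mBytesPerPacket;       // 38-byte iLBC packets
UInt32 outputFrames         = inputPacketCount * outputDescription.mFramesPerPacket; // 160 PCM frames per packet
UInt32 outputBytes          = outputFrames * audioFormat.mBytesPerFrame;             // 2 bytes per 16-bit mono frame
UInt32 outputDataPacketSize = outputFrames;                                          // for LPCM, 1 packet == 1 frame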

My callback for AudioConverterFillComplexBuffer:

Code:
OSStatus _converterComplexInputDataProc(AudioConverterRef inAudioConverter,
                                        UInt32 *ioNumberDataPackets,
                                        AudioBufferList *ioData,
                                        AudioStreamPacketDescription **ioDataPacketDescription,
                                        void *inUserData)
{
    AudioProcessor *conv = (AudioProcessor *)inUserData;
    ioData->mNumberBuffers = 1;
    ioData->mBuffers[0] = *(conv->_converter_currentBuffer);

    if (conv->_converter_currentInputDescription.mBytesPerPacket)
        *ioNumberDataPackets = ioData->mBuffers[0].mDataByteSize / conv->_converter_currentInputDescription.mBytesPerPacket;
    else
        *ioNumberDataPackets = ioData->mBuffers[0].mDataByteSize;

    return 0;
}


And finally, the format descriptions:

Code:
memset(&audioFormat, 0, sizeof(AudioStreamBasicDescription));
    audioFormat.mSampleRate       = 44100;
    audioFormat.mFormatID         = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags      = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;
    audioFormat.mFramesPerPacket  = 1;
    audioFormat.mChannelsPerFrame = 1;
    audioFormat.mBitsPerChannel   = 16;
    audioFormat.mBytesPerPacket   = 2;
    audioFormat.mBytesPerFrame    = 2;

    memset(&outputDescription, 0, sizeof(AudioStreamBasicDescription));
    outputDescription.mFormatID         = kAudioFormatiLBC;
    outputDescription.mFormatFlags      = 0;
    outputDescription.mSampleRate       = 8000;
    outputDescription.mBitsPerChannel   = 0;
    outputDescription.mChannelsPerFrame = 1;
    outputDescription.mBytesPerFrame    = 0;
    outputDescription.mFramesPerPacket  = 160;
    outputDescription.mBytesPerPacket   = 38;
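
In case my hand-filled iLBC fields are the problem: as I understand it, Core Audio can fill in the rest of an AudioStreamBasicDescription itself from just the format ID, sample rate and channel count via kAudioFormatProperty_FormatInfo. Something like the untested sketch below, which I could use to sanity-check the values above:

Code:
// Untested sketch: ask Core Audio for the canonical iLBC description
// and compare it against the hand-filled outputDescription above.
AudioStreamBasicDescription ilbcDescription = {0};
ilbcDescription.mFormatID         = kAudioFormatiLBC;
ilbcDescription.mSampleRate       = 8000;
ilbcDescription.mChannelsPerFrame = 1;

UInt32 descriptionSize = sizeof(ilbcDescription);
OSStatus status = AudioFormatGetProperty(kAudioFormatProperty_FormatInfo,
                                         0, NULL,
                                         &descriptionSize, &ilbcDescription);
NSLog(@"FormatInfo status %d: %u frames/packet, %u bytes/packet",
      (int)status,
      (unsigned int)ilbcDescription.mFramesPerPacket,
      (unsigned int)ilbcDescription.mBytesPerPacket);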

Could you please tell me what I'm doing wrong? I need it badly.

Thank you in advance.
 