Remote IO audio is very noisy

Discussion in 'iOS Programming' started by techgentsia, Dec 14, 2011.

  1. techgentsia, Dec 14, 2011
    Last edited by a moderator: Dec 15, 2011

    techgentsia macrumors newbie

    Joined:
    Jul 19, 2011
    #1
    Hi. I am new to Core Audio and Remote IO. I need audio data in 320-byte chunks, which I encode and send. Here is what I have done:

    Code:
    AudioComponentDescription desc;
    
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_RemoteIO;
    desc.componentFlags = 0;
    desc.componentFlagsMask = 0;
    desc.componentManufacturer = 0;
    
    
    // Get component
    AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);
    
    
    // Get audio units
    AudioComponentInstanceNew(inputComponent, &audioUnit);
    
    
    
    // Enable IO for recording
    UInt32 flag = 1;
    AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, kInputBus, &flag, sizeof(flag));
    
    // Enable IO for playback
    AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, kOutputBus, &flag, sizeof(flag));
    
    UInt32 shouldAllocateBuffer = 1;
    AudioUnitSetProperty(audioUnit, kAudioUnitProperty_ShouldAllocateBuffer, kAudioUnitScope_Global, 1, &shouldAllocateBuffer, sizeof(shouldAllocateBuffer));
    
    // Describe format
    audioFormat.mSampleRate = 8000.00;
    audioFormat.mFormatID = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger|kAudioFormatFlagIsPacked;
    audioFormat.mFramesPerPacket = 1;
    audioFormat.mChannelsPerFrame = 1;
    audioFormat.mBitsPerChannel = 16;
    audioFormat.mBytesPerPacket = 2;
    audioFormat.mBytesPerFrame = 2;
    
    
    // Apply format
    AudioUnitSetProperty(audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &audioFormat, sizeof(audioFormat));
    
    AudioUnitSetProperty(audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &audioFormat, sizeof(audioFormat));
    
    
    // Set input callback
    AURenderCallbackStruct callbackStruct;
    callbackStruct.inputProc = recordingCallback;
    callbackStruct.inputProcRefCon = self;
    AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_SetInputCallback, kAudioUnitScope_Global, 1, &callbackStruct, sizeof(callbackStruct));
    
    
    
    // Set output callback
    callbackStruct.inputProc = playbackCallback;
    callbackStruct.inputProcRefCon = self;
    AudioUnitSetProperty(audioUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Global, 0, &callbackStruct, sizeof(callbackStruct));
    
    
    // Initialise
    AudioUnitInitialize(audioUnit);
    
    AudioOutputUnitStart(audioUnit);
    With these settings, I get 186 frames in the callback method when tried on a device.
    I have allocated my buffer:

    Code:
    bufferList = (AudioBufferList*) malloc(sizeof(AudioBufferList));
    bufferList->mNumberBuffers = 1; //mono input
    for(UInt32 i = 0; i < bufferList->mNumberBuffers; i++)
    {
        bufferList->mBuffers[i].mNumberChannels = 1;
        bufferList->mBuffers[i].mDataByteSize = 2 * 186;
        bufferList->mBuffers[i].mData = malloc(bufferList->mBuffers[i].mDataByteSize);
    }
    From these 372 (2 x 186) bytes in the callback, I took 320 bytes of data and used them as per my requirement. Something like:
    memcpy(data, bufferList->mBuffers[0].mData, 320);

    It is working, but very noisy. :(

    Someone please help me. I am in big trouble.
     
  2. firewood macrumors 604

    Joined:
    Jul 29, 2003
    Location:
    Silicon Valley
    #2
    RemoteIO is not noisy. You're probably just using it wrong. You're not even checking your return values and error codes. Start there.
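firewood's advice here (check every return value) can be sketched with a small helper. This is only an illustrative pattern, not an Apple API: `CheckError` is a made-up name, and the `OSStatus`/`noErr` typedefs below are local stand-ins so the sketch compiles anywhere; on iOS they come from `<AudioToolbox/AudioToolbox.h>`.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-ins so this sketch compiles outside Xcode; on iOS these are
   provided by <AudioToolbox/AudioToolbox.h>. */
typedef int32_t OSStatus;
enum { noErr = 0 };

/* Illustrative helper: bail out with a readable message on any failure.
   Wrap every Audio Unit call with it, e.g.
     CheckError(AudioUnitInitialize(audioUnit), "AudioUnitInitialize");
   instead of discarding the OSStatus. */
static void CheckError(OSStatus err, const char *operation)
{
    if (err == noErr)
        return;
    fprintf(stderr, "Error: %s failed (OSStatus %d)\n", operation, (int)err);
    exit(1);
}
```

A wrong stream format or a failed `AudioUnitSetProperty` will then stop the program at the offending call rather than producing mysterious noise later.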
     
  3. chown33, Dec 14, 2011
    Last edited: Dec 14, 2011

    chown33 macrumors 604

    Joined:
    Aug 9, 2009
    #3
    If you take 372 bytes of sampled data, and simply ignore 52 bytes of it (372-320), then you shouldn't be surprised if the resulting audio signal is "noisy". You're ignoring data. Simply removing it as if it doesn't matter. Of course the result is noisy.

    Why would you expect anything else, unless you're ignorant of sampled audio signals? You can't just le___ out pieces of ____ and ______ to have com_____ly error-___ results. Only ___ls think _____one can ____ their minds.

    If you can fill in every missing word in the above paragraph, consider how much side-channel information I conveyed by accurately providing one _ for each missing letter. Imagine how much more difficult an accurate reconstruction would be if I left out even the _s.

    Please explain why 320 bytes is so important that you simply ignore any data beyond 320 bytes.

    "I need data of size 320 bytes" is a meaningless and silly restriction without a context and an explanation. It could mean 320 8-bit samples. It could mean 320 samples encoded in mu-law or A-law encoding. It could mean that you're expected to compress your original audio signal down to 320 bytes, using some compression technique. All anyone can tell is that you're obsessed with 320 bytes. Why is not at all clear.

    Simply chopping 26 samples (52 bytes) off every 186 samples (372 bytes) can't possibly work. No compression or encoding system can accommodate that kind of ignorant and misapplied brute force.
     
  4. techgentsia, Dec 14, 2011
    Last edited by a moderator: Dec 15, 2011

    techgentsia thread starter macrumors newbie

    Joined:
    Jul 19, 2011
    #4
    Thanks for your reply, chown. Actually, I don't mean that I just take the first 320 bytes from each 372. I am taking every byte (but in chunks of 320).
    This is what i wrote for that:

    Code:
    int y = 0;
    int x = 320;
    short data[320];
    short temp[320];
    
    static OSStatus recordingCallback(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData) {
    
        AudioUnitRender(audioUnit, ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, bufferList);
    
        bufferList->mNumberBuffers = 1;
        bufferList->mBuffers[0].mDataByteSize = 2 * inNumberFrames;
        bufferList->mBuffers[0].mNumberChannels = 1;
    
        int db = bufferList->mBuffers[0].mDataByteSize;
    
        if(y > x)
        {
            memcpy(data, bufferList->mBuffers[0].mData, x);
            memcpy(temp, bufferList->mBuffers[0].mData, y-x);
            y = y - x;
        }
        else
        {
            memcpy(temp + y, bufferList->mBuffers[0].mData, x-y);
            memcpy(data, temp, x);
            y = db - (x-y);
            memcpy(temp, bufferList->mBuffers[0].mData, y);
        }
        //i will encode and send "data".
        return noErr;
    }
    The relevance of my 320 bytes is that I am using the Speex encoder, which needs 320 bytes (160 shorts) per frame.

    Is there something wrong in my callback?
     
  5. chown33, Dec 15, 2011
    Last edited: Dec 15, 2011

    chown33 macrumors 604

    Joined:
    Aug 9, 2009
    #5
    I'm not going to analyze your code for you. You wrote it, so you should be able to understand it. If you don't understand it, then you should simplify it until you do understand it.

    I suggest taking a careful look at the buffer management, especially the management of the indexes for temp and data. Walk through the algorithm manually. Take note of the indexes. Or run with the debugger and look at your indexes. If nothing shows the problem, then write a test program that feeds test data of different sizes, say 372 bytes, to your buffer-handling code.

    EDIT
    In this line of code:
    Code:
        memcpy(temp + y, bufferList->mBuffers[0].mData, x-y);
    
    the expression temp + y will perform pointer arithmetic using a short pointer. That means y is scaled by sizeof(short). Other calculations using y appear to be byte counts, so the scaling in this case is almost certainly incorrect.

    Pointer arithmetic scaling is normal ordinary C, not something special for Objective-C.
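That scaling rule can be seen in a few lines of plain C. This is a stand-alone illustration of chown33's point; `buf` and `byte_offset` are made-up names, not anything from the thread's code or from Core Audio.

```c
#include <stddef.h>

/* Adding n to a short* advances n * sizeof(short) BYTES, not n bytes.
   So using a byte count y in `temp + y` (temp being a short*) lands
   sizeof(short) * y bytes into the buffer. */
static short buf[8];

/* Distance in bytes between buf and buf + n, when n is applied to a
   short pointer. */
static ptrdiff_t byte_offset(int n)
{
    return (char *)(buf + n) - (char *)buf;
}
```

If y really is a byte count, the arithmetic has to happen on a byte pointer, e.g. `(char *)temp + y`, or else y must be kept in units of shorts throughout.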

    I recommend writing a test case, then stepping through it with the debugger, making sure all your pointers and indexes are correct. For test data, I recommend an increasing ramp waveform: it's simple to generate with a plain counter, and it makes mistakes with pointers and indexes easy to spot, because any error means the data is no longer a simple series of increasing numbers.
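For reference, the repackaging this thread is wrestling with (variable-sized callback chunks in, fixed 320-byte Speex frames out) can be done with a small byte-counting FIFO that never drops data. This is only a sketch under the thread's assumptions (16-bit mono PCM); `FRAME_BYTES`, `fifo`, and `feed_samples` are illustrative names, not part of any API. Counting everything in bytes also sidesteps the short-pointer scaling trap discussed in post #5.

```c
#include <stdint.h>
#include <string.h>

/* One Speex narrowband frame: 160 16-bit samples = 320 bytes. */
#define FRAME_BYTES 320

static uint8_t fifo[2 * FRAME_BYTES]; /* leftover plus one incoming chunk */
static int     fifoLen = 0;           /* bytes currently buffered */

/* Append nbytes of raw audio from src (one render callback's worth).
   Every complete 320-byte frame is copied consecutively into out;
   returns how many frames were produced. Leftover bytes stay in the
   FIFO for the next call, so nothing is ever discarded. */
static int feed_samples(const uint8_t *src, int nbytes, uint8_t *out)
{
    int emitted = 0;
    while (nbytes > 0) {
        int space = (int)sizeof(fifo) - fifoLen;
        int take  = nbytes < space ? nbytes : space;
        memcpy(fifo + fifoLen, src, take);
        fifoLen += take;
        src     += take;
        nbytes  -= take;

        while (fifoLen >= FRAME_BYTES) {          /* drain full frames */
            memcpy(out + emitted * FRAME_BYTES, fifo, FRAME_BYTES);
            emitted++;
            fifoLen -= FRAME_BYTES;
            memmove(fifo, fifo + FRAME_BYTES, fifoLen);
        }
    }
    return emitted;
}
```

With 186-frame callbacks (372 bytes), most calls yield one frame and a growing remainder, and roughly every seventh call yields two, which is exactly the behavior a correct accumulator should show.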
     
  6. firewood macrumors 604

    Joined:
    Jul 29, 2003
    Location:
    Silicon Valley
    #6
    I don't see where in your callback you are ever copying or otherwise reading the last 52 bytes of the audio data buffer at mData.

    Can you point out which line?
     
