And you should set up the audio session when your app starts.
The important references are listed below.
However, what is the format of the audio you want to decode? AudioStreamPacketDescription
is important if the audio has a variable number of frames per packet (VBR).
Otherwise, with one frame per packet, AudioStreamPacketDescription
is not significant and can be left NULL.
What you do next is:
- Set up the audio session.
- Get raw audio frames using the decoder.
- Put the frames into the audio buffers.
Instead of pushing data yourself, have the system call you back to fill each empty buffer.
iphone - AudioQueue ate my buffer (first 15 milliseconds of it) - Stack Overflow
I'm not familiar with the iPhone audio APIs, but it appears to be similar to other ones where you generally queue up more than one buffer. This way, when the system is finished processing the first buffer, it can immediately start processing the next buffer (since it's already been queued up) while the completion callback on the first buffer is being executed.
iphone - how to encode_decode speex with AudioQueue in ios - Stack Overflow
iphone - Using Audio Queue Services to play PCM data over a socket connection - Stack Overflow
iphone - AudioQueue callback in simulator but not on device - Stack Overflow
AudioSessionInitialize(NULL, NULL, NULL, NULL);

UInt32 category = kAudioSessionCategory_PlayAndRecord;
OSStatus error = AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
                                         sizeof(category), &category);
if (error) printf("Couldn't set audio category!\n");