1. Driving the speaker with AudioUnit

  Based on Apple's DefaultOutputUnit sample, modified so that it produces audio within the speaker's playable range and audible to the human ear; the official sample as shipped produces no audible sound. This example shows how to drive the default speaker.

  When producing output, AudioOutputUnitStart is non-blocking and must run under an event loop:

CFRunLoopRunInMode or CFRunLoopRun

  The speaker's render callback tells you how much data it wants; the sine-wave samples are generated with the following formula:

Float32 nextFloat = sin(j / cycleLength * (M_PI * 2.0)) * amplitude;

  Configuration parameters: Amplitude should be between 0 and 1. The speaker supports 16-bit and 32-bit samples; the bit depth has a slight effect on sound quality, though my ears cannot tell the difference:

AuContext AuCtx =
{
    .WhichFormat       = kAs16Bit,  // kAsFloat,
    .SampleRate        = 44100,
    .NumChannels       = 2,
    .SinWaveFrameCount = 0,
    .Amplitude         = 0.25,
    .ToneFrequency     = 128000,
    .sampleNextPrinted = 0,
};

  Pay attention to how the per-channel data is copied: buffer 0 is the left channel and buffer 1 is the right channel, again laid out as an AudioBufferList.
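
  As a rough illustration only (not the article's project code), here is a minimal sketch of such a render callback, assuming a non-interleaved Float32 stream format; the SineContext struct and its field names are made up for the example. It fills every buffer (channel) of the AudioBufferList using the sine formula above.

#import <AudioToolbox/AudioToolbox.h>
#include <math.h>

// Hypothetical per-unit state for the sine generator (illustrative names).
typedef struct SineContext {
    Float64 sampleRate;     // e.g. 44100
    Float64 frequency;      // tone frequency in Hz
    Float32 amplitude;      // 0.0 ~ 1.0
    Float64 phase;          // running phase, in frames
} SineContext;

// Render callback: the output unit asks for inNumberFrames frames per call.
static OSStatus SineRenderProc(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData)
{
    SineContext *ctx = (SineContext *)inRefCon;
    Float64 cycleLength = ctx->sampleRate / ctx->frequency;   // frames per full sine cycle
    Float64 phase = ctx->phase;

    for (UInt32 frame = 0; frame < inNumberFrames; ++frame) {
        Float32 sample = (Float32)(sin(phase / cycleLength * (M_PI * 2.0)) * ctx->amplitude);
        // Non-interleaved: buffer 0 is the left channel, buffer 1 the right channel.
        for (UInt32 ch = 0; ch < ioData->mNumberBuffers; ++ch) {
            ((Float32 *)ioData->mBuffers[ch].mData)[frame] = sample;
        }
        phase += 1.0;
        while (phase >= cycleLength) phase -= cycleLength;     // keep the phase bounded
    }
    ctx->phase = phase;
    return noErr;
}

  Because AudioOutputUnitStart returns immediately, the main thread still has to keep a run loop alive afterwards, for example with CFRunLoopRunInMode(kCFRunLoopDefaultMode, seconds, false).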

    


1.1 Use cases

  Testing the speaker;

  Integrates easily with an AUGraph for mixing and similar processing, then plays the result through the output audio unit;

  Varying individual parameters lets you test speaker output and quality under specific settings.

 

2. Driving the microphone with AudioUnit

This example uses AUHAL to capture audio data and encodes it to AAC through AudioToolbox's AudioConverterFillComplexBuffer interface, downsampling before encoding because the hardware captures at 192000 Hz while AAC is normally 44100 Hz. The AAC is then packaged into an FLV stream and sent to an SRS server with librtmp; played back with ffplay, the sound quality comes through perfectly.

Likewise, AUHAL's buffers are affected by the power-saving mode; these points were already covered in the VoIP article.

  One more note: the AUHAL layer is covered by plenty of articles online; the corresponding layer on the video side is called the DAL, and the related sample code is MediaIO.
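
  As a hedged sketch of the PCM-to-AAC step only (the FeedPCM struct and all names below are hypothetical, and the real project additionally handles downsampling, FLV packaging, and RTMP), this is roughly how AudioConverterFillComplexBuffer pulls source PCM through an input callback and hands back one AAC packet at a time:

#import <AudioToolbox/AudioToolbox.h>

// Hypothetical carrier for one chunk of LPCM captured by AUHAL.
typedef struct FeedPCM {
    AudioBufferList *pcm;          // interleaved LPCM from the input callback
    UInt32           framesLeft;   // frames not yet handed to the converter
    UInt32           bytesPerFrame;
} FeedPCM;

// Input proc: the converter calls this whenever it needs more source packets.
static OSStatus EncoderInputProc(AudioConverterRef inConverter,
                                 UInt32 *ioNumberDataPackets,
                                 AudioBufferList *ioData,
                                 AudioStreamPacketDescription **outPacketDesc,
                                 void *inUserData)
{
    FeedPCM *feed = (FeedPCM *)inUserData;
    if (feed->framesLeft == 0) {               // nothing left: stop this Fill call
        *ioNumberDataPackets = 0;
        return -1;                             // any non-zero status ends the pull
    }
    if (*ioNumberDataPackets > feed->framesLeft)
        *ioNumberDataPackets = feed->framesLeft;

    ioData->mNumberBuffers = 1;
    ioData->mBuffers[0].mData = feed->pcm->mBuffers[0].mData;
    ioData->mBuffers[0].mDataByteSize = *ioNumberDataPackets * feed->bytesPerFrame;
    ioData->mBuffers[0].mNumberChannels = feed->pcm->mBuffers[0].mNumberChannels;
    feed->framesLeft = 0;                      // hand over the whole buffer in one go
    if (outPacketDesc) *outPacketDesc = NULL;  // LPCM needs no packet descriptions
    return noErr;
}

// One encode pass: convert the captured PCM into a single AAC packet.
static void EncodeOnce(AudioConverterRef converter, FeedPCM *feed,
                       void *aacBuffer, UInt32 aacBufferSize)
{
    AudioStreamPacketDescription packetDesc = {0};
    AudioBufferList outList = {0};
    outList.mNumberBuffers = 1;
    outList.mBuffers[0].mData = aacBuffer;
    outList.mBuffers[0].mDataByteSize = aacBufferSize;

    UInt32 outPackets = 1;                     // request one AAC packet (1024 frames) at a time
    OSStatus err = AudioConverterFillComplexBuffer(converter, EncoderInputProc, feed,
                                                   &outPackets, &outList, &packetDesc);
    if (err == noErr && outPackets > 0) {
        // outList.mBuffers[0] now holds one AAC packet; wrap it into FLV/RTMP here.
    }
}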

    


3. Recording an AAC file from AVCaptureSession

  Apple provides simpler interfaces for audio recording, but in some scenarios the data is captured through this API, so a matching approach is needed to record it to a file.

  The main interfaces used are the ExtAudioFile family.

  To use the official demo well you have to build suitable buffers to hold the data and understand every detail; all of this is applied in the RTMP streaming project.

  The project is a very good example, but you need a clear understanding of the various audio formats to use and modify it with confidence.
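
  For orientation, here is a minimal sketch of the ExtAudioFile write path, assuming the client hands in LPCM and wants AAC inside an .m4a on disk; the function names and arguments are illustrative, not the project's actual code:

#import <AudioToolbox/AudioToolbox.h>

// Sketch: create an AAC file and write LPCM through ExtAudioFile, which converts on the fly.
static ExtAudioFileRef CreateAACWriter(CFURLRef url,
                                       const AudioStreamBasicDescription *clientPCMFormat)
{
    // Destination (on-disk) format: AAC; let Core Audio fill in the remaining fields.
    AudioStreamBasicDescription fileFormat = {0};
    fileFormat.mFormatID          = kAudioFormatMPEG4AAC;
    fileFormat.mSampleRate        = clientPCMFormat->mSampleRate;
    fileFormat.mChannelsPerFrame  = clientPCMFormat->mChannelsPerFrame;
    UInt32 size = sizeof(fileFormat);
    AudioFormatGetProperty(kAudioFormatProperty_FormatInfo, 0, NULL, &size, &fileFormat);

    ExtAudioFileRef writer = NULL;
    ExtAudioFileCreateWithURL(url, kAudioFileM4AType, &fileFormat,
                              NULL, kAudioFileFlags_EraseFile, &writer);
    // Client (in-memory) format: the LPCM we will hand to ExtAudioFileWrite.
    ExtAudioFileSetProperty(writer, kExtAudioFileProperty_ClientDataFormat,
                            sizeof(*clientPCMFormat), clientPCMFormat);
    return writer;
}

// Per capture callback: hand one AudioBufferList of PCM frames to the writer.
static void WritePCM(ExtAudioFileRef writer, UInt32 frames, AudioBufferList *pcm)
{
    ExtAudioFileWrite(writer, frames, pcm);   // ExtAudioFileWriteAsync also works from callbacks
}

  The key point is kExtAudioFileProperty_ClientDataFormat: the client format is what you write, the file format is what lands on disk, and ExtAudioFile converts between them.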

  


4. Listing all audio and video devices

  This one is fairly basic: it retrieves the device types, counts, corresponding IDs, channel counts, sample rates, and so on.

  Note that the channel count is obtained by sampling one frame of data and computing it from the AudioBufferList. On some devices, parameters such as the sample rate and channel count can be changed, but you must first query whether the hardware supports the change before setting it.
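
  A minimal sketch of that enumeration (error handling omitted; the helper name is made up):

#import <CoreAudio/CoreAudio.h>
#include <stdio.h>
#include <stdlib.h>

// Sketch: list every audio device with its ID, name, and output channel count.
static void ListAudioDevices(void)
{
    AudioObjectPropertyAddress addr = {
        kAudioHardwarePropertyDevices,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };
    UInt32 size = 0;
    AudioObjectGetPropertyDataSize(kAudioObjectSystemObject, &addr, 0, NULL, &size);
    UInt32 count = size / sizeof(AudioDeviceID);
    AudioDeviceID *devices = (AudioDeviceID *)malloc(size);
    AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr, 0, NULL, &size, devices);

    for (UInt32 i = 0; i < count; ++i) {
        // Device name
        CFStringRef name = NULL;
        UInt32 nameSize = sizeof(name);
        addr.mSelector = kAudioObjectPropertyName;
        addr.mScope    = kAudioObjectPropertyScopeGlobal;
        AudioObjectGetPropertyData(devices[i], &addr, 0, NULL, &nameSize, &name);

        // Output channel count: sum the channels of every buffer in the stream configuration
        addr.mSelector = kAudioDevicePropertyStreamConfiguration;
        addr.mScope    = kAudioDevicePropertyScopeOutput;
        UInt32 cfgSize = 0;
        AudioObjectGetPropertyDataSize(devices[i], &addr, 0, NULL, &cfgSize);
        AudioBufferList *cfg = (AudioBufferList *)malloc(cfgSize);
        AudioObjectGetPropertyData(devices[i], &addr, 0, NULL, &cfgSize, cfg);
        UInt32 channels = 0;
        for (UInt32 b = 0; b < cfg->mNumberBuffers; ++b)
            channels += cfg->mBuffers[b].mNumberChannels;

        char cname[256] = {0};
        if (name) CFStringGetCString(name, cname, sizeof(cname), kCFStringEncodingUTF8);
        printf("device %u: id=%u  name=%s  output channels=%u\n", i, devices[i], cname, channels);

        if (name) CFRelease(name);
        free(cfg);
    }
    free(devices);
}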

  


5. Transcoding audio files

This mainly involves CBR (constant bit rate), VBR (variable bit rate), and ABR (average bit rate) modes, of which the first two are the most common. Apple designates PCM as the intermediate format: any encoding Apple supports can be converted to PCM, and PCM can be converted to any supported encoding.

Transcoding demos exist for both Mac and iOS. Also note that a codec backed by hardware encoding is exclusive; for example, if AAC is hardware-encoded, you cannot encode on multiple threads at the same time.

This program is based on the official demo, simplified and refactored. The approach is as follows:

Convert PCM to AAC and print logs while transcoding to see which branches execute and which do not. That makes the format-related logic clear, and in the end you know exactly which code has to run to complete a VBR-to-CBR conversion, and vice versa.

There are not many transcoding code samples online and the demand is probably small; the main use is PCM to AAC, other formats rarely come up, so I will not describe them in detail. This part served as the reference for the RTMP streaming program.
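
For reference, a hedged sketch of the converter setup that matters for the CBR/VBR distinction when encoding PCM to AAC; the helper name is made up, but only real Core Audio properties are used:

#import <AudioToolbox/AudioToolbox.h>

// Sketch: build a PCM->AAC converter and set the target bit rate.
static AudioConverterRef MakeAACEncoder(const AudioStreamBasicDescription *pcm)
{
    AudioStreamBasicDescription aac = {0};
    aac.mFormatID         = kAudioFormatMPEG4AAC;
    aac.mSampleRate       = 44100;
    aac.mChannelsPerFrame = pcm->mChannelsPerFrame;
    UInt32 size = sizeof(aac);
    AudioFormatGetProperty(kAudioFormatProperty_FormatInfo, 0, NULL, &size, &aac);

    AudioConverterRef converter = NULL;
    AudioConverterNew(pcm, &aac, &converter);

    // For AAC the ASBD itself is "VBR shaped": mBytesPerPacket == 0 means packet sizes vary.
    // The target bit rate is set on the converter rather than in the ASBD.
    UInt32 bitRate = 128000;    // bits per second
    AudioConverterSetProperty(converter, kAudioConverterEncodeBitRate,
                              sizeof(bitRate), &bitRate);
    return converter;
}

The mBytesPerPacket == 0 test is the same VBR check the playback code in section 8 uses to decide whether packet descriptions are needed.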

6. Playing an audio file with AUGraph

  Let me just post the code; a single file does the job:


// clang main_AUG.m -framework AVFoundation -framework CoreAudio -framework AudioToolbox -framework CoreFoundation -o app

#import <AudioToolbox/AudioToolbox.h>

#define kInputFileLocation CFSTR("/Users/alex/Desktop/Audio/qianzhihe.m4a")

#pragma mark user-data struct
typedef struct MyAUGraphPlayer{
    AudioStreamBasicDescription inputFormat;
    AudioFileID inputFile;
    
    AUGraph graph;
    AudioUnit fileAU;
}MyAUGraphPlayer;

#pragma mark utility functions
static void CheckError(OSStatus err, const char* operation){
    if (err == noErr)return;
    
    char errorString[20];
    *(UInt32 *)(errorString+1) = CFSwapInt32HostToBig(err);
    if(isprint(errorString[1]) && isprint(errorString[2])
       && isprint(errorString[3]) && isprint(errorString[4])){
        errorString[0] = errorString[5] = '\'';
        errorString[6] = '\0';
    }
    else{
        sprintf(errorString, "%d", (int)err);
    }
    fprintf(stderr, "Error: %s (%s)\n",operation, errorString);
    exit(1);
}

void CreateMyAUGraph(MyAUGraphPlayer *player){
    // 0. Create the graph
    CheckError(NewAUGraph(&player->graph), "NewAUGraph failed");
    // 1.1 Describe the output node (speaker)
    AudioComponentDescription outputcd = {0};
    outputcd.componentType = kAudioUnitType_Output;
    outputcd.componentSubType = kAudioUnitSubType_DefaultOutput;
    outputcd.componentManufacturer = kAudioUnitManufacturer_Apple;
    // 1.2 Add the output node to the graph
    AUNode outputNode;
    CheckError(AUGraphAddNode(player->graph,
                              &outputcd,
                              &outputNode),
               "AUGraphAddNode output node failed");
    // 2.1 Describe the input node (audio file player)
    AudioComponentDescription fileplayercd = {0};
    fileplayercd.componentType = kAudioUnitType_Generator;
    fileplayercd.componentSubType = kAudioUnitSubType_AudioFilePlayer;
    fileplayercd.componentManufacturer = kAudioUnitManufacturer_Apple;
    // 2.2 Add the file player node to the graph
    AUNode fileplayerNode;
    CheckError(AUGraphAddNode(player->graph,
                              &fileplayercd,
                              &fileplayerNode),
               "AUGraphAddNode File Player failed");
    // 3. Open the graph
    CheckError(AUGraphOpen(player->graph),
               "AUGraphOpen failed");
    
    CheckError(AUGraphNodeInfo(player->graph, fileplayerNode, NULL, &player->fileAU),
               "AUGraphNodeInfo failed");
    // 4. Connect the file player node to the output node
    CheckError(AUGraphConnectNodeInput(player->graph,
                                       fileplayerNode,
                                       0,
                                       outputNode,
                                       0),
               "Graph connect failed");
    // 5. Initialize the graph
    CheckError(AUGraphInitialize(player->graph),
               "AUGraphInitialize failed");
}

Float64 PrepareFileAU(MyAUGraphPlayer *player){
    // 1. Point the file player AU at the input file
    CheckError(AudioUnitSetProperty(player->fileAU,
                                    kAudioUnitProperty_ScheduledFileIDs,
                                    kAudioUnitScope_Global,
                                    0,
                                    &player->inputFile,
                                    sizeof(player->inputFile)),
               "AudioUnitSetProperty failed");
    // 2. Get the file's total packet count
    UInt64 nPackets;
    UInt32 propSize = sizeof(nPackets);
    CheckError(AudioFileGetProperty(player->inputFile,
                                    kAudioFilePropertyAudioDataPacketCount,
                                    &propSize,
                                    &nPackets),
               "AudioFileGetProperty");
    // 3. Set up the scheduled playback region
    ScheduledAudioFileRegion rgn = {0};
    rgn.mTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;
    rgn.mTimeStamp.mSampleTime = 0;
    rgn.mCompletionProc = NULL;
    rgn.mCompletionProcUserData = NULL;     // completion callback user data; handled automatically here
    rgn.mAudioFile = player->inputFile;     // the file to schedule (works like a file list)
    rgn.mLoopCount  = 1;                    // number of times to loop
    rgn.mStartFrame = 0;                    // starting frame
    rgn.mFramesToPlay = nPackets * (player->inputFormat.mFramesPerPacket);
    
    CheckError(AudioUnitSetProperty(player->fileAU,
                                    kAudioUnitProperty_ScheduledFileRegion,
                                    kAudioUnitScope_Global,
                                    0,
                                    &rgn,
                                    sizeof(rgn)),
               "AudioUnitSetProperty failed");
    // 4. Set the start timestamp
    AudioTimeStamp startTime = {0};
    startTime.mFlags = kAudioTimeStampSampleTimeValid;
    startTime.mSampleTime = -1;
    CheckError(AudioUnitSetProperty(player->fileAU,
                                    kAudioUnitProperty_ScheduleStartTimeStamp,
                                    kAudioUnitScope_Global,
                                    0,
                                    &startTime,
                                    sizeof(startTime)),
               "AudioUnitSetProperty Schedule Start time stamp failed");
    return (nPackets * player->inputFormat.mFramesPerPacket) / player->inputFormat.mSampleRate;
}



#pragma mark main function
int main(int argc, const char * argv[])
{
    // Open the input file
    CFURLRef inputFileURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault,
                                                          kInputFileLocation,
                                                          kCFURLPOSIXPathStyle,
                                                          false);
    MyAUGraphPlayer player = {0};
    CheckError(AudioFileOpenURL(inputFileURL,
                                kAudioFileReadPermission,
                                0,
                                &player.inputFile),
               "Audio File Open failed");
    CFRelease(inputFileURL);
    // Initialize the ASBD from the file
    UInt32 propSize = sizeof(player.inputFormat);
    CheckError(AudioFileGetProperty(player.inputFile,
                                    kAudioFilePropertyDataFormat,
                                    &propSize,
                                    &player.inputFormat),       // ASBD
               "Audio File Get Property failed");
    
    CreateMyAUGraph(&player);                                   // build the AUGraph
    Float64 fileDuration = PrepareFileAU(&player);              // set up the file player AU
    
    CheckError(AUGraphStart(player.graph),                      // start the AUGraph
               "Audio Unit start failed");
    usleep((unsigned int)(fileDuration * 1000.0 * 1000.0));     // the AUGraph runs on its own threads; block the main thread here for the file's duration
    
    
    AUGraphStop(player.graph);
    AUGraphUninitialize(player.graph);
    AUGraphClose(player.graph);
    AudioFileClose(player.inputFile);
    
    return 0;
}


  The compile command is at the top of the file; change the file path near the top to point to a valid audio file.

  An m4a file is just a container for AAC data; on the Mac you can use the afconvert command to transcode audio files or to extract an audio track in this format from an mp4 file.
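
  For example, something along these lines (the paths are placeholders; run afconvert -h to confirm the options on your system) converts a PCM file to AAC in an m4a container:

afconvert -f m4af -d aac input.wav output.m4a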

7. Capturing AAC data with Audio Queue

  No detailed walkthrough here either; if you know AudioUnit, this will not be a problem. Straight to the code:


// clang main.m -framework AVFoundation -framework CoreAudio -framework AudioToolbox -framework CoreFoundation -o app

// Record an AAC audio file with Audio Queue
// A .caf file can hold many different audio formats plus metadata, and the magic cookie is optional there. Saving to an .m4a file requires writing the cookie, which is more trouble.

// Reference:

#import <AudioToolbox/AudioToolbox.h>

#define kNumberRecordBuffers 3

#pragma mark user data struct
typedef struct MyRecorder{
    AudioFileID recordFile;
    SInt64 recordPacket;
    Boolean running;
}MyRecorder;


#pragma mark utility functions
static void CheckError(OSStatus err, const char* operation){
    if (err == noErr)return;
    
    char errorString[20];
    *(UInt32 *)(errorString+1) = CFSwapInt32HostToBig(err);
    if(isprint(errorString[1]) && isprint(errorString[2])
       && isprint(errorString[3]) && isprint(errorString[4])){
        errorString[0] = errorString[5] = '\'';
        errorString[6] = '\0';
    }
    else{
        sprintf(errorString, "%d", (int)err);
    }
    fprintf(stderr, "Error: %s (%s)\n",operation, errorString);
    exit(1);
}

OSStatus MyGetDefaultInputDeviceSampleRate(Float64 *outSampleRate)
{
    OSStatus error = noErr;
    AudioDeviceID deviceID = 0;
    UInt32 propertySize = sizeof(AudioDeviceID);
    
    // 1. Get the hardware sample rate of the microphone; first look up the default input device via an AudioObjectPropertyAddress
    AudioObjectPropertyAddress propertyAddress = {0};
    propertyAddress.mSelector = kAudioHardwarePropertyDefaultInputDevice;
    propertyAddress.mScope    = kAudioObjectPropertyScopeGlobal;
    propertyAddress.mElement  = 0;
    
    error = AudioHardwareServiceGetPropertyData(kAudioObjectSystemObject,
                                         &propertyAddress,
                                         0,
                                         NULL,
                                         &propertySize,
                                         &deviceID);
    if (error)
    {
        return error;
    }
    // 2. Then query the default input device for its nominal sample rate
    propertyAddress.mSelector = kAudioDevicePropertyNominalSampleRate;
    propertyAddress.mScope    = kAudioObjectPropertyScopeGlobal;
    propertyAddress.mElement  = 0;
    propertySize = sizeof(Float64);
    
    error = AudioHardwareServiceGetPropertyData(deviceID,
                                         &propertyAddress,
                                         0,
                                         NULL,
                                         &propertySize,
                                         outSampleRate);
    return error;
}

static void MyCopyEncoderCookieToFile(AudioQueueRef queue, AudioFileID theFile)
{
    OSStatus error;
    UInt32 propertySize;
    // 1. Ask the audio queue for the size of the encoder's magic cookie; if one exists, the recorded output format is VBR and supports magic cookies
    error = AudioQueueGetPropertySize(queue, kAudioConverterCompressionMagicCookie, &propertySize);
    
    if(error == noErr && propertySize > 0)
    {
        Byte *magicCookie = (Byte*)malloc(propertySize);
        CheckError(AudioQueueGetProperty(queue,
                                         kAudioQueueProperty_MagicCookie,
                                         magicCookie,
                                         &propertySize),            // fetch the cookie data at that size
                   "Couldn't get audio queue's magic cookie");
        
        CheckError(AudioFileSetProperty(theFile,
                                        kAudioFilePropertyMagicCookieData,
                                        propertySize,
                                        magicCookie),               // write the fetched cookie into the file
                   "Couldn't set file's magic cookie");
        free(magicCookie);
    }
}

static int MyComputeRecordBufferSize(const AudioStreamBasicDescription  *format,
                                     AudioQueueRef                      queue,
                                     float                              seconds)
{
    // Compute the total frame count from the requested duration
    int packets, frames, bytes;
    frames = (int)ceil(seconds * format->mSampleRate);              // frames first: a fixed duration gives a fixed frame count, and each frame has a fixed size
    
    if (format->mBytesPerFrame > 0)
    {
        bytes = frames * format->mBytesPerFrame;                    // fixed frame size, so buffer size = frames * bytes per frame (CBR)
    }
    else
    {
        // Get the maximum packet size from the queue; in principle a packet is one complete playable unit
        UInt32 maxPacketSize;
        if (format->mBytesPerPacket)
        {
            maxPacketSize = format->mBytesPerPacket;                // the format gives the exact packet size
        }
        else
        {
            UInt32 propertySize = sizeof(maxPacketSize);
            CheckError(AudioQueueGetProperty(queue,
                                             kAudioConverterPropertyMaximumOutputPacketSize,
                                             &maxPacketSize,        // query the maximum packet size
                                             &propertySize),
                       "Couldn't get queue's maximum output size");
        }
        if (format->mFramesPerPacket > 0) {                 // total packets = total frames / frames per packet
            packets = frames / format->mFramesPerPacket;    // note the trio: mBytesPerPacket, mFramesPerPacket, mBytesPerFrame
        }
        else{
            packets = frames;                               // no frames-per-packet info: assume one frame per packet, as with PCM
        }
        if (packets == 0) {
            packets = 1;
        }
        bytes = packets * maxPacketSize;                    // buffer size = packets * maximum packet size (VBR)
    }
    return bytes;
}




#pragma mark record callback function
// Called by the audio queue each time a buffer has been filled with data captured from the hardware;
// the callback writes that data out to the file, so recording and encoding happen at the same time.
// The callback can therefore choose how to process each buffer of data.
// The callback re-enqueues inBuffer for reuse, which implies the queue dequeued it before invoking the callback.
// The end of the queue (end of recording or playback) is also controlled from inside the callback.
// Recording and playback are mirror images: when recording, the callback takes captured data and writes it to the file;
// when playing, the callback refills a buffer that has just finished playing so playback can continue.
// For record-to-file the queue holds raw data from the hardware; for play-from-file it holds data read from the file.
// The queue exists to keep recording and playback smooth: one end sits next to the microphone, the other next to the speaker.

static void MyAQInputCallback(void                                  *inUserData,
                              AudioQueueRef                         inQueue,
                              AudioQueueBufferRef                   inBuffer,
                              const AudioTimeStamp                  *inStartTime,
                              UInt32                                inNumPackets,
                              const AudioStreamPacketDescription    *inPacketDesc)
{
    MyRecorder *recorder = (MyRecorder*)inUserData;
    // Write the captured packets to the file
    if (inNumPackets > 0)
    {
        CheckError(AudioFileWritePackets(recorder->recordFile,
                                         FALSE,
                                         inBuffer->mAudioDataByteSize,
                                         inPacketDesc,
                                         recorder->recordPacket,
                                         &inNumPackets,
                                         inBuffer->mAudioData),
                   "Writing Audio File Packets failed");
    }
    recorder->recordPacket += inNumPackets;
    
    // Re-enqueue the buffer for reuse
    if (recorder->running)      // once we stop re-enqueuing, recording winds down
    {
        CheckError(AudioQueueEnqueueBuffer(inQueue,
                                           inBuffer,
                                           0,
                                           NULL),
                   "AudioQueueEnqueueBuffer failed");
    }
}



#pragma mark main function
int main(int argc, const char * argv[])
{
    // Initialize the output ASBD that describes the recorded file's format
    MyRecorder recorder = {0};
    AudioStreamBasicDescription recordFormat = {0};
    
    MyGetDefaultInputDeviceSampleRate(&recordFormat.mSampleRate);
    // Configure the output data format to be AAC
    recordFormat.mFormatID = kAudioFormatAppleIMA4;     // kAudioFormatMPEG4AAC;
    recordFormat.mChannelsPerFrame = 2;
    
    UInt32 propSize = sizeof(recordFormat);
    CheckError(AudioFormatGetProperty(kAudioFormatProperty_FormatInfo,
                                      0,
                                      NULL,
                                      &propSize,
                                      &recordFormat),
               "AudioFormatGetProperty failed");
    // Create the input audio queue and register the callback
    AudioQueueRef queue = {0};
    CheckError(AudioQueueNewInput(&recordFormat,
                                  MyAQInputCallback,
                                  &recorder,            // User data
                                  NULL,                 // inCallbackRunLoop
                                  NULL,                 // inCallbackRunLoopMode
                                  0,                    // Reserved for future use. Pass 0.
                                  &queue),              // outAQ: on return, points to the newly created queue
               "AudioQueueNewInput failed");
    // Let the queue fill in the remaining ASBD fields
    UInt32 size = sizeof(recordFormat);
    CheckError(AudioQueueGetProperty(queue,
                                     kAudioConverterCurrentOutputStreamDescription,
                                     &recordFormat,
                                     &size),
               "AudioQueueGetProperty failed");
    // Create the output file
    CFURLRef myFileURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault,
                                                       CFSTR("output.caf"),
                                                       kCFURLPOSIXPathStyle,
                                                       false);
    CheckError(AudioFileCreateWithURL(myFileURL, kAudioFileCAFType, &recordFormat, kAudioFileFlags_EraseFile, &recorder.recordFile), "AudioFileCreateWithURL failed");
    CFRelease(myFileURL);
    // Copy the magic cookie from the queue to the output file
    MyCopyEncoderCookieToFile(queue, recorder.recordFile);
    // Compute the buffer size; the queue runs on three of these buffers
    int bufferByteSize = MyComputeRecordBufferSize(&recordFormat, queue, 0.5);
    // Allocate three buffers and enqueue them
    int bufferIndex;
    for (bufferIndex = 0; bufferIndex < kNumberRecordBuffers; ++bufferIndex)
    {
        AudioQueueBufferRef buffer;
        CheckError(AudioQueueAllocateBuffer(queue,
                                            bufferByteSize,
                                            &buffer),
                   "AudioQueueAllocateBuffer failed");
        CheckError(AudioQueueEnqueueBuffer(queue,
                                           buffer,
                                           0,
                                           NULL),
                   "AudioQueueEnqueueBuffer failed");
    }
    
    recorder.running = TRUE;
    // Start recording
    CheckError(AudioQueueStart(queue, NULL),"AudioQueueStart failed");
    
    printf("Recording... press <enter> to end:\n");
    getchar();
    
    printf("Recording done...\n");
    recorder.running = FALSE;
    CheckError(AudioQueueStop(queue, TRUE), "AudioQueueStop failed");
    MyCopyEncoderCookieToFile(queue,recorder.recordFile);
    AudioQueueDispose(queue, TRUE);
    AudioFileClose(recorder.recordFile);
    
    return 0;
}


  m4a, again, is just a container wrapping AAC data.

8. Playing AAC data with Audio Queue


// clang main.m -framework AVFoundation -framework CoreAudio -framework AudioToolbox -framework CoreFoundation -o app

#import <AudioToolbox/AudioToolbox.h>

//Change the filename to something on your computer...
#define kPlaybackFileLocation CFSTR("/Users/alex/Desktop/Audio/qianzhihe.m4a")
#define kNumberPlaybackBuffers 3

#pragma mark user data struct
//5.2
typedef struct MyPlayer{
    AudioFileID playbackFile;
    SInt64 packetPosition;
    UInt32 numPacketsToRead;
    AudioStreamPacketDescription *packetDesc;
    Boolean isDone;
}MyPlayer;

#pragma mark utility functions
//4.2
static void CheckError(OSStatus err, const char* operation){
    if (err == noErr)return;
    
    char errorString[20];
    *(UInt32 *)(errorString+1) = CFSwapInt32HostToBig(err);
    if(isprint(errorString[1]) && isprint(errorString[2])
       && isprint(errorString[3]) && isprint(errorString[4])){
        errorString[0] = errorString[5] = '\'';
        errorString[6] = '\0';
    }
    else{
        sprintf(errorString, "%d", (int)err);
    }
    fprintf(stderr, "Error: %s (%s)\n",operation, errorString);
    exit(1);
}
//5.14
static void MyCopyEncoderCookieToQueue(AudioFileID theFile,
                                       AudioQueueRef queue)
{
    UInt32 propertySize;
    OSStatus result = AudioFileGetProperty(theFile,
                                           kAudioFilePropertyMagicCookieData,
                                           &propertySize,
                                           NULL);
    if (result == noErr && propertySize > 0) {
        Byte* magicCookie = (UInt8*)malloc(sizeof(UInt8)*propertySize);
        CheckError(AudioFileGetProperty(theFile,
                                        kAudioFilePropertyMagicCookieData,
                                        &propertySize,
                                        magicCookie),
                   "Audio File get magic cookie failed");
        CheckError(AudioQueueSetProperty(queue,
                                         kAudioQueueProperty_MagicCookie,
                                         magicCookie,
                                         propertySize),
                   "Audio Queue set magic cookie failed");
        free(magicCookie);
    }
}
//5.15
void CalculateBytesForTime(AudioFileID inAudioFile,
                              AudioStreamBasicDescription inDesc,
                              Float64 inSeconds,
                              UInt32 *outBufferSize,
                              UInt32 *outNumPackets)
{
    UInt32 maxPacketSize;
    UInt32 propSize = sizeof(maxPacketSize);
    CheckError(AudioFileGetProperty(inAudioFile,
                                    kAudioFilePropertyPacketSizeUpperBound,
                                    &propSize,
                                    &maxPacketSize),
               "Couldn't get file's max packet size");
    static const int maxBufferSize = 0x10000;
    static const int minBufferSize = 0x4000;
    
    if (inDesc.mFramesPerPacket) {
        Float64 numPacketsForTime = inDesc.mSampleRate / inDesc.mFramesPerPacket * inSeconds;
        *outBufferSize = numPacketsForTime * maxPacketSize;
    }
    else{
        *outBufferSize = maxBufferSize > maxPacketSize ? maxBufferSize : maxPacketSize;
    }
    
    if(*outBufferSize > maxBufferSize &&
       *outBufferSize > maxPacketSize){
        *outBufferSize = maxBufferSize;
    }
    else{
        if(*outBufferSize < minBufferSize){
            *outBufferSize = minBufferSize;
        }
    }
    *outNumPackets = *outBufferSize / maxPacketSize;
}

#pragma mark playback callback function
//replace with listings 5.16-5.19
static void MyAQOutputCallback(void *inUserData,
                               AudioQueueRef inAQ,
                               AudioQueueBufferRef inCompleteAQBuffer)
{
    MyPlayer *aqp = (MyPlayer*)inUserData;
    if (aqp->isDone) return;    
    UInt32 numBytes;
    UInt32 nPackets = aqp->numPacketsToRead;
    CheckError(AudioFileReadPackets(aqp->playbackFile,
                                    false,
                                    &numBytes,
                                    aqp->packetDesc,
                                    aqp->packetPosition,
                                    &nPackets,
                                    inCompleteAQBuffer->mAudioData),
               "AudioFileReadPackets failed");
    if (nPackets > 0)
    {
        inCompleteAQBuffer->mAudioDataByteSize = numBytes;
        AudioQueueEnqueueBuffer(inAQ,
                                inCompleteAQBuffer,
                                (aqp->packetDesc ? nPackets : 0),
                                aqp->packetDesc);
        aqp->packetPosition += nPackets;
    }
    else
    {
        CheckError(AudioQueueStop(inAQ, false), "AudioQueueStop failed");
        aqp->isDone = true;
    }
}


#pragma mark main function
int    main(int argc, const char *argv[])
{
    MyPlayer player = {0};
    
    CFURLRef myFileURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, kPlaybackFileLocation, kCFURLPOSIXPathStyle, false);

    CheckError(AudioFileOpenURL(myFileURL, kAudioFileReadPermission, 0, &player.playbackFile), "AudioFileOpenURL failed");
    CFRelease(myFileURL);
    
    AudioStreamBasicDescription dataFormat;
    UInt32 propSize = sizeof(dataFormat);
    CheckError(AudioFileGetProperty(player.playbackFile, kAudioFilePropertyDataFormat,
                                    &propSize, &dataFormat), "couldn't get file's data format");
    
    AudioQueueRef queue;
    CheckError(AudioQueueNewOutput(&dataFormat, // ASBD
                                   MyAQOutputCallback, // Callback
                                   &player, // user data
                                   NULL, // run loop
                                   NULL, // run loop mode
                                   0, // flags (always 0)
                                   &queue), // output: reference to AudioQueue object
               "AudioQueueNewOutput failed");
    
    
     UInt32 bufferByteSize;
    CalculateBytesForTime(player.playbackFile, dataFormat,  0.5, &bufferByteSize, &player.numPacketsToRead);
    bool isFormatVBR = (dataFormat.mBytesPerPacket == 0 || dataFormat.mFramesPerPacket == 0);
    if (isFormatVBR)
        player.packetDesc = (AudioStreamPacketDescription*)malloc(sizeof(AudioStreamPacketDescription) * player.numPacketsToRead);
    else
        player.packetDesc = NULL;    
    MyCopyEncoderCookieToQueue(player.playbackFile, queue);
    
    AudioQueueBufferRef    buffers[kNumberPlaybackBuffers];
    player.isDone = false;
    player.packetPosition = 0;
    int i;
    for (i = 0; i < kNumberPlaybackBuffers; ++i)
    {
        CheckError(AudioQueueAllocateBuffer(queue, bufferByteSize, &buffers[i]), "AudioQueueAllocateBuffer failed");
        
        MyAQOutputCallback(&player, queue, buffers[i]);
        
        if (player.isDone)
            break;
    }
    CheckError(AudioQueueStart(queue, NULL), "AudioQueueStart failed");
    
    printf("Playing...\n");
    do
    {
        CFRunLoopRunInMode(kCFRunLoopDefaultMode, 0.25, false);
    } while (!player.isDone /*|| gIsRunning*/);

    CFRunLoopRunInMode(kCFRunLoopDefaultMode, 2, false);
    
    player.isDone = true;
    CheckError(AudioQueueStop(queue, TRUE), "AudioQueueStop failed");
    
    AudioQueueDispose(queue, TRUE);
    AudioFileClose(player.playbackFile);
    
    return 0;
}


  All of the code above is Objective-C; compile it with the command at the top of each file and it is ready to use.