1. Prerequisites:

A surface in an app corresponds to a Layer in SurfaceFlinger. The surface requests GraphicBuffers to draw its UI content into, then hands them to SurfaceFlinger for composition and, finally, display. The Image obtained from an ImageReader wraps such a GraphicBuffer, i.e. the memory into which the surface's content is drawn.

At the application layer, the following two calls are commonly paired to receive camera frames:

// Describe the kind of images we want
mImageReader = ImageReader.newInstance(1920, 1080, ImageFormat.YUV_420_888, 2);
// Register a listener fired once an image has been fully rendered
mImageReader.setOnImageAvailableListener(mOnImageAvailableListener, mBackgroundHandler);

2. Function walkthrough:

First, the implementation of ImageReader::newInstance:

File: Z:\Android9_beta\frameworks\base\media\java\android\media\ImageReader.java

public static ImageReader newInstance(int width, int height, int format, int maxImages) {
    return new ImageReader(width, height, format, maxImages, BUFFER_USAGE_UNKNOWN);
}

Next, the ImageReader constructor:

protected ImageReader(int width, int height, int format, int maxImages, long usage) {
        mWidth = width;
        mHeight = height;
        mFormat = format;
        mMaxImages = maxImages;
        // Validate width, height and mMaxImages; NV21 buffers may not be requested
        if (width < 1 || height < 1) {
            throw new IllegalArgumentException(
                "The image dimensions must be positive");
        }
        if (mMaxImages < 1) {
            throw new IllegalArgumentException(
               "Maximum outstanding image count must be at least 1");
        }
        // Creating NV21-format data is rejected
        if (format == ImageFormat.NV21) {
            throw new IllegalArgumentException(
                    "NV21 format is not supported");
        }
        mNumPlanes = ImageUtils.getNumPlanesForFormat(mFormat);
        // The two calls below do the real work
        nativeInit(new WeakReference<>(this), width, height, format, maxImages, usage);
        mSurface = nativeGetSurface();
        // Mark the reader (and the images it hands out) as valid
        mIsReaderValid = true;
        mEstimatedNativeAllocBytes = ImageUtils.getEstimatedNativeAllocBytes(
                width, height, format, /*buffer count*/ 1);
        VMRuntime.getRuntime().registerNativeAllocation(mEstimatedNativeAllocBytes);
 }
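For readers without an Android build at hand, the argument checks above can be exercised as a plain-Java sketch. ReaderParams is an illustrative name, not framework API, and the format codes below merely stand in for android.graphics.ImageFormat constants:

```java
// Simplified model of ImageReader's constructor validation (illustrative only).
public class ReaderParams {
    static final int NV21 = 0x11;        // value of ImageFormat.NV21 in AOSP
    final int width, height, format, maxImages;

    ReaderParams(int width, int height, int format, int maxImages) {
        // Same three checks as the real constructor, same exception messages
        if (width < 1 || height < 1)
            throw new IllegalArgumentException("The image dimensions must be positive");
        if (maxImages < 1)
            throw new IllegalArgumentException("Maximum outstanding image count must be at least 1");
        if (format == NV21)
            throw new IllegalArgumentException("NV21 format is not supported");
        this.width = width; this.height = height;
        this.format = format; this.maxImages = maxImages;
    }
}
```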

The key work here is done by nativeInit and nativeGetSurface. Both are implemented in the native file android_media_ImageReader.cpp (the shared native file for Image.java and ImageReader.java), at Z:\Android9_beta\frameworks\base\media\jni\android_media_ImageReader.cpp.

The relevant entries in the JNI method table are:

{"nativeInit",             "(Ljava/lang/Object;IIIIJ)V",  (void*)ImageReader_init },
{"nativeImageSetup",       "(Landroid/media/Image;)I",   (void*)ImageReader_imageSetup },

ImageReader_init:

static void ImageReader_init(JNIEnv* env, jobject thiz, jobject weakThiz, jint width, jint height,jint format, jint maxImages, jlong ndkUsage) {
    status_t res;
    int nativeFormat;
    android_dataspace nativeDataspace;
    ALOGV("%s: width:%d, height: %d, format: 0x%x, maxImages:%d",
         __FUNCTION__, width, height, format, maxImages);
    // Convert the requested format into a PublicFormat, then map it to the matching HAL format and dataspace
    PublicFormat publicFormat = static_cast<PublicFormat>(format);
    nativeFormat = android_view_Surface_mapPublicFormatToHalFormat(
        publicFormat);
    nativeDataspace = android_view_Surface_mapPublicFormatToHalDataspace(
        publicFormat);
    jclass clazz = env->GetObjectClass(thiz);
    if (clazz == NULL) {
        jniThrowRuntimeException(env, "Can't find android/graphics/ImageReader");
        return;
    }
    // Note the JNIImageReaderContext created here: it is the core of buffer
    // management. Its constructor creates mBuffers, a queue of maxImages
    // entries that will receive the buffers obtained from ConsumerBase.
    sp<JNIImageReaderContext> ctx(new JNIImageReaderContext(env, weakThiz, clazz, maxImages));
    sp<IGraphicBufferProducer> gbProducer;
    sp<IGraphicBufferConsumer> gbConsumer;
    // Create the producer and consumer ends of a BufferQueue.
    BufferQueue::createBufferQueue(&gbProducer, &gbConsumer);
    // Create the consumer wrapper. BufferItemConsumer is a subclass of ConsumerBase;
    // through it we can fetch already-filled buffers from BufferQueueConsumer.
    sp<BufferItemConsumer> bufferConsumer;
    String8 consumerName = String8::format("ImageReader-%dx%df%xm%d-%d-%d",
            width, height, format, maxImages, getpid(),
            createProcessUniqueId());
    uint32_t consumerUsage = GRALLOC_USAGE_SW_READ_OFTEN;
    bool needUsageOverride = ndkUsage != CONSUMER_BUFFER_USAGE_UNKNOWN;
    uint64_t outProducerUsage = 0;
    uint64_t outConsumerUsage = 0;
    android_hardware_HardwareBuffer_convertToGrallocUsageBits(&outProducerUsage, &outConsumerUsage,
            ndkUsage, 0);
    if (isFormatOpaque(nativeFormat)) {
        // Use the SW_READ_NEVER usage to tell producer that this format is not for preview or video
        // encoding. The only possibility will be ZSL output.
        consumerUsage = GRALLOC_USAGE_SW_READ_NEVER;
        if (needUsageOverride) {
            consumerUsage = android_convertGralloc1To0Usage(0, outConsumerUsage);
        }
    } else if (needUsageOverride) {
        ALOGW("Consumer usage override for non-opaque format is not implemented yet, "
                "ignore the provided usage from the application");
    }
    // Construct the BufferItemConsumer around gbConsumer.
    bufferConsumer = new BufferItemConsumer(gbConsumer, consumerUsage, maxImages,
            /*controlledByApp*/true);
    if (bufferConsumer == nullptr) {
        jniThrowExceptionFmt(env, "java/lang/RuntimeException",
                "Failed to allocate native buffer consumer for format 0x%x and usage 0x%x", nativeFormat, consumerUsage);
        return;

    }
    // Give ctx the consumer, so that acquireNextImage can reach the buffers held by ConsumerBase
    ctx->setBufferConsumer(bufferConsumer);
    bufferConsumer->setName(consumerName);
    // gbProducer will later be used to create the Surface
    ctx->setProducer(gbProducer);
    // Store ctx into ConsumerBase's mFrameAvailableListener.
    bufferConsumer->setFrameAvailableListener(ctx);
    ImageReader_setNativeContext(env, thiz, ctx);
    // Remember the requested format; later it is compared with the acquired buffer's format to confirm we got the buffer we asked for
    ctx->setBufferFormat(nativeFormat);
    ctx->setBufferDataspace(nativeDataspace);
    ctx->setBufferWidth(width);
    ctx->setBufferHeight(height);
    // Set the width/height/format/dataspace to the bufferConsumer.
    res = bufferConsumer->setDefaultBufferSize(width, height);
    if (res != OK) {
        jniThrowExceptionFmt(env, "java/lang/IllegalStateException",
                          "Failed to set buffer consumer default size (%dx%d) for format 0x%x",width, height, nativeFormat);
        return;
    }
    res = bufferConsumer->setDefaultBufferFormat(nativeFormat);
    if (res != OK) {
        jniThrowExceptionFmt(env, "java/lang/IllegalStateException",
              "Failed to set buffer consumer default format 0x%x", nativeFormat);
    }
    res = bufferConsumer->setDefaultBufferDataSpace(nativeDataspace);
    if (res != OK) {
        jniThrowExceptionFmt(env, "java/lang/IllegalStateException",
             "Failed to set buffer consumer default dataSpace 0x%x", nativeDataspace);
    }
}

Let's unpack the functions involved above:

① The JNIImageReaderContext constructor (the class extends ConsumerBase::FrameAvailableListener):

JNIImageReaderContext::JNIImageReaderContext(JNIEnv* env,
        jobject weakThiz, jclass clazz, int maxImages) :
    mWeakThiz(env->NewGlobalRef(weakThiz)),
    mClazz((jclass)env->NewGlobalRef(clazz)),
    mFormat(0),
    mDataSpace(HAL_DATASPACE_UNKNOWN),
    mWidth(-1),
    mHeight(-1) {
    for (int i = 0; i < maxImages; i++) {
        BufferItem* buffer = new BufferItem;
        mBuffers.push_back(buffer);
    }
}

This pushes maxImages entries of type BufferItem* into the member mBuffers; these slots later receive the finished buffers fetched from ConsumerBase.
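The mBuffers pool behaves like a simple free list: items are handed out while the app holds fewer than maxImages images, and are recycled when an Image is closed. A minimal plain-Java model (class and method names are illustrative, not framework API):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal model of JNIImageReaderContext's mBuffers pool: a fixed number of
// BufferItem slots handed out on acquire and recycled on release.
public class ItemPool {
    public static class BufferItem {}   // stand-in for the native BufferItem

    private final Deque<BufferItem> free = new ArrayDeque<>();

    public ItemPool(int maxImages) {
        // Mirrors the constructor loop: pre-allocate maxImages items
        for (int i = 0; i < maxImages; i++) free.push(new BufferItem());
    }

    // Like ctx->getBufferItem(): null when the app already holds maxImages images.
    public BufferItem get() { return free.poll(); }

    // Like ctx->returnBufferItem(): called when an Image is closed.
    public void put(BufferItem b) { free.push(b); }
}
```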

② The implementation of createBufferQueue:

void BufferQueue::createBufferQueue(sp<IGraphicBufferProducer>* outProducer,
        sp<IGraphicBufferConsumer>* outConsumer,
        bool consumerIsSurfaceFlinger) {
    LOG_ALWAYS_FATAL_IF(outProducer == NULL,
            "BufferQueue: outProducer must not be NULL");
    LOG_ALWAYS_FATAL_IF(outConsumer == NULL,
            "BufferQueue: outConsumer must not be NULL");

    sp<BufferQueueCore> core(new BufferQueueCore());
    LOG_ALWAYS_FATAL_IF(core == NULL,
            "BufferQueue: failed to create BufferQueueCore");

    sp<IGraphicBufferProducer> producer(new BufferQueueProducer(core, consumerIsSurfaceFlinger));
    LOG_ALWAYS_FATAL_IF(producer == NULL,
            "BufferQueue: failed to create BufferQueueProducer");

    sp<IGraphicBufferConsumer> consumer(new BufferQueueConsumer(core));
    LOG_ALWAYS_FATAL_IF(consumer == NULL,
            "BufferQueue: failed to create BufferQueueConsumer");

    *outProducer = producer;
    *outConsumer = consumer;
}

As shown, a BufferQueueCore is created first; it is the hub of the BufferQueue, managing the buffers shared between producer and consumer (both are constructed from the same BufferQueueCore). IGraphicBufferProducer is the producer interface, implemented on the service side by BufferQueueProducer; likewise, IGraphicBufferConsumer is the consumer interface, implemented by BufferQueueConsumer. Once created, both ends are handed back through the out-parameters.
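The essential shape of createBufferQueue, two facades over one shared core, can be sketched in plain Java. This is a toy model (names are mine, not the real BufferQueue API); it also shows why a setting made through the consumer, such as the default buffer format, is immediately visible to the producer:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Toy model of createBufferQueue(): one shared "core" owns the queued-buffer
// list and the default format; producer and consumer are thin facades over it.
public class TinyBufferQueue {
    static class Core {
        final Queue<int[]> queue = new ArrayDeque<>();
        int defaultFormat;                 // set via the consumer, seen by both ends
    }

    public static class Producer {
        private final Core core;
        Producer(Core c) { core = c; }
        public void queueBuffer(int[] pixels) { core.queue.add(pixels); } // filled frame queued
        public int bufferFormat() { return core.defaultFormat; }          // producer sees the consumer's choice
    }

    public static class Consumer {
        private final Core core;
        Consumer(Core c) { core = c; }
        public int[] acquireBuffer() { return core.queue.poll(); }        // null when nothing is queued
        public void setDefaultBufferFormat(int f) { core.defaultFormat = f; }
    }

    public static final class Ends {
        public final Producer producer;
        public final Consumer consumer;
        Ends(Producer p, Consumer c) { producer = p; consumer = c; }
    }

    // Mirrors BufferQueue::createBufferQueue: both ends are built from the same core.
    public static Ends create() {
        Core core = new Core();
        return new Ends(new Producer(core), new Consumer(core));
    }
}
```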

③ The BufferItemConsumer constructor (the class extends ConsumerBase):

BufferItemConsumer::BufferItemConsumer(
        const sp<IGraphicBufferConsumer>& consumer, uint64_t consumerUsage,
        int bufferCount, bool controlledByApp) :
    ConsumerBase(consumer, controlledByApp) {

    status_t err = mConsumer->setConsumerUsageBits(consumerUsage);
    LOG_ALWAYS_FATAL_IF(err != OK,
            "Failed to set consumer usage bits to %#" PRIx64, consumerUsage);
    if (bufferCount != DEFAULT_MAX_BUFFERS) {
        err = mConsumer->setMaxAcquiredBufferCount(bufferCount);
        LOG_ALWAYS_FATAL_IF(err != OK,
                "Failed to set max acquired buffer count to %d", bufferCount);
    }
}

This makes explicit that the ConsumerBase object may hold at most maxImages buffers acquired from BufferQueueConsumer at any one time. That matches the design: every buffer ConsumerBase hands out is stored into one of the mBuffers entries of JNIImageReaderContext, and mBuffers holds exactly maxImages entries.

④ ctx->setBufferConsumer(bufferConsumer) and ctx->setProducer(gbProducer) store the consumer and producer into JNIImageReaderContext's members mConsumer and mProducer respectively, since the upper layers always go through JNIImageReaderContext to obtain buffers.

⑤ bufferConsumer->setDefaultBufferFormat(nativeFormat). This setting matters: it determines which pixel format the producer will produce, so that when the camera session is configured, buffers of that format are requested from the HAL.

ConsumerBase::setDefaultBufferFormat eventually lands in BufferQueueConsumer::setDefaultBufferFormat:

status_t BufferQueueConsumer::setDefaultBufferFormat(PixelFormat defaultFormat) {
    ATRACE_CALL();
    BQ_LOGV("setDefaultBufferFormat: %u", defaultFormat);
    Mutex::Autolock lock(mCore->mMutex);
    mCore->mDefaultBufferFormat = defaultFormat;
    return NO_ERROR;
}

The format is recorded on mCore, and since producer and consumer are built from the same core, fixing the consumer's buffer format also fixes the producer's. This is how a createCaptureSession request to the camera knows exactly which buffer format to allocate.

⑥ bufferConsumer->setFrameAvailableListener(ctx). This listener fires whenever a buffer has been filled:

void ConsumerBase::setFrameAvailableListener(
        const wp<FrameAvailableListener>& listener) {
    CB_LOGV("setFrameAvailableListener");
    Mutex::Autolock lock(mFrameAvailableMutex);
    mFrameAvailableListener = listener;
}

JNIImageReaderContext extends ConsumerBase::FrameAvailableListener, so here its instance ctx is stored into ConsumerBase's member mFrameAvailableListener. When a buffer finishes rendering, ConsumerBase::onFrameAvailable runs:

void ConsumerBase::onFrameAvailable(const BufferItem& item) {
    CB_LOGV("onFrameAvailable");
    sp<FrameAvailableListener> listener;
    { // scope for the lock
        Mutex::Autolock lock(mFrameAvailableMutex);
        listener = mFrameAvailableListener.promote();
    }
    if (listener != NULL) {
        CB_LOGV("actually calling onFrameAvailable");
        listener->onFrameAvailable(item);
    }
}

which in turn invokes JNIImageReaderContext::onFrameAvailable:

void JNIImageReaderContext::onFrameAvailable(const BufferItem& /*item*/) {
    ALOGV("%s: frame available", __FUNCTION__);
    bool needsDetach = false;
    JNIEnv* env = getJNIEnv(&needsDetach);
    if (env != NULL) {
        env->CallStaticVoidMethod(mClazz, gImageReaderClassInfo.postEventFromNative, mWeakThiz);
    } else {
        ALOGW("onFrameAvailable event will not posted");
    }
    if (needsDetach) {
        detachJNI();
    }
}

From here, JNI calls back into the postEventFromNative method of ImageReader.java:

private static void postEventFromNative(Object selfRef) {
        @SuppressWarnings("unchecked")
        WeakReference<ImageReader> weakSelf = (WeakReference<ImageReader>)selfRef;
        final ImageReader ir = weakSelf.get();
        if (ir == null) {
            return;
        }
        final Handler handler;
        synchronized (ir.mListenerLock) {
            handler = ir.mListenerHandler;
        }
        if (handler != null) {
            handler.sendEmptyMessage(0);
        }
    }

This looks up the ImageReader's mListenerHandler and posts an empty message on it. mListenerHandler itself is installed by ImageReader::setOnImageAvailableListener:

public void setOnImageAvailableListener(OnImageAvailableListener listener, Handler handler) {
        synchronized (mListenerLock) {
            if (listener != null) {
                Looper looper = handler != null ? handler.getLooper() : Looper.myLooper();
                if (looper == null) {
                    throw new IllegalArgumentException(
                            "handler is null but the current thread is not a looper");
                }
                if (mListenerHandler == null || mListenerHandler.getLooper() != looper) {
                    mListenerHandler = new ListenerHandler(looper);
                }
                mListener = listener;
            } else {
                mListener = null;
                mListenerHandler = null;
            }
        }
}

Note the Looper lookup first. A Looper is the keeper of a MessageQueue: once Looper.loop() is called, the thread enters an endless loop, and whenever a message appears in the MessageQueue it is pulled out and delivered to the Handler's handleMessage().

A natural question: why send a message at all instead of invoking the listener directly? Because onFrameAvailable arrives on a binder (native callback) thread, not on the thread the app wants its callback to run on. Posting a message through the Handler marshals the notification onto the Looper thread the app chose in setOnImageAvailableListener, often the main thread (the only thread allowed to touch UI) or a dedicated background handler thread. One Handler object is shared across the hand-off: the producer-side thread calls sendMessage on it, and its Looper thread runs handleMessage.
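This hand-off can be modeled in plain Java with a blocking queue standing in for the MessageQueue and a dedicated thread standing in for the Looper. All names here are illustrative; this is not the Android Handler API:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Toy model of the Handler/Looper pattern: postEventFromNative() merely
// enqueues work; the looper thread dequeues it and runs the callback, so the
// callback always executes on the thread the app chose.
public class TinyLooper {
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
    private final Thread looperThread;

    public TinyLooper() {
        looperThread = new Thread(() -> {
            try {
                while (true) queue.take().run();   // Looper.loop(): drain forever
            } catch (InterruptedException e) {
                // quit() was called; fall through and let the thread exit
            }
        });
        looperThread.start();
    }

    public void post(Runnable r) { queue.add(r); } // like Handler.sendMessage()
    public void quit() { looperThread.interrupt(); }
}
```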

As seen above, the supplied OnImageAvailableListener is stored into the member mListener, and a ListenerHandler object is created. Here is how ListenerHandler processes the message:

private final class ListenerHandler extends Handler {
        public ListenerHandler(Looper looper) {
            super(looper, null, true /*async*/);
        }
        @Override
        public void handleMessage(Message msg) {
            OnImageAvailableListener listener;
            synchronized (mListenerLock) {
                listener = mListener;
            }
            // It's dangerous to fire onImageAvailable() callback when the ImageReader is being
            // closed, as application could acquire next image in the onImageAvailable() callback.
            boolean isReaderValid = false;
            synchronized (mCloseLock) {
                isReaderValid = mIsReaderValid;
            }
            if (listener != null && isReaderValid) {
                listener.onImageAvailable(ImageReader.this);
            }
        }
    }

In handleMessage, listener.onImageAvailable is invoked. OnImageAvailableListener is an interface, so at the new site an anonymous class (the {} body below) implements its single method, onImageAvailable. That method runs whenever a filled buffer is reported, and it is the place for further processing:

private final ImageReader.OnImageAvailableListener mOnImageAvailableListener
            = new ImageReader.OnImageAvailableListener() {
        @Override
        public void onImageAvailable(ImageReader reader) {
            Log.d(TAG, "onImageAvailable");
            Image img = reader.acquireNextImage();
            ByteBuffer buffer = img.getPlanes()[0].getBuffer();
            byte[] data = new byte[buffer.remaining()];
            buffer.get(data);
            // Dump the image data; writeData (and the frame counter) are app-defined
            writeData(data, frameId);
            img.close();
        }
 };

Once a buffer is ready, we can fetch it with acquireNextImage(). The call chain is ImageReader::acquireNextImage -> ImageReader::acquireNextSurfaceImage -> nativeImageSetup -> ImageReader_imageSetup, so let's go straight to the implementation of ImageReader_imageSetup:

static jint ImageReader_imageSetup(JNIEnv* env, jobject thiz, jobject image) {
    ALOGV("%s:", __FUNCTION__);
    // Fetch the ctx created in ImageReader_init
    JNIImageReaderContext* ctx = ImageReader_getContext(env, thiz);
    if (ctx == NULL) {
        jniThrowException(env, "java/lang/IllegalStateException",
                "ImageReader is not initialized or was already closed");
        return -1;
    }
    // Fetch the mConsumer installed in ImageReader_init
    BufferItemConsumer* bufferConsumer = ctx->getBufferConsumer();
    // Take one item out of ctx's pre-allocated mBuffers pool
    BufferItem* buffer = ctx->getBufferItem();
    if (buffer == NULL) {
        ALOGW("Unable to acquire a buffer item, very likely client tried to acquire more than maxImages buffers");
        return ACQUIRE_MAX_IMAGES;
    }
    // Copy the buffer obtained from ConsumerBase into the mBuffers item above
    status_t res = bufferConsumer->acquireBuffer(buffer, 0);
    if (res != OK) {
        ctx->returnBufferItem(buffer);
        if (res != BufferQueue::NO_BUFFER_AVAILABLE) {
            if (res == INVALID_OPERATION) {
                // Max number of images were already acquired.
                ALOGE("%s: Max number of buffers allowed are already acquired : %s (%d)",
                        __FUNCTION__, strerror(-res), res);
                return ACQUIRE_MAX_IMAGES;
            } else {
                ALOGE("%s: Acquire image failed with some unknown error: %s (%d)",
                        __FUNCTION__, strerror(-res), res);
                jniThrowExceptionFmt(env, "java/lang/IllegalStateException",
                        "Unknown error (%d) when we tried to acquire an image.", res);
                return ACQUIRE_NO_BUFFERS;
            }
        }
        // This isn't really an error case, as the application may acquire buffer at any time.
        return ACQUIRE_NO_BUFFERS;
    }
   // The format we asked for
   int imgReaderFmt = ctx->getBufferFormat();
   // The format of the rendered buffer we just acquired
   int bufferFormat = buffer->mGraphicBuffer->getPixelFormat();
   if (imgReaderFmt != bufferFormat) {
      // The requested format and the acquired buffer's format disagree; handle the mismatch
      ........
   }
}
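The possible outcomes of ImageReader_imageSetup above can be modeled with a small plain-Java sketch. Class and method names are mine, not the framework's, and the constants merely mirror the spirit of ImageReader's acquire return codes:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Queue;

// Toy model of ImageReader_imageSetup's return paths: an acquire can fail
// because the app already holds maxImages images, or because no new frame
// has been queued by the producer yet.
public class AcquireModel {
    public static final int ACQUIRE_SUCCESS = 0;
    public static final int ACQUIRE_NO_BUFFERS = 1;
    public static final int ACQUIRE_MAX_IMAGES = 2;

    private final Deque<Object> freeItems = new ArrayDeque<>();   // ctx's mBuffers pool
    private final Queue<Object> readyFrames = new ArrayDeque<>(); // filled, queued buffers

    public AcquireModel(int maxImages) {
        for (int i = 0; i < maxImages; i++) freeItems.push(new Object());
    }

    public void frameQueued() { readyFrames.add(new Object()); }  // producer side

    public int imageSetup() {
        Object item = freeItems.poll();            // like ctx->getBufferItem()
        if (item == null) return ACQUIRE_MAX_IMAGES;
        if (readyFrames.poll() == null) {          // like acquireBuffer() -> NO_BUFFER_AVAILABLE
            freeItems.push(item);                  // like ctx->returnBufferItem()
            return ACQUIRE_NO_BUFFERS;
        }
        return ACQUIRE_SUCCESS;                    // image now counts against maxImages
    }
}
```

Note the symmetry with ctx->returnBufferItem: on a failed acquire the pooled item is pushed back, so only successfully acquired images count against the maxImages limit.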

The buffer is obtained via BufferItemConsumer::acquireBuffer; continuing down:

status_t BufferItemConsumer::acquireBuffer(BufferItem *item,
        nsecs_t presentWhen, bool waitForFence) {
    status_t err;
    if (!item) return BAD_VALUE;
    Mutex::Autolock _l(mMutex);
    err = acquireBufferLocked(item, presentWhen);
    if (err != OK) {
        if (err != NO_BUFFER_AVAILABLE) {
            BI_LOGE("Error acquiring buffer: %s (%d)", strerror(err), err);
        }
        return err;
    }
    if (waitForFence) {
        err = item->mFence->waitForever("BufferItemConsumer::acquireBuffer");
        if (err != OK) {
            BI_LOGE("Failed to wait for fence of acquired buffer: %s (%d)",
                    strerror(-err), err);
            return err;
        }
    }
    item->mGraphicBuffer = mSlots[item->mSlot].mGraphicBuffer;
    return OK;
}

A note here: ConsumerBase has an important member mSlots, an array of Slot:

Slot mSlots[BufferQueueDefs::NUM_BUFFER_SLOTS];

where NUM_BUFFER_SLOTS is defined in BufferQueueDefs.h:

static constexpr int NUM_BUFFER_SLOTS = 64;

And the definition of Slot:

struct Slot {
        sp<GraphicBuffer> mGraphicBuffer;
        sp<Fence> mFence;
        uint64_t mFrameNumber;
};

Here mGraphicBuffer is the buffer we ultimately display, i.e. the graphics buffer itself.

acquireBuffer reads the mGraphicBuffer of the slot indexed by item->mSlot, so acquireBufferLocked must have placed the buffer into one of ConsumerBase's slots first. Let's look at ConsumerBase::acquireBufferLocked:

status_t ConsumerBase::acquireBufferLocked(BufferItem *item,
        nsecs_t presentWhen, uint64_t maxFrameNumber) {
    if (mAbandoned) {
        CB_LOGE("acquireBufferLocked: ConsumerBase is abandoned!");
        return NO_INIT;
    }
    status_t err = mConsumer->acquireBuffer(item, presentWhen, maxFrameNumber);
    if (err != NO_ERROR) {
        return err;
    }
    if (item->mGraphicBuffer != NULL) {
        if (mSlots[item->mSlot].mGraphicBuffer != NULL) {
            freeBufferLocked(item->mSlot);
        }
        mSlots[item->mSlot].mGraphicBuffer = item->mGraphicBuffer;
    }
    mSlots[item->mSlot].mFrameNumber = item->mFrameNumber;
    mSlots[item->mSlot].mFence = item->mFence;
    CB_LOGV("acquireBufferLocked: -> slot=%d/%" PRIu64,
            item->mSlot, item->mFrameNumber);
    return OK;
}

Here the buffer comes from mConsumer->acquireBuffer and is then stashed into mSlots[item->mSlot]. mConsumer is a ConsumerBase member of type sp&lt;IGraphicBufferConsumer&gt; (passed in when new BufferItemConsumer was executed in ImageReader_init), and its concrete implementation is BufferQueueConsumer. Next, the important part of BufferQueueConsumer::acquireBuffer (the function is long; only the relevant part is shown):

status_t BufferQueueConsumer::acquireBuffer(BufferItem* outBuffer,
        nsecs_t expectedPresent, uint64_t maxFrameNumber) {
     int slot = BufferQueueCore::INVALID_BUFFER_SLOT;
     .......
     if (sharedBufferAvailable && mCore->mQueue.empty()) {
            // make sure the buffer has finished allocating before acquiring it
            mCore->waitWhileAllocatingLocked();
            // Record the slot index of the ready buffer
            slot = mCore->mSharedBufferSlot;
            // Fetch the buffer
            outBuffer->mGraphicBuffer = mSlots[slot].mGraphicBuffer;
            // Record the buffer's index within mSlots
            outBuffer->mSlot = slot;
            // Mark the buffer as having been acquired
            outBuffer->mAcquireCalled = mSlots[slot].mAcquireCalled;
}

So the already-rendered buffer is taken from BufferQueueConsumer's mSlots, and its position within mSlots is recorded. Where does this mSlots come from?

BufferQueueConsumer::BufferQueueConsumer(const sp<BufferQueueCore>& core) :
    mCore(core),
    mSlots(core->mSlots),
    ...

BufferQueueProducer::BufferQueueProducer(const sp<BufferQueueCore>& core,
        bool consumerIsSurfaceFlinger) :
    mCore(core),
    mSlots(core->mSlots),
    ...

This shows that BufferQueueCore, BufferQueueConsumer, and BufferQueueProducer all share one and the same mSlots, of type BufferQueueDefs::SlotsType&. SlotsType is defined as:

typedef BufferSlot SlotsType[NUM_BUFFER_SLOTS];    // NUM_BUFFER_SLOTS = 64;

ConsumerBase maintains its own mSlots of the same size as the one in BufferQueueConsumer (same length, different element type), and entries at the same index refer to the same buffer.
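The index correspondence between the two arrays can be sketched in plain Java. This is a toy model with names of my own; it only illustrates that the slot index recorded on the core side selects the matching entry on the consumer side:

```java
// Sketch of the two parallel slot arrays: BufferQueueCore's slots and
// ConsumerBase's slots use the same index, so a BufferItem's mSlot recorded
// by BufferQueueConsumer::acquireBuffer picks out the same buffer on both sides.
public class SlotModel {
    public static final int NUM_BUFFER_SLOTS = 64;   // as in BufferQueueDefs.h

    public static class GraphicBuffer {
        public final int id;
        public GraphicBuffer(int id) { this.id = id; }
    }

    // BufferQueueCore side (shared by producer and consumer facades)
    public final GraphicBuffer[] coreSlots = new GraphicBuffer[NUM_BUFFER_SLOTS];
    // ConsumerBase side (its own cache, same length)
    public final GraphicBuffer[] consumerSlots = new GraphicBuffer[NUM_BUFFER_SLOTS];

    // Mirrors ConsumerBase::acquireBufferLocked: copy the buffer handle for
    // index 'slot' from the core-side array into the consumer-side cache.
    public int acquire(int slot) {
        consumerSlots[slot] = coreSlots[slot];
        return slot;
    }
}
```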

We now know the buffer's path: BufferQueueCore -> BufferQueueConsumer -> ConsumerBase. We have also seen how central JNIImageReaderContext is to fetching buffers, so keep a close eye on its instance ctx.

That completes nativeInit(); next up is nativeGetSurface:

static jobject ImageReader_getSurface(JNIEnv* env, jobject thiz) {
    ALOGV("%s: ", __FUNCTION__);
    IGraphicBufferProducer* gbp = ImageReader_getProducer(env, thiz);
    if (gbp == NULL) {
        jniThrowRuntimeException(env, "Buffer consumer is uninitialized");
        return NULL;
    }
    // Wrap the IGBP in a Java-language Surface.
    return android_view_Surface_createFromIGraphicBufferProducer(env, gbp);
}

This fetches the ctx created in ImageReader_init and takes its IGraphicBufferProducer; a gbp obtained this way is typically used to construct a Surface:

jobject android_view_Surface_createFromIGraphicBufferProducer(JNIEnv* env,
        const sp<IGraphicBufferProducer>& bufferProducer) {
    if (bufferProducer == NULL) {
        return NULL;
    }
    sp<Surface> surface(new Surface(bufferProducer, true));
    return android_view_Surface_createFromSurface(env, surface);
}

The gbp is wrapped in a new Surface, which is returned and stored as ImageReader::mSurface; the app can then retrieve it via getSurface(). With GraphicBuffers available, the surface can draw UI content into them. On the app side, we can pass several surfaces as a list to createCaptureSession, so that each surface in the list is configured as a camera output target and receives buffers as they are produced:

mCameraDevice.createCaptureSession(Arrays.asList(mImageReader.getSurface(), …),
        new CameraCaptureSession.StateCallback() { … }, mBackgroundHandler);

In ImageReader we have only cared about the consumer side, so how are buffers produced in the first place? In the next part I will cover the link between Surface and BufferQueueProducer. I'll patch up anything missing later. Bye~