...my experience with funhouse mirrors. At the time I thought: how cool would it be if the camera could present whatever is in front of the lens through all sorts of stretching transformations as it takes the picture! So, without further ado, let's see how to DIY our own camera on Android......

        I. Preparation

        1. Set up the Android development environment: install the JDK, Eclipse, the Android SDK, the ADT plugin for Eclipse, and so on.

        2. Set up the OpenCV development environment for Android. There are generally two ways: the first is to download a prebuilt OpenCV library; the second is to check out the OpenCV source and build it yourself. I'll describe the first in detail here, since the second is already covered by plenty of Chinese blog posts and is rather involved for a beginner like me. First, download the latest OpenCV-2.3.1-beta1-android-bin.tar.bz2 from the official site (opencv-android); I recommend this package because it ships with many samples. After extracting it you will see two folders, OpenCV-2.3.1 and samples: OpenCV-2.3.1 mainly contains the OpenCV class library rewritten in Java, and samples, as the name suggests, contains example projects.

        II. Running the samples

        1. In Eclipse, go to "File" —> "Import..." —> "Existing Projects into Workspace", click "Next", select the root directory of the extracted files, and after "Finish" you will see the OpenCV-2.3.1 and sample projects imported into Eclipse:

             (screenshots: the imported projects, which initially show build errors)

              Solution: select the OpenCV-2.3.1 project and press F5 (refresh), then do the same for the other projects. If errors remain after that, delete the gen folder of the affected project and Eclipse will regenerate it automatically; that fixes it. As shown below:

              (screenshot omitted)

               2. Next you can connect a real device and run the samples. 15-puzzle is a small game; Sample-face-detection is a face-detection program (on my device it was not very responsive); Sample-image-manipulations is an example that uses basic OpenCV functions such as grayscale conversion and edge detection.

         III. A peek at the "Tutorial" code

               Skip "Basic-0" and go straight to "Basic-1" (the only difference between them is that Basic-0 does not use the OpenCV library to process images). It contains three classes: Sample1Java, SampleViewBase and Sample1View. Sample1Java is an Activity, and SampleViewBase is the base class of Sample1View. Let's look at SampleViewBase first:


package org.opencv.samples.tutorial1;

import java.util.List;

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.hardware.Camera;
import android.hardware.Camera.PreviewCallback;
import android.util.Log;
import android.view.SurfaceHolder;
import android.view.SurfaceView;

public abstract class SampleViewBase extends SurfaceView implements SurfaceHolder.Callback, Runnable {
    private static final String TAG = "Sample::SurfaceView";

    private Camera              mCamera;
    private SurfaceHolder       mHolder;
    private int                 mFrameWidth;
    private int                 mFrameHeight;
    private byte[]              mFrame;
    private boolean             mThreadRun;

    public SampleViewBase(Context context) {
        super(context);
        mHolder = getHolder();
        mHolder.addCallback(this);
        Log.i(TAG, "Instantiated new " + this.getClass());
    }

    public int getFrameWidth() {
        return mFrameWidth;
    }

    public int getFrameHeight() {
        return mFrameHeight;
    }

    /**
     * Called whenever the format or size of the SurfaceView changes.
     *
     * @param _holder	the SurfaceHolder whose surface has changed
     * @param format	the new pixel format
     * @param width		the new width
     * @param height	the new height
     */
    public void surfaceChanged(SurfaceHolder _holder, int format, int width, int height) {
        Log.i(TAG, "surfaceChanged");
        if (mCamera != null) {
            Camera.Parameters params = mCamera.getParameters();
            List<Camera.Size> sizes = params.getSupportedPreviewSizes();
            mFrameWidth = width;
            mFrameHeight = height;

            // selecting optimal camera preview size
            {
                double minDiff = Double.MAX_VALUE;
                for (Camera.Size size : sizes) {
                    if (Math.abs(size.height - height) < minDiff) {
                        mFrameWidth = size.width;
                        mFrameHeight = size.height;
                        minDiff = Math.abs(size.height - height);
                    }
                }
            }

            params.setPreviewSize(getFrameWidth(), getFrameHeight());
            mCamera.setParameters(params);
            mCamera.startPreview();
        }
    }

    public void surfaceCreated(SurfaceHolder holder) {
        Log.i(TAG, "surfaceCreated");
        mCamera = Camera.open();
        mCamera.setPreviewCallback(new PreviewCallback() {
            public void onPreviewFrame(byte[] data, Camera camera) {
                synchronized (SampleViewBase.this) {
                    mFrame = data;
                    SampleViewBase.this.notify();
                }
            }
        });
        (new Thread(this)).start();
    }

    public void surfaceDestroyed(SurfaceHolder holder) {
        Log.i(TAG, "surfaceDestroyed");
        mThreadRun = false;
        if (mCamera != null) {
            synchronized (this) {
                mCamera.stopPreview();
                mCamera.setPreviewCallback(null);
                mCamera.release();
                mCamera = null;
            }
        }
    }
    
    // Abstract hook: subclasses implement the per-frame image processing here
    protected abstract Bitmap processFrame(byte[] data);

    public void run() {
        mThreadRun = true;
        Log.i(TAG, "Starting processing thread");
        while (mThreadRun) {
            Bitmap bmp = null;

            synchronized (this) {
                try {
                    this.wait();
                    // process the raw frame
                    bmp = processFrame(mFrame);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }

            if (bmp != null) {
                Canvas canvas = mHolder.lockCanvas();
                if (canvas != null) {
                    canvas.drawBitmap(bmp, (canvas.getWidth() - getFrameWidth()) / 2, (canvas.getHeight() - getFrameHeight()) / 2, null);
                    mHolder.unlockCanvasAndPost(canvas);
                }
                bmp.recycle();
            }
        }
    }
}

This class extends SurfaceView, implements the SurfaceHolder.Callback and Runnable interfaces, and holds a Camera object; its main job is displaying images. surfaceCreated, surfaceChanged and surfaceDestroyed are all SurfaceHolder.Callback methods, nothing special there. The key part is the abstract method processFrame(byte[] data), where the parameter data is the raw image buffer returned by the camera: a newly started thread calls this method to process each raw frame and then draws the processed image on the screen. So presumably Sample1View provides the concrete implementation of processFrame(byte[] data). Let's verify that; here is the source of Sample1View:
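The handoff between the camera callback and the processing thread is a classic wait/notify producer-consumer pattern, and it is worth seeing in isolation. Below is a minimal desktop-Java sketch of the same idea; FrameRelay and its method names are my own, not part of any Android or OpenCV API. Unlike the sample's unconditional this.wait(), this sketch guards the wait with a condition loop, which avoids missing a frame that was delivered before the consumer started waiting.

```java
// Producer-consumer handoff, as used between the camera callback and the
// processing thread: the producer stores the latest frame and notify()s,
// the consumer wait()s for it and then processes it.
class FrameRelay {
    private byte[] frame;           // latest raw frame, shared between threads
    private boolean running = true;

    // Called by the "camera callback": publish a frame and wake the consumer.
    public synchronized void deliver(byte[] data) {
        frame = data;
        notify();
    }

    // Ask the consumer loop to finish (it still drains a pending frame).
    public synchronized void stop() {
        running = false;
        notify();
    }

    // Consumer loop: wait for a frame, then "process" it (here: sum its bytes).
    public long processLoop() throws InterruptedException {
        long checksum = 0;
        while (true) {
            byte[] snapshot;
            synchronized (this) {
                while (running && frame == null) {
                    wait();         // releases the lock until notify()
                }
                if (frame == null) break;   // stopped with nothing pending
                snapshot = frame;
                frame = null;       // mark the frame as consumed
            }
            for (byte b : snapshot) checksum += b;
        }
        return checksum;
    }
}
```

In the real sample the "processing" step is processFrame and the result is drawn on the SurfaceHolder's canvas, but the synchronization skeleton is exactly this.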


package org.opencv.samples.tutorial1;

import org.opencv.android;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.core.Scalar;
import org.opencv.core.CvType;
import org.opencv.imgproc.Imgproc;

import android.content.Context;
import android.graphics.Bitmap;
import android.view.SurfaceHolder;

class Sample1View extends SampleViewBase {
    private Mat mYuv;
    private Mat mRgba;
    private Mat mGraySubmat;
    private Mat mIntermediateMat;

    public Sample1View(Context context) {
        super(context);
    }

    @Override
    public void surfaceChanged(SurfaceHolder _holder, int format, int width, int height) {
        super.surfaceChanged(_holder, format, width, height);

        synchronized (this) {
            // initialize Mats before usage
            mYuv = new Mat(getFrameHeight() + getFrameHeight() / 2, getFrameWidth(), CvType.CV_8UC1);
            mGraySubmat = mYuv.submat(0, getFrameHeight(), 0, getFrameWidth());

            mRgba = new Mat();
            mIntermediateMat = new Mat();
        }
    }

    @Override
    protected Bitmap processFrame(byte[] data) {
        mYuv.put(0, 0, data);

        switch (Sample1Java.viewMode) {
        case Sample1Java.VIEW_MODE_GRAY:
            Imgproc.cvtColor(mGraySubmat, mRgba, Imgproc.COLOR_GRAY2RGBA, 4);
            break;
        case Sample1Java.VIEW_MODE_RGBA:
            Imgproc.cvtColor(mYuv, mRgba, Imgproc.COLOR_YUV420i2RGB, 4);
            Core.putText(mRgba, "OpenCV + Android", new Point(10, 100), 3/* CV_FONT_HERSHEY_COMPLEX */, 2, new Scalar(255, 0, 0, 255), 3);
            break;
        case Sample1Java.VIEW_MODE_CANNY:
            Imgproc.Canny(mGraySubmat, mIntermediateMat, 80, 100);
            Imgproc.cvtColor(mIntermediateMat, mRgba, Imgproc.COLOR_GRAY2BGRA, 4);
            break;
        }

        Bitmap bmp = Bitmap.createBitmap(getFrameWidth(), getFrameHeight(), Bitmap.Config.ARGB_8888);

        if (android.MatToBitmap(mRgba, bmp))
            return bmp;

        bmp.recycle();
        return null;
    }

    @Override
    public void run() {
        super.run();

        synchronized (this) {
            // Explicitly deallocate Mats
            if (mYuv != null)
                mYuv.dispose();
            if (mRgba != null)
                mRgba.dispose();
            if (mGraySubmat != null)
                mGraySubmat.dispose();
            if (mIntermediateMat != null)
                mIntermediateMat.dispose();

            mYuv = null;
            mRgba = null;
            mGraySubmat = null;
            mIntermediateMat = null;
        }
    }
}

         Sure enough, it not only provides the concrete implementation of processFrame, it also calls OpenCV functions to do the work. Just like that, a DIY camera that can shoot nostalgic-looking pictures (grayscale) and object outlines (edge detection) is done. Of course, saving the images is not implemented yet, but that's not hard...

         Although the funhouse-mirror effect I wanted to write is not implemented yet, we basically know how to do it now: all we really have to do is write our own implementation of the processFrame method. OpenCV offers a huge number of image-processing functions, and we can also write our own algorithms, so DIY-ing a camera on Android with OpenCV is clearly not hard!
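As a sketch of where this could go, here is a funhouse-mirror style "pinch" effect written in plain Java, using the same inverse-mapping idea that OpenCV's Imgproc.remap implements. It operates on a raw grayscale byte buffer — conveniently, the first width*height bytes of the NV21 data that onPreviewFrame delivers are exactly the luminance plane — so a processFrame implementation could apply it directly. The class, method name and strength parameter are my own illustration, not from the samples.

```java
// Funhouse-mirror "pinch" on a w x h grayscale frame. For each output pixel
// we compute where it came from in the source (inverse mapping) by scaling
// its offset from the image center: strength < 1 pinches, strength > 1 bulges.
class MirrorEffect {
    static byte[] pinch(byte[] src, int w, int h, double strength) {
        byte[] dst = new byte[w * h];
        double cx = w / 2.0, cy = h / 2.0;
        double maxR = Math.sqrt(cx * cx + cy * cy);
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                double dx = x - cx, dy = y - cy;
                double r = Math.sqrt(dx * dx + dy * dy) / maxR; // 0 at center, 1 at corners
                // distortion factor: identity at the rim, strongest at the center
                double f = 1.0 + (strength - 1.0) * (1.0 - r);
                int sx = (int) Math.round(cx + dx * f);
                int sy = (int) Math.round(cy + dy * f);
                // sample the source, clamping to the frame borders
                sx = Math.max(0, Math.min(w - 1, sx));
                sy = Math.max(0, Math.min(h - 1, sy));
                dst[y * w + x] = src[sy * w + sx];
            }
        }
        return dst;
    }
}
```

Inside processFrame one would run this on the luminance plane and then convert the result to a Bitmap for display; once the mapping is precomputed into two coordinate Mats, Imgproc.remap would do the same thing much faster.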

         PS: For faster image processing we could use JNI to call OpenCV's C++ libraries directly. I haven't tried that yet, so I won't make any big claims... If anything in this post is wrong, corrections are welcome.