A trash master's student's first blog post. I probably won't be doing much image processing from here on, so here's a write-up. I haven't been writing code for long and my code is a real mess; if you'd like to discuss, criticize, or set me straight, just leave a comment. (scared face)

    Before winter break my advisor put me on a robot arm + stereo vision project (my major is aerospace engineering, hello!). I plugged away at it; I did hit some problems, but honestly none were big ones, and I leaned on a lot of code and methods from people much better than me. Then at the progress report my advisor actually said... said... that everything I'd done was just stuff I wanted to do myself...... Do people's memories really get that sharp with age... At this rate he'll forget I'm even his student...

    The project brief: localize an object with a stereo camera, then grab it. Done to death already, haha (awkward smile).

    As I understand it, stereo vision breaks down into these steps:

    1. Grab images from the cameras

    2. Calibrate the stereo pair

    3. Correct the camera distortion

    4. Detect the target

    5. Compute its position

---------------------------------------------------------divider------------------------------------------------------

    Hardware you need:

    1. A stereo camera

    A ready-made unit off Taobao; plug it in and it enumerates as two cameras. Quite handy.


    2. A computer you love, so you won't smash it when the bugs show up or it runs too slow.

    3. A calibration board that is flat enough. If you can't flatten a board, how will you flatten the world!

---------------------------------------------------------divider------------------------------------------------------

Here comes the step-by-step walkthrough!

    Software: mainly VS2017 + OPENCV3.4, with the Matlab Calib toolbox doing the calibration. Thanks to everyone doing open source, truly, thank you!!!!!!

    1. Grab images from the cameras

    A single camera can be read straight away with VideoCapture, but with a stereo pair mind the device order (mine is a laptop; on a desktop it would presumably be 0 and 1 rather than 1 and 2):

#define wi 320
#define he 240
	// on my laptop the built-in webcam is device 0, so the external pair is 1 and 2
	VideoCapture camera0(1);
	camera0.set(CV_CAP_PROP_FRAME_WIDTH, wi);
	camera0.set(CV_CAP_PROP_FRAME_HEIGHT, he);
	VideoCapture camera1(2);
	camera1.set(CV_CAP_PROP_FRAME_WIDTH, wi);
	camera1.set(CV_CAP_PROP_FRAME_HEIGHT, he);

    That's the camera initialization. The thing to watch: if I instead write it as

#define wi 320
#define he 240
	VideoCapture camera0(1);
	VideoCapture camera1(2);
	camera0.set(CV_CAP_PROP_FRAME_WIDTH, wi);
	camera0.set(CV_CAP_PROP_FRAME_HEIGHT, he);
	camera1.set(CV_CAP_PROP_FRAME_WIDTH, wi);
	camera1.set(CV_CAP_PROP_FRAME_HEIGHT, he);

    the program throws an error. You must set the first camera's width and height immediately after opening it. My guess at the reason: the camera is USB 2.0 and can't carry that much image data, so opening both at the default resolution before shrinking them overwhelms the link. I've heard DirectShow can push larger frames; I haven't tried it, but if you care about image quality, give it a go.
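    A small aside: here's a minimal sketch (my own addition, not part of the original program) that opens one camera at a time and checks every return value, so a bandwidth or device-order problem surfaces as a readable message instead of a mystery crash:

#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;

// Open one camera and force the low resolution before touching the next device;
// returns false if the device is missing or refuses to open.
static bool openCamera(VideoCapture& cap, int index, int w, int h)
{
	if (!cap.open(index)) {
		std::cerr << "camera " << index << " failed to open" << std::endl;
		return false;
	}
	// set() returns false when the backend ignores the property
	if (!cap.set(CV_CAP_PROP_FRAME_WIDTH, w) || !cap.set(CV_CAP_PROP_FRAME_HEIGHT, h))
		std::cerr << "camera " << index << " ignored the resolution request" << std::endl;
	return true;
}

With that, openCamera(camera0, 1, wi, he) followed by openCamera(camera1, 2, wi, he) replaces the bare constructor calls.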

    Then comes the snapshotting: capture yourself holding the calibration board in all sorts of strange poses.

while (true)
	{
		camera0 >> frame0;
		camera1 >> frame1;
		imshow("L", frame0);
		imshow("R", frame1);
		if (waitKey(30) == 's')   // waitKey(30) also gives imshow ~30 ms to refresh
		{
			ss1 << i << "L.jpg" << endl;
			ss1 >> str1;
			imwrite(str1, frame0);
			ss2 << i << "R.jpg" << endl;
			ss2 >> str2;
			imwrite(str2, frame1);
			i++;   // bump the index, otherwise every pair overwrites 0L.jpg / 0R.jpg
		}
	}

    This is the snapshot code: press 's' and it saves a pair. (Mind that waitKey only sees the key when an OpenCV window has focus, not the console.) It's also adapted from someone else's code; this trash master's C++ level is really zero... Matlab spoiled me so much I can't write this stuff at all...

    The rest of the capture program got deleted by accident, so I'll leave it at this snippet. Instead, two shots of me gallantly wielding the calibration board.

    

[images: holding the calibration board in different poses]

    Take a bunch like that from all angles, roughly 20-odd each for L and R, and then you can put the calibration board away. And whoever annoys you next, grab the board and charge!

---------------------------------------------------------divider------------------------------------------------------

    2. Camera calibration

    Enter Matlab, shining bright! (Enjoy its time on stage while it lasts...) I didn't calibrate with OPENCV because a lot of material says OPENCV's calibration isn't great, and I'd used this Matlab toolchain before, so we're old acquaintances. It really is the easier tool.

    I used the Matlab Calib toolbox (the calib_gui / stereo_gui workflow, if I remember right). I won't rehash it here; it's just fiddly and you have to be careful! Your chance to be the tiger that sniffs the rose. Plenty of experts have already written this part up; thank them!

    My small addition to those write-ups: how to put the numbers Matlab produces into the XML file that OPENCV needs.

<?xml version="1.0"?>
<opencv_storage>
<calibrationDate>"2015.5.31 14:23"</calibrationDate>

<Intrinsic_Camera_L type_id="opencv-matrix">
  <rows>3</rows>
  <cols>3</cols>
  <dt>d</dt>
  <data>
		2.3892563e+002 0. 1.5892866e+002
		0. 2.3865967e+002 1.3122136e+002
		0. 0. 1.
    </data></Intrinsic_Camera_L>

<Intrinsic_Distortion_L type_id="opencv-matrix">
	<rows>4</rows>
	<cols>1</cols>
	<dt>d</dt>
	<data>
		-0.43373   0.16967   0.00095   -0.00027
	</data>
</Intrinsic_Distortion_L>

<Intrinsic_Camera_R type_id="opencv-matrix">
	<rows>3</rows>
	<cols>3</cols>
	<dt>d</dt>
	<data>
		2.3892563e+002 0. 1.6092744e+002
		0. 2.3865967e+002 1.2619615e+002
		0. 0. 1.
	</data>
</Intrinsic_Camera_R>

<Intrinsic_Distortion_R type_id="opencv-matrix">
	<rows>4</rows>
	<cols>1</cols>
	<dt>d</dt>
	<data>
		-0.43376   0.16801   0.00007   -0.00042
	</data>
</Intrinsic_Distortion_R>


<Extrinsic_Rotation_vector type_id="opencv-matrix">
	<rows>3</rows>
	<cols>1</cols>
	<dt>d</dt>
	<data>
		0.00870   -0.00792  0.00152
	</data>	
</Extrinsic_Rotation_vector>

<Extrinsic_Translation_vector type_id="opencv-matrix">
	<rows>3</rows>
	<cols>1</cols>
	<dt>d</dt>
	<data>
		-0.06184  -0.00025  0.00081
	</data>	
</Extrinsic_Translation_vector>


</opencv_storage>

    The file holds the left camera intrinsics, the left distortion coefficients, the right camera intrinsics, the right distortion coefficients, and the rotation and translation between the two cameras. If you calibrated with the Bouguet toolbox, the mapping is roughly: fc gives the two diagonal focal entries, cc the principal point, kc(1:4) the distortion vector, and om / T from the stereo step the rotation and translation vectors. Drop your own numbers into the matching fields.
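    Once the XML exists you don't have to hard-code anything. Here's a minimal sketch (my addition, assuming the file is saved as calib.xml next to the executable) of reading it back with the C++ FileStorage API, rather than the legacy CvFileStorage calls you'll see commented out in my full listing further down:

#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;

int main()
{
	FileStorage fs("calib.xml", FileStorage::READ);  // file name is my assumption
	if (!fs.isOpened()) { std::cerr << "cannot open calib.xml" << std::endl; return 1; }
	Mat M_L, D_L, M_R, D_R, om, T;
	fs["Intrinsic_Camera_L"]           >> M_L;  // 3x3 left camera matrix
	fs["Intrinsic_Distortion_L"]       >> D_L;  // 4x1 distortion vector
	fs["Intrinsic_Camera_R"]           >> M_R;
	fs["Intrinsic_Distortion_R"]       >> D_R;
	fs["Extrinsic_Rotation_vector"]    >> om;   // 3x1 rotation vector
	fs["Extrinsic_Translation_vector"] >> T;    // 3x1 translation vector (in meters here)
	fs.release();
	std::cout << "fx_L = " << M_L.at<double>(0, 0) << std::endl;
	return 0;
}

FileStorage can also write the file in the first place (fs << "Intrinsic_Camera_L" << M_L;), which beats hand-editing XML.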

---------------------------------------------------------divider------------------------------------------------------

    3. Camera distortion rectification

    I don't fully understand the theory, but this is how it's used: stereoRectify computes the rotations (R1, R2) and projections (P1, P2) that make the two images row-aligned, initUndistortRectifyMap turns those into per-pixel lookup maps, and remap applies the maps to each incoming frame.

	Mat cameraMatrix_Left = Mat::eye(3, 3, CV_64F);  // left intrinsics from the Matlab calibration
	cameraMatrix_Left.at<double>(0, 0) = 2.3892563e+002;
	cameraMatrix_Left.at<double>(0, 2) = 1.5892866e+002;
	cameraMatrix_Left.at<double>(1, 1) = 2.3865967e+002;
	cameraMatrix_Left.at<double>(1, 2) = 1.3122136e+002;

	Mat distCoeffs_Left = Mat::zeros(5, 1, CV_64F);  // k1 k2 p1 p2 (k3 left at 0)
	distCoeffs_Left.at<double>(0, 0) = -0.43373;
	distCoeffs_Left.at<double>(1, 0) = 0.16967;
	distCoeffs_Left.at<double>(2, 0) = 0.00095;
	distCoeffs_Left.at<double>(3, 0) = -0.00027;
	distCoeffs_Left.at<double>(4, 0) = 0;

	Mat cameraMatrix_Right = Mat::eye(3, 3, CV_64F);
	cameraMatrix_Right.at<double>(0, 0) = 2.3892563e+002;
	cameraMatrix_Right.at<double>(0, 2) = 1.6092744e+002;
	cameraMatrix_Right.at<double>(1, 1) = 2.3865967e+002;
	cameraMatrix_Right.at<double>(1, 2) = 1.2619615e+002;

	Mat distCoeffs_Right = Mat::zeros(5, 1, CV_64F);
	distCoeffs_Right.at<double>(0, 0) = -0.43376;
	distCoeffs_Right.at<double>(1, 0) = 0.16801;
	distCoeffs_Right.at<double>(2, 0) = 0.00007;
	distCoeffs_Right.at<double>(3, 0) = -0.00042;
	distCoeffs_Right.at<double>(4, 0) = 0;

	Mat R = Mat::zeros(3, 1, CV_64F);  // rotation vector (om)
	Mat T = Mat::zeros(3, 1, CV_64F);  // translation; note it is in mm here, the XML above is in m

	R.at<double>(0, 0) = 0.00870;
	R.at<double>(1, 0) = -0.00792;
	R.at<double>(2, 0) = 0.00152;

	T.at<double>(0, 0) = -61.84;
	T.at<double>(1, 0) = -0.00025;
	T.at<double>(2, 0) = 0.0008;

	camera0 >> frame0;
	camera1 >> frame1;
	Size imageSize;
	imageSize = frame0.size();

	Rect roi3, roi2;
	Mat Q, R1, P1, R2, P2, BWL, BWR;
	stereoRectify(cameraMatrix_Left, distCoeffs_Left, cameraMatrix_Right, distCoeffs_Right, imageSize, R, T, R1, R2, P1, P2, Q, CALIB_ZERO_DISPARITY, -1, imageSize, &roi2, &roi3);

	initUndistortRectifyMap(cameraMatrix_Left, distCoeffs_Left, R1, P1, imageSize, CV_16SC2, map1, map2);

	initUndistortRectifyMap(cameraMatrix_Right, distCoeffs_Right, R2, P2, imageSize, CV_16SC2, map3, map4);

	while (true)
	{
		camera0 >> frame0;
		camera1 >> frame1;
		// careful: frame1 goes through the left-camera maps and frame0 through the
		// right-camera maps -- make sure this matches your physical camera order
		remap(frame1, frame1_ud, map1, map2, INTER_LINEAR);
		remap(frame0, frame0_ud, map3, map4, INTER_LINEAR);

		imshow("R", frame1_ud);
		imshow("L", frame0_ud);
		if (waitKey(30) == 27) break;  // imshow only refreshes through waitKey; Esc quits
	}

I lifted this from someone's code and tweaked it slightly. I didn't read the XML back in here; I just defined the parameters in the source... either way works, I guess...
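    One sanity check worth doing here (my addition, not in the original post): stack the rectified pair side by side and rule a few horizontal lines across it. If rectification worked, the same feature sits on the same row in both halves:

		// assumes frame0_ud / frame1_ud are the rectified frames from the loop above
		Mat pair;
		hconcat(frame0_ud, frame1_ud, pair);   // left | right in one image
		for (int y = 0; y < pair.rows; y += 20)
			line(pair, Point(0, y), Point(pair.cols, y), Scalar(0, 255, 0), 1);
		imshow("epipolar check", pair);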

Some result images:


[four result images]

---------------------------------------------------------divider------------------------------------------------------

Target detection + localization:

To get the target's 3D position you need its location in both images; to get that, you need to detect it. Within OPENCV there are roughly these options:

1. Hough transform (finding circles and the like): simple and quick, mediocre results.

2. SIFT: results good to the point of absurdity, speed slow to the point of absurdity, not suited to a real-time system. (2018-04-11: I saw a report of ~20 ms per image; possibly still a bit slow, but already quite good. I should try it some day.)

3. Cascade classifiers: fairly fast, but training one is a bit of a struggle.

    I used Hough circle detection here; training a cascade classifier is covered at the end. Honestly the results are really not great...


		cvtColor(frame1_ud, frame1_ud, CV_BGR2GRAY);
		cvtColor(frame0_ud, frame0_ud, CV_BGR2GRAY);

		// dp=1.5, minDist=10, Canny threshold 200, accumulator threshold 100, any radius
		HoughCircles(frame0_ud, circles0, CV_HOUGH_GRADIENT, 1.5, 10, 200, 100, 0, 0);
		HoughCircles(frame1_ud, circles1, CV_HOUGH_GRADIENT, 1.5, 10, 200, 100, 0, 0);

		for (size_t i = 0; i < circles0.size(); i++)
		{
			center_l.x = cvRound(circles0[i][0]);
			center_l.y = cvRound(circles0[i][1]);
			int radius = cvRound(circles0[i][2]);
			//draw the circle center
			circle(frame0_ud, center_l, 3, Scalar(0, 255, 0), -1, 8, 0);
			//draw the circle outline
			circle(frame0_ud, center_l, radius, Scalar(155, 50, 255), 3, 8, 0);
			std::cout << center_l.x << "," << center_l.y << endl;
		}
		for (size_t i = 0; i < circles1.size(); i++)
		{
			center_r.x = cvRound(circles1[i][0]);
			center_r.y = cvRound(circles1[i][1]);
			int radius = cvRound(circles1[i][2]);
			//draw the circle center
			circle(frame1_ud, center_r, 3, Scalar(0, 255, 0), -1, 8, 0);
			//draw the circle outline
			circle(frame1_ud, center_r, radius, Scalar(155, 50, 255), 3, 8, 0);
			std::cout << center_r.x << "," << center_r.y << endl;
		}
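    A weakness of the two loops above: center_l and center_r just keep whatever circle was detected last on each side. Because the images are rectified, a true left/right match must sit on (nearly) the same row, which allows a cheap pairing step. A sketch of that idea (my addition; the 5-pixel tolerance is an assumption to tune):

		// pick the left/right circle pair whose rows agree best; rectified epipolar
		// geometry says a real match has (almost) equal y in both images
		bool matched = false;
		int bestDy = 5;  // tolerance in pixels, tune for your setup
		for (size_t a = 0; a < circles0.size(); a++)
			for (size_t b = 0; b < circles1.size(); b++)
			{
				int dy = abs(cvRound(circles0[a][1]) - cvRound(circles1[b][1]));
				if (dy <= bestDy)
				{
					bestDy = dy;
					center_l = Point(cvRound(circles0[a][0]), cvRound(circles0[a][1]));
					center_r = Point(cvRound(circles1[b][0]), cvRound(circles1[b][1]));
					matched = true;
				}
			}
		// only run the triangulation below when matched == true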

    The localization itself is just junior-high math. I read the experts' explanations, then redid the derivation myself and used my own formula:

int disparity = center_l.x - center_r.x;
		//w = B/d : baseline over disparity, i.e. mm of world per pixel at this depth
		//(watch out: disparity == 0 would divide by zero)
		double w = -T.at<double>(0, 0) / disparity;

		int m_point_3d_strcut_camZ = cameraMatrix_Left.at<double>(0, 0) * w;   // Z = fx * B/d
		int m_point_3d_strcut_camX = (center_l.x - 160) * w;   // 160, 120 = center of the 320x240 image
		int m_point_3d_strcut_camY = (120 - center_l.y) * w;
		std::cout << "at " << m_point_3d_strcut_camX << "," << m_point_3d_strcut_camY << "," << m_point_3d_strcut_camZ << endl;
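    To make the formula concrete with made-up numbers: with fx ≈ 238.9 px and baseline B = 61.84 mm, a disparity of d = 30 px gives w = 61.84 / 30 ≈ 2.06 mm per pixel, so Z = 238.9 × 2.06 ≈ 492 mm, i.e. the target is about half a meter out. (Heads-up: in the full listing below, Z is additionally divided by 1.5; that extra scale factor is not in the snippet above.)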

    As an aside: I first tried the BM and SGBM stereo-matching algorithms and the results were very poor, I don't know why, which is why I fell back on detecting the target to fix its position.

[image: console output, with the object's XYZ after "at"]

This is the output; the numbers after "at" are the object's XYZ coordinates. X and Y are hard to measure, so I only tested the accuracy of Z (the depth), and honestly it's decent!

Below is my full detection code. I tried quite a few approaches at the time and couldn't bring myself to delete them, so the listing is long but only a small part of it is live. If it helps anyone, so much the better.

#include <opencv2/opencv.hpp>
#include "opencv2/calib3d/calib3d.hpp"  
#include "opencv2/imgproc/imgproc.hpp"  
#include "opencv2/highgui/highgui.hpp"  

#define wi 320
#define he 240
using namespace std;
using namespace cv;
stringstream ss1, ss2;
string str1, str2;
int  i = 0;
CvMat *m_IntrinsicMat_L;  //left camera - intrinsic matrix
CvMat *m_DistortionMat_L;  //left camera - distortion matrix
CvMat *m_IntrinsicMat_R;
CvMat *m_DistortionMat_R;
CvMat *m_TranslateMat;  //translation matrix
double m_F_L;  //left camera focal length
double m_T;  //baseline length
double m_Cx_L;
Rect leftROI, rightROI;
Mat  disparity,xyz;
Mat imgDisparity8U, imgDisparity16S;
int flag = true;
Point center_l, center_r;
void detectAndDraw_l(Mat& img, CascadeClassifier& cascade,
	CascadeClassifier& nestedCascade,
	double scale, bool tryflip);
void detectAndDraw_r(Mat& img, CascadeClassifier& cascade,
	CascadeClassifier& nestedCascade,
	double scale, bool tryflip);
//void onMouse(int event, int x, int y, int flags, void* param);
int main()
{
	//initialize and allocate memory to load the video stream from camera 
	VideoCapture camera0(1);
	camera0.set(CV_CAP_PROP_FRAME_WIDTH, wi);
	camera0.set(CV_CAP_PROP_FRAME_HEIGHT, he);
	VideoCapture camera1(2);
	camera1.set(CV_CAP_PROP_FRAME_WIDTH, wi);
	camera1.set(CV_CAP_PROP_FRAME_HEIGHT, he);
	Mat frame0, frame1, frame0_ud, frame1_ud, map1, map2, map3, map4;
	if (!camera0.isOpened()) return 1;
	if (!camera1.isOpened()) return 1;
	CascadeClassifier cascade, nestedCascade;
	bool stop = false;
	//trained cascade files, placed in the same directory as the executable
	cascade.load("haarcascade_frontalface_alt.xml");
	nestedCascade.load("haarcascade_eye.xml");
	//CvFileStorage* fs = cvOpenFileStorage("/home/calib.xml", 0, CV_STORAGE_READ);
	//m_IntrinsicMat_L = (CvMat *)cvReadByName(fs, 0, "Intrinsic_Camera_L");  //camera intrinsic matrix
	//m_DistortionMat_L = (CvMat *)cvReadByName(fs, 0, "Intrinsic_Distortion_L");  //distortion matrix
	//m_IntrinsicMat_R = (CvMat *)cvReadByName(fs, 0, "Intrinsic_Camera_R");
	//m_DistortionMat_R = (CvMat *)cvReadByName(fs, 0, "Intrinsic_Distortion_R");
	//m_TranslateMat = (CvMat *)cvReadByName(fs, 0, "Extrinsic_Translation_vector");
	//m_T = CV_MAT_ELEM(*m_TranslateMat, double, 0, 0) * -1;  //baseline
	//m_F_L = CV_MAT_ELEM(*m_IntrinsicMat_L, double, 0, 0);  //left camera focal length
	//m_Cx_L = CV_MAT_ELEM(*m_IntrinsicMat_L, double, 0, 2);
	Mat cameraMatrix_Left = Mat::eye(3, 3, CV_64F);
	cameraMatrix_Left.at<double>(0, 0) = 2.3892563e+002;
	cameraMatrix_Left.at<double>(0, 2) = 1.5892866e+002;
	cameraMatrix_Left.at<double>(1, 1) = 2.3865967e+002;
	cameraMatrix_Left.at<double>(1, 2) = 1.3122136e+002;

	Mat distCoeffs_Left = Mat::zeros(5, 1, CV_64F);
	distCoeffs_Left.at<double>(0, 0) = -0.43373;
	distCoeffs_Left.at<double>(1, 0) = 0.16967;
	distCoeffs_Left.at<double>(2, 0) = 0.00095;
	distCoeffs_Left.at<double>(3, 0) = -0.00027;
	distCoeffs_Left.at<double>(4, 0) = 0;

	Mat cameraMatrix_Right = Mat::eye(3, 3, CV_64F);
	cameraMatrix_Right.at<double>(0, 0) = 2.3892563e+002;
	cameraMatrix_Right.at<double>(0, 2) = 1.6092744e+002;
	cameraMatrix_Right.at<double>(1, 1) = 2.3865967e+002;
	cameraMatrix_Right.at<double>(1, 2) = 1.2619615e+002;

	Mat distCoeffs_Right = Mat::zeros(5, 1, CV_64F);
	distCoeffs_Right.at<double>(0, 0) = -0.43376;
	distCoeffs_Right.at<double>(1, 0) = 0.16801;
	distCoeffs_Right.at<double>(2, 0) = 0.00007;
	distCoeffs_Right.at<double>(3, 0) = -0.00042;
	distCoeffs_Right.at<double>(4, 0) = 0;

	Mat R = Mat::zeros(3, 1, CV_64F);
	Mat T = Mat::zeros(3, 1, CV_64F);

	R.at<double>(0, 0) = 0.00870;
	R.at<double>(1, 0) = -0.00792;
	R.at<double>(2, 0) = 0.00152;

	T.at<double>(0, 0) = -61.84;
	T.at<double>(1, 0) = -0.00025;
	T.at<double>(2, 0) = 0.0008;


	int ndisparities = 16;   /**< Range of disparity */
	int SADWindowSize = 9; /**< Size of the block window. Must be odd */
	int slider_pos=75;
	Mat alineImgL;
	vector<Vec3f> circles0;
	vector<Vec3f> circles1;

	camera0 >> frame0;
	camera1 >> frame1;
	Size imageSize;
	imageSize = frame0.size();

	Rect roi3, roi2;
	Mat Q, R1, P1, R2, P2, BWL, BWR;
	stereoRectify(cameraMatrix_Left, distCoeffs_Left, cameraMatrix_Right, distCoeffs_Right, imageSize, R, T, R1, R2, P1, P2, Q, CALIB_ZERO_DISPARITY, -1, imageSize, &roi2, &roi3);

	initUndistortRectifyMap(cameraMatrix_Left, distCoeffs_Left, R1, P1, imageSize, CV_16SC2, map1, map2);

	initUndistortRectifyMap(cameraMatrix_Right, distCoeffs_Right, R2, P2, imageSize, CV_16SC2, map3, map4);


	//      std::cout << frame1.rows() << std::endl;
	//wait for 40 milliseconds
	while (true)
	{

		camera0 >> frame0;
		camera1 >> frame1;
		remap(frame1, frame1_ud, map1, map2, INTER_LINEAR);
		remap(frame0, frame0_ud, map3, map4, INTER_LINEAR);

		cvtColor(frame1_ud, frame1_ud, CV_BGR2GRAY);
		cvtColor(frame0_ud, frame0_ud, CV_BGR2GRAY);

		//Mat roi1 = frame1_ud;
		//Mat roi0 = frame0_ud;
		
		//threshold(frame0_ud, BWL ,slider_pos , 255 , THRESH_BINARY);
		//threshold(frame1_ud, BWR ,slider_pos , 255 , THRESH_BINARY);

		//Mat roi1(frame1_ud, Rect(30, 30, 260, 180));
		//Mat roi0(frame0_ud, Rect(30, 30, 260, 180));

		//GaussianBlur(roi0 ,roi0, Size(7, 7), 2, 2);
		//GaussianBlur(roi1 ,roi1, Size(7, 7), 2, 2);

		HoughCircles(frame0_ud, circles0, CV_HOUGH_GRADIENT, 1.5, 10, 200, 100, 0, 0);
		HoughCircles(frame1_ud, circles1, CV_HOUGH_GRADIENT, 1.5, 10, 200, 100, 0, 0);

		for (size_t i = 0; i < circles0.size(); i++)
		{
			center_l.x = cvRound(circles0[i][0]);
			center_l.y= cvRound(circles0[i][1]);
			int radius = cvRound(circles0[i][2]);
 			//draw the circle center  
			circle(frame0_ud, center_l, 3, Scalar(0, 255, 0), -1, 8, 0);
			//draw the circle outline  
			circle(frame0_ud, center_l, radius, Scalar(155, 50, 255), 3, 8, 0);
			std::cout << center_l.x <<","<< center_l.y << endl;
		}
		for (size_t i = 0; i < circles1.size(); i++)
		{
			center_r.x = cvRound(circles1[i][0]);
			center_r.y = cvRound(circles1[i][1]);
			int radius = cvRound(circles1[i][2]);
			//draw the circle center  
			circle(frame1_ud, center_r, 3, Scalar(0, 255, 0), -1, 8, 0);
			//draw the circle outline  
			circle(frame1_ud, center_r, radius, Scalar(155, 50, 255), 3, 8, 0);
			std::cout << center_r.x << "," << center_r.y << endl;
		}



		//detectAndDraw_l(roi0, cascade, nestedCascade, 1, 0);
		//detectAndDraw_r(roi1, cascade, nestedCascade, 1, 0);

		//———————————————————————— BM algorithm below ————————————————————————————
		//imgDisparity16S = Mat(roi1.rows, roi1.cols, CV_16S);
		//imgDisparity8U = Mat(roi1.rows, roi1.cols, CV_8UC1);


		//Ptr<StereoBM> bm = StereoBM::create(64, 21);
		//bm->setPreFilterType(CV_STEREO_BM_XSOBEL);
		//bm->setPreFilterSize(9);
		//bm->setPreFilterCap(20);
		//bm->setBlockSize(7);//1,15 2,21
		//bm->setMinDisparity(-16);//1,0 
		//bm->setNumDisparities(64);//2,64
		//bm->setTextureThreshold(10);
		//bm->setUniquenessRatio(20);//2,8
		//bm->setSpeckleWindowSize(100);
		//bm->setSpeckleRange(32);
		//bm->setROI1(roi2);
		//bm->setROI2(roi3);
		//-- 3. Calculate the disparity image
		//bm->compute(roi1, roi0, imgDisparity16S);
		//bm->compute(roi1, roi0, imgDisparity16S);

		//-- Check its extreme values
		//double minVal; double maxVal;

		//minMaxLoc(imgDisparity16S, &minVal, &maxVal);

		//printf("X: %d Y: %d \n", center.x, center.y);

		//-- 4. Display it as a CV_8UC1 image
		//imgDisparity16S.convertTo(imgDisparity8U, CV_8UC1, 255 / (maxVal - minVal));
		//reprojectImageTo3D(imgDisparity16S, xyz, Q, true);
		//xyz = xyz * 16;
		//cout << "in world coordinate is: " << xyz.at<Vec3f>(center) << endl;
		//namedWindow("Disparity", 0);
		//setMouseCallback("Disparity", onMouse, reinterpret_cast<void*> (&alineImgL));  // onMouse is commented out below
		//imshow("Disparity", imgDisparity8U);
		//——————————————————————————————————————————————————

		//———————————————————————— SGBM algorithm below ————————————————————————————
		//Mat imgDisparity16S = Mat(roi1.rows, roi1.cols, CV_16S);
		//Mat imgDisparity8U = Mat(roi1.rows, roi1.cols, CV_8UC1);

		//Ptr<StereoSGBM> sgbm = StereoSGBM::create(0,16,3,0,0,0,0,0,0,0,StereoSGBM::MODE_SGBM);		//int 	minDisparity = 0,
		//																							//	int 	numDisparities = 16,
		//																							//	int 	blockSize = 3,
		//																							//	int 	P1 = 0,
		//																							//	int 	P2 = 0,
		//																							//	int 	disp12MaxDiff = 0,
		//																							//	int 	preFilterCap = 0,
		//																							//	int 	uniquenessRatio = 0,
		//																							//	int 	speckleWindowSize = 0,
		//																							//	int 	speckleRange = 0,
		//																							//	int 	mode = StereoSGBM::MODE_SGBM
		//sgbm->setP1(8 * SADWindowSize*SADWindowSize);
		//sgbm->setP2(32 * SADWindowSize*SADWindowSize);
		//
		//sgbm->setUniquenessRatio ( 10);
		//sgbm->setSpeckleWindowSize ( 100);
		//sgbm->setSpeckleRange(32);
		//sgbm->setDisp12MaxDiff( 1);
		//sgbm->compute(roi1, roi0, imgDisparity16S);
		//double minVal; double maxVal;

		//minMaxLoc(imgDisparity16S, &minVal, &maxVal);

		//printf("Min disp: %f Max value: %f \n", minVal, maxVal);

		//cv::normalize(imgDisparity16S, imgDisparity8U, 0, 256, cv::NORM_MINMAX, CV_8U);

		//-- 4. Display it as a CV_8UC1 image
		//namedWindow("Disparity", 0);
		//setMouseCallback("Disparity", onMouse, reinterpret_cast<void*> (&alineImgL));
		//imshow("Disparity", imgDisparity8U);


		//——————————————————————————————————————————————————

		//———————————————————————— direct computation without stereo matching below ————————————————————————————
		int disparity = center_l.x - center_r.x;
		//baseline over disparity: w = B/d

		double w = -T.at<double>(0, 0) / disparity;

		int m_point_3d_strcut_camZ = cameraMatrix_Left.at<double>(0, 0) * w/1.5;   // note the extra /1.5 scale, not present in the snippet earlier
		int m_point_3d_strcut_camX = (center_l.x - 160) * w ;
		int m_point_3d_strcut_camY = (120 - center_l.y ) * w ;
		std::cout << "at " << m_point_3d_strcut_camX << "," << m_point_3d_strcut_camY << "," << m_point_3d_strcut_camZ << endl;


		//——————————————————————————————————————————————————

		imshow("R", frame1_ud);
		imshow("L", frame0_ud);
		//imshow("R_Undistort", roi1);
		//imshow("L_Undistort", roi0);


		/*imshow("disp", disparity);*/
		//grab and retrieve each frames of the video sequentially 
		if (waitKey(10) == 's')
		{
			ss1 << i << "R.jpg" << endl;
			ss1 >> str1;
			imwrite(str1, frame1);
			ss1 << i << "R_Undistort.jpg" << endl;
			ss1 >> str1;
			imwrite(str1, frame1_ud);
			ss2 << i << "L.jpg" << endl;
			ss2 >> str2;
			imwrite(str2, frame0);
			ss2 << i << "L_Undistort.jpg" << endl;
			ss2 >> str2;
			imwrite(str2, frame0_ud);
			i++;
		}

	}

	return 0;
}

//void onMouse(int event, int x, int y, int flags, void * param)
//{
//	int valuess;
//	Mat *im = reinterpret_cast<Mat*>(param);
//	switch (event)
//	{
//	case CV_EVENT_LBUTTONDOWN:     //left button pressed: print the coordinates and the gray value 
//	{
//		valuess = imgDisparity8U.at<uchar>(x, y);
//		std::cout << "at(" << x << "," << y << ")value is: SGM: " << valuess
//			<< endl;
//	}
//
//
//	flag = false;
//	break;
//	}
//}
void detectAndDraw_l(Mat& img, CascadeClassifier& cascade,
	CascadeClassifier& nestedCascade,
	double scale, bool tryflip)
{
	int i = 0;
	double t = 0;
	//vector containers for the detected faces
	vector<Rect> faces, faces2;
	//a few colors to mark different faces
	const static Scalar colors[] = {
		CV_RGB(0,0,255),
		CV_RGB(0,128,255),
		CV_RGB(0,255,255),
		CV_RGB(0,255,0),
		CV_RGB(255,128,0),
		CV_RGB(255,255,0),
		CV_RGB(255,0,0),
		CV_RGB(255,0,255) };
	//build a shrunken image to speed up detection
	//cvRound(double value): rounds a double to the nearest integer
	Mat gray, smallImg(cvRound(img.rows / scale), cvRound(img.cols / scale), CV_8UC1);
	//convert to grayscale; Haar features work on gray images
	//cvtColor(img, gray, CV_BGR2GRAY);
	gray = img;
	//resize with bilinear interpolation
	resize(gray, smallImg, smallImg.size(), 0, 0, INTER_LINEAR);
	//imshow("shrunken", smallImg);
	//histogram-equalize the resized image
	equalizeHist(smallImg, smallImg);
	//imshow("equalized", smallImg);
	//take tick counts before and after to measure the detection time
	t = (double)cvGetTickCount();
	//detect faces
	//detectMultiScale: smallImg is the input, faces receives the detected rectangles,
	//1.1 is the scale step between passes, 2 is minNeighbors (a candidate needs that
	//many overlapping hits to count as real, since faces are found at nearby positions
	//and window sizes), CV_HAAR_SCALE_IMAGE scales the image rather than the
	//classifier, and Size(30, 30) is the minimum object size
	cascade.detectMultiScale(smallImg, faces,
		1.1, 2, 0
		//|CV_HAAR_FIND_BIGGEST_OBJECT
		//|CV_HAAR_DO_ROUGH_SEARCH
		| CV_HAAR_SCALE_IMAGE
		, Size(30, 30));
	//if enabled, flip the image and detect again
	if (tryflip)
	{
		flip(smallImg, smallImg, 1);
		//imshow("flipped", smallImg);
		cascade.detectMultiScale(smallImg, faces2,
			1.1, 2, 0
			//|CV_HAAR_FIND_BIGGEST_OBJECT
			//|CV_HAAR_DO_ROUGH_SEARCH
			| CV_HAAR_SCALE_IMAGE
			, Size(30, 30));
		for (vector<Rect>::const_iterator r = faces2.begin(); r != faces2.end(); r++)
		{
			faces.push_back(Rect(smallImg.cols - r->x - r->width, r->y, r->width, r->height));
		}
	}
	t = (double)cvGetTickCount() - t;
	//   qDebug( "detection time = %g ms\n", t/((double)cvGetTickFrequency()*1000.) );
	for (vector<Rect>::const_iterator r = faces.begin(); r != faces.end(); r++, i++)
	{
		Mat smallImgROI, img_crop;
		vector<Rect> nestedObjects;
		Point pt1, pt2;
		Scalar color = colors[i % 8];
		int radius;

		double aspect_ratio = (double)r->width / r->height;
		if (0.75 < aspect_ratio && aspect_ratio < 1.3)
		{
			//faces were detected on the shrunken image, so scale the coordinates back up
			pt1.x = cvRound(r->x*scale);
			pt1.y = cvRound(r->y*scale);
			pt2.x = cvRound(r->x*scale + r->width*scale);
			pt2.y = cvRound(r->y*scale + r->height*scale);
			center_l.x = cvRound(r->x*scale + r->width*scale/2);
			center_l.y = cvRound(r->y*scale + r->height*scale/2);
			//radius = cvRound((r->width + r->height)*0.25*scale);
			//img_crop = img(Range(pt1.y, pt2.y), Range(pt1.x, pt2.x));
			//circle(img, center, radius, color, 3, 8, 0);
			cv::rectangle(img, pt1, pt2, Scalar(255, 0, 0), 2);

		}
		else
			rectangle(img, cvPoint(cvRound(r->x*scale), cvRound(r->y*scale)),
				cvPoint(cvRound((r->x + r->width - 1)*scale), cvRound((r->y + r->height - 1)*scale)),
				color, 3, 8, 0);


		//if (nestedCascade.empty())
		//	continue;
		//smallImgROI = smallImg(*r);
		//detect eyes with the same approach
		//nestedCascade.detectMultiScale(smallImgROI, nestedObjects,
		//	1.1, 2, 0
		//	//|CV_HAAR_FIND_BIGGEST_OBJECT
		//	//|CV_HAAR_DO_ROUGH_SEARCH
		//	//|CV_HAAR_DO_CANNY_PRUNING
		//	| CV_HAAR_SCALE_IMAGE
		//	, Size(30, 30));
		//for (vector<Rect>::const_iterator nr = nestedObjects.begin(); nr != nestedObjects.end(); nr++)
		//{
		//	center.x = cvRound((r->x + nr->x + nr->width*0.5)*scale);
		//	center.y = cvRound((r->y + nr->y + nr->height*0.5)*scale);
		//	radius = cvRound((nr->width + nr->height)*0.25*scale);
		//	circle(img, center, radius, color, 3, 8, 0);
		//}
	}
	imshow("detection result", img);
}
void detectAndDraw_r(Mat& img, CascadeClassifier& cascade,
	CascadeClassifier& nestedCascade,
	double scale, bool tryflip)
{
	int i = 0;
	double t = 0;
	//vector containers for the detected faces
	vector<Rect> faces, faces2;
	//a few colors to mark different faces
	const static Scalar colors[] = {
		CV_RGB(0,0,255),
		CV_RGB(0,128,255),
		CV_RGB(0,255,255),
		CV_RGB(0,255,0),
		CV_RGB(255,128,0),
		CV_RGB(255,255,0),
		CV_RGB(255,0,0),
		CV_RGB(255,0,255) };
	//build a shrunken image to speed up detection
	//cvRound(double value): rounds a double to the nearest integer
	Mat gray, smallImg(cvRound(img.rows / scale), cvRound(img.cols / scale), CV_8UC1);
	//convert to grayscale; Haar features work on gray images
	//cvtColor(img, gray, CV_BGR2GRAY);
	gray = img;
	//imshow("gray", gray);
	//resize with bilinear interpolation
	resize(gray, smallImg, smallImg.size(), 0, 0, INTER_LINEAR);
	//imshow("shrunken", smallImg);
	//histogram-equalize the resized image
	equalizeHist(smallImg, smallImg);
	//imshow("equalized", smallImg);
	//take tick counts before and after to measure the detection time
	t = (double)cvGetTickCount();
	//detect faces
	//detectMultiScale: smallImg is the input, faces receives the detected rectangles,
	//1.1 is the scale step between passes, 2 is minNeighbors (a candidate needs that
	//many overlapping hits to count as real), CV_HAAR_SCALE_IMAGE scales the image
	//rather than the classifier, and Size(30, 30) is the minimum object size
	cascade.detectMultiScale(smallImg, faces,
		1.1, 2, 0
		//|CV_HAAR_FIND_BIGGEST_OBJECT
		//|CV_HAAR_DO_ROUGH_SEARCH
		| CV_HAAR_SCALE_IMAGE
		, Size(30, 30));
	//if enabled, flip the image and detect again
	if (tryflip)
	{
		flip(smallImg, smallImg, 1);
		//imshow("flipped", smallImg);
		cascade.detectMultiScale(smallImg, faces2,
			1.1, 2, 0
			//|CV_HAAR_FIND_BIGGEST_OBJECT
			//|CV_HAAR_DO_ROUGH_SEARCH
			| CV_HAAR_SCALE_IMAGE
			, Size(30, 30));
		for (vector<Rect>::const_iterator r = faces2.begin(); r != faces2.end(); r++)
		{
			faces.push_back(Rect(smallImg.cols - r->x - r->width, r->y, r->width, r->height));
		}
	}
	t = (double)cvGetTickCount() - t;
	//   qDebug( "detection time = %g ms\n", t/((double)cvGetTickFrequency()*1000.) );
	for (vector<Rect>::const_iterator r = faces.begin(); r != faces.end(); r++, i++)
	{
		Mat smallImgROI, img_crop;
		vector<Rect> nestedObjects;
		Point pt1, pt2;
		Scalar color = colors[i % 8];
		int radius;

		double aspect_ratio = (double)r->width / r->height;
		if (0.75 < aspect_ratio && aspect_ratio < 1.3)
		{
			//faces were detected on the shrunken image, so scale the coordinates back up
			pt1.x = cvRound(r->x*scale);
			pt1.y = cvRound(r->y*scale);
			pt2.x = cvRound(r->x*scale + r->width*scale);
			pt2.y = cvRound(r->y*scale + r->height*scale);
			center_r.x = cvRound(r->x*scale + r->width*scale / 2);
			center_r.y = cvRound(r->y*scale + r->height*scale / 2);
			//radius = cvRound((r->width + r->height)*0.25*scale);
			//img_crop = img(Range(pt1.y, pt2.y), Range(pt1.x, pt2.x));
			//circle(img, center, radius, color, 3, 8, 0);
			cv::rectangle(img, pt1, pt2, Scalar(255, 0, 0), 2);

		}
		else
			rectangle(img, cvPoint(cvRound(r->x*scale), cvRound(r->y*scale)),
				cvPoint(cvRound((r->x + r->width - 1)*scale), cvRound((r->y + r->height - 1)*scale)),
				color, 3, 8, 0);


		//if (nestedCascade.empty())
		//	continue;
		//smallImgROI = smallImg(*r);
		//detect eyes with the same approach
		//nestedCascade.detectMultiScale(smallImgROI, nestedObjects,
		//	1.1, 2, 0
		//	//|CV_HAAR_FIND_BIGGEST_OBJECT
		//	//|CV_HAAR_DO_ROUGH_SEARCH
		//	//|CV_HAAR_DO_CANNY_PRUNING
		//	| CV_HAAR_SCALE_IMAGE
		//	, Size(30, 30));
		//for (vector<Rect>::const_iterator nr = nestedObjects.begin(); nr != nestedObjects.end(); nr++)
		//{
		//	center.x = cvRound((r->x + nr->x + nr->width*0.5)*scale);
		//	center.y = cvRound((r->y + nr->y + nr->height*0.5)*scale);
		//	radius = cvRound((nr->width + nr->height)*0.25*scale);
		//	circle(img, center, radius, color, 3, 8, 0);
		//}
	}
	imshow("detection result", img);
}



  The face-detection parts are there to test how well cascade classifiers work. So, do they? The face classifiers that ship with OPENCV work remarkably well; the one I trained myself works remarkably badly, probably in part because I had too few positive samples.
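    Incidentally, detectAndDraw_l and detectAndDraw_r in the listing above differ only in which global (center_l vs center_r) they write into. A sketch of how they could collapse into one function (my addition; it keeps only the live parts, assumes img is already grayscale as in the loop above, and skips the flip and eye-detection branches):

// one function for both sides: the caller passes in the Point to fill,
// so the per-side globals and the duplicated body go away
static void detectAndDraw(Mat& img, CascadeClassifier& cascade, double scale, Point& center)
{
	Mat smallImg;
	resize(img, smallImg, Size(), 1.0 / scale, 1.0 / scale, INTER_LINEAR);
	equalizeHist(smallImg, smallImg);
	vector<Rect> faces;
	cascade.detectMultiScale(smallImg, faces, 1.1, 2, CV_HAAR_SCALE_IMAGE, Size(30, 30));
	for (size_t k = 0; k < faces.size(); k++)
	{
		Point pt1(cvRound(faces[k].x * scale), cvRound(faces[k].y * scale));
		Point pt2(cvRound((faces[k].x + faces[k].width) * scale),
		          cvRound((faces[k].y + faces[k].height) * scale));
		center = (pt1 + pt2) / 2;   // last detection wins, same as the original
		rectangle(img, pt1, pt2, Scalar(255, 0, 0), 2);
	}
}
// usage: detectAndDraw(roi0, cascade, 1, center_l); detectAndDraw(roi1, cascade, 1, center_r);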

---------------------------------------------------------divider------------------------------------------------------

For the cascade classifier I mostly followed an expert's article, which is already very detailed. Since I don't know Java, I wrote a few Matlab scripts, below, to stand in for that part.

Batch-rename images:

% run this from inside the folder: dir() returns bare names without the path
Files = dir(fullfile('D:\matlab folder\opencvca_file','*.jpg'));
time=size(Files);
for i=1:time(1)
    oldfilename=Files(i).name;
    newfilename=[num2str(i),'.jpg'];
    system(['rename ' oldfilename ' ' newfilename]);
end

Make grayscale positive samples (read every image in a folder, crop out the positive sample, save it as a grayscale image):

Files = dir(fullfile('D:\matlab folder\cul_area','*.jpg'));
time=size(Files);
for i=1:time(1)
    A=imread(Files(i).name); % again assumes the current folder is the image folder
    I=rgb2gray(A);
    imshow(I);
    [x,y] = ginput(2); % click two corner points; ginput returns their coordinates
    I = imcrop(I,[x(1),y(1),abs(x(1)-x(2)),abs(y(1)-y(2))]);
    I=imresize(I,[40 40],'nearest');
    newfilename=[num2str(i),'.jpg'];
    imwrite(I,newfilename);
end

Generate the positive-sample txt:

% one line per image: <path> <object count> <x y w h>
for i=1:24
    a=['pos/',num2str(i),'.jpg 1 0 0 40 40'];
    fid=fopen('pos.txt','a');
    fprintf(fid,'%s\r',a);
    fclose(fid);
end

Generate the negative-sample txt:

% negative samples are just paths, one per line
for i=1:2483
    a=['neg/',num2str(i),'.jpg'];
    fid=fopen('neg.txt','a');
    fprintf(fid,'%s\r',a);
    fclose(fid);
end
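With pos.txt and neg.txt ready, the training itself runs through OpenCV's command-line tools. Roughly like this (my assumptions: 24 positives at 40x40 as produced by the scripts above; the sample counts and stage count are knobs to tune):

opencv_createsamples -info pos.txt -vec pos.vec -num 24 -w 40 -h 40
opencv_traincascade -data cascade_out -vec pos.vec -bg neg.txt -numPos 20 -numNeg 1000 -numStages 15 -w 40 -h 40

The result lands in cascade_out/cascade.xml and loads with CascadeClassifier::load just like the face models above.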

But the results are mediocre. Detection result below:

[image: cascade detection result]

It only managed to detect one...

And that's about it! Come discuss!!