Three Image Stitching and Blending Methods in OpenCV 2.4.9 [SURF, ORB, stitch]

  • Blend four partial images into one complete picture

[Images: the four input images to be blended]

  • After feature detection and feature matching:

[Images: feature detection and matching results]

  • Final result:

[Image: final stitched result]


The concrete steps for image stitching:


  1. Extract feature points from each image
  2. Match the feature points
  3. Register the images
  4. Copy one image into the corresponding position of the other
  5. Handle the overlapping boundary specially


Feature Point Extraction

Panorama stitching mainly consists of feature point extraction, feature matching, and image blending. The computer vision field has many feature point definitions; SIFT, SURF, Harris corners, and ORB are all well-known feature descriptors. To improve the speed and quality of the stitching, this article uses an improved feature-extraction algorithm, a SURF variant with reliability checking, for feature extraction, and a fast matching method for coarse feature matching.

Feature point, defined: an image always contains some distinctive pixels, and we can regard these points as the features of the image.

1. SURF (Speeded-Up Robust Features)

SURF convolves the image with Gaussians over different scale spaces and then extracts feature points, but it approximates and simplifies several of those steps (replacing the Gaussian derivatives with box filters), which greatly reduces the amount of computation. It is robust and accurate, and its runtime is much improved as well.

  • Generating the integral image

Let L(x, y) denote a pixel of the original image. The integral image at that point equals the sum of all pixels in the rectangle spanned by the origin and that point:

$$I(x, y) = \sum_{0 \le i \le x}\;\sum_{0 \le j \le y} L(i, j)$$

From the formula above, the pixel sum of any rectangular region (see the figure below, "computing the integral image") can be obtained with four lookups:

$$\Sigma = I(A) + I(D) - I(B) - I(C)$$

where A, B, C, D denote the integral-image values at the rectangle's top-left, top-right, bottom-left, and bottom-right corners.

[Figure: computing the integral image]
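To make the integral image concrete, here is a minimal, self-contained OpenCV sketch. It is only an illustration: the file name input.jpg and the rectangle coordinates are made up. It builds the integral image with cv::integral and then sums an arbitrary rectangle with the four lookups from the formula above:

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main()
{
    Mat img = imread("input.jpg", 0); // hypothetical input, loaded as grayscale
    Mat integ;
    integral(img, integ, CV_64F);     // integ has size (rows+1) x (cols+1)

    // Sum of the rectangle with top-left (x0, y0) and bottom-right (x1, y1), inclusive
    int x0 = 10, y0 = 10, x1 = 50, y1 = 40;
    double sum = integ.at<double>(y1 + 1, x1 + 1)  // bottom-right
               - integ.at<double>(y0,     x1 + 1)  // top-right
               - integ.at<double>(y1 + 1, x0)      // bottom-left
               + integ.at<double>(y0,     x0);     // top-left
    cout << "rectangle sum: " << sum << endl;
    return 0;
}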

  • Extracting the feature points

Building on the integral image, SURF locates feature points with the Hessian detector.

(1) Compute the Hessian matrix of the pixel I(x, y) at scale s:

$$H(x, s) = \begin{pmatrix} L_{xx}(x, s) & L_{xy}(x, s) \\ L_{xy}(x, s) & L_{yy}(x, s) \end{pmatrix}$$

where $L_{xx}(x, s)$ is the convolution of the image at point x with the second-order Gaussian derivative $\partial^2 g(s) / \partial x^2$, and $L_{xy}$, $L_{yy}$ are defined analogously.

The Gaussian derivatives above are then discretized and, in SURF, approximated with box filters so that they can be evaluated cheaply from the integral image.

(2) Generating the SURF feature vector

First, a square region with side length 20s is laid out around the feature point and divided into 4×4 subregions; each subregion is sampled at 5×5 points. Haar wavelets then give each subregion's horizontal and vertical responses, and the responses over the 5×5 sample points are accumulated, yielding the vector below:

$$V = \left( \sum d_x,\; \sum d_y,\; \sum |d_x|,\; \sum |d_y| \right)$$

This yields a 4×4×4 = 64-dimensional SURF descriptor. Once this preprocessing is done, feature matching can begin.

// Extract feature points
SurfFeatureDetector Detector(2000);
vector<KeyPoint> keyPoint1, keyPoint2;
Detector.detect(image1, keyPoint1);
Detector.detect(image2, keyPoint2);

// Compute descriptors, preparing for the feature matching below
SurfDescriptorExtractor Descriptor;
Mat imageDesc1, imageDesc2;
Descriptor.compute(image1, keyPoint1, imageDesc1);
Descriptor.compute(image2, keyPoint2, imageDesc2);

FlannBasedMatcher matcher;
vector<vector<DMatch> > matchePoints;
vector<DMatch> GoodMatchePoints;

vector<Mat> train_desc(1, imageDesc1);
matcher.add(train_desc);
matcher.train();

matcher.knnMatch(imageDesc2, matchePoints, 2);
cout << "total match points: " << matchePoints.size() << endl;

// Lowe's ratio test: keep only distinctive matches
for (int i = 0; i < matchePoints.size(); i++)
{
    if (matchePoints[i][0].distance < 0.4 * matchePoints[i][1].distance)
    {
        GoodMatchePoints.push_back(matchePoints[i][0]);
    }
}

Mat first_match;
drawMatches(image02, keyPoint2, image01, keyPoint1, GoodMatchePoints, first_match);
//imshow("first_match ", first_match);
imwrite("H:/opencv2.4/picture/first_match.jpg", first_match);

[Image: SURF feature matching result]

To discard keypoints that have no true correspondence because of occlusion or background clutter, Lowe, the author of SIFT, proposed comparing the nearest-neighbor distance with the second-nearest-neighbor distance: take a SIFT keypoint in one image and find the two keypoints in the other image with the smallest Euclidean distances to it. If the ratio of the nearest distance to the second-nearest distance is below some threshold T, accept the pair as a match. For a false match, the high dimensionality of the feature space means many other false matches lie at similar distances, so its ratio tends to be high. Clearly, lowering the threshold T reduces the number of SIFT matches but makes them more stable, and vice versa.

Lowe recommends a ratio threshold of 0.8, but this author matched a large number of image pairs with arbitrary scale, rotation, and brightness changes and found that a ratio between 0.4 and 0.6 works best: below 0.4 very few matches survive, and above 0.6 many false matches appear. The suggested values are:

ratio = 0.4: for matching that demands high accuracy;

ratio = 0.6: for matching that needs a larger number of match points;

ratio = 0.5: the general case.
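For illustration, the ratio test can be wrapped in a small helper so that the threshold above becomes a single parameter. This is only a sketch: the function name filterByRatio is mine, and it assumes the OpenCV types and using-declarations from the listings in this article:

// Keep a match only when its best distance is clearly smaller than the
// second-best distance (Lowe's ratio test); 'ratio' is the threshold T.
void filterByRatio(const vector<vector<DMatch> >& knnMatches,
                   double ratio,
                   vector<DMatch>& good)
{
    good.clear();
    for (size_t i = 0; i < knnMatches.size(); i++)
    {
        if (knnMatches[i].size() >= 2 &&
            knnMatches[i][0].distance < ratio * knnMatches[i][1].distance)
        {
            good.push_back(knnMatches[i][0]);
        }
    }
}

// e.g. filterByRatio(matchePoints, 0.4, GoodMatchePoints);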

The final blended result:

[Image: final blended result]

The complete code:

#include "highgui/highgui.hpp"    
#include "opencv2/nonfree/nonfree.hpp"
#include "opencv2/legacy/legacy.hpp"
#include <iostream>

using namespace cv;
using namespace std;

void OptimizeSeam(Mat& img1, Mat& trans, Mat& dst);

typedef struct
{
    Point2f left_top;
    Point2f left_bottom;
    Point2f right_top;
    Point2f right_bottom;
} four_corners_t;

four_corners_t corners;

void CalcCorners(const Mat& H, const Mat& src)
{
    double v2[] = { 0, 0, 1 };        // source corner in homogeneous coordinates
    double v1[3];                     // transformed coordinates
    Mat V2 = Mat(3, 1, CV_64FC1, v2); // column vector
    Mat V1 = Mat(3, 1, CV_64FC1, v1); // column vector

    // top-left corner (0, 0, 1)
    V1 = H * V2;
    cout << "V2: " << V2 << endl;
    cout << "V1: " << V1 << endl;
    corners.left_top.x = v1[0] / v1[2];
    corners.left_top.y = v1[1] / v1[2];

    // bottom-left corner (0, src.rows, 1)
    v2[0] = 0;
    v2[1] = src.rows;
    v2[2] = 1;
    V2 = Mat(3, 1, CV_64FC1, v2); // column vector
    V1 = Mat(3, 1, CV_64FC1, v1); // column vector
    V1 = H * V2;
    corners.left_bottom.x = v1[0] / v1[2];
    corners.left_bottom.y = v1[1] / v1[2];

    // top-right corner (src.cols, 0, 1)
    v2[0] = src.cols;
    v2[1] = 0;
    v2[2] = 1;
    V2 = Mat(3, 1, CV_64FC1, v2); // column vector
    V1 = Mat(3, 1, CV_64FC1, v1); // column vector
    V1 = H * V2;
    corners.right_top.x = v1[0] / v1[2];
    corners.right_top.y = v1[1] / v1[2];

    // bottom-right corner (src.cols, src.rows, 1)
    v2[0] = src.cols;
    v2[1] = src.rows;
    v2[2] = 1;
    V2 = Mat(3, 1, CV_64FC1, v2); // column vector
    V1 = Mat(3, 1, CV_64FC1, v1); // column vector
    V1 = H * V2;
    corners.right_bottom.x = v1[0] / v1[2];
    corners.right_bottom.y = v1[1] / v1[2];
}

int main(int argc, char *argv[])
{
    Mat image01 = imread("H:/opencv2.4/picture/7.2.jpg", 1); // right image
    Mat image02 = imread("H:/opencv2.4/picture/7.1.jpg", 1); // left image
    //imshow("right", image01);
    //imshow("left", image02);

    // Convert to grayscale (imread loads BGR, so use CV_BGR2GRAY)
    Mat image1, image2;
    cvtColor(image01, image1, CV_BGR2GRAY);
    cvtColor(image02, image2, CV_BGR2GRAY);

    // Extract feature points
    SurfFeatureDetector Detector(2000);
    vector<KeyPoint> keyPoint1, keyPoint2;
    Detector.detect(image1, keyPoint1);
    Detector.detect(image2, keyPoint2);

    // Compute descriptors, preparing for the feature matching below
    SurfDescriptorExtractor Descriptor;
    Mat imageDesc1, imageDesc2;
    Descriptor.compute(image1, keyPoint1, imageDesc1);
    Descriptor.compute(image2, keyPoint2, imageDesc2);

    FlannBasedMatcher matcher;
    vector<vector<DMatch> > matchePoints;
    vector<DMatch> GoodMatchePoints;

    vector<Mat> train_desc(1, imageDesc1);
    matcher.add(train_desc);
    matcher.train();

    matcher.knnMatch(imageDesc2, matchePoints, 2);
    cout << "total match points: " << matchePoints.size() << endl;

    // Lowe's ratio test: keep only distinctive matches
    for (int i = 0; i < matchePoints.size(); i++)
    {
        if (matchePoints[i][0].distance < 0.4 * matchePoints[i][1].distance)
        {
            GoodMatchePoints.push_back(matchePoints[i][0]);
        }
    }

    Mat first_match;
    drawMatches(image02, keyPoint2, image01, keyPoint1, GoodMatchePoints, first_match);
    //imshow("first_match ", first_match);
    imwrite("H:/opencv2.4/picture/first_match.jpg", first_match);

    vector<Point2f> imagePoints1, imagePoints2;

    for (int i = 0; i < GoodMatchePoints.size(); i++)
    {
        imagePoints2.push_back(keyPoint2[GoodMatchePoints[i].queryIdx].pt);
        imagePoints1.push_back(keyPoint1[GoodMatchePoints[i].trainIdx].pt);
    }

    // Estimate the 3x3 homography mapping image 1 onto image 2
    Mat homo = findHomography(imagePoints1, imagePoints2, CV_RANSAC);
    // getPerspectiveTransform also yields a perspective matrix, but it takes
    // exactly 4 points and gives slightly worse results:
    //Mat homo = getPerspectiveTransform(imagePoints1, imagePoints2);
    cout << "homography matrix:\n" << homo << endl << endl;

    // Compute the four corner coordinates of the registered image
    CalcCorners(homo, image01);
    cout << "left_top:" << corners.left_top << endl;
    cout << "left_bottom:" << corners.left_bottom << endl;
    cout << "right_top:" << corners.right_top << endl;
    cout << "right_bottom:" << corners.right_bottom << endl;

    // Image registration
    Mat imageTransform1, imageTransform2;
    warpPerspective(image01, imageTransform1, homo, Size(MAX(corners.right_top.x, corners.right_bottom.x), image02.rows));
    //warpPerspective(image01, imageTransform2, adjustMat*homo, Size(image02.cols*1.3, image02.rows*1.8));
    //imshow("after perspective transform", imageTransform1);
    //imwrite("H:/opencv2.4/picture/trans1.jpg", imageTransform1);

    // Create the stitched canvas; its size must be computed in advance
    int dst_width = imageTransform1.cols; // the rightmost point defines the canvas width
    int dst_height = image02.rows;

    Mat dst(dst_height, dst_width, CV_8UC3);
    dst.setTo(0);

    imageTransform1.copyTo(dst(Rect(0, 0, imageTransform1.cols, imageTransform1.rows)));
    image02.copyTo(dst(Rect(0, 0, image02.cols, image02.rows)));

    imshow("b_dst", dst);

    OptimizeSeam(image02, imageTransform1, dst);

    imshow("stitched image", dst);
    imwrite("H:/opencv2.4/picture/拼接图片7.jpg", dst);

    waitKey();

    return 0;
}


// Optimize the seam between the two images to make the transition natural
void OptimizeSeam(Mat& img1, Mat& trans, Mat& dst)
{
    int start = MIN(corners.left_top.x, corners.left_bottom.x); // left boundary of the overlap region

    double processWidth = img1.cols - start; // width of the overlap region
    int rows = dst.rows;
    int cols = img1.cols;  // columns in pixels; each pixel spans 3 uchars below
    double alpha = 1;      // weight of img1's pixel
    for (int i = 0; i < rows; i++)
    {
        uchar* p = img1.ptr<uchar>(i); // head of row i
        uchar* t = trans.ptr<uchar>(i);
        uchar* d = dst.ptr<uchar>(i);
        for (int j = start; j < cols; j++)
        {
            // Where trans has no pixel (pure black), copy img1's data unchanged
            if (t[j * 3] == 0 && t[j * 3 + 1] == 0 && t[j * 3 + 2] == 0)
            {
                alpha = 1;
            }
            else
            {
                // img1's weight falls linearly with the distance from the left
                // boundary of the overlap region; in practice this works well
                alpha = (processWidth - (j - start)) / processWidth;
            }

            d[j * 3]     = p[j * 3] * alpha + t[j * 3] * (1 - alpha);
            d[j * 3 + 1] = p[j * 3 + 1] * alpha + t[j * 3 + 1] * (1 - alpha);
            d[j * 3 + 2] = p[j * 3 + 2] * alpha + t[j * 3 + 2] * (1 - alpha);
        }
    }
}

2. ORB (Oriented FAST and Rotated BRIEF)

ORB stands for Oriented FAST and Rotated BRIEF and is an improved version of the BRIEF algorithm. ORB is reported to be about 100 times faster than SIFT and about 10 times faster than SURF, and a common view in computer vision is that ORB's overall performance across benchmarks is the best among feature-extraction algorithms.

Since ORB improves on BRIEF, let us first look at BRIEF's drawbacks.

BRIEF's strength is its speed; its drawbacks are:


  • no rotation invariance
  • sensitivity to noise
  • no scale invariance

ORB was proposed precisely to fix drawbacks 1 and 2 above. Note that ORB does not address scale invariance.
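Because ORB descriptors are binary strings, the FLANN/LSH index used in the listing below is not the only way to match them: a brute-force matcher with the Hamming distance is simpler and, for a few thousand keypoints, usually fast enough. A minimal sketch under the OpenCV 2.4 API (it assumes image1 and image2 are grayscale images prepared as in the listing):

// Detect and describe with ORB
OrbFeatureDetector detector(3000);
vector<KeyPoint> kp1, kp2;
detector.detect(image1, kp1);
detector.detect(image2, kp2);

OrbDescriptorExtractor extractor;
Mat desc1, desc2;
extractor.compute(image1, kp1, desc1);
extractor.compute(image2, kp2, desc2);

// Brute-force matcher with Hamming distance, the natural metric for
// binary descriptors such as ORB/BRIEF
BFMatcher matcher(NORM_HAMMING);
vector<vector<DMatch> > knnMatches;
matcher.knnMatch(desc2, desc1, knnMatches, 2);

// Lowe's ratio test, as in the SURF section
vector<DMatch> good;
for (size_t i = 0; i < knnMatches.size(); i++)
{
    if (knnMatches[i].size() >= 2 &&
        knnMatches[i][0].distance < 0.6 * knnMatches[i][1].distance)
        good.push_back(knnMatches[i][0]);
}

The complete ORB stitching code, using the FLANN/LSH matcher, follows: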

#include "highgui/highgui.hpp"    
#include "opencv2/nonfree/nonfree.hpp"
#include "opencv2/legacy/legacy.hpp"
#include <iostream>

using namespace cv;
using namespace std;

void OptimizeSeam(Mat& img1, Mat& trans, Mat& dst);

typedef struct
{
    Point2f left_top;
    Point2f left_bottom;
    Point2f right_top;
    Point2f right_bottom;
} four_corners_t;

four_corners_t corners;

void CalcCorners(const Mat& H, const Mat& src)
{
    double v2[] = { 0, 0, 1 };        // source corner in homogeneous coordinates
    double v1[3];                     // transformed coordinates
    Mat V2 = Mat(3, 1, CV_64FC1, v2); // column vector
    Mat V1 = Mat(3, 1, CV_64FC1, v1); // column vector

    // top-left corner (0, 0, 1)
    V1 = H * V2;
    cout << "V2: " << V2 << endl;
    cout << "V1: " << V1 << endl;
    corners.left_top.x = v1[0] / v1[2];
    corners.left_top.y = v1[1] / v1[2];

    // bottom-left corner (0, src.rows, 1)
    v2[0] = 0;
    v2[1] = src.rows;
    v2[2] = 1;
    V2 = Mat(3, 1, CV_64FC1, v2); // column vector
    V1 = Mat(3, 1, CV_64FC1, v1); // column vector
    V1 = H * V2;
    corners.left_bottom.x = v1[0] / v1[2];
    corners.left_bottom.y = v1[1] / v1[2];

    // top-right corner (src.cols, 0, 1)
    v2[0] = src.cols;
    v2[1] = 0;
    v2[2] = 1;
    V2 = Mat(3, 1, CV_64FC1, v2); // column vector
    V1 = Mat(3, 1, CV_64FC1, v1); // column vector
    V1 = H * V2;
    corners.right_top.x = v1[0] / v1[2];
    corners.right_top.y = v1[1] / v1[2];

    // bottom-right corner (src.cols, src.rows, 1)
    v2[0] = src.cols;
    v2[1] = src.rows;
    v2[2] = 1;
    V2 = Mat(3, 1, CV_64FC1, v2); // column vector
    V1 = Mat(3, 1, CV_64FC1, v1); // column vector
    V1 = H * V2;
    corners.right_bottom.x = v1[0] / v1[2];
    corners.right_bottom.y = v1[1] / v1[2];
}

int main(int argc, char *argv[])
{
    Mat image01 = imread("H:/opencv2.4/picture/1.2.jpg", 1); // right image
    Mat image02 = imread("H:/opencv2.4/picture/1.1.jpg", 1); // left image
    imshow("p2", image01);
    imshow("p1", image02);

    // Convert to grayscale (imread loads BGR, so use CV_BGR2GRAY)
    Mat image1, image2;
    cvtColor(image01, image1, CV_BGR2GRAY);
    cvtColor(image02, image2, CV_BGR2GRAY);

    // Extract feature points
    OrbFeatureDetector orbDetector(3000);
    vector<KeyPoint> keyPoint1, keyPoint2;
    orbDetector.detect(image1, keyPoint1);
    orbDetector.detect(image2, keyPoint2);

    // Compute descriptors, preparing for the feature matching below
    OrbDescriptorExtractor orbDescriptor;
    Mat imageDesc1, imageDesc2;
    orbDescriptor.compute(image1, keyPoint1, imageDesc1);
    orbDescriptor.compute(image2, keyPoint2, imageDesc2);

    // LSH index for binary descriptors, searched with the Hamming distance
    flann::Index flannIndex(imageDesc1, flann::LshIndexParams(12, 20, 2), cvflann::FLANN_DIST_HAMMING);

    vector<DMatch> GoodMatchePoints;

    Mat matchIndex(imageDesc2.rows, 2, CV_32SC1), matchDistance(imageDesc2.rows, 2, CV_32FC1);
    flannIndex.knnSearch(imageDesc2, matchIndex, matchDistance, 2, flann::SearchParams());

    // Lowe's ratio test: keep only distinctive matches
    for (int i = 0; i < matchDistance.rows; i++)
    {
        if (matchDistance.at<float>(i, 0) < 0.4 * matchDistance.at<float>(i, 1))
        {
            DMatch dmatches(i, matchIndex.at<int>(i, 0), matchDistance.at<float>(i, 0));
            GoodMatchePoints.push_back(dmatches);
        }
    }

    Mat first_match;
    drawMatches(image02, keyPoint2, image01, keyPoint1, GoodMatchePoints, first_match);
    imshow("first_match ", first_match);

    vector<Point2f> imagePoints1, imagePoints2;

    for (int i = 0; i < GoodMatchePoints.size(); i++)
    {
        imagePoints2.push_back(keyPoint2[GoodMatchePoints[i].queryIdx].pt);
        imagePoints1.push_back(keyPoint1[GoodMatchePoints[i].trainIdx].pt);
    }

    // Estimate the 3x3 homography mapping image 1 onto image 2
    Mat homo = findHomography(imagePoints1, imagePoints2, CV_RANSAC);
    // getPerspectiveTransform also yields a perspective matrix, but it takes
    // exactly 4 points and gives slightly worse results:
    //Mat homo = getPerspectiveTransform(imagePoints1, imagePoints2);
    cout << "homography matrix:\n" << homo << endl << endl;

    // Compute the four corner coordinates of the registered image
    CalcCorners(homo, image01);
    cout << "left_top:" << corners.left_top << endl;
    cout << "left_bottom:" << corners.left_bottom << endl;
    cout << "right_top:" << corners.right_top << endl;
    cout << "right_bottom:" << corners.right_bottom << endl;

    // Image registration
    Mat imageTransform1, imageTransform2;
    warpPerspective(image01, imageTransform1, homo, Size(MAX(corners.right_top.x, corners.right_bottom.x), image02.rows));
    //warpPerspective(image01, imageTransform2, adjustMat*homo, Size(image02.cols*1.3, image02.rows*1.8));
    imshow("after perspective transform", imageTransform1);
    imwrite("trans1.jpg", imageTransform1);

    // Create the stitched canvas; its size must be computed in advance
    int dst_width = imageTransform1.cols; // the rightmost point defines the canvas width
    int dst_height = image02.rows;

    Mat dst(dst_height, dst_width, CV_8UC3);
    dst.setTo(0);

    imageTransform1.copyTo(dst(Rect(0, 0, imageTransform1.cols, imageTransform1.rows)));
    image02.copyTo(dst(Rect(0, 0, image02.cols, image02.rows)));

    imshow("b_dst", dst);

    OptimizeSeam(image02, imageTransform1, dst);

    imshow("stitched image", dst);
    imwrite("H:/opencv2.4/picture/拼接图.jpg", dst);

    waitKey();

    return 0;
}


// Optimize the seam between the two images to make the transition natural
void OptimizeSeam(Mat& img1, Mat& trans, Mat& dst)
{
    int start = MIN(corners.left_top.x, corners.left_bottom.x); // left boundary of the overlap region

    double processWidth = img1.cols - start; // width of the overlap region
    int rows = dst.rows;
    int cols = img1.cols;  // columns in pixels; each pixel spans 3 uchars below
    double alpha = 1;      // weight of img1's pixel
    for (int i = 0; i < rows; i++)
    {
        uchar* p = img1.ptr<uchar>(i); // head of row i
        uchar* t = trans.ptr<uchar>(i);
        uchar* d = dst.ptr<uchar>(i);
        for (int j = start; j < cols; j++)
        {
            // Where trans has no pixel (pure black), copy img1's data unchanged
            if (t[j * 3] == 0 && t[j * 3 + 1] == 0 && t[j * 3 + 2] == 0)
            {
                alpha = 1;
            }
            else
            {
                // img1's weight falls linearly with the distance from the left
                // boundary of the overlap region; in practice this works well
                alpha = (processWidth - (j - start)) / processWidth;
            }

            d[j * 3]     = p[j * 3] * alpha + t[j * 3] * (1 - alpha);
            d[j * 3 + 1] = p[j * 3 + 1] * alpha + t[j * 3 + 1] * (1 - alpha);
            d[j * 3 + 2] = p[j * 3 + 2] * alpha + t[j * 3 + 2] * (1 - alpha);
        }
    }
}

Result:

[Image: ORB stitching result]

3. stitch (cv::Stitcher)

OpenCV actually ships its own image-stitching implementation. Which feature detector does the stitch pipeline use? From the source:

#ifdef HAVE_OPENCV_NONFREE
stitcher.setFeaturesFinder(new detail::SurfFeaturesFinder());
#else
stitcher.setFeaturesFinder(new detail::OrbFeaturesFinder());
#endif

In the createDefault factory function (the default settings), SURF is the first choice; ORB is used only when the NONFREE module is not available.
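If you do not want to depend on that default, you can pick the feature finder yourself after creating the stitcher; a minimal sketch against the OpenCV 2.4 API (requires opencv2/stitching/stitcher.hpp):

// Force ORB as the feature finder regardless of whether NONFREE is built in
Stitcher stitcher = Stitcher::createDefault(false /* try_use_gpu */);
stitcher.setFeaturesFinder(new detail::OrbFeaturesFinder());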

Result:

[Images: the four source images and the stitched panorama results]

The full source code:

#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/stitching/stitcher.hpp>

using namespace std;
using namespace cv;

bool try_use_gpu = false;
vector<Mat> imgs;
string result_name = "dst1.jpg";

int main(int argc, char * argv[])
{
    Mat img4 = imread("H:/opencv2.4/picture/4.4.jpg"); // rightmost image
    Mat img3 = imread("H:/opencv2.4/picture/4.3.jpg");
    Mat img2 = imread("H:/opencv2.4/picture/4.2.jpg");
    Mat img1 = imread("H:/opencv2.4/picture/4.1.jpg"); // leftmost image

    // Validate the inputs before displaying them
    if (img1.empty() || img2.empty() || img3.empty() || img4.empty())
    {
        cout << "Can't read image" << endl;
        return -1;
    }

    imshow("p4", img4);
    imshow("p3", img3);
    imshow("p2", img2);
    imshow("p1", img1);

    imgs.push_back(img4);
    imgs.push_back(img3);
    imgs.push_back(img2);
    imgs.push_back(img1);

    Stitcher stitcher = Stitcher::createDefault(try_use_gpu);
    // Stitch with the stitch() member function
    Mat pano;
    Stitcher::Status status = stitcher.stitch(imgs, pano);
    if (status != Stitcher::OK)
    {
        cout << "Can't stitch images, error code = " << int(status) << endl;
        return -1;
    }
    imwrite(result_name, pano);
    Mat pano2 = pano.clone();
    // Show the resulting panorama
    imshow("panorama", pano);
    imwrite("H:/opencv2.4/picture/拼接图4.jpg", pano);
    if (waitKey() == 27)
        return 0;
    return 0;
}

Conclusion:

Personally I recommend the stitch method: it is integrated and well encapsulated, simple to call, and, crucially, the results are good.

If you run into problems, see the related links at the beginning of the article.

Reference blogs: OpenCV探索之路(二十四):图像拼接和图像融合技术

OpenCV探索之路(二十三):特征检测和特征匹配方法汇总【SURF、SIFT、ORB、FAST、Harris角点】