Many images have low contrast, which makes it hard to identify feature regions. Below we work through several methods for improving image contrast.
step 1:
Use the following code to take a look at the original image.
# Imports
import cv2
import numpy as np
import matplotlib.pyplot as plt

# Read an external image and display it
def ReadAndShowImage():
    image = cv2.imread("00.jpg")
    cv2.imshow("OriginImage", image)
    cv2.waitKey(0)
The original image is dark overall; the digits in it cannot be made out clearly.
step 2:
Use a grayscale histogram to get a rough look at the gray-level distribution.
# Compute the grayscale histogram of the image
def GainHist():
    image = cv2.imread("00.jpg")
    grayImage = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # calcHist expects a list of source images; 256 bins over the range [0, 256)
    restHist = cv2.calcHist([grayImage], [0], None, [256], [0, 256])
    plt.title("Hist of GrayImage")
    plt.xlabel("Gray Scale")
    plt.ylabel("Amount of Pixels")
    plt.plot(restHist)
    plt.savefig("hist")
    # plt.show()
The grayscale histogram looks like this:
Notice that most of the pixels are concentrated at low gray values, which is the main reason the image looks so dark. Can we "stretch" the distribution a bit, so the pixels spread toward the middle and upper part of the x-axis?
The following code does exactly that.
# Histogram equalization
def EqualHist():
    image = cv2.imread("00.jpg")
    grayImage = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    equalHist = cv2.equalizeHist(grayImage)
    hist = cv2.calcHist([equalHist], [0], None, [256], [0, 256])
    plt.title("EqualHist of GrayImage")
    plt.xlabel("Gray Scale")
    plt.ylabel("Amount of Pixels")
    plt.plot(hist)
    # plt.savefig("Equalhist")
    # plt.show()
    cv2.imshow("EqualHist", equalHist)
    cv2.imwrite("equalHist.jpg", equalHist)
    cv2.waitKey(0)
The effect of the code above is shown in the figure below.
After equalization, the contrast is clearly stronger than in the original, and the improvement is obvious. Of course, a grayscale image is not what we ultimately want: it carries too much interference. A binarized image avoids that problem, so let's keep going.
# When contrast amplification needs to be limited, the method below can be used:
# Contrast Limited Adaptive Histogram Equalization (CLAHE)
def AdaptHist():
    image = cv2.imread("00.jpg")
    grayImage = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    equalHist = cv2.equalizeHist(grayImage)
    # clipLimit is the contrast-clipping threshold; tileGridSize defaults to (8, 8)
    adapthist = cv2.createCLAHE(clipLimit=40)
    imageAdaptHist = adapthist.apply(equalHist)
    cv2.imshow("adapthistimage", imageAdaptHist)
    cv2.imwrite("adapthist.jpg", imageAdaptHist)
    cv2.waitKey(0)
The result is shown below.
Compared with the plain histogram-equalized image above, the lower-left corner of this one does not look quite as washed out.
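As an aside, CLAHE is often applied to the original grayscale image directly, without a global equalization pass first. Here is a minimal sketch of that variant for comparison, using the same clipLimit and spelling out the default tile grid (the window name is a placeholder):
# CLAHE applied directly to the grayscale image, skipping equalizeHist
def ClaheOnly():
    grayImage = cv2.imread("00.jpg", 0)
    clahe = cv2.createCLAHE(clipLimit=40, tileGridSize=(8, 8))
    cv2.imshow("claheOnly", clahe.apply(grayImage))
    cv2.waitKey(0)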
OpenCV provides the thresholding flags below; let's try them all in order and see how they differ.
cv2.THRESH_BINARY
cv2.THRESH_BINARY_INV
cv2.THRESH_TRUNC
cv2.THRESH_TOZERO
cv2.THRESH_TOZERO_INV
cv2.THRESH_OTSU
cv2.THRESH_TRIANGLE
# Try the different threshold types
def Threshs():
    image = cv2.imread("00.jpg")
    grayImage = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    equalHist = cv2.equalizeHist(grayImage)
    # CLAHE; clipLimit is the contrast-clipping threshold
    adapthist = cv2.createCLAHE(clipLimit=40)
    imageAdaptHist = adapthist.apply(equalHist)
    thresh = 60
    max_val = 255
    ret, out1 = cv2.threshold(imageAdaptHist, thresh, max_val, cv2.THRESH_BINARY)
    ret, out2 = cv2.threshold(imageAdaptHist, thresh, max_val, cv2.THRESH_BINARY_INV)
    ret, out3 = cv2.threshold(imageAdaptHist, thresh, max_val, cv2.THRESH_TRUNC)
    ret, out4 = cv2.threshold(imageAdaptHist, thresh, max_val, cv2.THRESH_TOZERO)
    ret, out5 = cv2.threshold(imageAdaptHist, thresh, max_val, cv2.THRESH_TOZERO_INV)
    # With THRESH_OTSU and THRESH_TRIANGLE the thresh argument is ignored;
    # the threshold is computed automatically from the histogram
    ret, out6 = cv2.threshold(imageAdaptHist, thresh, max_val, cv2.THRESH_OTSU)
    ret, out7 = cv2.threshold(imageAdaptHist, thresh, max_val, cv2.THRESH_TRIANGLE)
    final1 = np.concatenate((out1, out2, out3, out4), axis=0)
    final2 = np.concatenate((out5, out6, out7), axis=0)
    cv2.imshow("test1", final1)
    cv2.imshow("test2", final2)
    cv2.imwrite("test1.jpg", final1)
    cv2.imwrite("test2.jpg", final2)
    cv2.waitKey(0)
Hmm, the differences are quite noticeable. My favorite is the last result: although it is somewhat incomplete, at least its background is not cluttered, which makes it easier to work with.
The thresholds above were picked by hand, so the results depend heavily on my own judgment. Next, let's try adaptive thresholding.
# Adaptive thresholding: the last two arguments are the (odd) neighborhood
# size and the constant C subtracted from the local mean or weighted mean
def AdaptThresh():
    grayimage = cv2.imread('00.jpg', 0)
    thresh1 = cv2.adaptiveThreshold(grayimage, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 11, 2)
    thresh2 = cv2.adaptiveThreshold(grayimage, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 31, 3)
    thresh3 = cv2.adaptiveThreshold(grayimage, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 13, 5)
    thresh4 = cv2.adaptiveThreshold(grayimage, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 31, 4)
    final = np.concatenate((thresh1, thresh2, thresh3, thresh4), axis=0)
    cv2.imshow("test1", final)
    cv2.imwrite("final.jpg", final)
    cv2.waitKey(0)
The results are as follows.
These don't look great at first glance, but the last image is actually already quite good. It only looks messy because the background is white; subtract the image from 255 (i.e. invert it) and the improvement is immediately obvious. I won't try it here.
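For anyone who does want to try the inversion, a minimal sketch; it reads the final.jpg written above, and cv2.bitwise_not is equivalent to computing 255 minus each pixel:
# Invert the binarized output: 255 - image
def InvertResult():
    binary = cv2.imread("final.jpg", 0)
    inverted = cv2.bitwise_not(binary)  # same as 255 - binary
    cv2.imshow("inverted", inverted)
    cv2.waitKey(0)
Finally, let's apply Otsu's method directly to the grayscale image: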
# Otsu binarization on the grayscale image; the threshold value of 0 is
# ignored and computed automatically because of the THRESH_OTSU flag
def result():
    gray_image = cv2.imread('00.jpg', 0)
    ret, thresh1 = cv2.threshold(gray_image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    cv2.imshow("test1", thresh1)
    cv2.imwrite("rest.jpg", thresh1)
    cv2.waitKey(0)
This gives the processed image shown below.
There is still some interference, especially in the lower-left corner, but at least the region we care about is now clearly legible, and the result is good enough that it should not get in the way of segmenting the individual digits. Those interested can give it a try; a rough sketch follows.
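As a starting point for that segmentation, here is a minimal, untested sketch based on contour detection, assuming OpenCV 4.x (where findContours returns two values); the size filters are placeholder guesses you would need to tune for your own image:
# A rough sketch of digit segmentation via contours; the area and
# aspect filters below are placeholder guesses, not tuned values
def SegmentDigits():
    gray_image = cv2.imread('00.jpg', 0)
    ret, binary = cv2.threshold(gray_image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, hierarchy = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        if w * h > 100 and h > w:  # keep only roughly digit-shaped blobs
            boxes.append((x, y, w, h))
    # sort left to right and save each digit as its own image
    for i, (x, y, w, h) in enumerate(sorted(boxes)):
        cv2.imwrite("digit_%d.jpg" % i, binary[y:y + h, x:x + w])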