Table of Contents
Foreword
I. A Brief Introduction to the QImage Class
II. Displaying Camera Frames via the Hikvision Industrial Camera API MV_CC_GetImageBuffer and PyQt's QImage Class
1. Introduction to the Hikvision API MV_CC_GetImageBuffer
2. Adding QImage display to the official Hikvision demo
3. Final display result and code
Summary
Foreword
In industrial vision applications, the user interface is the most direct, indispensable face of the software and plays a very important role on site. When driving an industrial camera from Python, PyQt is the usual choice for building that interface, and displaying the image stream inside PyQt becomes the key piece to implement. This article gives a brief walkthrough of that piece.
I. A Brief Introduction to the QImage Class
The QImage class in PyQt is designed for image I/O and direct per-pixel access to image data: it provides a hardware-independent image representation whose pixels can be read and written, and which can be used as a paint device. It is therefore well suited to displaying the image data retrieved from an industrial camera.
Pixel formats supported by the QImage class:
Format_A2BGR30_Premultiplied = 20
Format_A2RGB30_Premultiplied = 22
Format_Alpha8 = 23
Format_ARGB32 = 5
Format_ARGB32_Premultiplied = 6
Format_ARGB4444_Premultiplied = 15
Format_ARGB6666_Premultiplied = 10
Format_ARGB8555_Premultiplied = 12
Format_ARGB8565_Premultiplied = 8
Format_BGR30 = 19
Format_Grayscale8 = 24
Format_Indexed8 = 3
Format_Invalid = 0
Format_Mono = 1
Format_MonoLSB = 2
Format_RGB16 = 7
Format_RGB30 = 21
Format_RGB32 = 4
Format_RGB444 = 14
Format_RGB555 = 11
Format_RGB666 = 9
Format_RGB888 = 13
Format_RGBA8888 = 17
Format_RGBA8888_Premultiplied = 18
Format_RGBX8888 = 16
(InvertRgb = 0 and InvertRgba = 1, sometimes listed alongside these, are QImage.InvertMode values rather than pixel formats.)
This article mainly uses the Format_RGB888 and Format_Indexed8 formats from the QImage class: QImage expects RGB channel order by default, which is the reverse of OpenCV's BGR order, so color frames coming through OpenCV must have their channels swapped before display.
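The channel swap itself is a one-liner in numpy. A small sketch, independent of the camera SDK, showing that reversing the last axis is equivalent to cv2.cvtColor(..., cv2.COLOR_BGR2RGB); note that the reversed view must be made contiguous before handing it to QImage:

```python
import numpy as np

# A 2x2 BGR image that is pure blue: in BGR order the blue value
# sits in channel 0.
bgr = np.zeros((2, 2, 3), dtype=np.uint8)
bgr[:, :, 0] = 255

# Reverse the channel axis to get RGB order; ascontiguousarray is needed
# because QImage expects a contiguous buffer and the reversed view is not one.
rgb = np.ascontiguousarray(bgr[:, :, ::-1])
```

After the swap the blue value sits in the last channel, where Format_RGB888 expects it.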
II. Displaying Camera Frames via the Hikvision Industrial Camera API MV_CC_GetImageBuffer and PyQt's QImage Class
1. Introduction to the Hikvision API MV_CC_GetImageBuffer
The low-level SDK shipped by Hikvision provides an active-polling grab interface, MV_CC_GetImageBuffer, and the Industrial Camera SDK Developer Guide in the installation directory documents it as follows:
Interface: MV_CC_GetImageBuffer()
The corresponding C prototype:
MV_CAMCTRL_API int __stdcall MV_CC_GetImageBuffer(IN void* handle,
                                                  OUT MV_FRAME_OUT* pstFrame,
                                                  IN unsigned int nMsec);
Parameters:
- handle: device handle
- pstFrame: image data and image information
- nMsec: wait timeout in milliseconds; passing INFINITE waits indefinitely, until a frame is received or grabbing is stopped
Returns:
- MV_OK on success
- an error code on failure
Notes:
1. MV_CC_StartGrabbing() must be called to start acquisition before this interface can fetch frames. It is an active (polling) grab, so the application must pace its calls according to the frame rate. Because it supports a timeout and the SDK waits internally until data arrives, grabbing is smoother, which suits scenarios with high stability requirements.
2. It is used in tandem with MV_CC_FreeImageBuffer(): once the fetched data has been processed, MV_CC_FreeImageBuffer() must be called to release the data pointer held in pstFrame.
3. Compared with MV_CC_GetOneFrameTimeout(), this interface is more efficient, and the grab cache is allocated internally by the SDK, whereas MV_CC_GetOneFrameTimeout() requires the caller to allocate it.
4. This interface can no longer grab frames after MV_CC_Display() has been called.
5. Cameralink devices are not supported; only GigE and USB devices are.
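The grab/free pairing and the timeout behavior described in these notes can be exercised without hardware. The sketch below uses a hypothetical stand-in object in place of the real MvCamera handle; the timeout error code is illustrative, not the SDK's actual value:

```python
MV_OK = 0

class FakeCamera:
    """Hypothetical stand-in for the SDK handle: times out twice, then delivers frames."""
    def __init__(self):
        self.calls = 0
    def MV_CC_GetImageBuffer(self, frame, nMsec):
        self.calls += 1
        if self.calls <= 2:
            return 0x80000007  # illustrative no-data/timeout error code
        frame["data"] = b"\x00" * 16
        return MV_OK
    def MV_CC_FreeImageBuffer(self, frame):
        frame["data"] = None  # hand the buffer back to the SDK
        return MV_OK

cam = FakeCamera()
frame = {"data": None}
frames_got = 0
for _ in range(5):
    ret = cam.MV_CC_GetImageBuffer(frame, 1000)
    if ret == MV_OK and frame["data"] is not None:
        frames_got += 1                     # process / display the frame here
        cam.MV_CC_FreeImageBuffer(frame)    # always release before the next grab
    # on timeout, simply poll again
```

The loop shape (check the return code, free only after processing) is exactly what the real work_thread does against the SDK.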
From the notes above, MV_CC_GetImageBuffer() must be called after the start-acquisition interface. The official sample does exactly that:
# Thread worker function (from the official demo)
def work_thread(cam=0, pData=0, nDataSize=0):
    stOutFrame = MV_FRAME_OUT()
    memset(byref(stOutFrame), 0, sizeof(stOutFrame))
    while True:
        ret = cam.MV_CC_GetImageBuffer(stOutFrame, 1000)
        if None != stOutFrame.pBufAddr and 0 == ret:
            print("get one frame: Width[%d], Height[%d], nFrameNum[%d]" % (
                stOutFrame.stFrameInfo.nWidth, stOutFrame.stFrameInfo.nHeight,
                stOutFrame.stFrameInfo.nFrameNum))
            nRet = cam.MV_CC_FreeImageBuffer(stOutFrame)
        else:
            print("no data[0x%x]" % ret)
        if g_bExit == True:
            break
Once grabbing has started, this interface actively fetches the image data and frame information from the SDK's internal buffer. It hands the raw image data to us, but the official sample contains no code for parsing or displaying that data, so this part has to be developed ourselves; part 3 of this article implements the parsing and display.
The code above also shows the pairing with MV_CC_FreeImageBuffer(): after a frame has been processed, and before the next frame is fetched, the data pointer inside stOutFrame is released.
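The copy-before-free step can be seen in isolation: the frame bytes are copied with memmove into a ctypes array the application owns, then wrapped by numpy without a further copy; only after that may the SDK buffer be released. In this sketch a plain bytes object stands in for stOutFrame.pBufAddr:

```python
from ctypes import c_ubyte, memmove
import numpy as np

width, height = 4, 3

# Stand-in for the SDK-owned buffer behind stOutFrame.pBufAddr.
sdk_buffer = bytes(range(width * height))

# Copy into memory owned by the application, so the SDK buffer can be
# freed (MV_CC_FreeImageBuffer) without invalidating our image.
pData = (c_ubyte * (width * height))()
memmove(pData, sdk_buffer, width * height)

# np.frombuffer wraps pData without copying; reshape to height x width.
image = np.frombuffer(pData, count=width * height, dtype=np.uint8).reshape(height, width)
```

Because np.frombuffer does not copy, the ctypes array must stay alive as long as the numpy view is used; the copy that matters is the memmove out of the SDK's buffer.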
2. Adding QImage display to the official Hikvision demo
After MV_CC_GetImageBuffer retrieves a frame in the official demo, it still has to be shown, and displaying through PyQt requires a window to be defined first. The window is defined as follows:
from PyQt5.QtWidgets import *
from PyQt5.QtGui import *
from PyQt5.QtCore import *

# Window setup
class initform(QWidget):
    def __init__(self):
        super().__init__()
        self.initUI()

    def initUI(self):
        # Window offset from top-left, then width and height
        self.setGeometry(300, 300, 1200, 900)
        self.setWindowTitle("Industrial Camera")
        self.label = QLabel("image", self)
        self.label.setAlignment(Qt.AlignLeft | Qt.AlignTop)
        self.label.setGeometry(0, 0, 800, 600)
        self.label.setScaledContents(True)
        self.label.move(400, 300)
        self.show()

    def SetPic(self, img):
        self.label.setPixmap(QPixmap.fromImage(img))
For ease of illustration the window is made considerably larger than the image, leaving room for parameter-setting and other PyQt controls to be added later.
The grab thread from the official demo is then modified so that each frame is converted and pushed to the window directly from the thread:
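Two QImage details are worth calling out before wiring SetPic to the grab thread: QImage does not copy the buffer it is given (the numpy array must outlive it, or the image must be detached with .copy()), and the bytesPerLine argument should be passed explicitly because QImage otherwise assumes 32-bit-aligned scanlines. A minimal sketch, independent of the camera code:

```python
import numpy as np
from PyQt5.QtGui import QImage

h, w = 4, 6
arr = np.zeros((h, w, 3), dtype=np.uint8)
arr[:, :, 0] = 255  # a solid red RGB image

# Wrap the numpy buffer: pass w * 3 as bytesPerLine so rows whose byte
# length is not a multiple of 4 are still read correctly.
img = QImage(arr.data, w, h, w * 3, QImage.Format_RGB888)

# QImage only references arr's memory; .copy() detaches it so the image
# stays valid after arr is reused or garbage-collected.
img = img.copy()
```

Forgetting either detail typically shows up as a skewed image (missing stride) or intermittent crashes (buffer freed under the QImage).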
# Thread worker: grab frames, convert by pixel format, hand them to the UI
def work_thread(cam=0, pData=0, nDataSize=0):
    stOutFrame = MV_FRAME_OUT()
    memset(byref(stOutFrame), 0, sizeof(stOutFrame))
    while True:
        ret = cam.MV_CC_GetImageBuffer(stOutFrame, 1000)
        if None == stOutFrame.pBufAddr or 0 != ret:
            print("no data[0x%x]" % ret)
            if g_bExit == True:
                break
            continue
        w = stOutFrame.stFrameInfo.nWidth
        h = stOutFrame.stFrameInfo.nHeight
        print("get one frame: Width[%d], Height[%d], nFrameNum[%d]" % (
            w, h, stOutFrame.stFrameInfo.nFrameNum))
        image_show = None
        if stOutFrame.stFrameInfo.enPixelType == 35127316:  # RGB8_Packed
            pData = (c_ubyte * (w * h * 3))()
            memmove(byref(pData), stOutFrame.pBufAddr, w * h * 3)
            data = np.frombuffer(pData, count=w * h * 3, dtype=np.uint8)
            image = data.reshape(h, w, 3)
            # Pass bytesPerLine explicitly; .copy() detaches from the numpy buffer
            image_show = QImage(image.data, w, h, w * 3, QImage.Format_RGB888).copy()
        elif stOutFrame.stFrameInfo.enPixelType == 17301505:  # Mono8
            pData = (c_ubyte * (w * h))()
            memmove(byref(pData), stOutFrame.pBufAddr, w * h)
            data = np.frombuffer(pData, count=w * h, dtype=np.uint8)
            image = data.reshape(h, w)
            image_show = QImage(image.data, w, h, w, QImage.Format_Indexed8).copy()
        elif stOutFrame.stFrameInfo.enPixelType == 17301514:  # BayerGB8
            pData = (c_ubyte * (w * h))()
            memmove(byref(pData), stOutFrame.pBufAddr, w * h)
            data = np.frombuffer(pData, count=w * h, dtype=np.uint8)
            image = data.reshape(h, w)
            # Demosaic straight to RGB so the result matches Format_RGB888
            image = cv2.cvtColor(image, cv2.COLOR_BAYER_GB2RGB)
            image_show = QImage(image.data, w, h, w * 3, QImage.Format_RGB888).copy()
        elif stOutFrame.stFrameInfo.enPixelType == 34603039:  # YUV422_Packed
            pData = (c_ubyte * (w * h * 2))()
            memmove(byref(pData), stOutFrame.pBufAddr, w * h * 2)
            data = np.frombuffer(pData, count=w * h * 2, dtype=np.uint8)
            image = data.reshape(h, w, 2)
            image = cv2.cvtColor(image, cv2.COLOR_YUV2RGB_Y422)
            image_show = QImage(image.data, w, h, w * 3, QImage.Format_RGB888).copy()
        elif stOutFrame.stFrameInfo.enPixelType == 34603058:  # YUV_422_YUYV
            pData = (c_ubyte * (w * h * 2))()
            memmove(byref(pData), stOutFrame.pBufAddr, w * h * 2)
            data = np.frombuffer(pData, count=w * h * 2, dtype=np.uint8)
            image = data.reshape(h, w, 2)
            image = cv2.cvtColor(image, cv2.COLOR_YUV2RGB_YUYV)
            image_show = QImage(image.data, w, h, w * 3, QImage.Format_RGB888).copy()
        elif stOutFrame.stFrameInfo.enPixelType == 35127317:  # BGR8_Packed
            pData = (c_ubyte * (w * h * 3))()
            memmove(byref(pData), stOutFrame.pBufAddr, w * h * 3)
            data = np.frombuffer(pData, count=w * h * 3, dtype=np.uint8)
            image = data.reshape(h, w, 3)
            image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
            image_show = QImage(image.data, w, h, w * 3, QImage.Format_RGB888).copy()
        else:
            print("unsupported pixel type[0x%x]" % stOutFrame.stFrameInfo.enPixelType)
        if image_show is not None:
            # Updating the widget directly from this worker thread works in the
            # demo, but a Qt signal/slot would be the thread-safe approach.
            ex.SetPic(image_show)
        nRet = cam.MV_CC_FreeImageBuffer(stOutFrame)
        if g_bExit == True:
            break
Each supported pixel format gets its own conversion, after which the data is wrapped in a QImage for display.
The window must already exist before the code starts receiving frames so that images can be rendered into it, so the UI is created and shown first, before the camera setup:
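The chain of elif branches grows with every pixel format; the per-format facts (bytes per pixel plus an optional color conversion) can be moved into a lookup table instead. A sketch using the enPixelType codes from the demo, with a numpy channel swap standing in for the cv2 conversion (the Bayer and YUV entries would slot in the same way):

```python
import numpy as np

# enPixelType -> (channels, optional converter to RGB/grayscale)
PIXEL_FORMATS = {
    35127316: (3, None),                               # RGB8_Packed: already RGB
    17301505: (1, None),                               # Mono8: grayscale
    35127317: (3, lambda im: im[:, :, ::-1].copy()),   # BGR8_Packed -> RGB
}

def frame_to_array(buf, width, height, pixel_type):
    """Copy a raw frame into an RGB or grayscale numpy array; None if unsupported."""
    entry = PIXEL_FORMATS.get(pixel_type)
    if entry is None:
        return None
    channels, convert = entry
    data = np.frombuffer(buf, count=width * height * channels, dtype=np.uint8)
    if channels == 1:
        image = data.reshape(height, width)
    else:
        image = data.reshape(height, width, channels)
    return convert(image) if convert else image
```

With this table, work_thread would shrink to one lookup plus one QImage constructor per frame, and supporting a new format means adding one dictionary entry.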
if __name__ == "__main__":
    app = QApplication(sys.argv)
    ex = initform()

    deviceList = MV_CC_DEVICE_INFO_LIST()
    tlayerType = MV_GIGE_DEVICE | MV_USB_DEVICE
    ...
The initial (still empty) window looks like this:
The window has to be on screen once the grab thread starts, with frames then rendered into it, and it should close when the thread ends; the following code is therefore added after starting the thread:
try:
    hThreadHandle = threading.Thread(target=work_thread, args=(cam, None, None))
    hThreadHandle.start()
    # hThreadHandle_1 = threading.Thread(target=image_control, args=(image_data))
    # hThreadHandle_1.start()
    app.exec_()
except:
    print("error: unable to start thread")
3. Final display result and code
In the screenshot of the final result, the area boxed in green is reserved so that parameter-setting and other UI controls can be added later.
The area boxed in red shows the image data acquired from the industrial camera.
The complete implementation:
# -- coding: utf-8 --
import sys
import threading
import msvcrt
import numpy as np
from ctypes import *
import cv2

sys.path.append("../MvImport")
from MvCameraControl_class import *
from PyQt5.QtWidgets import *
from PyQt5.QtGui import *
from PyQt5.QtCore import *

# Window setup
class initform(QWidget):
    def __init__(self):
        super().__init__()
        self.initUI()

    def initUI(self):
        # Window offset from top-left, then width and height
        self.setGeometry(300, 300, 1200, 900)
        self.setWindowTitle("Industrial Camera")
        self.label = QLabel("image", self)
        self.label.setAlignment(Qt.AlignLeft | Qt.AlignTop)
        self.label.setGeometry(0, 0, 800, 600)
        self.label.setScaledContents(True)
        self.label.move(400, 300)
        self.show()

    def SetPic(self, img):
        self.label.setPixmap(QPixmap.fromImage(img))
g_bExit = False

# Thread worker: grab frames, convert by pixel format, hand them to the UI
def work_thread(cam=0, pData=0, nDataSize=0):
    stOutFrame = MV_FRAME_OUT()
    memset(byref(stOutFrame), 0, sizeof(stOutFrame))
    while True:
        ret = cam.MV_CC_GetImageBuffer(stOutFrame, 1000)
        if None == stOutFrame.pBufAddr or 0 != ret:
            print("no data[0x%x]" % ret)
            if g_bExit == True:
                break
            continue
        w = stOutFrame.stFrameInfo.nWidth
        h = stOutFrame.stFrameInfo.nHeight
        print("get one frame: Width[%d], Height[%d], nFrameNum[%d]" % (
            w, h, stOutFrame.stFrameInfo.nFrameNum))
        image_show = None
        if stOutFrame.stFrameInfo.enPixelType == 35127316:  # RGB8_Packed
            pData = (c_ubyte * (w * h * 3))()
            memmove(byref(pData), stOutFrame.pBufAddr, w * h * 3)
            data = np.frombuffer(pData, count=w * h * 3, dtype=np.uint8)
            image = data.reshape(h, w, 3)
            # Pass bytesPerLine explicitly; .copy() detaches from the numpy buffer
            image_show = QImage(image.data, w, h, w * 3, QImage.Format_RGB888).copy()
        elif stOutFrame.stFrameInfo.enPixelType == 17301505:  # Mono8
            pData = (c_ubyte * (w * h))()
            memmove(byref(pData), stOutFrame.pBufAddr, w * h)
            data = np.frombuffer(pData, count=w * h, dtype=np.uint8)
            image = data.reshape(h, w)
            image_show = QImage(image.data, w, h, w, QImage.Format_Indexed8).copy()
        elif stOutFrame.stFrameInfo.enPixelType == 17301514:  # BayerGB8
            pData = (c_ubyte * (w * h))()
            memmove(byref(pData), stOutFrame.pBufAddr, w * h)
            data = np.frombuffer(pData, count=w * h, dtype=np.uint8)
            image = data.reshape(h, w)
            # Demosaic straight to RGB so the result matches Format_RGB888
            image = cv2.cvtColor(image, cv2.COLOR_BAYER_GB2RGB)
            image_show = QImage(image.data, w, h, w * 3, QImage.Format_RGB888).copy()
        elif stOutFrame.stFrameInfo.enPixelType == 34603039:  # YUV422_Packed
            pData = (c_ubyte * (w * h * 2))()
            memmove(byref(pData), stOutFrame.pBufAddr, w * h * 2)
            data = np.frombuffer(pData, count=w * h * 2, dtype=np.uint8)
            image = data.reshape(h, w, 2)
            image = cv2.cvtColor(image, cv2.COLOR_YUV2RGB_Y422)
            image_show = QImage(image.data, w, h, w * 3, QImage.Format_RGB888).copy()
        elif stOutFrame.stFrameInfo.enPixelType == 34603058:  # YUV_422_YUYV
            pData = (c_ubyte * (w * h * 2))()
            memmove(byref(pData), stOutFrame.pBufAddr, w * h * 2)
            data = np.frombuffer(pData, count=w * h * 2, dtype=np.uint8)
            image = data.reshape(h, w, 2)
            image = cv2.cvtColor(image, cv2.COLOR_YUV2RGB_YUYV)
            image_show = QImage(image.data, w, h, w * 3, QImage.Format_RGB888).copy()
        elif stOutFrame.stFrameInfo.enPixelType == 35127317:  # BGR8_Packed
            pData = (c_ubyte * (w * h * 3))()
            memmove(byref(pData), stOutFrame.pBufAddr, w * h * 3)
            data = np.frombuffer(pData, count=w * h * 3, dtype=np.uint8)
            image = data.reshape(h, w, 3)
            image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
            image_show = QImage(image.data, w, h, w * 3, QImage.Format_RGB888).copy()
        else:
            print("unsupported pixel type[0x%x]" % stOutFrame.stFrameInfo.enPixelType)
        if image_show is not None:
            # Updating the widget directly from this worker thread works in the
            # demo, but a Qt signal/slot would be the thread-safe approach.
            ex.SetPic(image_show)
        nRet = cam.MV_CC_FreeImageBuffer(stOutFrame)
        if g_bExit == True:
            break
if __name__ == "__main__":
    app = QApplication(sys.argv)
    ex = initform()

    deviceList = MV_CC_DEVICE_INFO_LIST()
    tlayerType = MV_GIGE_DEVICE | MV_USB_DEVICE

    # Enum devices
    ret = MvCamera.MV_CC_EnumDevices(tlayerType, deviceList)
    if ret != 0:
        print("enum devices fail! ret[0x%x]" % ret)
        sys.exit()
    if deviceList.nDeviceNum == 0:
        print("find no device!")
        sys.exit()
    print("Find %d devices!" % deviceList.nDeviceNum)

    for i in range(0, deviceList.nDeviceNum):
        mvcc_dev_info = cast(deviceList.pDeviceInfo[i], POINTER(MV_CC_DEVICE_INFO)).contents
        if mvcc_dev_info.nTLayerType == MV_GIGE_DEVICE:
            print("\ngige device: [%d]" % i)
            strModeName = ""
            for per in mvcc_dev_info.SpecialInfo.stGigEInfo.chModelName:
                strModeName = strModeName + chr(per)
            print("device model name: %s" % strModeName)
            nip1 = ((mvcc_dev_info.SpecialInfo.stGigEInfo.nCurrentIp & 0xff000000) >> 24)
            nip2 = ((mvcc_dev_info.SpecialInfo.stGigEInfo.nCurrentIp & 0x00ff0000) >> 16)
            nip3 = ((mvcc_dev_info.SpecialInfo.stGigEInfo.nCurrentIp & 0x0000ff00) >> 8)
            nip4 = (mvcc_dev_info.SpecialInfo.stGigEInfo.nCurrentIp & 0x000000ff)
            print("current ip: %d.%d.%d.%d\n" % (nip1, nip2, nip3, nip4))
        elif mvcc_dev_info.nTLayerType == MV_USB_DEVICE:
            print("\nu3v device: [%d]" % i)
            strModeName = ""
            for per in mvcc_dev_info.SpecialInfo.stUsb3VInfo.chModelName:
                if per == 0:
                    break
                strModeName = strModeName + chr(per)
            print("device model name: %s" % strModeName)
            strSerialNumber = ""
            for per in mvcc_dev_info.SpecialInfo.stUsb3VInfo.chSerialNumber:
                if per == 0:
                    break
                strSerialNumber = strSerialNumber + chr(per)
            print("user serial number: %s" % strSerialNumber)

    nConnectionNum = input("please input the number of the device to connect:")
    if int(nConnectionNum) >= deviceList.nDeviceNum:
        print("input error!")
        sys.exit()

    # Create camera object
    cam = MvCamera()

    # Select device and create handle
    stDeviceList = cast(deviceList.pDeviceInfo[int(nConnectionNum)], POINTER(MV_CC_DEVICE_INFO)).contents
    ret = cam.MV_CC_CreateHandle(stDeviceList)
    if ret != 0:
        print("create handle fail! ret[0x%x]" % ret)
        sys.exit()

    # Open device
    ret = cam.MV_CC_OpenDevice(MV_ACCESS_Exclusive, 0)
    if ret != 0:
        print("open device fail! ret[0x%x]" % ret)
        sys.exit()

    # Detect optimal network packet size (GigE cameras only)
    if stDeviceList.nTLayerType == MV_GIGE_DEVICE:
        nPacketSize = cam.MV_CC_GetOptimalPacketSize()
        if int(nPacketSize) > 0:
            ret = cam.MV_CC_SetIntValue("GevSCPSPacketSize", nPacketSize)
            if ret != 0:
                print("Warning: Set Packet Size fail! ret[0x%x]" % ret)
        else:
            print("Warning: Get Packet Size fail! ret[0x%x]" % nPacketSize)

    stBool = c_bool(False)
    ret = cam.MV_CC_GetBoolValue("AcquisitionFrameRateEnable", stBool)
    if ret != 0:
        print("get AcquisitionFrameRateEnable fail! ret[0x%x]" % ret)
        sys.exit()

    # Set trigger mode to off
    ret = cam.MV_CC_SetEnumValue("TriggerMode", MV_TRIGGER_MODE_OFF)
    if ret != 0:
        print("set trigger mode fail! ret[0x%x]" % ret)
        sys.exit()

    # Start grabbing
    ret = cam.MV_CC_StartGrabbing()
    if ret != 0:
        print("start grabbing fail! ret[0x%x]" % ret)
        sys.exit()

    try:
        hThreadHandle = threading.Thread(target=work_thread, args=(cam, None, None))
        hThreadHandle.start()
        # hThreadHandle_1 = threading.Thread(target=image_control, args=(image_data))
        # hThreadHandle_1.start()
        app.exec_()
    except:
        print("error: unable to start thread")

    print("press a key to stop grabbing.")
    msvcrt.getch()

    g_bExit = True
    hThreadHandle.join()
    # hThreadHandle_1.join()

    # Stop grabbing
    ret = cam.MV_CC_StopGrabbing()
    if ret != 0:
        print("stop grabbing fail! ret[0x%x]" % ret)
        sys.exit()

    # Close device
    ret = cam.MV_CC_CloseDevice()
    if ret != 0:
        print("close device fail! ret[0x%x]" % ret)
        sys.exit()

    # Destroy handle
    ret = cam.MV_CC_DestroyHandle()
    if ret != 0:
        print("destroy handle fail! ret[0x%x]" % ret)
        sys.exit()
Summary
Combining Python, OpenCV and PyQt, this article only implements simple image display in PyQt; no UI controls for parameter settings and the like were built. A follow-up article with code will be posted once those are added. Corrections from readers are very welcome!