Model introduction
OpenVINO supports head-pose estimation through the pretrained model head-pose-estimation-adas-0001, which reports head motion along three axes:
pitch is the pitch angle — "nodding"
yaw is the yaw angle — "shaking the head"
roll is the roll angle — "tilting/rolling the head"
Their angle ranges are:
YAW [-90, 90], PITCH [-70, 70], ROLL [-70, 70]
These three terms actually come from the UAV and aviation world; computer-vision researchers, ever fond of borrowing terminology, adopted them for head-pose estimation. The illustration below shows what they mean when mapped onto the head:
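To make the three angles concrete, they can be composed into a single 3x3 rotation matrix, which is what you would use to draw pose axes on the face. This is only a sketch: the axis assignment and the composition order (Rz @ Ry @ Rx) are assumptions made here for illustration, not something specified by the model documentation.

```python
import numpy as np

def euler_to_rotation(yaw, pitch, roll):
    """Build a 3x3 rotation matrix from yaw/pitch/roll in degrees.

    Assumed convention: pitch rotates about x, yaw about y, roll about z,
    composed as R = Rz @ Ry @ Rx.
    """
    y, p, r = np.radians([yaw, pitch, roll])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(p), -np.sin(p)],
                   [0, np.sin(p), np.cos(p)]])
    Ry = np.array([[np.cos(y), 0, np.sin(y)],
                   [0, 1, 0],
                   [-np.sin(y), 0, np.cos(y)]])
    Rz = np.array([[np.cos(r), -np.sin(r), 0],
                   [np.sin(r), np.cos(r), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx
```

Any valid rotation matrix built this way is orthonormal with determinant 1, which is a quick sanity check on the composition.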
Input format: [1x3x60x60], BGR channel order
Output format:
name: "angle_y_fc", shape: [1, 1] - Estimated yaw
name: "angle_p_fc", shape: [1, 1] - Estimated pitch
name: "angle_r_fc", shape: [1, 1] - Estimated roll
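The [1x3x60x60] input format means each face crop must be resized to 60x60 and reordered from HWC (OpenCV's layout) to NCHW before inference. A minimal sketch of that step — plain NumPy nearest-neighbour indexing stands in for cv.resize here so the example has no OpenCV dependency:

```python
import numpy as np

def preprocess(face_bgr, size=60):
    """Resize a BGR face crop to size x size and reorder HWC -> NCHW."""
    h, w = face_bgr.shape[:2]
    ys = np.arange(size) * h // size          # nearest-neighbour row indices
    xs = np.arange(size) * w // size          # nearest-neighbour column indices
    resized = face_bgr[ys][:, xs]             # (60, 60, 3), still BGR
    chw = resized.transpose(2, 0, 1)          # (3, 60, 60)
    return chw[np.newaxis].astype(np.float32)  # (1, 3, 60, 60)
```

In the real pipeline below, cv.resize plus transpose(2, 0, 1) performs exactly this transformation.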
Face detection
Face detection uses the MobileNetV2 SSD face-detection model from OpenVINO; each detected face ROI is then passed to the head-pose model to recognize head motion. Only motions whose angle exceeds a threshold (±10 degrees in the code below) are reported. The code that loads both models and parses their input/output formats is as follows:
import time

import cv2 as cv
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
for device in ie.available_devices:
    print(device)

# model_xml / model_bin: paths to the face-detection IR files
net = ie.read_network(model=model_xml, weights=model_bin)
input_blob = next(iter(net.input_info))
out_blob = next(iter(net.outputs))

n, c, h, w = net.input_info[input_blob].input_data.shape
print(n, c, h, w)

# cap = cv.VideoCapture("D:/images/video/Boogie_Up.mp4")
cap = cv.VideoCapture("D:/images/video/example_dsh.mp4")
# cap = cv.VideoCapture(0)
exec_net = ie.load_network(network=net, device_name="CPU")

# em_xml / em_bin: paths to the head-pose-estimation IR files
em_net = ie.read_network(model=em_xml, weights=em_bin)
em_input_blob = next(iter(em_net.input_info))
# the outputs iterate in alphabetical order: angle_p_fc, angle_r_fc, angle_y_fc
em_it = iter(em_net.outputs)
em_out_blob1 = next(em_it)  # angle_p_fc (pitch)
em_out_blob2 = next(em_it)  # angle_r_fc (roll)
em_out_blob3 = next(em_it)  # angle_y_fc (yaw)
print(em_out_blob1, em_out_blob2, em_out_blob3)
en, ec, eh, ew = em_net.input_info[em_input_blob].input_data.shape
print(en, ec, eh, ew)

em_exec_net = ie.load_network(network=em_net, device_name="CPU")
Implementing head-motion detection
The code below parses the model outputs and runs face detection plus head-motion recognition on the video stream:
height = cap.get(cv.CAP_PROP_FRAME_HEIGHT)
width = cap.get(cv.CAP_PROP_FRAME_WIDTH)
count = cap.get(cv.CAP_PROP_FRAME_COUNT)
fps = cap.get(cv.CAP_PROP_FPS)
out = cv.VideoWriter("D:/test.mp4", cv.VideoWriter_fourcc('D', 'I', 'V', 'X'), 15,
                     (int(width), int(height)), True)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    image = cv.resize(frame, (w, h))
    image = image.transpose(2, 0, 1)
    inf_start = time.time()
    res = exec_net.infer(inputs={input_blob: [image]})
    inf_end = time.time() - inf_start
    # print("infer time(ms):%.3f"%(inf_end*1000))
    ih, iw, ic = frame.shape
    res = res[out_blob]
    for obj in res[0][0]:
        if obj[2] > 0.75:
            # expand the face box a little, then clamp it to the frame
            xmin = max(int(obj[3] * iw) - 10, 0)
            ymin = max(int(obj[4] * ih) - 10, 0)
            xmax = min(int(obj[5] * iw) + 10, iw - 1)
            ymax = min(int(obj[6] * ih) + 10, ih - 1)
            roi = frame[ymin:ymax, xmin:xmax, :]
            roi_img = cv.resize(roi, (ew, eh))
            roi_img = roi_img.transpose(2, 0, 1)
            em_res = em_exec_net.infer(inputs={em_input_blob: [roi_img]})
            angle_p_fc = em_res[em_out_blob1][0][0]
            angle_r_fc = em_res[em_out_blob2][0][0]
            angle_y_fc = em_res[em_out_blob3][0][0]
            # report only motions beyond +/-10 degrees
            postxt = ""
            if angle_p_fc > 10 or angle_p_fc < -10:
                postxt += "pitch, "
            if angle_y_fc > 10 or angle_y_fc < -10:
                postxt += "yaw, "
            if angle_r_fc > 10 or angle_r_fc < -10:
                postxt += "roll, "

            cv.putText(frame, postxt, (xmin, ymin - 10), cv.FONT_HERSHEY_SIMPLEX, 1.0, (0, 0, 255), 2)
            cv.rectangle(frame, (xmin, ymin), (xmax, ymax), (0, 255, 255), 2, 8)
    cv.putText(frame, "infer time(ms): %.3f" % (inf_end * 1000), (50, 50), cv.FONT_HERSHEY_SIMPLEX, 1.0,
               (255, 0, 255), 2, 8)
    cv.imshow("Face & head pose demo", frame)
    out.write(frame)
    c = cv.waitKey(1)
    if c == 27:
        break
cv.waitKey(0)
out.release()
cap.release()
The result looks like this:
Video file
If you are interested, give it a try and swap the video file for a webcam: nodding, shaking, and turning the head are all recognized in real time with no trouble at all. The reason I did not use screenshots of myself is mainly that I am not photogenic enough! It really is real-time, and really works well!
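To go one step further and turn the per-frame angles into discrete gestures like "nod" or "shake", one simple rule — entirely hypothetical here, not part of the model — is to watch a short window of angle readings for a swing past the threshold in both directions:

```python
def detect_gesture(angles, threshold=10.0):
    """Classify a short sequence of (pitch, yaw) pairs in degrees.

    Hypothetical rule: a 'nod' is pitch swinging past +/-threshold in both
    directions within the window; a 'shake' is the same for yaw.
    """
    pitches = [p for p, _ in angles]
    yaws = [y for _, y in angles]
    if max(pitches) > threshold and min(pitches) < -threshold:
        return "nod"
    if max(yaws) > threshold and min(yaws) < -threshold:
        return "shake"
    return "still"
```

In the webcam loop you would append each frame's (angle_p_fc, angle_y_fc) to a small deque and call this on the last second or so of readings; the window length and threshold would need tuning per camera setup.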