Environment: Azure Kinect SDK v1.4.1 + Azure Kinect Body Tracking SDK 1.0.1 + VS2019 + OpenCV


Contents

  • 1. Getting the Depth Image
  • 2. Extracting the Depth Data
  • 3. Getting the 3D Coordinates of Human Skeleton Joints

1. Getting the Depth Image

// Initialization, sensor calibration, and body tracker creation are omitted here

cv::Mat cv_depth;
cv::Mat cv_depth_8U;
// Get a capture
k4a_wait_result_t get_capture_result = k4a_device_get_capture(device, &sensor_capture, K4A_WAIT_INFINITE);
// Get the depth image from the capture
k4a_image_t depthImage = k4a_capture_get_depth_image(sensor_capture);
// Wrap the depth image in a Mat (288 rows x 320 columns here); note the Mat does not
// own the buffer, so keep depthImage alive for as long as cv_depth is in use
cv_depth = cv::Mat(k4a_image_get_height_pixels(depthImage), k4a_image_get_width_pixels(depthImage), CV_16U, k4a_image_get_buffer(depthImage), k4a_image_get_stride_bytes(depthImage));
// The depth map can be processed directly at this point, but OpenCV can only display
// 8-bit gray images, so convert before visualizing. A scale of 1 saturates every
// depth beyond 255 mm, so scale down (or use cv::normalize instead)
cv_depth.convertTo(cv_depth_8U, CV_8U, 255.0 / 5000.0); // assumes depths up to ~5 m
// Display the image
cv::imshow("depth", cv_depth_8U);
cv::waitKey(1);

The visualized result is shown below:

[Figure: visualized depth image]

2. Extracting the Depth Data

Step 1 gives us the depth map, and each pixel's gray value in that map is in fact the actual depth distance in millimeters, so we only need to read those values out. The example below writes all of the depth data to a txt file.

// Same as above: wrap the captured depth image in a 16-bit gray Mat
cv_depth = cv::Mat(k4a_image_get_height_pixels(depthImage), k4a_image_get_width_pixels(depthImage), CV_16U, k4a_image_get_buffer(depthImage), k4a_image_get_stride_bytes(depthImage));
// Open a txt file; requires #include <fstream>
std::ofstream outfile("depth_data.txt");
outfile << "Image width * height: " << k4a_image_get_width_pixels(depthImage) << "*" << k4a_image_get_height_pixels(depthImage) << std::endl;
outfile << "Pixel values:" << std::endl;
for (int row = 0; row < cv_depth.rows; row++)
{
    for (int col = 0; col < cv_depth.cols; col++)
    {
        outfile << cv_depth.at<ushort>(row, col) << " ";
    }
    outfile << std::endl;
}
outfile.close();

3. Getting the 3D Coordinates of Human Skeleton Joints

Each joint's 3D position is exposed as a float[3] (for example, skeleton.joints[K4ABT_JOINT_HEAD].position.v is the head position). The overall flow is:

1. Get a capture from the Kinect
2. Enqueue the capture and pop the result
3. Get the body skeleton
4. Read the joints of interest from the skeleton
5. Print the results

The key code is as follows:

// The preparatory work is omitted here

// Get a capture from the Kinect
k4a_capture_t sensor_capture;
k4a_wait_result_t get_capture_result = k4a_device_get_capture(device, &sensor_capture, K4A_WAIT_INFINITE);

// Enqueue the capture and pop the result
// Enqueue
k4a_wait_result_t queue_capture_result = k4abt_tracker_enqueue_capture(tracker, sensor_capture, K4A_WAIT_INFINITE);
k4a_capture_release(sensor_capture); // Remember to release the sensor capture once you finish using it
if (queue_capture_result == K4A_WAIT_RESULT_TIMEOUT)
{
    // It should never hit timeout when K4A_WAIT_INFINITE is set.
    printf("Error! Add capture to tracker process queue timeout!\n");
}
else if (queue_capture_result == K4A_WAIT_RESULT_FAILED)
{
    printf("Error! Add capture to tracker process queue failed!\n");
}

// Pop the result
k4abt_frame_t body_frame = NULL;
k4a_wait_result_t pop_frame_result = k4abt_tracker_pop_result(tracker, &body_frame, K4A_WAIT_INFINITE);
if (pop_frame_result == K4A_WAIT_RESULT_SUCCEEDED)
{
    // Successfully popped the body tracking result. Start your processing
    // Number of bodies detected
    size_t num_bodies = k4abt_frame_get_num_bodies(body_frame);

    for (size_t i = 0; i < num_bodies; i++)
    {
        // Get the body skeleton
        k4abt_skeleton_t skeleton;
        k4abt_frame_get_body_skeleton(body_frame, (uint32_t)i, &skeleton);

        // Read the joints of interest from the skeleton
        // Nose
        k4abt_joint_t P_NOSE = skeleton.joints[K4ABT_JOINT_NOSE];
        // Neck
        k4abt_joint_t P_NECK = skeleton.joints[K4ABT_JOINT_NECK];

        // Print the results; each position.v is a float[3] (x, y, z in millimeters)
        std::cout << "Nose position: ";
        for (size_t j = 0; j < 3; j++)
        {
            std::cout << P_NOSE.position.v[j] << " ";
        }
        printf("\n");
        std::cout << "Neck position: ";
        for (size_t j = 0; j < 3; j++)
        {
            std::cout << P_NECK.position.v[j] << " ";
        }
        printf("\n");
    }
    k4abt_frame_release(body_frame); // Release the body frame once you are done with it
}

The output looks like this:

[Figure: console output of joint coordinates]