I. Introduction
What is a DeepStream application?
DeepStream applications bring deep neural networks and other complex processing tasks into a stream-processing pipeline to enable near-real-time analysis of video and other sensor data. Extracting meaningful insights from these sensors creates opportunities to improve operational efficiency and safety. Cameras, for example, are currently the most widely deployed IoT sensor: you find them in our homes, on streets, in parking lots, shopping malls, warehouses, and factories; they are everywhere. The potential uses of video analytics are enormous: access control, loss prevention, automated checkout, surveillance, security, automated inspection (QA), package sorting (smart logistics), traffic control/engineering, industrial automation, and more.
More concretely, a DeepStream application is a set of modular plugins connected to form a processing pipeline. Each plugin represents a functional block, such as inference with TensorRT or multi-stream decoding. Hardware-accelerated plugins interact with the underlying hardware, where applicable, to deliver the best performance; for example, the decode plugin interacts with NVDEC, and the inference plugin with the GPU or DLA. Each plugin can be instantiated in the pipeline as many times as needed.
What is the DeepStream SDK?
The NVIDIA DeepStream SDK is a streaming-analytics toolkit built on the open-source GStreamer multimedia framework. It speeds up the development of scalable IVA (intelligent video analytics) applications, letting developers concentrate on the core deep-learning network instead of designing an end-to-end application from scratch. The SDK is supported on systems containing an NVIDIA Jetson module or an NVIDIA dGPU adapter. It consists of an extensible collection of hardware-accelerated plugins that interact with low-level libraries for optimal performance, and it defines standardized metadata structures that allow custom/user-specific additions.
For more details and instructions on the DeepStream SDK, refer to the following materials: the NVIDIA DeepStream SDK Development Guide, the NVIDIA DeepStream Plugin Manual, and the NVIDIA DeepStream SDK API Reference.
DeepStream SDK reference applications
The DeepStream SDK ships with several test applications, including pre-trained models, sample configuration files, and sample video streams to run them with. Additional examples and source code provide enough information for most IVA use cases to speed up development. The test applications demonstrate:
- How to use DeepStream elements (for example: acquiring sources, decoding and multiplexing multiple streams, running inference on a pre-trained model, annotating and rendering images)
- How to form a batch of frames and run inference on it to improve resource utilization
- How to add custom/user-specific metadata to any component of DeepStream
- And more...
For full details, see the NVIDIA DeepStream SDK Development Guide.
GStreamer plugins:
GStreamer is a framework for plugins, data flow, and media-type handling/negotiation, used to create streaming-media applications. Plugins are shared libraries that are loaded dynamically at runtime and can be extended and upgraded independently. When arranged and linked together, plugins form the processing pipeline that defines the data flow of a streaming application. You can learn more about GStreamer from its extensive online documentation, starting with "What is GStreamer?". Besides the open-source plugins that ship with the GStreamer framework libraries, the DeepStream SDK includes NVIDIA hardware-accelerated plugins that leverage GPU capabilities. For the complete list of DeepStream GStreamer plugins, see the NVIDIA DeepStream Plugin Manual.
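As a minimal illustration of how linked plugins form a pipeline, here is a self-contained sketch using only stock open-source elements (videotestsrc feeding autovideosink); it plays a test pattern until an error or end of stream:

#include <gst/gst.h>

int main(int argc, char *argv[]) {
    gst_init(&argc, &argv);
    /* Each element is one plugin instance; linking them forms the pipeline. */
    GstElement *pipeline = gst_pipeline_new("demo");
    GstElement *src = gst_element_factory_make("videotestsrc", "src");
    GstElement *sink = gst_element_factory_make("autovideosink", "sink");
    if (!pipeline || !src || !sink)
        return -1;
    gst_bin_add_many(GST_BIN(pipeline), src, sink, NULL);
    if (!gst_element_link(src, sink))
        return -1;
    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    /* Block until an error or end-of-stream message arrives on the bus. */
    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
            (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));
    if (msg) gst_message_unref(msg);
    gst_object_unref(bus);
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}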
Open-source GStreamer plugins:
- GstFileSrc - reads data from a file: video data or images.
- GstH264Parse - parses an incoming H264 stream. For the H265 codec, use H265Parse.
- GstRtpH264Pay - payloads H264-encoded data into RTP packets (RFC 3984).
- GstUDPSink - sends UDP packets over the network. Paired with the RTP payloader (GstRtpH264Pay), it implements RTP streaming.
- GstCapsFilter - restricts the data format without modifying the data.
- GstV4l2Src - captures video from a v4l2 device.
- GstQTMux - merges streams (audio and video) into a QuickTime (.mov) file.
- GstFileSink - writes incoming data to a file in the local file system.
- GstURIDecodeBin - decodes data from a URI into raw media. It selects a source element that can handle the given "uri" scheme and connects it to a decoder.
NVIDIA hardware-accelerated plugins (a minimal pipeline sketch follows this list):
- Gst-nvstreammux - batches streams before sending them for AI inference.
- Gst-nvinfer - runs inference with TensorRT.
- Gst-nvvideo4linux2 - decodes video streams using the hardware-accelerated decoder (NVDEC); encodes RAW data in I420 format into an H264 or H265 output stream using the hardware-accelerated encoder (NVENC).
- Gst-nvvideoconvert - performs video color-format conversion. The first Gst-nvvideoconvert, placed before the Gst-nvdsosd plugin, converts the stream data from I420 to RGBA; the second, placed after Gst-nvdsosd, converts the data from RGBA back to I420.
- Gst-nvdsosd - draws bounding boxes, text, and region-of-interest (ROI) polygons.
- Gst-nvtracker - tracks objects between frames.
- Gst-nvmultistreamtiler - composites a 2D tile from batched buffers.
- Gst-nvv4l2decoder - decodes video streams.
- Gst-nvv4l2h264enc - encodes video streams.
- Gst-nvarguscamerasrc - provides options to control ISP properties using the Argus API.
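To see how these elements compose, here is a hedged sketch that assembles a single-stream decode, batch, infer, OSD, and display pipeline with gst_parse_launch(). The element names are the ones listed above; the sample stream and the dstest1_pgie_config.txt config path are the ones used later in this article (adjust for your system), and nvegltransform is the Jetson-specific hop in front of nveglglessink:

#include <gst/gst.h>

int main(int argc, char *argv[]) {
    gst_init(&argc, &argv);
    GError *err = NULL;
    /* filesrc -> h264parse -> nvv4l2decoder -> nvstreammux -> nvinfer
     *   -> nvvideoconvert -> nvdsosd -> nvegltransform -> nveglglessink */
    GstElement *pipeline = gst_parse_launch(
        "filesrc location=/opt/nvidia/deepstream/deepstream-4.0/samples/streams/sample_720p.h264 "
        "! h264parse ! nvv4l2decoder ! m.sink_0 "
        "nvstreammux name=m batch-size=1 width=1920 height=1080 "
        "! nvinfer config-file-path=dstest1_pgie_config.txt "
        "! nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink", &err);
    if (!pipeline) {
        g_printerr("Failed to build pipeline: %s\n", err ? err->message : "unknown");
        return -1;
    }
    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    /* Run until an error or end-of-stream is posted on the bus. */
    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
            (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));
    if (msg) gst_message_unref(msg);
    gst_object_unref(bus);
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}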
Video Analytics
Turning video into analytics data with the DeepStream SDK
The DeepStream SDK's general streaming-analytics architecture defines an extensible video-processing pipeline that can perform inference, object tracking, and reporting. As a DeepStream application analyzes each video frame, the plugins extract information and store it as part of a cascaded metadata record, keeping that record associated with its source frame. The complete metadata collection at the end of the pipeline represents the full set of information extracted from the frames by the deep-learning models and other analytics plugins. A DeepStream application can use this information for display, or transmit it externally as part of a message for further analysis or long-term archiving.
DeepStream uses an extensible, standardized structure for its metadata. The basic metadata structure, NvDsBatchMeta, starts with batch-level metadata created inside the required Gst-nvstreammux plugin. Subsidiary metadata structures hold frame, object, classifier, and label data. DeepStream also provides a mechanism for adding user-specific metadata at the batch, frame, or object level.
For more information on metadata structure and usage, see the DeepStream Plugin Manual; a small sketch of the user-metadata mechanism follows.
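This is a hedged sketch, not the official sample: the structures and pool functions are the ones named above (declared in gstnvdsmeta.h, the same header the test1 app includes later), while MY.APP.META, my_copy_func, and my_release_func are names of my own invention; verify the exact registration API against the plugin manual.

#include <glib.h>
#include "gstnvdsmeta.h"

/* Hypothetical copy/release hooks; DeepStream calls these when it duplicates
 * or frees the buffer's metadata. `data` is the NvDsUserMeta itself. */
static gpointer my_copy_func(gpointer data, gpointer user_data) {
    NvDsUserMeta *meta = (NvDsUserMeta *)data;
    return g_strdup((gchar *)meta->user_meta_data);
}

static void my_release_func(gpointer data, gpointer user_data) {
    NvDsUserMeta *meta = (NvDsUserMeta *)data;
    g_free(meta->user_meta_data);
    meta->user_meta_data = NULL;
}

/* Attach one user-defined record to every frame of a batched buffer. */
static void attach_user_meta(GstBuffer *buf) {
    NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta(buf);
    NvDsMetaList *l;
    for (l = batch_meta->frame_meta_list; l != NULL; l = l->next) {
        NvDsFrameMeta *frame_meta = (NvDsFrameMeta *)l->data;
        NvDsUserMeta *user_meta = nvds_acquire_user_meta_from_pool(batch_meta);
        user_meta->user_meta_data = g_strdup("my payload");  /* any heap object */
        user_meta->base_meta.meta_type = nvds_get_user_meta_type((gchar *)"MY.APP.META");
        user_meta->base_meta.copy_func = my_copy_func;
        user_meta->base_meta.release_func = my_release_func;
        nvds_add_user_meta_to_frame(frame_meta, user_meta);
    }
}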
PS: JupyterLab may come in handy; see the separate posts on installing JupyterLab on a Jetson and installing Jupyter on the TX2.
II. Download
Official site: https://developer.nvidia.com/deepstream-download Download v4.0.2 (login required): https://developer.nvidia.com/deepstream-402-jetson-deb The DeepStream I actually used is the one on a Jetson Xavier, flashed together with the JetPack 4.3 image; it is likewise v4.0.2.
III. Official demos
1. Running the official demo directly
# I did this work on a Xavier; DeepStream is installed under /opt/nvidia
$ cd /opt/nvidia/deepstream/deepstream-4.0/samples/configs/deepstream-app
# The first run only generates the model engine file, so the command has to be run a second time
$ deepstream-app -c source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_nano.txt
$ deepstream-app -c source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_nano.txt
# Alternatively, we can use: source12_1080p_dec_infer-resnet_tracker_tiled_display_fp16_tx2.txt
# We can also inspect the config file:
$ vim source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_nano.txt
# Here you can see the video source being used
[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=3
uri=file://../../streams/sample_1080p_h264.mp4
num-sources=8
#drop-frame-interval=2
gpu-id=0
# (0): memtype_device - Memory type Device
# (1): memtype_pinned - Memory type Host Pinned
# (2): memtype_unified - Memory type Unified
cudadec-memtype=0
# From what I have learned, if there is no log output after running, change type to 2 and sync to 0 under [sink0] (the developers arrived at these values by trial and error)
[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=Overlay (Jetson only)
type=5
sync=1
source-id=0
gpu-id=0
qos=0
nvbuf-memory-type=0
overlay-id=1
# You can also enable sink1 (enable=1) to save the output as out.mp4.
[sink1]
enable=1
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265
codec=1
sync=0
#iframeinterval=10
bitrate=2000000
output-file=out.mp4
source-id=0
For the full details of the config file contents, see: https://docs.nvidia.com/metropolis/deepstream/4.0/dev-guide/index.html#page/DeepStream_Development_Guide%2Fdeepstream_app_config.3.2.html%23wwpID0E0WB0HA
samples/configs: configuration files for the reference application:
- source30_1080p_resnet_dec_infer_tiled_display_int8.txt: Demonstrates 30-stream decode with primary inference. (dGPU and Jetson AGX Xavier platforms only.)
- source4_1080p_resnet_dec_infer_tiled_display_int8.txt: Demonstrates four-stream decode with primary inference, object tracking, and three different secondary classifiers. (dGPU and Jetson AGX Xavier platforms only.)
- source4_1080p_resnet_dec_infer_tracker_sgie_tiled_display_int8_gpu1.txt: Demonstrates four-stream decode with primary inference, object tracking, and three different secondary classifiers on GPU 1 (for systems with multiple GPU cards). dGPU platforms only.
- config_infer_primary.txt: Configures the nvinfer element as the primary detector.
- config_infer_secondary_carcolor.txt, config_infer_secondary_carmake.txt, config_infer_secondary_vehicletypes.txt: Configure the nvinfer element as secondary classifiers.
- iou_config.txt: Configures a low-level IOU (Intersection over Union) tracker.
- source1_usb_dec_infer_resnet_int8.txt: Demonstrates one USB camera as input.
- source1_csi_dec_infer_resnet_int8.txt: Demonstrates one CSI camera as input; Jetson only.
- source2_csi_usb_dec_infer_resnet_int8.txt: Demonstrates one CSI camera and one USB camera as inputs; Jetson only.
- source6_csi_dec_infer_resnet_int8.txt: Demonstrates six CSI cameras as inputs; Jetson only.
- source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_nano.txt: Demonstrates 8-stream decode + inference + tracker; Jetson Nano only.
- source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_tx1.txt: Demonstrates 8-stream decode + inference + tracker; Jetson TX1 only.
- source12_1080p_dec_infer-resnet_tracker_tiled_display_fp16_tx2.txt: Demonstrates 12-stream decode + inference + tracker; Jetson TX2 only.
2. Modifying the config file to use an RTSP camera
With the config file below you can pull from your own network camera. The changes made:
- Under [tiled-display], changed the output layout: rows=1 columns=1
- Under [source0], changed the type, the video source, and the source count: type=4 uri=rtsp://admin:admin123@192.168.1.106:554/cam/realmonitor?channel=1&subtype=0 num-sources=1
- Under [sink0], changed type=2 (in my tests this switches the video output to a window, so the result is also visible over NoMachine): type=2
- Under [sink1], changed enable=1 and the output file name: enable=1 output-file=out_toson.mp4
# Copyright (c) 2019 NVIDIA Corporation. All rights reserved.
#
# NVIDIA Corporation and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA Corporation is strictly prohibited.
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl
[tiled-display]
enable=1
rows=1
columns=1
width=1280
height=720
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0
[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=rtsp://admin:admin123@192.168.1.106:554/cam/realmonitor?channel=1&subtype=0
num-sources=1
#drop-frame-interval=2
gpu-id=0
# (0): memtype_device - Memory type Device
# (1): memtype_pinned - Memory type Host Pinned
# (2): memtype_unified - Memory type Unified
cudadec-memtype=0
[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=1
source-id=0
gpu-id=0
#qos=0
nvbuf-memory-type=0
#overlay-id=1
[sink1]
enable=1
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265
codec=1
sync=0
#iframeinterval=10
bitrate=2000000
output-file=out_toson.mp4
source-id=0
[sink2]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
type=4
#1=h264 2=h265
codec=1
sync=0
bitrate=4000000
# set below properties in case of RTSPStreaming
rtsp-port=8554
udp-port=5400
[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0
[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=8
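## Note: batch-size=8 matches the pre-built b8 (batch 8) engine file set under
## [primary-gie]; with num-sources=1 a smaller batch would also work if a
## matching engine is generated.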
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0
# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=1
gpu-id=0
model-engine-file=../../models/Primary_Detector_Nano/resnet10.caffemodel_b8_fp16.engine
batch-size=8
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=4
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_nano.txt
[tracker]
enable=1
tracker-width=480
tracker-height=272
#ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_iou.so
ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so
#ll-config-file required for IOU only
#ll-config-file=iou_config.txt
gpu-id=0
[tests]
file-loop=0
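Save the modified file under a name of your choosing (for example source1_rtsp_dec_infer.txt, a hypothetical name) and run it the same way as before:
$ deepstream-app -c source1_rtsp_dec_infer.txt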
IV. Sample source code
1. Find the sources in the sources directory; we can build and run the demo ourselves
# Source directory (we can build and run the demo ourselves)
$ cd /opt/nvidia/deepstream/deepstream-4.0/sources/apps/sample_apps/deepstream-test1
$ ls
deepstream_test1_app.c dstest1_pgie_config.txt Makefile README
$ make
$ ls
deepstream-test1-app deepstream_test1_app.o Makefile
deepstream_test1_app.c dstest1_pgie_config.txt README
# Run deepstream-test1-app
$ ./deepstream-test1-app /opt/nvidia/deepstream/deepstream-4.0/samples/streams/sample_720p.h264
# Video and image files shipped with the demos
$ cd /opt/nvidia/deepstream/deepstream-4.0/samples/streams
$ ls
sample_1080p_h264.mp4 sample_720p.h264 sample_720p.mjpeg sample_cam6.mp4
sample_1080p_h265.mp4 sample_720p.jpg sample_720p.mp4 sample_industrial.jpg
Stream                | Type of Stream
----------------------|------------------------------------------------
sample_1080p_h264.mp4 | H264 containerized stream
sample_1080p_h265.mp4 | H265 containerized stream
sample_720p.h264      | H264 elementary stream
sample_720p.jpg       | JPEG image
sample_720p.mjpeg     | MJPEG stream
sample_cam6.mp4       | H264 containerized stream (360D camera stream)
sample_industrial.jpg | JPEG image
Log output:
Now playing: /opt/nvidia/deepstream/deepstream-4.0/samples/streams/sample_720p.h264
Using winsys: x11
Opening in BLOCKING MODE
Creating LL OSD context new
0:00:01.658822508 30700 0x558a9bcd20 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:01:03.125062075 30700 0x558a9bcd20 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:generateTRTModel(): Storing the serialized cuda engine to file at /opt/nvidia/deepstream/deepstream-4.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_int8.engine
Running...
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
Creating LL OSD context new
Frame Number = 0 Number of objects = 5 Vehicle Count = 3 Person Count = 2
Frame Number = 1 Number of objects = 5 Vehicle Count = 3 Person Count = 2
Frame Number = 2 Number of objects = 5 Vehicle Count = 3 Person Count = 2
..
Frame Number = 3 Number of objects = 7 Vehicle Count = 4 Person Count = 3
Frame Number = 4 Number of objects = 6 Vehicle Count = 4 Person Count = 2
Frame Number = 1441 Number of objects = 0 Vehicle Count = 0 Person Count = 0
End of stream
Returned, stopping playback
Deleting pipeline
2. Source files
(The Makefile and dstest1_pgie_config.txt are slightly modified: I copied the include files out of the SDK, and the model paths are absolute.)
deepstream_test1_app.c
/*
* Copyright (c) 2018-2019, NVIDIA CORPORATION. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#include <gst/gst.h>
#include <glib.h>
#include <stdio.h>
#include "gstnvdsmeta.h"
#define MAX_DISPLAY_LEN 64
#define PGIE_CLASS_ID_VEHICLE 0
#define PGIE_CLASS_ID_PERSON 2
/* The muxer output resolution must be set if the input streams will be of
* different resolution. The muxer will scale all the input frames to this
* resolution. */
#define MUXER_OUTPUT_WIDTH 1920
#define MUXER_OUTPUT_HEIGHT 1080
/* Muxer batch formation timeout, for e.g. 40 millisec. Should ideally be set
* based on the fastest source's framerate. */
#define MUXER_BATCH_TIMEOUT_USEC 4000000
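/* Note: 4000000 usec is 4 seconds, not the 40 ms mentioned in the comment
 * above; a value of 40000 would match the stated intent. */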
gint frame_number = 0;
gchar pgie_classes_str[4][32] = { "Vehicle", "TwoWheeler", "Person",
"Roadsign"
};
/* osd_sink_pad_buffer_probe will extract metadata received on OSD sink pad
* and update params for drawing rectangle, object information etc. */
static GstPadProbeReturn
osd_sink_pad_buffer_probe (GstPad * pad, GstPadProbeInfo * info,
gpointer u_data)
{
GstBuffer *buf = (GstBuffer *) info->data;
guint num_rects = 0;
NvDsObjectMeta *obj_meta = NULL;
guint vehicle_count = 0;
guint person_count = 0;
NvDsMetaList * l_frame = NULL;
NvDsMetaList * l_obj = NULL;
NvDsDisplayMeta *display_meta = NULL;
NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
for (l_frame = batch_meta->frame_meta_list; l_frame != NULL;
l_frame = l_frame->next) {
NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) (l_frame->data);
int offset = 0;
for (l_obj = frame_meta->obj_meta_list; l_obj != NULL;
l_obj = l_obj->next) {
obj_meta = (NvDsObjectMeta *) (l_obj->data);
if (obj_meta->class_id == PGIE_CLASS_ID_VEHICLE) {
vehicle_count++;
num_rects++;
}
if (obj_meta->class_id == PGIE_CLASS_ID_PERSON) {
person_count++;
num_rects++;
}
}
display_meta = nvds_acquire_display_meta_from_pool(batch_meta);
NvOSD_TextParams *txt_params = &display_meta->text_params[0];
display_meta->num_labels = 1;
txt_params->display_text = g_malloc0 (MAX_DISPLAY_LEN);
offset = snprintf(txt_params->display_text, MAX_DISPLAY_LEN, "Person = %d ", person_count);
offset = snprintf(txt_params->display_text + offset , MAX_DISPLAY_LEN, "Vehicle = %d ", vehicle_count);
/* Now set the offsets where the string should appear */
txt_params->x_offset = 10;
txt_params->y_offset = 12;
/* Font , font-color and font-size */
txt_params->font_params.font_name = "Serif";
txt_params->font_params.font_size = 10;
txt_params->font_params.font_color.red = 1.0;
txt_params->font_params.font_color.green = 1.0;
txt_params->font_params.font_color.blue = 1.0;
txt_params->font_params.font_color.alpha = 1.0;
/* Text background color */
txt_params->set_bg_clr = 1;
txt_params->text_bg_clr.red = 0.0;
txt_params->text_bg_clr.green = 0.0;
txt_params->text_bg_clr.blue = 0.0;
txt_params->text_bg_clr.alpha = 1.0;
nvds_add_display_meta_to_frame(frame_meta, display_meta);
}
g_print ("Frame Number = %d Number of objects = %d "
"Vehicle Count = %d Person Count = %d\n",
frame_number, num_rects, vehicle_count, person_count);
frame_number++;
return GST_PAD_PROBE_OK;
}
static gboolean
bus_call (GstBus * bus, GstMessage * msg, gpointer data)
{
GMainLoop *loop = (GMainLoop *) data;
switch (GST_MESSAGE_TYPE (msg)) {
case GST_MESSAGE_EOS:
g_print ("End of stream\n");
g_main_loop_quit (loop);
break;
case GST_MESSAGE_ERROR:{
gchar *debug;
GError *error;
gst_message_parse_error (msg, &error, &debug);
g_printerr ("ERROR from element %s: %s\n",
GST_OBJECT_NAME (msg->src), error->message);
if (debug)
g_printerr ("Error details: %s\n", debug);
g_free (debug);
g_error_free (error);
g_main_loop_quit (loop);
break;
}
default:
break;
}
return TRUE;
}
int
main (int argc, char *argv[])
{
GMainLoop *loop = NULL;
GstElement *pipeline = NULL, *source = NULL, *h264parser = NULL,
*decoder = NULL, *streammux = NULL, *sink = NULL, *pgie = NULL, *nvvidconv = NULL,
*nvosd = NULL;
#ifdef PLATFORM_TEGRA
GstElement *transform = NULL;
#endif
GstBus *bus = NULL;
guint bus_watch_id;
GstPad *osd_sink_pad = NULL;
/* Check input arguments */
if (argc != 2) {
g_printerr ("Usage: %s <H264 filename>\n", argv[0]);
return -1;
}
/* Standard GStreamer initialization */
gst_init (&argc, &argv);
loop = g_main_loop_new (NULL, FALSE);
/* Create gstreamer elements */
/* Create Pipeline element that will form a connection of other elements */
pipeline = gst_pipeline_new ("dstest1-pipeline");
/* Source element for reading from the file */
source = gst_element_factory_make ("filesrc", "file-source");
/* Since the data format in the input file is elementary h264 stream,
* we need a h264parser */
h264parser = gst_element_factory_make ("h264parse", "h264-parser");
/* Use nvdec_h264 for hardware accelerated decode on GPU */
decoder = gst_element_factory_make ("nvv4l2decoder", "nvv4l2-decoder");
/* Create nvstreammux instance to form batches from one or more sources. */
streammux = gst_element_factory_make ("nvstreammux", "stream-muxer");
if (!pipeline || !streammux) {
g_printerr ("One element could not be created. Exiting.\n");
return -1;
}
/* Use nvinfer to run inferencing on decoder's output,
* behaviour of inferencing is set through config file */
pgie = gst_element_factory_make ("nvinfer", "primary-nvinference-engine");
/* Use convertor to convert from NV12 to RGBA as required by nvosd */
nvvidconv = gst_element_factory_make ("nvvideoconvert", "nvvideo-converter");
/* Create OSD to draw on the converted RGBA buffer */
nvosd = gst_element_factory_make ("nvdsosd", "nv-onscreendisplay");
/* Finally render the osd output */
#ifdef PLATFORM_TEGRA
transform = gst_element_factory_make ("nvegltransform", "nvegl-transform");
#endif
sink = gst_element_factory_make ("nveglglessink", "nvvideo-renderer");
if (!source || !h264parser || !decoder || !pgie
|| !nvvidconv || !nvosd || !sink) {
g_printerr ("One element could not be created. Exiting.\n");
return -1;
}
#ifdef PLATFORM_TEGRA
if(!transform) {
g_printerr ("One tegra element could not be created. Exiting.\n");
return -1;
}
#endif
/* we set the input filename to the source element */
g_object_set (G_OBJECT (source), "location", argv[1], NULL);
g_object_set (G_OBJECT (streammux), "width", MUXER_OUTPUT_WIDTH, "height",
MUXER_OUTPUT_HEIGHT, "batch-size", 1,
"batched-push-timeout", MUXER_BATCH_TIMEOUT_USEC, NULL);
/* Set all the necessary properties of the nvinfer element,
* the necessary ones are : */
g_object_set (G_OBJECT (pgie),
"config-file-path", "dstest1_pgie_config.txt", NULL);
/* we add a message handler */
bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline));
bus_watch_id = gst_bus_add_watch (bus, bus_call, loop);
gst_object_unref (bus);
/* Set up the pipeline */
/* we add all elements into the pipeline */
#ifdef PLATFORM_TEGRA
gst_bin_add_many (GST_BIN (pipeline),
source, h264parser, decoder, streammux, pgie,
nvvidconv, nvosd, transform, sink, NULL);
#else
gst_bin_add_many (GST_BIN (pipeline),
source, h264parser, decoder, streammux, pgie,
nvvidconv, nvosd, sink, NULL);
#endif
GstPad *sinkpad, *srcpad;
gchar pad_name_sink[16] = "sink_0";
gchar pad_name_src[16] = "src";
sinkpad = gst_element_get_request_pad (streammux, pad_name_sink);
if (!sinkpad) {
g_printerr ("Streammux request sink pad failed. Exiting.\n");
return -1;
}
srcpad = gst_element_get_static_pad (decoder, pad_name_src);
if (!srcpad) {
g_printerr ("Decoder request src pad failed. Exiting.\n");
return -1;
}
if (gst_pad_link (srcpad, sinkpad) != GST_PAD_LINK_OK) {
g_printerr ("Failed to link decoder to stream muxer. Exiting.\n");
return -1;
}
gst_object_unref (sinkpad);
gst_object_unref (srcpad);
/* we link the elements together */
/* file-source -> h264-parser -> nvh264-decoder ->
* nvinfer -> nvvidconv -> nvosd -> video-renderer */
if (!gst_element_link_many (source, h264parser, decoder, NULL)) {
g_printerr ("Elements could not be linked: 1. Exiting.\n");
return -1;
}
#ifdef PLATFORM_TEGRA
if (!gst_element_link_many (streammux, pgie,
nvvidconv, nvosd, transform, sink, NULL)) {
g_printerr ("Elements could not be linked: 2. Exiting.\n");
return -1;
}
#else
if (!gst_element_link_many (streammux, pgie,
nvvidconv, nvosd, sink, NULL)) {
g_printerr ("Elements could not be linked: 2. Exiting.\n");
return -1;
}
#endif
/* Lets add probe to get informed of the meta data generated, we add probe to
* the sink pad of the osd element, since by that time, the buffer would have
* had got all the metadata. */
osd_sink_pad = gst_element_get_static_pad (nvosd, "sink");
if (!osd_sink_pad)
g_print ("Unable to get sink pad\n");
else
gst_pad_add_probe (osd_sink_pad, GST_PAD_PROBE_TYPE_BUFFER,
osd_sink_pad_buffer_probe, NULL, NULL);
/* Set the pipeline to "playing" state */
g_print ("Now playing: %s\n", argv[1]);
gst_element_set_state (pipeline, GST_STATE_PLAYING);
/* Wait till pipeline encounters an error or EOS */
g_print ("Running...\n");
g_main_loop_run (loop);
/* Out of the main loop, clean up nicely */
g_print ("Returned, stopping playback\n");
gst_element_set_state (pipeline, GST_STATE_NULL);
g_print ("Deleting pipeline\n");
gst_object_unref (GST_OBJECT (pipeline));
g_source_remove (bus_watch_id);
g_main_loop_unref (loop);
return 0;
}
Makefile
################################################################################
# Copyright (c) 2018-2019, NVIDIA CORPORATION. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
################################################################################
APP:= deepstream-test1-app
TARGET_DEVICE = $(shell gcc -dumpmachine | cut -f1 -d -)
NVDS_VERSION:=4.0
LIB_INSTALL_DIR?=/opt/nvidia/deepstream/deepstream-$(NVDS_VERSION)/lib/
ifeq ($(TARGET_DEVICE),aarch64)
CFLAGS:= -DPLATFORM_TEGRA
endif
SRCS:= $(wildcard *.c)
INCS:= $(wildcard *.h)
PKGS:= gstreamer-1.0
OBJS:= $(SRCS:.c=.o)
CFLAGS+= -I./includes
CFLAGS+= `pkg-config --cflags $(PKGS)`
LIBS:= `pkg-config --libs $(PKGS)`
LIBS+= -L$(LIB_INSTALL_DIR) -lnvdsgst_meta -lnvds_meta \
-Wl,-rpath,$(LIB_INSTALL_DIR)
all: $(APP)
%.o: %.c $(INCS) Makefile
$(CC) -c -o $@ $(CFLAGS) $<
$(APP): $(OBJS) Makefile
$(CC) -o $(APP) $(OBJS) $(LIBS)
clean:
rm -rf $(OBJS) $(APP)
dstest1_pgie_config.txt
################################################################################
# Copyright (c) 2018-2019, NVIDIA CORPORATION. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
################################################################################
# Following properties are mandatory when engine files are not specified:
# int8-calib-file(Only in INT8)
# Caffemodel mandatory properties: model-file, proto-file, output-blob-names
# UFF: uff-file, input-dims, uff-input-blob-name, output-blob-names
# ONNX: onnx-file
#
# Mandatory properties for detectors:
# num-detected-classes
#
# Optional properties for detectors:
# enable-dbscan(Default=false), interval(Primary mode only, Default=0)
# custom-lib-path,
# parse-bbox-func-name
#
# Mandatory properties for classifiers:
# classifier-threshold, is-classifier
#
# Optional properties for classifiers:
# classifier-async-mode(Secondary mode only, Default=false)
#
# Optional properties in secondary mode:
# operate-on-gie-id(Default=0), operate-on-class-ids(Defaults to all classes),
# input-object-min-width, input-object-min-height, input-object-max-width,
# input-object-max-height
#
# Following properties are always recommended:
# batch-size(Default=1)
#
# Other optional properties:
# net-scale-factor(Default=1), network-mode(Default=0 i.e FP32),
# model-color-format(Default=0 i.e. RGB) model-engine-file, labelfile-path,
# mean-file, gie-unique-id(Default=0), offsets, gie-mode (Default=1 i.e. primary),
# custom-lib-path, network-mode(Default=0 i.e FP32)
#
# The values in the config file are overridden by values set through GObject
# properties.
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
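# 0.0039215... is 1/255: scales 8-bit pixel values into the [0,1] range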
#model-file=/opt/nvidia/deepstream/deepstream-4.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_int8.engine
model-file=/opt/nvidia/deepstream/deepstream-4.0/samples/models/Primary_Detector/resnet10.caffemodel
proto-file=/opt/nvidia/deepstream/deepstream-4.0/samples/models/Primary_Detector/resnet10.prototxt
labelfile-path=/opt/nvidia/deepstream/deepstream-4.0/samples/models/Primary_Detector/labels.txt
int8-calib-file=/opt/nvidia/deepstream/deepstream-4.0/samples/models/Primary_Detector/cal_trt.bin
batch-size=1
network-mode=1
num-detected-classes=4
interval=0
gie-unique-id=1
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid
[class-attrs-all]
threshold=0.2
eps=0.2
group-threshold=1
V. Overview of the other sample sources
See the "Reference Application Source Details" section of the official documentation: https://docs.nvidia.com/metropolis/deepstream/dev-guide/ You can also refer to the (Chinese) blog post 《关于NVIDIA Deepstream SDK压箱底的资料都在这里了》: https://cloud.tencent.com/developer/article/1524712
The sample apps include (note: each must be built with make; see the README in each directory for details):
- DeepStream Sample App <DS installation dir>/sources/apps/sample_apps/deepstream-app Description: End-to-end example demonstrating multi-camera streams through four cascaded neural networks (one primary detector and three secondary classifiers), with tiled display output.
- DeepStream Test 1 <DS installation dir>/sources/apps/sample_apps/deepstream-test1 Description: Applies the filesrc, decode, nvstreammux, nvinfer (primary detection network), nvosd, and renderer DeepStream plugins (elements) to a single H264 video stream.
- DeepStream Test 2 <DS installation dir>/sources/apps/sample_apps/deepstream-test2 Description: Simple app that builds on test1, adding extra attributes such as tracking and secondary classification.
- DeepStream Test 3 <DS installation dir>/sources/apps/sample_apps/deepstream-test3 Description: Simple app that builds on test1, showing multiple input sources and batching with nvstreammux.
- DeepStream Test 4 <DS installation dir>/sources/apps/sample_apps/deepstream-test4 Description: Builds on the Test1 sample and demonstrates the "nvmsgconv" and "nvmsgbroker" plugins in an IoT-connected pipeline (a hedged sketch follows this list). For test4, you must edit the Kafka broker connection string to connect successfully, and the analytics-server docker must be installed before running it; the DeepStream analytics documentation has more on setting up the analytics server.
- FasterRCNN Object Detector <DS installation dir>/sources/objectDetector_FasterRCNN Description: FasterRCNN object detector example.
- SSD Object Detector <DS installation dir>/sources/objectDetector_SSD Description: SSD object detector example.
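As referenced in the Test 4 item above, here is a hedged sketch of how the IoT tail of such a pipeline is assembled. The element names nvmsgconv and nvmsgbroker come from the sample description; the property names, the msgconv config file name, the Kafka proto library path, and the connection-string format are assumptions from my reading of the sample and must be checked against the deepstream-test4 README:

#include <gst/gst.h>

/* Sketch: create and configure the message branch; the caller still has to
 * tee the stream after nvdsosd and link the tee into msgconv. */
static gboolean add_msg_branch(GstElement *pipeline) {
    GstElement *msgconv = gst_element_factory_make("nvmsgconv", "msg-conv");
    GstElement *msgbroker = gst_element_factory_make("nvmsgbroker", "msg-broker");
    if (!msgconv || !msgbroker)
        return FALSE;
    /* nvmsgconv turns event metadata into a schema payload (assumed property/file names). */
    g_object_set(G_OBJECT(msgconv), "config", "dstest4_msgconv_config.txt", NULL);
    /* nvmsgbroker ships the payload out through a protocol adapter (assumed values). */
    g_object_set(G_OBJECT(msgbroker),
            "proto-lib", "/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_kafka_proto.so",
            "conn-str", "localhost;9092", /* replace with your Kafka broker host;port */
            "topic", "dstest4-topic",     /* assumed topic property and name */
            "sync", FALSE, NULL);
    gst_bin_add_many(GST_BIN(pipeline), msgconv, msgbroker, NULL);
    return gst_element_link(msgconv, msgbroker);
}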
VI. Rewriting the DeepStream-Test1 sample
Since this sample cannot pull from a camera, and several of my attempts to make it do so failed, I studied GStreamer on my own (see my notes 《GStreamer应用开发手册学习笔记》) and ported the inference part of the sample into my own code. This both deepened my understanding of GStreamer's processing logic and clarified some details of the DeepStream plugins. (Note: this demo uses GStreamer to pull from an RTSP network camera and runs inference through the DeepStream plugins for real-time object detection; since the internal model is unchanged, the detection behavior is identical to the official DeepStream-Test1 app.) PS:
Q: Are the following DeepStream plugins open source? Where can I find the plugin source code and related material?
"nvv4l2decoder", "nvstreammux", "nvinfer", "nvvideoconvert", "nvdsosd", "nvegltransform", "nveglglessink"
A: Look for code under the sources directory. Only a small part is open source; the rest is not.
Plugins are like building blocks: you assemble them according to your own application flow, and you can also write plugins of your own.
But if you want to read the low-level implementation, you most likely cannot.
The point of DeepStream is to help you build your application quickly.
The GStreamer learning programs I wrote and ran myself: https://github.com/tosonw/gstreamer_learn My rewritten deepstream_test1 program: https://github.com/tosonw/deepstream-test1-app_rtsp
deepstream_test1_app_demo_rtsp.c
//
// Created by toson on 20-2-10.
//
#include "stdio.h"
#include "gst/gst.h"
#define RTSPCAM "rtsp://admin:admin123@192.168.1.106:554/cam/realmonitor?channel=1&subtype=0"
#define MUXER_OUTPUT_WIDTH 1920
#define MUXER_OUTPUT_HEIGHT 1080
#define MUXER_BATCH_TIMEOUT_USEC 4000000
static void cb_new_rtspsrc_pad(GstElement *element, GstPad *pad, gpointer data) {
gchar *name;
GstCaps *p_caps;
gchar *description;
GstElement *p_rtph264depay;
name = gst_pad_get_name(pad);
g_print("A new pad %s was created\n", name);
// here, you would setup a new pad link for the newly created pad
// sooo, now find that rtph264depay is needed and link them?
p_caps = gst_pad_get_pad_template_caps(pad);
description = gst_caps_to_string(p_caps);
printf("%s\n", p_caps);
printf("%s\n", description);
g_free(description);
p_rtph264depay = GST_ELEMENT(data);
// try to link the pads then ...
if (!gst_element_link_pads(element, name, p_rtph264depay, "sink")) {
printf("Failed to link elements 3\n");
}
g_free(name);
}
int main(int argc, char *argv[]) {
GstElement *pipeline = NULL, *source = NULL, *rtppay = NULL, *parse = NULL,
*decoder = NULL, *sink = NULL;
gst_init(&argc, &argv);
/// Build Pipeline
pipeline = gst_pipeline_new("Toson");
/// Create elements
source = gst_element_factory_make("rtspsrc", "source");
g_object_set(G_OBJECT (source), "latency", 2000, NULL);
rtppay = gst_element_factory_make("rtph264depay", "depayl");
parse = gst_element_factory_make("h264parse", "parse");
#ifdef PLATFORM_TEGRA
decoder = gst_element_factory_make("nvv4l2decoder", "nvv4l2-decoder");
GstElement *streammux = gst_element_factory_make("nvstreammux", "stream-muxer");
GstElement *pgie = gst_element_factory_make("nvinfer", "primary-nvinference-engine");
GstElement *nvvidconv = gst_element_factory_make("nvvideoconvert", "nvvideo-converter");
GstElement *nvosd = gst_element_factory_make("nvdsosd", "nv-onscreendisplay");
GstElement *transform = gst_element_factory_make("nvegltransform", "nvegl-transform");
sink = gst_element_factory_make ( "nveglglessink", "sink");
if (!pipeline || !streammux || !pgie || !nvvidconv || !nvosd || !transform) {
g_printerr("One element could not be created. Exiting.\n");
return -1;
}
g_object_set(G_OBJECT (streammux), "width", MUXER_OUTPUT_WIDTH, "height",
MUXER_OUTPUT_HEIGHT, "batch-size", 1,
"batched-push-timeout", MUXER_BATCH_TIMEOUT_USEC, NULL);
g_object_set(G_OBJECT (pgie),
"config-file-path", "dstest1_pgie_config.txt", NULL);
#else
decoder = gst_element_factory_make("avdec_h264", "decode");
sink = gst_element_factory_make("autovideosink", "sink");
#endif
if (!pipeline || !source || !rtppay || !parse || !decoder || !sink) {
g_printerr("One element could not be created. Exiting.\n");
return -1;
}
g_object_set(G_OBJECT (sink), "sync", FALSE, NULL);
g_object_set(GST_OBJECT(source), "location", RTSPCAM, NULL);
/// Add the elements to the pipeline
#ifdef PLATFORM_TEGRA
gst_bin_add_many(GST_BIN (pipeline),
source, rtppay, parse, decoder, streammux, pgie,
nvvidconv, nvosd, transform, sink, NULL);
#else
gst_bin_add_many(GST_BIN (pipeline),
source, rtppay, parse, decoder, sink, NULL);
#endif
// listen for newly created pads
g_signal_connect(source, "pad-added", G_CALLBACK(cb_new_rtspsrc_pad), rtppay);
#ifdef PLATFORM_TEGRA
GstPad *sinkpad, *srcpad;
gchar pad_name_sink[16] = "sink_0";
gchar pad_name_src[16] = "src";
sinkpad = gst_element_get_request_pad(streammux, pad_name_sink);
if (!sinkpad) {
g_printerr("Streammux request sink pad failed. Exiting.\n");
return -1;
}
//Get the decoder's static src pad; it will be linked to the streammux sink pad requested above
srcpad = gst_element_get_static_pad(decoder, pad_name_src);
if (!srcpad) {
g_printerr("Decoder request src pad failed. Exiting.\n");
return -1;
}
if (gst_pad_link(srcpad, sinkpad) != GST_PAD_LINK_OK) {
g_printerr("Failed to link decoder to stream muxer. Exiting.\n");
return -1;
}
gst_object_unref(sinkpad);
gst_object_unref(srcpad);
#endif
/// Link the elements
#ifdef PLATFORM_TEGRA
if (!gst_element_link_many(rtppay, parse, decoder, NULL)) {
printf("\nFailed to link elements 0.\n");
return -1;
}
if (!gst_element_link_many(streammux, pgie, nvvidconv, nvosd, transform, sink, NULL)) {
printf("\nFailed to link elements 2.\n");
return -1;
}
#else
if (!gst_element_link_many(rtppay, parse, decoder, sink, NULL)) {
printf("\nFailed to link elements.\n");
return -1;
}
#endif
/// Start the pipeline
gst_element_set_state(pipeline, GST_STATE_PLAYING);
GstBus *bus = gst_element_get_bus(pipeline);
GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
(GstMessageType) (GST_MESSAGE_ERROR | GST_MESSAGE_EOS));
if (msg != NULL) {
gst_message_unref(msg);
}
gst_object_unref(bus);
gst_element_set_state(pipeline, GST_STATE_NULL);
gst_object_unref(pipeline);
return 0;
}
Of course, you still need the config file at runtime: dstest1_pgie_config.txt. If you want to build with CMake:
CMakeLists.txt
cmake_minimum_required(VERSION 3.5)
project(deepstream_test1-app_toson)
set(CMAKE_CXX_STANDARD 11)
set(CMAKE_CXX_STANDARD_REQUIRED TRUE)
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11 -pthread -g")
MESSAGE(STATUS "operation system is ${CMAKE_SYSTEM}")
MESSAGE(STATUS "CMAKE_SYSTEM_NAME is ${CMAKE_SYSTEM}")
IF (CMAKE_SYSTEM_NAME MATCHES "Linux")
MESSAGE(STATUS "current platform: Linux ")
ELSEIF (CMAKE_SYSTEM_NAME MATCHES "Windows")
MESSAGE(STATUS "current platform: Windows")
ELSEIF (CMAKE_SYSTEM_NAME MATCHES "FreeBSD")
MESSAGE(STATUS "current platform: FreeBSD")
ELSE ()
MESSAGE(STATUS "other platform: ${CMAKE_SYSTEM_NAME}")
ENDIF (CMAKE_SYSTEM_NAME MATCHES "Linux")
if (${CMAKE_SYSTEM} MATCHES "Linux-4.9.140-tegra")
message("On TEGRA PLATFORM.")
add_definitions(-DPLATFORM_TEGRA)
set(SYS_USR_LIB /usr/lib/aarch64-linux-gnu)
set(SYS_LIB /lib/aarch64-linux-gnu)
set(DS_LIB /opt/nvidia/deepstream/deepstream-4.0/lib)
link_libraries(
/opt/nvidia/deepstream/deepstream-4.0/lib/libnvdsgst_meta.so
/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_meta.so
)
else ()
message("On X86 PLATFORM.")
set(SYS_USR_LIB /usr/lib/x86_64-linux-gnu)
set(SYS_LIB /lib/x86_64-linux-gnu)
endif ()
include_directories(
includes
/usr/include/gstreamer-1.0
/usr/include/glib-2.0
${SYS_USR_LIB}/glib-2.0/include
)
link_libraries(
${SYS_USR_LIB}/libgtk3-nocsd.so.0
${SYS_USR_LIB}/libgstreamer-1.0.so.0
${SYS_USR_LIB}/libgobject-2.0.so.0
${SYS_USR_LIB}/libglib-2.0.so.0
${SYS_LIB}/libc.so.6
${SYS_LIB}/libdl.so.2
${SYS_LIB}/libpthread.so.0
${SYS_USR_LIB}/libgmodule-2.0.so.0
${SYS_LIB}/libm.so.6
${SYS_USR_LIB}/libffi.so.6
${SYS_LIB}/libpcre.so.3
)
add_executable(deepstream_test1_app_ deepstream_test1_app.c)
add_executable(deepstream_test1_app_demo_rtsp_ deepstream_test1_app_demo_rtsp.c)