DeepStream: Deploying a Custom YOLOv5 Detection Model
Set up the YOLOv5 environment following Part 4.
Convert PyTorch model to wts file
- Download repositories
- Download the latest YOLOv5 (YOLOv5s, YOLOv5m, YOLOv5l or YOLOv5x) weights into the yolov5 folder (example for YOLOv5s)
- Copy the gen_wts.py file (from the tensorrtx/yolov5 folder) into the yolov5 (ultralytics) folder
- Copy the trained YOLOv5 detection model best.pth into the yolov5 folder
- Generate wts file
The yolov5s.wts file will be generated in the yolov5 folder
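The steps above can be sketched as a shell session. The repository URLs are the usual upstream sources, but the gen_wts.py flags are an assumption: older tensorrtx revisions take no arguments and use hardcoded filenames, so check the script you actually copied.

```shell
# Clone both repositories side by side
git clone https://github.com/ultralytics/yolov5.git
git clone https://github.com/wang-xinyu/tensorrtx.git

# Copy the converter script into the yolov5 checkout
cp tensorrtx/yolov5/gen_wts.py yolov5/

# Generate the wts file (flag names vary between tensorrtx versions)
cd yolov5
python gen_wts.py -w yolov5s.pt -o yolov5s.wts
```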
[Error] No module named tqdm
[Error] No module named seaborn
seaborn cannot be installed because matplotlib fails to install; try:
It failed again because of matplotlib.
Try:
Run the install again.
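A hedged sketch of the usual fixes for these errors. The apt package names are assumptions for a Debian/Ubuntu system (adjust for your distro); on ARM boards such as Jetson, matplotlib commonly fails to build from source until the FreeType and PNG headers are present.

```shell
# Missing Python module reported by the YOLOv5 scripts
pip3 install tqdm

# System headers matplotlib typically needs when building from source on ARM
sudo apt-get install -y libfreetype6-dev libpng-dev
pip3 install matplotlib

# seaborn depends on matplotlib, so install it afterwards
pip3 install seaborn
```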
[Error] No module named numpy.testing.nosetester
This happens because of a version incompatibility between numpy and scipy: numpy has deprecated and removed numpy.testing.nosetester in its latest releases.
Try:
[Error] No lapack/blas resources found
Run the install again.
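A sketch of the combined fix, assuming a Debian-based system: install a BLAS/LAPACK implementation and a Fortran compiler so scipy can build from source, and pin numpy to a release that still ships numpy.testing.nosetester (it was removed around numpy 1.18). The exact version bound is an assumption; match it to the scipy release you need.

```shell
# Provide LAPACK/BLAS and a Fortran compiler for the scipy build
sudo apt-get install -y libatlas-base-dev gfortran

# Pin numpy below 1.18 so numpy.testing.nosetester is still available,
# then install scipy against it
pip3 install "numpy<1.18" scipy
```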
best.wts is generated.
Modify yololayer.h and yolov5.cpp to match the YOLOv5 training parameters
https://zhuanlan.zhihu.com/p/365191541
1. Check the YOLOv5 training parameters:
https://www.icode9.com/content-3-774443.html
- In the data folder, open myvoc.yaml to find the number of classes
- In the weights folder, check which pretrained model version was used, e.g. yolov5m
2. Modify yololayer.h and yolov5.cpp so the parameters match those used during training; otherwise errors will occur
- In yololayer.h, change the number of classes
- In yolov5.cpp, the main thing to tune is the batch size according to available GPU memory; setting it to 1 is usually fine
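The two edits above, sketched against the tensorrtx sources. The exact constant names and default values differ between tensorrtx revisions, so treat this as illustrative rather than a drop-in patch; the values shown (one class, batch size 1) are examples to replace with your own training configuration.

```cpp
// yololayer.h -- set the class count to match data/myvoc.yaml
// (the stock file ships configured for the 80 COCO classes)
namespace Yolo {
    static constexpr int CLASS_NUM = 1;  // e.g. a single custom class
}

// yolov5.cpp -- batch size used when building/serializing the engine;
// 1 fits almost any GPU and avoids out-of-memory errors
#define BATCH_SIZE 1
```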
Convert wts file to TensorRT model
Following the instructions at https://github.com/DanaHan/Yolov5-in-Deepstream-5.0, before building tensorrtx/yolov5 you also need to:
Important Note:
You should replace the yololayer.cu and hardswish.cu files in tensorrtx/yolov5
- Build tensorrtx/yolov5
- Move the generated yolov5s.wts file into the tensorrtx/yolov5 folder (example for YOLOv5s)
- Convert to a TensorRT model (the yolov5s.engine file will be generated in the tensorrtx/yolov5/build folder)
- Note: by default, the yolov5 script generates the model with batch size = 1 and FP16 mode. Edit the yolov5.cpp file before compiling if you want to change these parameters.
We can get 'best.engine' and 'libmyplugin.so' here for later use.
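The build and conversion steps above can be sketched as follows. The directory layout follows the tensorrtx README, and the -s flag is what most tensorrtx versions use to serialize the wts weights into a TensorRT engine; confirm against the revision you checked out.

```shell
# Build the tensorrtx yolov5 sample
# (after replacing yololayer.cu and hardswish.cu, and with the wts file
#  already placed in tensorrtx/yolov5 as described above)
cd tensorrtx/yolov5
mkdir -p build && cd build
cmake ..
make

# Serialize the network: reads the wts file and writes the .engine file
# (and the plugin library) into this build folder
sudo ./yolov5 -s
```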