Engineering Training Competition Prep Log (3): Deploying a PaddleLite Garbage Classification Model on a Raspberry Pi

Introduction: Using a Raspberry Pi 4B with OpenCV and PaddleLite 2.8, I deployed the model trained in the previous posts. This article covers setting up the Raspberry Pi 4B environment, converting the model, deploying it, and demonstrating the results.

Contents:

Part 1: Raspberry Pi 4B environment setup

1. Installing OpenCV-Python
2. Building and installing PaddleLite from source

Part 2: Model conversion

Part 3: Model deployment

Part 1: Raspberry Pi 4B environment setup

1. Installing OpenCV-Python

Install the dependencies:

sudo apt-get update
sudo apt-get install libhdf5-dev
sudo apt-get install libatlas-base-dev
sudo apt-get install libjasper-dev
sudo apt-get install libqt4-test
sudo apt-get install libqtgui4

Install numpy (use pip3 so it lands in the same Python 3 environment as OpenCV):

pip3 install numpy==1.16.2

Install opencv-python with pip3:

sudo pip3 install opencv-contrib-python==4.1.0.25

Test the installation:

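A quick way to verify the install is to import the packages from Python 3 and print their versions; they should match the ones pinned above:

# Run with python3
import cv2
import numpy as np

print(cv2.__version__)  # expect 4.1.0
print(np.__version__)   # expect 1.16.2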


Part 2: Model conversion

# Import the PaddleLite opt API
from paddlelite.lite import *
# 1. Create an Opt instance
opt = Opt()
# 2. Point it at the input model
# Non-combined format (a directory of separate parameter files)
#opt.set_model_dir("./mobilenet_v1")
# Combined format; adjust the model and parameter file names to match your own
opt.set_model_file("./model.pdmodel")
opt.set_param_file("./model.pdiparams")
# 3. Target hardware: arm, x86, opencl, npu
opt.set_valid_places("arm")
# 4. Output model format: naive_buffer or protobuf
opt.set_model_type("naive_buffer")
# 5. Dynamic offline (post-training) quantization
opt.set_quant_model(True)
opt.set_quant_type("QUANT_INT8")
# 6. Output path (a .nb file is produced)
opt.set_optimize_out("out_model")
# 7. Run the optimization
opt.run()
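After opt.run() finishes, a file named out_model.nb should appear next to the script. A minimal sanity check that the optimized model loads, reusing the same MobileConfig API that the deployment code below relies on:

import os
from paddlelite.lite import MobileConfig, create_paddle_predictor

# The optimized model written by opt.run() above
nb_path = "./out_model.nb"
assert os.path.exists(nb_path), "run the conversion script first"

config = MobileConfig()
config.set_model_from_file(nb_path)
predictor = create_paddle_predictor(config)  # fails if the file is not a valid .nb model
print("optimized model loaded successfully")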

Part 3: Model deployment

# Imports
from paddlelite.lite import *
import cv2
import numpy as np
import sys
import time
from PIL import Image
from PIL import ImageFont
from PIL import ImageDraw

# Load the optimized .nb model and build a predictor
def create_predictor(model_dir):
    config = MobileConfig()
    config.set_model_from_file(model_dir)
    predictor = create_paddle_predictor(config)
    return predictor

# Resize and normalize a PIL image for the network
def process_img(image, input_image_size):
    origin = image
    img = origin.resize(input_image_size, Image.BILINEAR)
    resized_img = img.copy()
    if img.mode != 'RGB':
        img = img.convert('RGB')
    img = np.array(img).astype('float32').transpose((2, 0, 1))  # HWC to CHW
    img -= 127.5
    img *= 0.007843  # scale to roughly [-1, 1]
    img = img[np.newaxis, :]
    return origin, img
 
# Run inference on one image (a BGR numpy array from OpenCV)
def predict(image, predictor, input_image_size):
    # Prepare the input tensor
    input_tensor = predictor.get_input(0)
    input_tensor.resize([1, 3, input_image_size[0], input_image_size[1]])
    # OpenCV delivers BGR, PIL expects RGB
    image = Image.fromarray(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    origin, img = process_img(image, input_image_size)
    image_data = np.array(img).flatten().tolist()
    input_tensor.set_float_data(image_data)
    # Run the model
    predictor.run()
    # Fetch the output
    output_tensor = predictor.get_output(0)
    print("output_tensor.float_data()[:] : ", output_tensor.float_data()[:])
    res = output_tensor.float_data()[:]
    return res

# Print the class with the highest score
def post_res(label_dict, res):
    print(max(res))
    target_index = res.index(max(res))
    print("Result: " + label_dict[target_index])

The model file path in the main function is up to you. The number of labels and their indices depend on your own training set: each class gets the index of its dataset folder, following the order in which the folders are listed.

My dataset folders run Battery, Bottle, Cans, Ceramics, Cigarette, Fruit, FruitSkin, Vegetable, VegetableSkin, so the mapping is:

label_dict = {0: "Battery", 1: "Bottle", 2: "Cans", 3: "Ceramics", 4: "Cigarette", 5: "Fruit", 6: "FruitSkin", 7: "Vegetable", 8: "VegetableSkin"}
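If you would rather not write the mapping by hand, it can be derived from the folder names directly. A minimal sketch, assuming the training images live in one subfolder per class under a hypothetical ./dataset directory and that label indices were assigned in sorted folder order (which matches the ordering above):

import os

dataset_root = "./dataset"  # hypothetical path; point this at your own dataset directory
# Sorted class-folder names define the label indices
class_names = sorted(d for d in os.listdir(dataset_root)
                     if os.path.isdir(os.path.join(dataset_root, d)))
label_dict = {i: name for i, name in enumerate(class_names)}
print(label_dict)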

Below is a single-image prediction example:

if __name__ == '__main__':
    # Labels: each index follows the order of the dataset folders
    label_dict = {0: "Battery", 1: "Bottle", 2: "Cans", 3: "Ceramics", 4: "Cigarette",
                  5: "Fruit", 6: "FruitSkin", 7: "Vegetable", 8: "VegetableSkin"}
    image_path = "./test.jpg"
    model_dir = "./out_model.nb"
    image_size = (224, 224)
    # Build the predictor
    predictor = create_predictor(model_dir)
    # Read the test image
    image = cv2.imread(image_path)
    # Run inference
    res = predict(image, predictor, image_size)
    # Show the result
    post_res(label_dict, res)
    cv2.namedWindow('image', cv2.WINDOW_NORMAL)
    cv2.imshow("image", image)
    cv2.waitKey()

Test result:

[screenshot: single-image prediction output]

Below is the video-stream (webcam) prediction example:

if __name__ == '__main__':
    # Labels: each index follows the order of the dataset folders
    label_dict = {0: "Battery", 1: "Bottle", 2: "Cans", 3: "Ceramics", 4: "Cigarette",
                  5: "Fruit", 6: "FruitSkin", 7: "Vegetable", 8: "VegetableSkin"}
    model_dir = "./out_model.nb"
    image_size = (224, 224)
    cap = cv2.VideoCapture(0)
    (major_ver, minor_ver, subminor_ver) = (cv2.__version__).split('.')
    # Build the predictor
    predictor = create_predictor(model_dir)
    # Capture and classify frame by frame
    while cap.isOpened():
        _, img = cap.read()

        # The FPS property name changed between OpenCV 2.x and 3.x+
        if int(major_ver) < 3:
            fps = cap.get(cv2.cv.CV_CAP_PROP_FPS)
        else:
            fps = cap.get(cv2.CAP_PROP_FPS)

        font = cv2.FONT_HERSHEY_SIMPLEX
        text = 'FPS:  ' + str(fps)
        img = cv2.putText(img, text, (10, 50), font, 1, (0, 255, 255), 2, cv2.LINE_AA)
        # Run inference
        res = predict(img, predictor, image_size)
        # Show the result
        post_res(label_dict, res)
        cv2.imshow('img', img)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    cap.release()
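Note that cap.get(cv2.CAP_PROP_FPS) reports the camera's nominal frame rate, not the rate actually achieved once inference time is included. To measure the real end-to-end rate, time each loop iteration yourself; a minimal standalone sketch, with the inference call left as a commented placeholder:

import time
import cv2

cap = cv2.VideoCapture(0)
prev = time.time()
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # res = predict(frame, predictor, image_size)  # inference would run here
    now = time.time()
    real_fps = 1.0 / max(now - prev, 1e-6)  # measured frames per second
    prev = now
    cv2.putText(frame, 'FPS: %.1f' % real_fps, (10, 50),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 255), 2, cv2.LINE_AA)
    cv2.imshow('img', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()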

Test result:

[screenshots: webcam prediction results]