This article walks through these steps on CentOS 8 on Tencent Cloud.
The OpenVINO Model Server Docker image is still fairly new: it currently has fewer than 200 total pulls, which does not match its intended positioning.
The official documentation appears to contain errors, and some parts are not explained in any detail; this article fills in those gaps.
Note: OVMS has been tested on CentOS and Ubuntu. Publicly released docker images are based on CentOS.
=======================================================================================================
Step 1: Prepare Docker
To see if you have Docker already installed and ready to use, test the installation:
$ docker run hello-world
If you see a test image and an informational message, Docker is ready to use. Go to download and build the OpenVINO Model Server. If you don't see the test image and message:
- Install the Docker* Engine on your development machine.
- Use the Docker post-installation steps.
Continue to Step 2 to download and build the OpenVINO Model Server.
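If Docker is not yet installed on the CentOS 8 machine, a minimal install sequence looks roughly like the following (this assumes the official Docker CE repository; on some CentOS 8 images the dnf-plugins-core package has to be installed first, and resolving the containerd.io version may need extra flags):

$ sudo dnf install -y dnf-plugins-core
$ sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
$ sudo dnf install -y docker-ce docker-ce-cli containerd.io
$ sudo systemctl enable --now docker
$ sudo docker run hello-world    # verify the installation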
Step 2: Download and Build the OpenVINO Model Server
- Download the Docker* image that contains the OpenVINO Model Server. This image is available from DockerHub:
openvino/model_server:latest
or build the docker image openvino/model_server:latest yourself; the build takes the <URL> of the OpenVINO toolkit package as a parameter.
Note: URL to OpenVINO Toolkit package can be received after registration on OpenVINO™ Toolkit website
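A rough sketch of both options. Pulling the prebuilt image is straightforward; the exact make target and variable name for building from source are assumptions based on the model_server README of that time, so check the repository before running it:

# Option 1: pull the prebuilt image from DockerHub
$ docker pull openvino/model_server:latest

# Option 2: build it from source, passing the toolkit package URL
# (target/variable names are assumptions; see the repository README)
$ git clone https://github.com/openvinotoolkit/model_server.git
$ cd model_server
$ make docker_build_bin dldt_package_url=<URL>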
Step 3: Download a Model
Download the model components to the model directory. An example command using curl is sketched below.
Note in particular that the files have to be organized in a specific directory structure: for example, after downloading the .bin + .xml pair, they must be stored following the layout below.
The approach used in this article is to create model directly under /, then create models and model1 underneath it, and put the .bin + .xml files under model1. All of the later command lines are written against this layout; adjust them to your own setup.
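As a sketch, using the face-detection model from the quickstart (the download URLs are left as placeholders; substitute the links for the model you actually serve). Note that OVMS also expects a numeric version subdirectory under each model directory, so the sketch adds a 1/ level:

$ mkdir -p /model/models/model1/1
$ curl -o /model/models/model1/1/face-detection.xml <xml-download-url>
$ curl -o /model/models/model1/1/face-detection.bin <bin-download-url>

# resulting layout
# /model/models/model1/1/face-detection.xml
# /model/models/model1/1/face-detection.bin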
Step 4: Start the Model Server Container (the original documentation may be wrong here)
On top of the file structure organized above, start the container with docker run.
A more formal version of the command spells out the model path, model name, and port explicitly; a sketch is given below.
You can also refer to the image documentation on Docker Hub.
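A minimal sketch, assuming the /model/models/model1 layout from Step 3 and gRPC on port 9000 (the model name face-detection and the port number are choices made for this walkthrough, not requirements):

$ docker run -d --rm -v /model/models:/models:ro -p 9000:9000 \
    openvino/model_server:latest \
    --model_path /models/model1 --model_name face-detection --port 9000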
Step 5: Download the Example Client Components
Model scripts are available to provide an easy way to access the Model Server. This example uses a face detection script and uses curl to download components.
- Use this command to download all necessary components (sketched below):
For more information:
Because these files are pulled from GitHub, the downloads may fail and need to be retried.
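A sketch of the downloads, assuming the example client sits in the example_client directory of the model_server repository (the branch and file paths are assumptions; check the repository for the current layout):

$ curl -O https://raw.githubusercontent.com/openvinotoolkit/model_server/main/example_client/face_detection.py
$ curl -O https://raw.githubusercontent.com/openvinotoolkit/model_server/main/example_client/client_utils.py
$ curl -O https://raw.githubusercontent.com/openvinotoolkit/model_server/main/example_client/client_requirements.txt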
Step 6: Download Data for Inference
- Download example images for inference. This example uses a file named people1.jpeg.
- Put the image in a folder by itself. The script runs inference on all images in the folder.
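For example, assuming a folder named images next to the client script and any photo with faces saved as people1.jpeg (where you get the image from is up to you; the only requirement is that the folder holds nothing but inference inputs):

$ mkdir images
$ cp people1.jpeg images/        # or: curl -o images/people1.jpeg <image-url>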
Step 7: Run Inference
Before installing anything with pip, it is best to run a pip self-upgrade once to bring pip up to the latest version.
In addition, if a cv2 error shows up, run the OpenCV fix. Both are sketched below.
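A sketch of those commands; the cv2 fix assumes the error is a missing opencv-python package (on a bare CentOS 8 image a missing libGL library is another common cause, so that case is included as well):

$ python3 -m pip install --upgrade pip
$ python3 -m pip install opencv-python      # if "import cv2" fails
$ sudo dnf install -y mesa-libGL            # if cv2 complains about libGL.so.1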
- Go to the folder in which you put the client script.
- Install the dependencies (this command and the next two are sketched together after this list):
- Create a folder in which inference results will be put:
- Run the client script:
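A sketch of the three commands, assuming the file and folder names used above (client_requirements.txt, the images folder from Step 6, a results folder). The face_detection.py options follow the quickstart example and are assumptions; check python3 face_detection.py --help for the options your copy actually accepts:

$ python3 -m pip install -r client_requirements.txt
$ mkdir results
$ python3 face_detection.py --batch_size 1 --width 600 --height 400 \
    --input_images_dir images --output_dir results \
    --grpc_port 9000 --model_name face-detection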
Step 8: Review the Results
In the results folder, look for an image that contains the inference results. The result is the modified input image with bounding boxes indicating detected faces.
OK, it works!
Even with all of this done, the next step is inevitably the question of how other services call the model server. That said, face_detection.py should already make this reasonably clear.
Structurally, the operations here look more like a network deployment; that will be discussed in a later article.
Other important references:
https://medium.com/@rachittayal7/getting-started-with-openvino-serving-3810361a7368
https://zhuanlan.zhihu.com/p/102107664