Still don't know how to deploy a YOLOv5 model with TorchServe? (Without Docker)

Posted by 啊~小li

This guide does not use Docker containers.

  1. Environment overview
    1. OS: CentOS 8
    2. Python 3.8 managed by conda (Python 3.8 is recommended here)
    3. JDK 11 (OpenJDK or Oracle JDK)
    4. Other related Python libraries (torch, cv2, numpy, etc.)
  2. Environment setup
    1. Java environment
      Download the JDK from the official Oracle JDK 11 download page
    2. Configure the environment variables (the JDK here is installed from an archive, not a package manager)
      Run sudo vim /etc/bashrc and append the following environment variables at the end:
export JAVA_HOME=(your JDK path)
export CLASSPATH=$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib
export PATH=$JAVA_HOME/bin:$PATH
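After saving, reload the shell configuration and verify that the JDK is on the PATH (standard commands, shown as a quick sanity check):
source /etc/bashrc
java -version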
  3. Python environment setup
    1. Installing conda (manually downloading from the Tsinghua mirror is faster): Tsinghua mirror Anaconda link
      After downloading, cd into the folder containing the Anaconda installer and run sudo bash (the conda installer file)
      PS: after accepting the license terms, you can choose the installation path freely. At the end, choose to have the environment variables configured automatically.
    2. Testing the conda installation
      After installing conda, activate the environment variables with source ~/.bashrc
      Test the conda environment with conda env list
    3. Using conda
      1. Create an environment: conda create -n (environment name) python=(desired Python version)
      2. Switch environments: conda activate (environment name)
      3. Clone an environment: conda create -n (new environment name) --clone (environment to copy)
    4. Installing the dependencies
      After switching to the conda environment you want to use, install the dependencies with pip/conda (an example command follows this list)
      1. The library needed for packaging is torch-model-archiver
      2. The library needed for serving is torchserve
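      A minimal install sketch (package names as published on PyPI; torchvision, opencv-python, and numpy are assumed here because the handler shown later imports them):
      pip install torch torchvision torchserve torch-model-archiver opencv-python numpy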
  4. Model packaging
    Model packaging is a fiddly step. I went through a lot of material online, and much of it simply does not work. Here is an approach to packaging and deploying the model that has worked well.

Preparation before deployment

Convert the model to TorchScript format (the YOLOv5 repository already ships the export code in models/export.py). The conversion command:
python models/export.py --weights yolov5s.pt --img 640 --batch 1
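
Note: in newer YOLOv5 releases the export script was moved to the repository root and takes an --include flag; there the equivalent command (an assumption about your checkout) would be:
python export.py --weights yolov5s.pt --img 640 --batch 1 --include torchscript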

Hand-writing the processing pipeline (the hard part): the handler file

After deployment, the model receives images as byte streams, so we need to process the transmitted data ourselves. A handler.py file defines that processing pipeline. The skeleton looks like this:

from ts.torch_handler.base_handler import BaseHandler
import torch

class MyHandler(BaseHandler):
    def __init__(self):
        # Initialize instance attributes; override as needed.
        # self.model = None
        # self.mapping = None
        # self.device = None
        # self.initialized = False
        # self.context = None
        # self.manifest = None
        # self.map_location = None
        # self.explain = False
        # self.target = 0
        ...

    def initialize(self, context):
        # Initialize the model and other related parameters; override as needed.
        ...

    def preprocess(self, data):
        # Preprocessing: a key step that almost always needs to be overridden.
        ...

    def inference(self, model_input):
        # Inference; override as needed.
        ...

    def postprocess(self, inference_output):
        # Postprocessing: a key step that almost always needs to be overridden.
        ...

    def handle(self, data, context):
        # The serving flow; override as needed.
        ...
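
For orientation: BaseHandler's default handle() essentially chains the three steps above, so you often do not need to override it. A simplified sketch of that flow (not the exact library code):

def handle(self, data, context):
    model_input = self.preprocess(data)
    model_output = self.inference(model_input)
    return self.postprocess(model_output)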

Here is a handler I wrote myself for reference (you can use it directly):

import time
from ts.torch_handler.base_handler import BaseHandler
import numpy as np
import torch
import torchvision
import cv2


class ModelHandler(BaseHandler):
    """
    A custom model handler implementation.
    """

    def __init__(self):
        super().__init__()
        self._context = None
        self.initialized = False
        self.batch_size = 1
        self.img_size = 640

    def preprocess(self, data):
        """
        Transform raw input into model input data.
        :param data: list of raw requests, should match batch size
        :return: tensor of preprocessed model input data
        """
        list_img_names = ["img" + str(i) for i in range(1, self.batch_size + 1)]
        inputs = torch.zeros(self.batch_size, 3, self.img_size, self.img_size)
        for i, img_name in enumerate(list_img_names):
            try:
                # Take the input data and make it inference ready
                byte_array = data[0][img_name]
                file_bytes = np.asarray(bytearray(byte_array), dtype=np.uint8)
                # yolov5 preprocessing
                img = cv2.imdecode(file_bytes, cv2.IMREAD_COLOR)
                # Resize to the model input size (without this, the assignment
                # into `inputs` below fails for images that are not already 640x640).
                img = cv2.resize(img, (self.img_size, self.img_size))
                img = img[:, :, ::-1].transpose(2, 0, 1)  # BGR to RGB, HWC to CHW (3x640x640)
                img = np.ascontiguousarray(img)
                input = torch.from_numpy(img)
                input = input.float()
                input /= 255.0  # 0 - 255 to 0.0 - 1.0
                inputs[i, :, :, :] = input
            except (KeyError, TypeError, cv2.error):
                # Skip request fields that are missing or cannot be decoded.
                continue
        return inputs

    def postprocess(self, inference_output):
        """
        Return inference result.
        :param inference_output: list of inference output
        :return: list of predict results
        """
        # Take output from network and post-process to desired format
        postprocess_output = inference_output
        pred = non_max_suppression(postprocess_output[0], conf_thres=0.6)
        pred = [p.tolist() for p in pred]
        return [pred]


def non_max_suppression(prediction, conf_thres=0.5, iou_thres=0.6, classes=None, agnostic=False, labels=()):
    """
    Performs Non-Maximum Suppression (NMS) on inference results
    Returns:
         detections with shape: nx6 (x1, y1, x2, y2, conf, cls)
    """

    # Number of classes.
    nc = prediction[0].shape[1] - 5

    # Candidates.
    xc = prediction[..., 4] > conf_thres

    # Settings:
    # Minimum and maximum box width and height in pixels.
    min_wh, max_wh = 2, 256

    # Maximum number of detections per image.
    max_det = 100

    # Timeout.
    time_limit = 10.0

    # Require redundant detections.
    redundant = True

    # Multiple labels per box (adds 0.5ms/img).
    multi_label = nc > 1

    # Use Merge-NMS.
    merge = False

    t = time.time()
    output = [torch.zeros(0, 6)] * prediction.shape[0]
    for xi, x in enumerate(prediction):  # image index, image inference

        # Apply constraints:
        # Confidence.
        x = x[xc[xi]]

        # Cat apriori labels if autolabelling.
        if labels and len(labels[xi]):
            l = labels[xi]
            v = torch.zeros((len(l), nc + 5), device=x.device)
            v[:, :4] = l[:, 1:5]  # box
            v[:, 4] = 1.0  # conf
            v[range(len(l)), l[:, 0].long() + 5] = 1.0  # cls
            x = torch.cat((x, v), 0)

        # If none remain process next image.
        if not x.shape[0]:
            continue

        # Compute conf.
        x[:, 5:] *= x[:, 4:5]  # conf = obj_conf * cls_conf

        # Box (center x, center y, width, height) to (x1, y1, x2, y2).
        box = xywh2xyxy(x[:, :4])

        # Detections matrix nx6 (xyxy, conf, cls).
        if multi_label:
            i, j = (x[:, 5:] > conf_thres).nonzero(as_tuple=False).T
            x = torch.cat((box[i], x[i, j + 5, None], j[:, None].float()), 1)
        else:

            # Best class only.
            conf, j = x[:, 5:].max(1, keepdim=True)
            x = torch.cat((box, conf, j.float()), 1)[conf.view(-1) > conf_thres]

        # Filter by class.
        if classes:
            x = x[(x[:, 5:6] == torch.tensor(classes, device=x.device)).any(1)]

        # If none remain process next image.
        # Number of boxes.
        n = x.shape[0]
        if not n:
            continue

        # Batched NMS:
        # Classes.
        c = x[:, 5:6] * (0 if agnostic else max_wh)

        # Boxes (offset by class), scores.
        boxes, scores = x[:, :4] + c, x[:, 4]

        # NMS.
        i = torchvision.ops.nms(boxes, scores, iou_thres)

        # Limit detections.
        if i.shape[0] > max_det:  # limit detections
            i = i[:max_det]
        if merge and (1 < n < 3E3):

            # Merge NMS (boxes merged using weighted mean).
            # Update boxes as boxes(i,4) = weights(i,n) * boxes(n,4).
            # NOTE: box_iou is not defined in this file; if you enable
            # merge=True, copy box_iou from yolov5's utils as well.
            iou = box_iou(boxes[i], boxes) > iou_thres  # iou matrix
            weights = iou * scores[None]  # box weights
            x[i, :4] = torch.mm(weights, x[:, :4]).float() / weights.sum(1, keepdim=True)  # merged boxes
            if redundant:
                i = i[iou.sum(1) > 1]  # require redundancy

        output[xi] = x[i]
        if (time.time() - t) > time_limit:
            break  # time limit exceeded

    return output


def xywh2xyxy(x):
    # Convert nx4 boxes from [x, y, w, h] to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right
    y = torch.zeros_like(x) if isinstance(x, torch.Tensor) else np.zeros_like(x)
    y[:, 0] = x[:, 0] - x[:, 2] / 2  # top left x
    y[:, 1] = x[:, 1] - x[:, 3] / 2  # top left y
    y[:, 2] = x[:, 0] + x[:, 2] / 2  # bottom right x
    y[:, 3] = x[:, 1] + x[:, 3] / 2  # bottom right y
    return y
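
A quick local sanity check for the handler's preprocess(), bypassing TorchServe entirely (a hypothetical snippet; test.jpg is a placeholder):

handler = ModelHandler()
with open("test.jpg", "rb") as f:
    batch = [{"img1": f.read()}]
print(handler.preprocess(batch).shape)  # expect torch.Size([1, 3, 640, 640])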

**Note:** this code cannot be used completely on its own: when packaging, the other related YOLOv5 code it depends on must be bundled in as well, so treat it as a reference. Full deployment code:

Link: https://pan.baidu.com/s/1_A_RHF6ln7EdZafiRujk6g
Extraction code: wm24

Packaging command

torch-model-archiver --model-name test --version 1 --serialized-file ./last_90.pt --model-file (model definition file) --handler ./handler.py --extra-files (this handler.py file, index_to_name.json) -f
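
index_to_name.json maps class indices to human-readable labels. A hypothetical two-class example (replace with your own classes):
{"0": "person", "1": "helmet"}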
5. Model deployment
Example configuration file (config.properties):

inference_address=http://0.0.0.0:8080
management_address=http://127.0.0.1:8081
metrics_address=http://127.0.0.1:8082
number_of_netty_threads=1
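
The three addresses above are the inference, management, and metrics APIs; ports 8080/8081/8082 are TorchServe's defaults. Optionally, the model store can be set in the same file (property names as documented by TorchServe):
model_store=model_store
load_models=all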

Once configured, put the packaged .mar file into the model-store folder and start the server with:
torchserve --start --model-store model_store --models yolo5=yolo5.mar --ts-config ./config.properties
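
To shut the server down later, use the standard stop flag:
torchserve --stop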

Deployment testing

  1. Loaded models
    The model list is served by the management API, which the config above binds to port 8081. Open http://localhost:8081/models in a browser

    or from a terminal: curl http://localhost:8081/models

  2. Deployment status
    Open http://localhost:8080/ping in a browser

    or from a terminal: curl http://localhost:8080/ping

  3. Prediction
    The prediction endpoint is
    http://localhost:8080/predictions/(model name)
    for example: http://localhost:8080/predictions/test

    From a terminal, send an image with:
    curl http://localhost:8080/predictions/test -T (image file)
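
Alternatively, a small Python client (a sketch: the img1 field name matches the key that the handler's preprocess() looks up, and test.jpg is a placeholder):

import requests

with open("test.jpg", "rb") as f:
    resp = requests.post("http://localhost:8080/predictions/test", files={"img1": f})
print(resp.json())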

Deployment complete. ENDing
