Deploying YOLOv5 on a Jetson Nano with tensorrtx Acceleration (A Record of Walking Through the Whole Process Myself)

Posted by Mr_LanGX


Before we start

I spent some time getting a Jetson Nano and YOLOv5 working together. Much of the material online is repetitive and full of pitfalls, and I stumbled through the setup for several days, so once I got out of the woods I decided to write this tutorial down as a memo for myself.

To be clear up front, much of this article is not original; it is a compilation of the references I used while setting things up. Every step, however, reproduces my own process 1:1, including the mistakes I made and how I fixed them. Devices and the packages available online keep changing, so I cannot guarantee that later versions will remain compatible with these instructions. In short, wish me luck!

I. Flashing the image

1. Choosing an image

I chose the Yahboom (亚博智能) image, which already has most of the environment configured.

Download link: (extraction code: o6a4)
Image download address

It already has the following installed:
CUDA 10.2, cuDNN v8, TensorRT, OpenCV 4.1.1, Python 2, Python 3, TensorFlow 2.3, JetPack 4.4.1, yolov4-tiny and yolov4, the jetson-inference package (including the trained models from the accompanying materials), the jetson-gpio library, PyTorch 1.6 and torchvision 0.7, Node v15.0.1, npm 7.0.3, JupyterLab, jetcam, and the VNC service is already enabled.

2. How to flash the image

For the flashing procedure, refer to this article; it is quite simple.
How to flash the image

3. Initial system setup on the Jetson Nano

Insert the SD card and power on! It is best to connect a monitor; it is not worth skimping on one. Many of the later commands need root privileges, so enable the root account first:

sudo passwd root

then set a password when prompted.

The board must be online: plug in an Ethernet cable or a driver-free USB Wi-Fi adapter!

① Make a quick backup first, then open the sources file

sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak
sudo gedit /etc/apt/sources.list

② Delete everything in the file and replace it with the following

deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic main multiverse restricted universe
deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-security main multiverse restricted universe
deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-updates main multiverse restricted universe
deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-backports main multiverse restricted universe
deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic main multiverse restricted universe
deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-security main multiverse restricted universe
deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-updates main multiverse restricted universe
deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-backports main multiverse restricted universe

A side note: why and how to change the sources? The image flashed onto the Jetson Nano uses sources hosted abroad, so installing and upgrading packages is very slow and network errors are common. The procedure is exactly steps ① and ② above: back up the original sources.list, edit /etc/apt/sources.list, delete everything, and paste in one mirror list (either the Tsinghua list above or the USTC list below), then save.

# USTC mirror
deb http://mirrors.ustc.edu.cn/ubuntu-ports/ bionic-updates main restricted
deb http://mirrors.ustc.edu.cn/ubuntu-ports/ bionic universe
deb http://mirrors.ustc.edu.cn/ubuntu-ports/ bionic-updates universe
deb http://mirrors.ustc.edu.cn/ubuntu-ports/ bionic multiverse
deb http://mirrors.ustc.edu.cn/ubuntu-ports/ bionic-updates multiverse
deb http://mirrors.ustc.edu.cn/ubuntu-ports/ bionic-backports main restricted universe multiverse
deb http://mirrors.ustc.edu.cn/ubuntu-ports/ bionic-security main restricted
deb http://mirrors.ustc.edu.cn/ubuntu-ports/ bionic-security universe
deb http://mirrors.ustc.edu.cn/ubuntu-ports/ bionic-security multiverse

④ Update the packages

# update the package lists and upgrade
sudo apt-get update
sudo apt-get upgrade

II. Setting up the environment and installing supporting packages

1. Configure CUDA

The Jetson Nano ships with CUDA preinstalled, but you have to set the environment variables before you can use it; just open a terminal and add them. Mine is CUDA 10.2; if you are not using this image, fill in the paths for your own CUDA version.

# open a terminal and run
vi ~/.bashrc

Scroll to the bottom and append the following lines:

export PATH=/usr/local/cuda-10.2/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-10.2/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
export CUDA_ROOT=/usr/local/cuda

Apply the configuration (reload .bashrc):

source ~/.bashrc

Check whether it worked:

nvcc -V

2. Install pip3

sudo apt-get update
sudo apt-get install python3-pip python3-dev -y

3. Install jtop

Install the jtop library (jetson-stats), which lets you monitor the board's CPU and GPU status.

sudo -H pip3 install jetson-stats
sudo jtop		# run jtop (it may fail the first time; just run it again). Press q to quit.
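
jetson-stats also exposes the same telemetry through a small Python API, which is handy if you want to log the numbers from a script. A minimal sketch (the exact fields depend on your jetson-stats version):

from jtop import jtop        # Python API installed together with jetson-stats

with jtop() as jetson:       # connects to the jtop service
    if jetson.ok():
        print(jetson.stats)  # dict with CPU/GPU load, temperatures, power, etc.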

4. Install libraries that may be needed later

sudo apt-get install build-essential make cmake cmake-curses-gui -y
sudo apt-get install git g++ pkg-config curl -y
sudo apt-get install libatlas-base-dev gfortran libcanberra-gtk-module libcanberra-gtk3-module -y
sudo apt-get install libhdf5-serial-dev hdf5-tools -y
sudo apt-get install nano locate screen -y

5. Install the required dependencies

sudo apt-get install libfreetype6-dev -y
sudo apt-get install protobuf-compiler libprotobuf-dev openssl -y
sudo apt-get install libssl-dev libcurl4-openssl-dev -y
sudo apt-get install cython3 -y

6. Install OpenCV's system-level dependencies and codec libraries

sudo apt-get install build-essential -y
sudo apt-get install cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev -y
sudo apt-get install python-dev python-numpy libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff5-dev libdc1394-22-dev -y
sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev liblapacke-dev -y
sudo apt-get install libxvidcore-dev libx264-dev -y
sudo apt-get install libatlas-base-dev gfortran -y
sudo apt-get install ffmpeg -y

7. Update CMake

This step is necessary because on the ARM platform a lot of software has to be built from source.

wget http://www.cmake.org/files/v3.13/cmake-3.13.0.tar.gz
tar xpvf cmake-3.13.0.tar.gz  # extract
cd cmake-3.13.0/
./bootstrap --system-curl	# a long wait; go stretch your legs...
make -j4  # build; another long wait...
echo 'export PATH=~/cmake-3.13.0/bin/:$PATH' >> ~/.bashrc
source ~/.bashrc  # reload .bashrc

8. USB drive compatibility

Later steps may require a USB drive to copy large files onto the board, but large-capacity drives can fail to mount; one install command fixes that.

sudo apt-get install exfat-utils

III. Install PyTorch

The Linux on the Jetson Nano is not x86; like a phone, it is ARM, so many of its packages are not interchangeable with those for ordinary Linux. This was one of my pitfalls: a PyTorch wheel downloaded from the official PyTorch site could not use the board's GPU at all (a serious problem, since without the GPU the board's compute drops off a cliff). PyTorch, and the torchvision and related packages that follow, must be the builds NVIDIA provides for Jetson.

1. Download PyTorch 1.8
I have already downloaded it; here is a ready-made download link for the package: (extraction code: yvex)
Package download

2. Install PyTorch 1.8
Copy the downloaded files to the Jetson Nano with a USB drive; I suggest putting them on the desktop so they are easy to find.
sudo pip3 install …	# drag the .whl file into the terminal window and it will fill in the file path automatically
The installation takes a fairly long time.

IV. Install torchvision 0.9.0

PyTorch and torchvision versions must match; the file downloaded in the previous step is the matching one.

1. Install the dependencies we need first

sudo apt-get install libopenmpi2
sudo apt-get install libopenblas-dev
sudo apt-get install libjpeg-dev zlib1g-dev

2. Install torchvision 0.9.0
Again this must be the special build that matches the Jetson Nano; the download link in section III includes this torchvision. Copy the package onto the board, again preferably onto the desktop.

cd torchvision	# enter the package directory
export BUILD_VERSION=0.9.0
sudo python3 setup.py install		# install (expect well over 20-30 minutes)

3. Verify the installation

python3
import torch
import torchvision
print(torch.cuda.is_available())	# if this prints True, the installation succeeded!
quit()	# exit the Python interpreter
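
If you want a slightly more detailed check than the lines above, the following sketch also prints the versions and the GPU that PyTorch sees (assuming the NVIDIA-provided wheels installed cleanly):

import torch
import torchvision

print("torch:", torch.__version__, "torchvision:", torchvision.__version__)  # expect 1.8.x and 0.9.0
print("CUDA available:", torch.cuda.is_available())                          # should print True
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))                             # the Nano's integrated GPU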

V. Get the YOLOv5 v5.0 source code

Train the model on your own PC or a server. I will not explain training here; there are plenty of videos on Bilibili to learn from. My project detects elevator buttons. If you need the dataset, trained weights, or various YOLOv5 modification code, feel free to message me.

VI. Install the packages YOLOv5 needs to run

Note: if a download fails for network reasons, append -i https://pypi.tuna.tsinghua.edu.cn/simple to the command to use the Tsinghua PyPI mirror.

1.

sudo pip3 install matplotlib==3.2.2
sudo pip3 install --upgrade Cython	# upgrade this package

2. numpy is a special case: it comes preinstalled, but via apt-get, so remove the old one first; that also makes later package management easier.

sudo apt-get remove python-numpy
sudo pip3 install numpy==1.19.4
sudo pip3 install scipy==1.4.1	# this one installs very slowly; be patient

3. I did not pin versions for the packages below when I installed them; the versions in these commands were filled in afterwards from pip3 list.

sudo pip3 install tqdm==4.61.2
sudo pip3 install seaborn==0.11.1
sudo pip3 install scikit-build==0.11.1	# needed before installing opencv
sudo pip3 install opencv-python==4.5.3.56	# unsurprisingly, another rather long wait
sudo pip3 install tensorboard==2.5.0 -i https://pypi.tuna.tsinghua.edu.cn/simple
sudo pip3 install --upgrade PyYAML	# I ended up on 5.4.1; sudo pip3 install PyYAML==5.4.1 also works
sudo pip3 install thop
sudo pip3 install pycocotools

4. Go through the list of packages required by the official YOLOv5 release below, compare it carefully against your system, and install anything still missing (a quick version-check script follows the list).

Install command format: sudo pip3 install <package-name>
# base ----------------------------------------
matplotlib>=3.2.2
numpy>=1.18.5
opencv-python>=4.1.2
Pillow
PyYAML>=5.3.1
scipy>=1.4.1
torch>=1.7.0
torchvision>=0.8.1
tqdm>=4.41.0

# logging -------------------------------------
tensorboard>=2.4.1
wandb

# plotting ------------------------------------
seaborn>=0.11.0
pandas

# export --------------------------------------
coremltools>=4.1
onnx>=1.8.1
scikit-learn==0.19.2  # for coreml quantization

# extras --------------------------------------
thop  # FLOPS computation
pycocotools>=2.0  # COCO mAP
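
A quick way to cross-check that list against what is actually installed is to query pip's package metadata. A small helper sketch (run it with python3; the names are pip distribution names):

# list installed versions of the packages from the YOLOv5 requirements above
import pkg_resources

wanted = ["matplotlib", "numpy", "opencv-python", "Pillow", "PyYAML", "scipy",
          "torch", "torchvision", "tqdm", "tensorboard", "seaborn", "pandas",
          "thop", "pycocotools"]

for name in wanted:
    try:
        print(name, pkg_resources.get_distribution(name).version)
    except pkg_resources.DistributionNotFound:
        print(name, "NOT INSTALLED")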

5. Run the detection script
Open a terminal in the directory containing detect.py in the source tree and run the command below.
It works reasonably well: the model takes a long time to start up, but the predictions look fine, and the predicted images end up in inference/output.
Next, open detect.py and change the source argument (for example --source 0) to run real-time detection from a camera. I get roughly 10 fps, which is not bad, and there are ways to improve it.

python3 detect.py --source /path/to/xxx.jpg --weights /path/to/best.pt --conf-thres 0.7
or simply:
python3 detect.py

VII. Time for some TensorRT acceleration?

1. Install pycuda-2019

① (use this method when your network connection is good)

Install pycuda online:

pip3 install pycuda

② (use the method below when your network connection is poor)

Download link (extraction code: t94b)
After downloading, extract the archive.
Enter the extracted directory:

tar zxvf pycuda-2019.1.2.tar.gz    
cd pycuda-2019.1.2/  
python3 configure.py --cuda-root=/usr/local/cuda-10.2
sudo python3 setup.py install

If you see compiler output scrolling by, the build is in progress; wait a while and the installation will finish. A success message at the end means pycuda is installed.

However, before using it you still need one extra tweak, otherwise you will hit this error:

FileNotFoundError: [Errno 2] No such file or directory: ‘nvcc’

Hard-code the full path to nvcc into compile_plain() in pycuda's compiler.py; around line 73 of that file, add the following line:

nvcc = '/usr/local/cuda/bin/'+nvcc
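
To confirm that pycuda works and that the nvcc fix above took effect, a minimal check looks like this (a sketch; SourceModule forces an actual nvcc compile):

import pycuda.driver as cuda
import pycuda.autoinit                             # creates a CUDA context on the default device
from pycuda.compiler import SourceModule

print(cuda.Device(0).name())                       # should print the Jetson's GPU name
mod = SourceModule("__global__ void noop() {}")    # compiles with nvcc; fails if the path fix is missing
print("pycuda + nvcc OK")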

2. TensorRT acceleration

Here we rely on an excellent open-source project; the GitHub address is:

https://github.com/wang-xinyu/tensorrtx/tree/master/yolov5

The author has done excellent work and the repo is well worth reading through. Besides YOLOv5 it provides TensorRT implementations of many other models. The project is called tensorrtx, and it is much more convenient than working with raw TensorRT.
You need to download two things:
First: the original YOLOv5 source code (use the v5.0 release).
Second: the tensorrtx project, downloaded onto your Windows PC.
Then copy the whole tensorrtx folder into the yolov5-5.0 source folder.
To keep things clear for myself and for the later steps, I renamed the folders slightly:
(I have also packaged everything up; just download it: extraction code available by private message)
Download
I renamed the original YOLOv5 folder to yolov5(Tensorrtx), and renamed the tensorrtx folder to tensorrtx-yolov5-v5.0, then copied the latter into yolov5(Tensorrtx):

Now for the actual steps:
① Generate the .wts file (this can be done on your Windows PC)
1. Rename your trained .pt weight file to yolov5s.pt (the later commands assume this name) and put it into the yolov5(Tensorrtx) folder.
2. Copy the file yolov5-5.0(Tensorrtx)\tensorrtx-yolov5-v5.0\yolov5\gen_wts.py
into the yolov5(Tensorrtx) folder.

Note: at this point the yolov5(Tensorrtx) folder should contain both yolov5s.pt and gen_wts.py.

Then right-click inside the yolov5(Tensorrtx) folder, open a terminal, and activate the conda environment you created in Anaconda,
for example: conda activate torch1.10.
Then run:

python gen_wts.py -w yolov5s.pt -o yolov5s.wts

(If you do not know how to create a conda environment, look for a tutorial on Bilibili; if you have already trained YOLOv5 weights, you can certainly manage it.)
A file named yolov5s.wts will be generated in the folder.
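
For reference, gen_wts.py essentially dumps every tensor of the checkpoint's state_dict as hex-encoded float32 values into a plain-text file that the C++ side later parses. A simplified sketch of the idea (not the exact upstream script):

import struct
import torch

model = torch.load("yolov5s.pt", map_location="cpu")["model"].float()   # trained checkpoint
with open("yolov5s.wts", "w") as f:
    f.write("{}\n".format(len(model.state_dict())))           # first line: number of tensors
    for name, tensor in model.state_dict().items():
        values = tensor.reshape(-1).cpu().numpy()
        f.write("{} {}".format(name, len(values)))            # tensor name and element count
        for v in values:
            f.write(" " + struct.pack(">f", float(v)).hex())  # each value as big-endian float32 hex
        f.write("\n")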

② Build (done on the Jetson Nano); this step generates the engine file
1. Copy the .wts file generated above onto the Jetson Nano with a USB drive, into the yolov5-5.0(Tensorrtx)\tensorrtx-yolov5-v5.0\yolov5 folder.
2. Open the yololayer.h file in that folder and change CLASS_NUM to the number of classes in your trained model (mine is 55).
3. That folder should now contain these three things: the .wts file (generated on the Windows PC), yolov5.cpp (unmodified), and yololayer.h (CLASS_NUM changed to your own class count).
4. Open a terminal in that folder and run the following commands in order:

mkdir build
cd build
cmake ..
make
sudo ./yolov5 -s ../yolov5s.wts yolov5s.engine s

After a short wait, tensorrtx will have produced the C++-based .engine deployment file in the build folder. My C++ is mediocre, though, and I am more comfortable in Python, so let's drive it from Python instead.
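
Before moving on to Python, you can sanity-check the engine you just built with TensorRT's Python API. A quick sketch (for this tensorrtx build the bindings should be an input named "data" and an output named "prob"):

import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)
with open("yolov5s.engine", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

for i in range(engine.num_bindings):
    print(engine.get_binding_name(i),
          engine.get_binding_shape(i),
          "input" if engine.binding_is_input(i) else "output")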

③ Accelerated real-time detection with a USB camera
Since my C++ is limited, I instead adapted the yolov5_trt.py script in the yolov5-5.0(Tensorrtx)\tensorrtx-yolov5-v5.0\yolov5 folder. The code style is rough, but it does deliver the speed-up and may serve as a reference. Create a new file named yolo_trt_test.py in that folder and copy in the v4.0 or v5.0 code below.
Things you must change for your own setup: the path to yolov5s.engine, and the class names of the objects being detected.

① v5.0 code

"""
An example that uses TensorRT's Python api to make inferences.
"""
import ctypes
import os
import shutil
import random
import sys
import threading
import time
import cv2
import numpy as np
import pycuda.autoinit
import pycuda.driver as cuda
import tensorrt as trt
import torch
import torchvision
import argparse
 
CONF_THRESH = 0.5
IOU_THRESHOLD = 0.4
 
 
def get_img_path_batches(batch_size, img_dir):
    ret = []
    batch = []
    for root, dirs, files in os.walk(img_dir):
        for name in files:
            if len(batch) == batch_size:
                ret.append(batch)
                batch = []
            batch.append(os.path.join(root, name))
    if len(batch) > 0:
        ret.append(batch)
    return ret
 
def plot_one_box(x, img, color=None, label=None, line_thickness=None):
    """
    description: Plots one bounding box on image img,
                 this function comes from YoLov5 project.
    param: 
        x:      a box like [x1, y1, x2, y2]
        img:    an OpenCV image object
        color:  color to draw rectangle, such as (0,255,0)
        label:  str
        line_thickness: int
    return:
        no return
    """
    tl = (
        line_thickness or round(0.002 * (img.shape[0] + img.shape[1]) / 2) + 1
    )  # line/font thickness
    color = color or [random.randint(0, 255) for _ in range(3)]
    c1, c2 = (int(x[0]), int(x[1])), (int(x[2]), int(x[3]))
    cv2.rectangle(img, c1, c2, color, thickness=tl, lineType=cv2.LINE_AA)
    if label:
        tf = max(tl - 1, 1)  # font thickness
        t_size = cv2.getTextSize(label, 0, fontScale=tl / 3, thickness=tf)[0]
        c2 = c1[0] + t_size[0], c1[1] - t_size[1] - 3
        cv2.rectangle(img, c1, c2, color, -1, cv2.LINE_AA)  # filled
        cv2.putText(
            img,
            label,
            (c1[0], c1[1] - 2),
            0,
            tl / 3,
            [225, 255, 255],
            thickness=tf,
            lineType=cv2.LINE_AA,
        )
 
 
class YoLov5TRT(object):
    """
    description: A YOLOv5 class that wraps TensorRT ops, preprocess and postprocess ops.
    """
 
    def __init__(self, engine_file_path):
        # Create a Context on this device,
        self.ctx = cuda.Device(0).make_context()
        stream = cuda.Stream()
        TRT_LOGGER = trt.Logger(trt.Logger.INFO)
        runtime = trt.Runtime(TRT_LOGGER)
 
        # Deserialize the engine from file
        with open(engine_file_path, "rb") as f:
            engine = runtime.deserialize_cuda_engine(f.read())
        context = engine.create_execution_context()
 
        host_inputs = []
        cuda_inputs = []
        host_outputs = []
        cuda_outputs = []
        bindings = []
 
        for binding in engine:
            print('binding:', binding, engine.get_binding_shape(binding))
            size = trt.volume(engine.get_binding_shape(binding)) * engine.max_batch_size
            dtype = trt.nptype(engine.get_binding_dtype(binding))
            # Allocate host and device buffers
            host_mem = cuda.pagelocked_empty(size, dtype)
            cuda_mem = cuda.mem_alloc(host_mem.nbytes)
            # Append the device buffer to device bindings.
            bindings.append(int(cuda_mem))
            # Append to the appropriate list.
            if engine.binding_is_input(binding):
                self.input_w = engine.get_binding_shape(binding)[-1]
                self.input_h = engine.get_binding_shape(binding)[-2]
                host_inputs.append(host_mem)
                cuda_inputs.append(cuda_mem)
            else:
                host_outputs.append(host_mem)
                cuda_outputs.append(cuda_mem)
        # ... (the listing is truncated here; the remainder follows wang-xinyu's yolov5_trt.py,
        # which stores these buffers on self and adds the infer/preprocess/postprocess methods)

NVIDIA Jetson: YOLOv5 TensorRT deployment and acceleration, C++ version

Preface

After setting up the deep-learning environment for YOLOv5 on an NVIDIA Jetson AGX Xavier and getting inference to run normally, I found the model was not fast enough, so I deployed it with TensorRT to speed it up. This post covers the C++ version.

NVIDIA Jetson YOLOv5应用与部署_一颗小树x的博客-CSDN博客

Versions: yolov5 v6.0, tensorrtx; JetPack 4.5 [L4T 32.5.0], CUDA 10.2.89.

I tested 100 images from the KITTI dataset: after acceleration, the average inference time is 22 ms per image, which feels decent.

Contents

I. Download yolov5 v6.0 and tensorrtx

II. Generate the xxx.wts file

III. Modify the configuration

IV. Build tensorrtx

V. Run

VI. Key code walkthrough

VII. Batch-size speed-up experiment


I. Download yolov5 v6.0 and tensorrtx

The yolov5 v6.0 code comes from the yolov5 release v6.0 page:

git clone -b v6.0 https://github.com/ultralytics/yolov5.git

And the matching version of tensorrtx:

git clone https://github.com/wang-xinyu/tensorrtx.git

II. Generate the xxx.wts file

First copy tensorrtx/yolov5/gen_wts.py into ultralytics/yolov5. Note that the tensorrtx folder name differs between downloads; here it is tensorrtx-master. Assuming tensorrtx-master and yolov5 sit in the same parent directory:

cp tensorrtx-master/yolov5/gen_wts.py ./yolov5

Enter the yolov5 project directory:

cd yolov5

Put yolov5s.pt inside yolov5, then generate yolov5s.wts:

python gen_wts.py -w yolov5s.pt -o yolov5s.wts
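
If you want to make sure the conversion produced something sensible before copying the file to the Jetson, the header is easy to check: the first line of the .wts file is the number of tensors, and every following line starts with a tensor name and its element count. A tiny sketch:

with open("yolov5s.wts") as f:
    num_tensors = int(f.readline())          # first line: how many tensors were dumped
    name, count = f.readline().split()[:2]   # each later line: name, element count, hex values
print(num_tensors, "tensors; first is", name, "with", count, "values")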

III. Modify the configuration

Enter the yolov5 directory inside tensorrtx:

cd tensorrtx-master/yolov5

3.1 For the C++ version, the files to look at are yolov5.cpp and yololayer.h. Start with yolov5.cpp: it lets you set the GPU id, NMS threshold, BBox confidence threshold, batch size, inference precision (INT8/FP16/FP32), and other parameters.

3.2 Then look at yololayer.h, where the model's number of classes, input size, and so on are set.

To run inference from a camera (default camera 0), only yolov5.cpp needs to be modified; the full modified file is shown in section VI.

IV. Build tensorrtx

First enter the yolov5 directory inside tensorrtx:

cd tensorrtx-master/yolov5

Create a build directory in preparation for the build:

mkdir build
cd build

Copy the yolov5s.wts generated earlier into the build directory:

cp ultralytics/yolov5/yolov5s.wts tensorrtx/yolov5/build

Then build:

cmake ..
make

V. Run

The YOLOv5s model

First generate yolov5s.engine from yolov5s.wts, then run with yolov5s.engine:

sudo ./yolov5 -s yolov5s.wts yolov5s.engine s
sudo ./yolov5 -d yolov5s.engine ../samples

The trailing s in sudo ./yolov5 -s yolov5s.wts yolov5s.engine s selects the model size (options: n/s/m/l/x/n6/s6/m6/l6/x6).

../samples links to two sample images; you can also create your own folder and put some test images in it.

For the YOLOv5m model:

sudo ./yolov5 -s yolov5m.wts yolov5m.engine m
sudo ./yolov5 -d yolov5m.engine ../samples

I tested 100 images from the KITTI dataset: the average inference time is 22 ms per image, which feels decent; later I will also test throughput on live video streams.


VI. Key code walkthrough

As noted in section III, yolov5.cpp is where the GPU id, NMS threshold, BBox confidence threshold, batch size, and inference precision (INT8/FP16/FP32) are set, and yololayer.h holds the model's class count, input size, and so on.

To run inference from a camera (default camera 0), modify yolov5.cpp as follows:

#include <iostream>
#include <chrono>
#include <cmath>
#include "cuda_utils.h"
#include "logging.h"
#include "common.hpp"
#include "utils.h"
#include "calibrator.h"
#include "preprocess.h"

// OpenCV includes
#include <opencv2/opencv.hpp>
#include <opencv2/highgui.hpp>
#include <string>


#define USE_FP16  // set USE_INT8 or USE_FP16 or USE_FP32
#define DEVICE 0  // GPU id
#define NMS_THRESH 0.4
#define CONF_THRESH 0.5
#define BATCH_SIZE 1
#define MAX_IMAGE_INPUT_SIZE_THRESH 3000 * 3000 // ensure it exceed the maximum size in the input images !

// stuff we know about the network and the input/output blobs
static const int INPUT_H = Yolo::INPUT_H;
static const int INPUT_W = Yolo::INPUT_W;
static const int CLASS_NUM = Yolo::CLASS_NUM;
static const int OUTPUT_SIZE = Yolo::MAX_OUTPUT_BBOX_COUNT * sizeof(Yolo::Detection) / sizeof(float) + 1;  // we assume the yololayer outputs no more than MAX_OUTPUT_BBOX_COUNT boxes that conf >= 0.1
const char* INPUT_BLOB_NAME = "data";
const char* OUTPUT_BLOB_NAME = "prob";
static Logger gLogger;

static int get_width(int x, float gw, int divisor = 8) {
    return int(ceil((x * gw) / divisor)) * divisor;
}

static int get_depth(int x, float gd) {
    if (x == 1) return 1;
    int r = round(x * gd);
    if (x * gd - int(x * gd) == 0.5 && (int(x * gd) % 2) == 0) {
        --r;
    }
    return std::max<int>(r, 1);
}

ICudaEngine* build_engine(unsigned int maxBatchSize, IBuilder* builder, IBuilderConfig* config, DataType dt, float& gd, float& gw, std::string& wts_name) {
    INetworkDefinition* network = builder->createNetworkV2(0U);

    // Create input tensor of shape {3, INPUT_H, INPUT_W} with name INPUT_BLOB_NAME
    ITensor* data = network->addInput(INPUT_BLOB_NAME, dt, Dims3{ 3, INPUT_H, INPUT_W });
    assert(data);
    std::map<std::string, Weights> weightMap = loadWeights(wts_name);
    /* ------ yolov5 backbone------ */
    auto conv0 = convBlock(network, weightMap, *data,  get_width(64, gw), 6, 2, 1,  "model.0");
    assert(conv0);
    auto conv1 = convBlock(network, weightMap, *conv0->getOutput(0), get_width(128, gw), 3, 2, 1, "model.1");
    auto bottleneck_CSP2 = C3(network, weightMap, *conv1->getOutput(0), get_width(128, gw), get_width(128, gw), get_depth(3, gd), true, 1, 0.5, "model.2");
    auto conv3 = convBlock(network, weightMap, *bottleneck_CSP2->getOutput(0), get_width(256, gw), 3, 2, 1, "model.3");
    auto bottleneck_csp4 = C3(network, weightMap, *conv3->getOutput(0), get_width(256, gw), get_width(256, gw), get_depth(6, gd), true, 1, 0.5, "model.4");
    auto conv5 = convBlock(network, weightMap, *bottleneck_csp4->getOutput(0), get_width(512, gw), 3, 2, 1, "model.5");
    auto bottleneck_csp6 = C3(network, weightMap, *conv5->getOutput(0), get_width(512, gw), get_width(512, gw), get_depth(9, gd), true, 1, 0.5, "model.6");
    auto conv7 = convBlock(network, weightMap, *bottleneck_csp6->getOutput(0), get_width(1024, gw), 3, 2, 1, "model.7");
    auto bottleneck_csp8 = C3(network, weightMap, *conv7->getOutput(0), get_width(1024, gw), get_width(1024, gw), get_depth(3, gd), false, 1, 0.5, "model.8");
    auto spp9 = SPPF(network, weightMap, *bottleneck_csp8->getOutput(0), get_width(1024, gw), get_width(1024, gw), 5, "model.9");
    /* ------ yolov5 head ------ */
    auto conv10 = convBlock(network, weightMap, *spp9->getOutput(0), get_width(512, gw), 1, 1, 1, "model.10");

    auto upsample11 = network->addResize(*conv10->getOutput(0));
    assert(upsample11);
    upsample11->setResizeMode(ResizeMode::kNEAREST);
    upsample11->setOutputDimensions(bottleneck_csp6->getOutput(0)->getDimensions());

    ITensor* inputTensors12[] = { upsample11->getOutput(0), bottleneck_csp6->getOutput(0) };
    auto cat12 = network->addConcatenation(inputTensors12, 2);
    auto bottleneck_csp13 = C3(network, weightMap, *cat12->getOutput(0), get_width(1024, gw), get_width(512, gw), get_depth(3, gd), false, 1, 0.5, "model.13");
    auto conv14 = convBlock(network, weightMap, *bottleneck_csp13->getOutput(0), get_width(256, gw), 1, 1, 1, "model.14");

    auto upsample15 = network->addResize(*conv14->getOutput(0));
    assert(upsample15);
    upsample15->setResizeMode(ResizeMode::kNEAREST);
    upsample15->setOutputDimensions(bottleneck_csp4->getOutput(0)->getDimensions());

    ITensor* inputTensors16[] = { upsample15->getOutput(0), bottleneck_csp4->getOutput(0) };
    auto cat16 = network->addConcatenation(inputTensors16, 2);

    auto bottleneck_csp17 = C3(network, weightMap, *cat16->getOutput(0), get_width(512, gw), get_width(256, gw), get_depth(3, gd), false, 1, 0.5, "model.17");

    /* ------ detect ------ */
    IConvolutionLayer* det0 = network->addConvolutionNd(*bottleneck_csp17->getOutput(0), 3 * (Yolo::CLASS_NUM + 5), DimsHW{ 1, 1 }, weightMap["model.24.m.0.weight"], weightMap["model.24.m.0.bias"]);
    auto conv18 = convBlock(network, weightMap, *bottleneck_csp17->getOutput(0), get_width(256, gw), 3, 2, 1, "model.18");
    ITensor* inputTensors19[] = { conv18->getOutput(0), conv14->getOutput(0) };
    auto cat19 = network->addConcatenation(inputTensors19, 2);
    auto bottleneck_csp20 = C3(network, weightMap, *cat19->getOutput(0), get_width(512, gw), get_width(512, gw), get_depth(3, gd), false, 1, 0.5, "model.20");
    IConvolutionLayer* det1 = network->addConvolutionNd(*bottleneck_csp20->getOutput(0), 3 * (Yolo::CLASS_NUM + 5), DimsHW{ 1, 1 }, weightMap["model.24.m.1.weight"], weightMap["model.24.m.1.bias"]);
    auto conv21 = convBlock(network, weightMap, *bottleneck_csp20->getOutput(0), get_width(512, gw), 3, 2, 1, "model.21");
    ITensor* inputTensors22[] = { conv21->getOutput(0), conv10->getOutput(0) };
    auto cat22 = network->addConcatenation(inputTensors22, 2);
    auto bottleneck_csp23 = C3(network, weightMap, *cat22->getOutput(0), get_width(1024, gw), get_width(1024, gw), get_depth(3, gd), false, 1, 0.5, "model.23");
    IConvolutionLayer* det2 = network->addConvolutionNd(*bottleneck_csp23->getOutput(0), 3 * (Yolo::CLASS_NUM + 5), DimsHW{ 1, 1 }, weightMap["model.24.m.2.weight"], weightMap["model.24.m.2.bias"]);

    auto yolo = addYoLoLayer(network, weightMap, "model.24", std::vector<IConvolutionLayer*>{ det0, det1, det2 });
    yolo->getOutput(0)->setName(OUTPUT_BLOB_NAME);
    network->markOutput(*yolo->getOutput(0));
    // Build engine
    builder->setMaxBatchSize(maxBatchSize);
    config->setMaxWorkspaceSize(16 * (1 << 20));  // 16MB
#if defined(USE_FP16)
    config->setFlag(BuilderFlag::kFP16);
#elif defined(USE_INT8)
    std::cout << "Your platform support int8: " << (builder->platformHasFastInt8() ? "true" : "false") << std::endl;
    assert(builder->platformHasFastInt8());
    config->setFlag(BuilderFlag::kINT8);
    Int8EntropyCalibrator2* calibrator = new Int8EntropyCalibrator2(1, INPUT_W, INPUT_H, "./coco_calib/", "int8calib.table", INPUT_BLOB_NAME);
    config->setInt8Calibrator(calibrator);
#endif

    std::cout << "Building engine, please wait for a while..." << std::endl;
    ICudaEngine* engine = builder->buildEngineWithConfig(*network, *config);
    std::cout << "Build engine successfully!" << std::endl;

    // Don't need the network any more
    network->destroy();

    // Release host memory
    for (auto& mem : weightMap)
    {
        free((void*)(mem.second.values));
    }

    return engine;
}


ICudaEngine* build_engine_p6(unsigned int maxBatchSize, IBuilder* builder, IBuilderConfig* config, DataType dt, float& gd, float& gw, std::string& wts_name) {
    INetworkDefinition* network = builder->createNetworkV2(0U);
    // Create input tensor of shape {3, INPUT_H, INPUT_W} with name INPUT_BLOB_NAME
    ITensor* data = network->addInput(INPUT_BLOB_NAME, dt, Dims3{ 3, INPUT_H, INPUT_W });
    assert(data);
    
    std::map<std::string, Weights> weightMap = loadWeights(wts_name);

    /* ------ yolov5 backbone------ */
    auto conv0 = convBlock(network, weightMap, *data,  get_width(64, gw), 6, 2, 1,  "model.0");
    auto conv1 = convBlock(network, weightMap, *conv0->getOutput(0), get_width(128, gw), 3, 2, 1, "model.1");
    auto c3_2 = C3(network, weightMap, *conv1->getOutput(0), get_width(128, gw), get_width(128, gw), get_depth(3, gd), true, 1, 0.5, "model.2");
    auto conv3 = convBlock(network, weightMap, *c3_2->getOutput(0), get_width(256, gw), 3, 2, 1, "model.3");
    auto c3_4 = C3(network, weightMap, *conv3->getOutput(0), get_width(256, gw), get_width(256, gw), get_depth(6, gd), true, 1, 0.5, "model.4");
    auto conv5 = convBlock(network, weightMap, *c3_4->getOutput(0), get_width(512, gw), 3, 2, 1, "model.5");
    auto c3_6 = C3(network, weightMap, *conv5->getOutput(0), get_width(512, gw), get_width(512, gw), get_depth(9, gd), true, 1, 0.5, "model.6");
    auto conv7 = convBlock(network, weightMap, *c3_6->getOutput(0), get_width(768, gw), 3, 2, 1, "model.7");
    auto c3_8 = C3(network, weightMap, *conv7->getOutput(0), get_width(768, gw), get_width(768, gw), get_depth(3, gd), true, 1, 0.5, "model.8");
    auto conv9 = convBlock(network, weightMap, *c3_8->getOutput(0), get_width(1024, gw), 3, 2, 1, "model.9");
    auto c3_10 = C3(network, weightMap, *conv9->getOutput(0), get_width(1024, gw), get_width(1024, gw), get_depth(3, gd), false, 1, 0.5, "model.10");
    auto sppf11 = SPPF(network, weightMap, *c3_10->getOutput(0), get_width(1024, gw), get_width(1024, gw), 5, "model.11");

    /* ------ yolov5 head ------ */
    auto conv12 = convBlock(network, weightMap, *sppf11->getOutput(0), get_width(768, gw), 1, 1, 1, "model.12");
    auto upsample13 = network->addResize(*conv12->getOutput(0));
    assert(upsample13);
    upsample13->setResizeMode(ResizeMode::kNEAREST);
    upsample13->setOutputDimensions(c3_8->getOutput(0)->getDimensions());
    ITensor* inputTensors14[] = { upsample13->getOutput(0), c3_8->getOutput(0) };
    auto cat14 = network->addConcatenation(inputTensors14, 2);
    auto c3_15 = C3(network, weightMap, *cat14->getOutput(0), get_width(1536, gw), get_width(768, gw), get_depth(3, gd), false, 1, 0.5, "model.15");

    auto conv16 = convBlock(network, weightMap, *c3_15->getOutput(0), get_width(512, gw), 1, 1, 1, "model.16");
    auto upsample17 = network->addResize(*conv16->getOutput(0));
    assert(upsample17);
    upsample17->setResizeMode(ResizeMode::kNEAREST);
    upsample17->setOutputDimensions(c3_6->getOutput(0)->getDimensions());
    ITensor* inputTensors18[] = { upsample17->getOutput(0), c3_6->getOutput(0) };
    auto cat18 = network->addConcatenation(inputTensors18, 2);
    auto c3_19 = C3(network, weightMap, *cat18->getOutput(0), get_width(1024, gw), get_width(512, gw), get_depth(3, gd), false, 1, 0.5, "model.19");

    auto conv20 = convBlock(network, weightMap, *c3_19->getOutput(0), get_width(256, gw), 1, 1, 1, "model.20");
    auto upsample21 = network->addResize(*conv20->getOutput(0));
    assert(upsample21);
    upsample21->setResizeMode(ResizeMode::kNEAREST);
    upsample21->setOutputDimensions(c3_4->getOutput(0)->getDimensions());
    ITensor* inputTensors21[] = { upsample21->getOutput(0), c3_4->getOutput(0) };
    auto cat22 = network->addConcatenation(inputTensors21, 2);
    auto c3_23 = C3(network, weightMap, *cat22->getOutput(0), get_width(512, gw), get_width(256, gw), get_depth(3, gd), false, 1, 0.5, "model.23");

    auto conv24 = convBlock(network, weightMap, *c3_23->getOutput(0), get_width(256, gw), 3, 2, 1, "model.24");
    ITensor* inputTensors25[] = { conv24->getOutput(0), conv20->getOutput(0) };
    auto cat25 = network->addConcatenation(inputTensors25, 2);
    auto c3_26 = C3(network, weightMap, *cat25->getOutput(0), get_width(1024, gw), get_width(512, gw), get_depth(3, gd), false, 1, 0.5, "model.26");

    auto conv27 = convBlock(network, weightMap, *c3_26->getOutput(0), get_width(512, gw), 3, 2, 1, "model.27");
    ITensor* inputTensors28[] = { conv27->getOutput(0), conv16->getOutput(0) };
    auto cat28 = network->addConcatenation(inputTensors28, 2);
    auto c3_29 = C3(network, weightMap, *cat28->getOutput(0), get_width(1536, gw), get_width(768, gw), get_depth(3, gd), false, 1, 0.5, "model.29");

    auto conv30 = convBlock(network, weightMap, *c3_29->getOutput(0), get_width(768, gw), 3, 2, 1, "model.30");
    ITensor* inputTensors31[] = { conv30->getOutput(0), conv12->getOutput(0) };
    auto cat31 = network->addConcatenation(inputTensors31, 2);
    auto c3_32 = C3(network, weightMap, *cat31->getOutput(0), get_width(2048, gw), get_width(1024, gw), get_depth(3, gd), false, 1, 0.5, "model.32");

    /* ------ detect ------ */
    IConvolutionLayer* det0 = network->addConvolutionNd(*c3_23->getOutput(0), 3 * (Yolo::CLASS_NUM + 5), DimsHW{ 1, 1 }, weightMap["model.33.m.0.weight"], weightMap["model.33.m.0.bias"]);
    IConvolutionLayer* det1 = network->addConvolutionNd(*c3_26->getOutput(0), 3 * (Yolo::CLASS_NUM + 5), DimsHW{ 1, 1 }, weightMap["model.33.m.1.weight"], weightMap["model.33.m.1.bias"]);
    IConvolutionLayer* det2 = network->addConvolutionNd(*c3_29->getOutput(0), 3 * (Yolo::CLASS_NUM + 5), DimsHW{ 1, 1 }, weightMap["model.33.m.2.weight"], weightMap["model.33.m.2.bias"]);
    IConvolutionLayer* det3 = network->addConvolutionNd(*c3_32->getOutput(0), 3 * (Yolo::CLASS_NUM + 5), DimsHW{ 1, 1 }, weightMap["model.33.m.3.weight"], weightMap["model.33.m.3.bias"]);

    auto yolo = addYoLoLayer(network, weightMap, "model.33", std::vector<IConvolutionLayer*>{ det0, det1, det2, det3 });
    yolo->getOutput(0)->setName(OUTPUT_BLOB_NAME);
    network->markOutput(*yolo->getOutput(0));

    // Build engine
    builder->setMaxBatchSize(maxBatchSize);
    config->setMaxWorkspaceSize(16 * (1 << 20));  // 16MB
#if defined(USE_FP16)
    config->setFlag(BuilderFlag::kFP16);
#elif defined(USE_INT8)
    std::cout << "Your platform support int8: " << (builder->platformHasFastInt8() ? "true" : "false") << std::endl;
    assert(builder->platformHasFastInt8());
    config->setFlag(BuilderFlag::kINT8);
    Int8EntropyCalibrator2* calibrator = new Int8EntropyCalibrator2(1, INPUT_W, INPUT_H, "./coco_calib/", "int8calib.table", INPUT_BLOB_NAME);
    config->setInt8Calibrator(calibrator);
#endif

    std::cout << "Building engine, please wait for a while..." << std::endl;
    ICudaEngine* engine = builder->buildEngineWithConfig(*network, *config);
    std::cout << "Build engine successfully!" << std::endl;

    // Don't need the network any more
    network->destroy();

    // Release host memory
    for (auto& mem : weightMap)
    {
        free((void*)(mem.second.values));
    }

    return engine;
}


void APIToModel(unsigned int maxBatchSize, IHostMemory** modelStream, bool& is_p6, float& gd, float& gw, std::string& wts_name) {
    // Create builder
    IBuilder* builder = createInferBuilder(gLogger);
    IBuilderConfig* config = builder->createBuilderConfig();

    // Create model to populate the network, then set the outputs and create an engine
    ICudaEngine *engine = nullptr;
    if (is_p6) {
        engine = build_engine_p6(maxBatchSize, builder, config, DataType::kFLOAT, gd, gw, wts_name);
    } else {
        engine = build_engine(maxBatchSize, builder, config, DataType::kFLOAT, gd, gw, wts_name);
    }
    assert(engine != nullptr);

    // Serialize the engine
    (*modelStream) = engine->serialize();

    // Close everything down
    engine->destroy();
    builder->destroy();
    config->destroy();
}


void doInference(IExecutionContext& context, cudaStream_t& stream, void **buffers, float* output, int batchSize) {
    // infer on the batch asynchronously, and DMA output back to host
    context.enqueue(batchSize, buffers, stream, nullptr);
    CUDA_CHECK(cudaMemcpyAsync(output, buffers[1], batchSize * OUTPUT_SIZE * sizeof(float), cudaMemcpyDeviceToHost, stream));
    cudaStreamSynchronize(stream);
}


bool parse_args(int argc, char** argv, std::string& wts, std::string& engine, bool& is_p6, float& gd, float& gw, std::string& img_dir) {
    if (argc < 4) return false;
    if (std::string(argv[1]) == "-s" && (argc == 5 || argc == 7)) {
        wts = std::string(argv[2]);
        engine = std::string(argv[3]);
        auto net = std::string(argv[4]);
        if (net[0] == 'n') {
            gd = 0.33;
            gw = 0.25;
        } else if (net[0] == 's') {
            gd = 0.33;
            gw = 0.50;
        } else if (net[0] == 'm') {
            gd = 0.67;
            gw = 0.75;
        } else if (net[0] == 'l') {
            gd = 1.0;
            gw = 1.0;
        } else if (net[0] == 'x') {
            gd = 1.33;
            gw = 1.25;
        } else if (net[0] == 'c' && argc == 7) {
            gd = atof(argv[5]);
            gw = atof(argv[6]);
        } else {
            return false;
        }
        if (net.size() == 2 && net[1] == '6') {
            is_p6 = true;
        }
    } else if (std::string(argv[1]) == "-d" && argc == 4) {
        engine = std::string(argv[2]);
        img_dir = std::string(argv[3]);
    } else {
        return false;
    }
    return true;
}


int main(int argc, char** argv) {
    // opencv
    cv::VideoCapture cap; // 1. create the video capture object
    cv::Mat readImage;    //    frame read from the camera
    cap.open(0);          // 2. open the default camera
    if (!cap.isOpened()) std::cout << "open Capture error !!!" << std::endl;
    else std::cout << "open Capture OK !!!" << std::endl;
    // cap.release();     // release the video capture object when done

    cudaSetDevice(DEVICE);

    std::string wts_name = "";
    std::string engine_name = "";
    bool is_p6 = false;
    float gd = 0.0f, gw = 0.0f;
    std::string img_dir;
    if (!parse_args(argc, argv, wts_name, engine_name, is_p6, gd, gw, img_dir)) {
        std::cerr << "arguments not right!" << std::endl;
        std::cerr << "./yolov5 -s [.wts] [.engine] [n/s/m/l/x/n6/s6/m6/l6/x6 or c/c6 gd gw]  // serialize model to plan file" << std::endl;
        std::cerr << "./yolov5 -d [.engine] ../samples  // deserialize plan file and run inference" << std::endl;
        return -1;
    }

    // create a model using the API directly and serialize it to a stream
    if (!wts_name.empty()) {
        IHostMemory* modelStream{ nullptr };
        APIToModel(BATCH_SIZE, &modelStream, is_p6, gd, gw, wts_name);
        assert(modelStream != nullptr);
        std::ofstream p(engine_name, std::ios::binary);
        if (!p) {
            std::cerr << "could not open plan output file" << std::endl;
            return -1;
        }
        p.write(reinterpret_cast<const char*>(modelStream->data()), modelStream->size());
        modelStream->destroy();
        return 0;
    }

    // deserialize the .engine and run inference
    std::ifstream file(engine_name, std::ios::binary);
    if (!file.good()) {
        std::cerr << "read " << engine_name << " error!" << std::endl;
        return -1;
    }
    char *trtModelStream = nullptr;
    size_t size = 0;
    file.seekg(0, file.end);
    size = file.tellg();
    file.seekg(0, file.beg);
    trtModelStream = new char[size];
    assert(trtModelStream);
    file.read(trtModelStream, size);
    file.close();

    std::vector<std::string> file_names;
    if (read_files_in_dir(img_dir.c_str(), file_names) < 0) {
        std::cerr << "read_files_in_dir failed." << std::endl;
        return -1;
    }

    static float prob[BATCH_SIZE * OUTPUT_SIZE];
    IRuntime* runtime = createInferRuntime(gLogger);
    assert(runtime != nullptr);
    ICudaEngine* engine = runtime->deserializeCudaEngine(trtModelStream, size);
    assert(engine != nullptr);
    IExecutionContext* context = engine->createExecutionContext();
    assert(context != nullptr);
    delete[] trtModelStream;
    assert(engine->getNbBindings() == 2);
    float* buffers[2];
    // In order to bind the buffers, we need to know the names of the input and output tensors.
    // Note that indices are guaranteed to be less than IEngine::getNbBindings()
    const int inputIndex = engine->getBindingIndex(INPUT_BLOB_NAME);
    const int outputIndex = engine->getBindingIndex(OUTPUT_BLOB_NAME);
    assert(inputIndex == 0);
    assert(outputIndex == 1);
    // Create GPU buffers on device
    CUDA_CHECK(cudaMalloc((void**)&buffers[inputIndex], BATCH_SIZE * 3 * INPUT_H * INPUT_W * sizeof(float)));
    CUDA_CHECK(cudaMalloc((void**)&buffers[outputIndex], BATCH_SIZE * OUTPUT_SIZE * sizeof(float)));

    // Create stream
    cudaStream_t stream;
    CUDA_CHECK(cudaStreamCreate(&stream));
    uint8_t* img_host = nullptr;
    uint8_t* img_device = nullptr;
    // prepare input data cache in pinned memory 
    CUDA_CHECK(cudaMallocHost((void**)&img_host, MAX_IMAGE_INPUT_SIZE_THRESH * 3));
    // prepare input data cache in device memory
    CUDA_CHECK(cudaMalloc((void**)&img_device, MAX_IMAGE_INPUT_SIZE_THRESH * 3));
    int fcount = 0;
    int save_int = 0;
    std::vector<cv::Mat> imgs_buffer(BATCH_SIZE);
    std::vector<AffineMatrix> matrix_buffer(BATCH_SIZE);
    while (true)
    // for (int f = 0; f < (int)file_names.size(); f++)
    {
        if (cv::waitKey(1) == 'q') break;  // press 'q' to quit the program
        fcount++;
        save_int++;
        if (fcount < BATCH_SIZE) continue;
        //auto start = std::chrono::system_clock::now();
        float* buffer_idx = (float*)buffers[inputIndex];
        for (int b = 0; b < fcount; b++) {
            cv::Mat img;
            cap >> img;
            // cv::Mat img = cv::imread(img_dir + "/" + file_names[f - fcount + 1 + b]); // ############

            if (img.empty()) continue;
            imgs_buffer[b] = img;
            size_t size_image = img.cols * img.rows * 3;
            size_t size_image_dst = INPUT_H * INPUT_W * 3;
            //copy data to pinned memory
            memcpy(img_host, img.data, size_image);
            //copy data to device memory
            CUDA_CHECK(cudaMemcpyAsync(img_device, img_host, size_image, cudaMemcpyHostToDevice, stream));
            preprocess_kernel_img(img_device, img.cols, img.rows, buffer_idx, matrix_buffer[b], INPUT_W, INPUT_H, stream);
            buffer_idx += size_image_dst;
        }
        // Run inference
        auto start = std::chrono::system_clock::now();
        doInference(*context, stream, (void**)buffers, prob, BATCH_SIZE);
        auto end = std::chrono::system_clock::now();
        std::cout << "inference time: " << std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count() << "ms" << std::endl;
        std::vector<std::vector<Yolo::Detection>> batch_res(fcount);
        for (int b = 0; b < fcount; b++) {
            auto& res = batch_res[b];
            nms(res, &prob[b * OUTPUT_SIZE], CONF_THRESH, NMS_THRESH);
        }
        for (int b = 0; b < fcount; b++) {
            auto& res = batch_res[b];
            auto& bbox_affine_matrix = matrix_buffer[b];
            cv::Mat img = imgs_buffer[b];
            for (size_t j = 0; j < res.size(); j++) {
                cv::Rect r = get_rect(res[j].bbox, bbox_affine_matrix);
                cv::rectangle(img, r, cv::Scalar(0x27, 0xC1, 0x36), 2);
                cv::putText(img, std::to_string((int)res[j].class_id), cv::Point(r.x, r.y - 1), cv::FONT_HERSHEY_PLAIN, 1.2, cv::Scalar(0xFF, 0xFF, 0xFF), 2);
            }
            // cv::imwrite("_" + file_names[f - fcount + 1 + b], img);
            cv::imwrite(std::to_string(save_int) + ".jpg", img);
        }
        fcount = 0;
    }

    // Release stream and buffers
    cudaStreamDestroy(stream);
    CUDA_CHECK(cudaFree(img_device));
    CUDA_CHECK(cudaFreeHost(img_host));
    CUDA_CHECK(cudaFree(buffers[inputIndex]));
    CUDA_CHECK(cudaFree(buffers[outputIndex]));
    // Destroy the engine
    context->destroy();
    engine->destroy();
    runtime->destroy();

    cap.release(); // release the video capture object

    // Print histogram of the output distribution
    //std::cout << "\nOutput:\n\n";
    //for (unsigned int i = 0; i < OUTPUT_SIZE; i++)
    //{
    //    std::cout << prob[i] << ", ";
    //    if (i % 10 == 0) std::cout << std::endl;
    //}
    //std::cout << std::endl;

    return 0;
}


VII. Batch-size speed-up experiment

The upstream project reports that with batch size = 8, preprocessing plus inference is about 3x faster, so I tried it, again on 100 KITTI images.

With batch size = 8, one inference pass handles 8 images and takes 46 ms on average; 46 ms / 8 = 5.75 ms per image, compared with 22 ms per image for single-image inference above, roughly a 3.8x speed-up.

That is quite useful when one device has to run inference on several video streams.

This post builds on wang-xinyu's open-source tensorrtx; many thanks: https://github.com/wang-xinyu/tensorrtx

Reference: https://github.com/wang-xinyu/tensorrtx/blob/master/yolov5/README.md
