Object Detection in Practice: Pedestrian Detection with OpenCV's Built-in Method
Posted by AI浩
Did you know that OpenCV has a built-in method for pedestrian detection?
OpenCV ships with a pre-trained HOG + Linear SVM model that can be used to detect pedestrians in both images and video streams.
Today we will use this built-in model to detect pedestrians in images (the same detector can also be applied to video streams). Open a new file, name it detect.py, and add the following code:
# import the necessary packages
from __future__ import print_function
import numpy as np
import argparse
import cv2
import os
Import the required packages, then define the helper functions the project needs.
def nms(boxes, probs=None, overlapThresh=0.3):
    # if there are no boxes, return an empty list
    if len(boxes) == 0:
        return []
    # if the bounding boxes are integers, convert them to floats -- this
    # is important since we'll be doing a bunch of divisions
    if boxes.dtype.kind == "i":
        boxes = boxes.astype("float")
    # initialize the list of picked indexes
    pick = []
    # grab the coordinates of the bounding boxes
    x1 = boxes[:, 0]
    y1 = boxes[:, 1]
    x2 = boxes[:, 2]
    y2 = boxes[:, 3]
    # compute the area of the bounding boxes and grab the indexes to sort
    # (in the case that no probabilities are provided, simply sort on the
    # bottom y-coordinate)
    area = (x2 - x1 + 1) * (y2 - y1 + 1)
    idxs = y2
    # if probabilities are provided, sort on them instead
    if probs is not None:
        idxs = probs
    # sort the indexes
    idxs = np.argsort(idxs)
    # keep looping while some indexes still remain in the indexes list
    while len(idxs) > 0:
        # grab the last index in the indexes list and add the index value
        # to the list of picked indexes
        last = len(idxs) - 1
        i = idxs[last]
        pick.append(i)
        # find the largest (x, y) coordinates for the start of the bounding
        # box and the smallest (x, y) coordinates for the end of the bounding
        # box
        xx1 = np.maximum(x1[i], x1[idxs[:last]])
        yy1 = np.maximum(y1[i], y1[idxs[:last]])
        xx2 = np.minimum(x2[i], x2[idxs[:last]])
        yy2 = np.minimum(y2[i], y2[idxs[:last]])
        # compute the width and height of the bounding box
        w = np.maximum(0, xx2 - xx1 + 1)
        h = np.maximum(0, yy2 - yy1 + 1)
        # compute the ratio of overlap
        overlap = (w * h) / area[idxs[:last]]
        # delete all indexes from the index list that have overlap greater
        # than the provided overlap threshold
        idxs = np.delete(idxs, np.concatenate(([last],
            np.where(overlap > overlapThresh)[0])))
    # return only the bounding boxes that were picked
    return boxes[pick].astype("int")
image_types = (".jpg", ".jpeg", ".png", ".bmp", ".tif", ".tiff")

def list_images(basePath, contains=None):
    # return the set of files that are valid
    return list_files(basePath, validExts=image_types, contains=contains)

def list_files(basePath, validExts=None, contains=None):
    # loop over the directory structure
    for (rootDir, dirNames, filenames) in os.walk(basePath):
        # loop over the filenames in the current directory
        for filename in filenames:
            # if the contains string is not none and the filename does not contain
            # the supplied string, then ignore the file
            if contains is not None and filename.find(contains) == -1:
                continue
            # determine the file extension of the current file
            ext = filename[filename.rfind("."):].lower()
            # check to see if the file is an image and should be processed
            if validExts is None or ext.endswith(validExts):
                # construct the path to the image and yield it
                imagePath = os.path.join(rootDir, filename)
                yield imagePath
def resize(image, width=None, height=None, inter=cv2.INTER_AREA):
    dim = None
    (h, w) = image.shape[:2]
    # if both the width and height are None, return the original image
    if width is None and height is None:
        return image
    # check to see if the width is None
    if width is None:
        # compute the ratio from the height and derive the width from it
        r = height / float(h)
        dim = (int(w * r), height)
    # otherwise, the height is None
    else:
        # compute the ratio from the width and derive the height from it
        r = width / float(w)
        dim = (width, int(h * r))
    # resize the image
    resized = cv2.resize(image, dim, interpolation=inter)
    # return the resized image
    return resized
nms: non-maximum suppression, used to merge heavily overlapping boxes.
list_images: enumerates the image files in a directory.
resize: resizes an image while preserving its aspect ratio.
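To see what nms does in isolation, here is a quick toy example. It is not part of the detection script, and the box coordinates are made up purely for illustration:

# two heavily overlapping boxes plus one separate box, in (x1, y1, x2, y2) form
boxes = np.array([
    [10, 10, 110, 210],
    [12, 14, 112, 208],
    [300, 40, 380, 200],
])
# the two overlapping boxes collapse into one; the third box is kept
print(nms(boxes, overlapThresh=0.3))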
# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--images", default='test1', help="path to images directory")
args = vars(ap.parse_args())
# initialize the HOG descriptor / person detector
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
Parse the command-line argument that points to the folder of input images.
Initialize the HOG descriptor and set OpenCV's default people detector, a pre-trained linear SVM.
# loop over the image paths
for imagePath in list_images(args["images"]):
    # load the image and resize it to
    # (1) reduce detection time
    # (2) improve detection accuracy
    image = cv2.imread(imagePath)
    image = resize(image, width=min(400, image.shape[1]))
    orig = image.copy()
    # detect people in the image
    (rects, weights) = hog.detectMultiScale(image, winStride=(4, 4),
        padding=(8, 8), scale=1.05)
    # draw the original bounding boxes
    for (x, y, w, h) in rects:
        cv2.rectangle(orig, (x, y), (x + w, y + h), (0, 0, 255), 2)
    # apply non-maximum suppression to the bounding boxes using a
    # fairly large overlap threshold to try to keep overlapping
    # boxes that are still people
    rects = np.array([[x, y, x + w, y + h] for (x, y, w, h) in rects])
    pick = nms(rects, probs=None, overlapThresh=0.65)
    # draw the final bounding boxes
    for (xA, yA, xB, yB) in pick:
        cv2.rectangle(image, (xA, yA), (xB, yB), (0, 255, 0), 2)
    # show some information on the number of bounding boxes
    filename = os.path.basename(imagePath)
    print("[INFO] {}: {} original boxes, {} after suppression".format(
        filename, len(rects), len(pick)))
    # show the output images
    cv2.imshow("Before NMS", orig)
    cv2.imshow("After NMS", image)
    cv2.waitKey(0)
We loop over the images in the --images directory.
Each image is then resized to a maximum width of 400 pixels. There are two reasons to reduce the image size:
- A smaller image means fewer sliding windows in the image pyramid need to be evaluated (i.e., fewer HOG feature extractions passed to the linear SVM), which reduces detection time and increases overall detection throughput.
- Resizing the image also improves the overall accuracy of the pedestrian detection (i.e., fewer false positives).
Pedestrians are detected by calling the detectMultiScale method of the HOG descriptor. detectMultiScale constructs an image pyramid with a scale factor of 1.05 and slides the detection window in steps of (4, 4) pixels in the x and y directions.
The sliding window itself is fixed at 64 x 128 pixels, as suggested by the seminal Dalal and Triggs paper, Histograms of Oriented Gradients for Human Detection. detectMultiScale returns a 2-tuple: rects, the bounding-box (x, y) coordinates of each detected person, and weights, the confidence value the SVM assigns to each detection.
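The weights values are not used in the script above, but if you wanted to discard low-confidence detections before applying NMS, a minimal sketch could look like the following. The 0.5 cutoff is an assumed value, not something from the original article, and would need tuning for your data:

conf_threshold = 0.5  # assumed cutoff; tune for your own images
weights = np.asarray(weights).flatten()
keep = weights > conf_threshold
# keep only the boxes whose SVM score exceeds the cutoff
rects = np.asarray(rects)[keep]
weights = weights[keep]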
A larger scale factor evaluates fewer layers of the image pyramid, which makes the algorithm run faster. However, if the scale is too large (i.e., too few pyramid layers), pedestrians may be missed entirely. Conversely, a scale that is too small dramatically increases the number of pyramid layers that must be evaluated; this is not only computationally wasteful but also significantly increases the number of false positives produced by the detector. In other words, scale is one of the most important parameters to tune when performing pedestrian detection. I will review each of the detectMultiScale parameters more thoroughly in a future post.
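To get a feel for this trade-off, you could time detectMultiScale on a single image for a few scale values, reusing the hog object and one loaded image from the script above. A small sketch (the specific scale values are illustrative assumptions):

import time

for scale in (1.01, 1.05, 1.2):
    start = time.time()
    rects, weights = hog.detectMultiScale(image, winStride=(4, 4),
        padding=(8, 8), scale=scale)
    print("scale={}: {} raw boxes in {:.2f}s".format(
        scale, len(rects), time.time() - start))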
The initial bounding boxes are then drawn on the image.
However, for some images you will notice multiple overlapping bounding boxes around each person.
In that case we have two options. We can check whether one bounding box is fully contained inside another, as sketched below. Or we can apply non-maximum suppression and suppress bounding boxes that overlap by more than a significant threshold.
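A minimal sketch of the first option (not part of the original script; remove_contained is a hypothetical helper) that drops any box fully enclosed by another box:

def remove_contained(boxes):
    # boxes are (x1, y1, x2, y2) tuples; keep a box only if no other
    # box completely encloses it
    kept = []
    for i, (x1, y1, x2, y2) in enumerate(boxes):
        contained = any(
            j != i and X1 <= x1 and Y1 <= y1 and X2 >= x2 and Y2 >= y2
            for j, (X1, Y1, X2, Y2) in enumerate(boxes)
        )
        if not contained:
            kept.append((x1, y1, x2, y2))
    return kept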
After applying non-maximum suppression we obtain the final bounding boxes and display the output images.
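The script above works on a directory of still images; running the same detector on a video stream is a straightforward extension. A minimal sketch, not part of the original code (the camera index 0 is an assumption; pass a file path instead to read a video file):

cap = cv2.VideoCapture(0)  # assumed: default webcam
while True:
    grabbed, frame = cap.read()
    if not grabbed:
        break
    frame = resize(frame, width=min(400, frame.shape[1]))
    # detect people in the current frame and suppress overlapping boxes
    rects, weights = hog.detectMultiScale(frame, winStride=(4, 4),
        padding=(8, 8), scale=1.05)
    rects = np.array([[x, y, x + w, y + h] for (x, y, w, h) in rects])
    for (xA, yA, xB, yB) in nms(rects, probs=None, overlapThresh=0.65):
        cv2.rectangle(frame, (xA, yA), (xB, yB), (0, 255, 0), 2)
    cv2.imshow("Pedestrians", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()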
Results:
Before NMS:
After NMS:
Conclusion:
Compared with today's deep learning detectors, this classical machine learning approach is noticeably less accurate.
Full code:
https://download.csdn.net/download/hhhhhhhhhhwwwwwwwwww/67160136