How to re-select the tracked target in OpenCV object tracking


Reference answer A: The Kalman filter is mainly applied in the real world rather than in an ideal environment, and it is used to track the value of some variable. The tracking relies on two sources of evidence. First, the system's motion model is used to predict the value: for example, if we know an object's velocity, its position at the next time step can in principle be predicted, but that prediction inevitably has error and can only serve as one piece of evidence. The second source is a measurement of the variable, which also has error and is likewise only one piece of evidence; the two sources are simply weighted differently. The Kalman filter then iterates on these two pieces of evidence to track the target.
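
To make the predict/correct cycle described above concrete, here is a minimal Python sketch using cv2.KalmanFilter for a constant-velocity 2D point. The state layout, noise covariances and measurement values below are illustrative assumptions, not something taken from the original answer.

import numpy as np
import cv2

# Constant-velocity Kalman filter for a 2D point; state = (x, y, vx, vy), measurement = (x, y)
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], dtype=np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2    # illustrative values, not tuned
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

for mx, my in [(10, 10), (12, 11), (14, 13)]:              # e.g. noisy detector outputs
  prediction = kf.predict()                                # evidence 1: the motion model
  measurement = np.array([[mx], [my]], dtype=np.float32)
  estimate = kf.correct(measurement)                       # evidence 2: the measurement
  print(prediction[:2].ravel(), estimate[:2].ravel())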

[OpenCV in Practice] 16: Multi-Object Tracking with OpenCV

In this post we will show how to use OpenCV's MultiTracker class to implement a multi-object tracking API. Before diving into the details, please review the posts on object tracking listed below to learn the basics of the single-object trackers implemented in OpenCV. You also need the opencv_contrib modules installed; for details see:

1 Background

Most beginners in computer vision and machine learning start with object detection. If you are a beginner, you may wonder why we need object tracking at all. Can't we simply detect the objects in every frame?

Let's look at a few reasons why tracking is useful.

First, when multiple objects (say, people) are detected in a video frame, tracking helps establish each object's identity across frames.

Second, object detection may occasionally fail, whereas the object can still be tracked, because tracking takes into account the object's location and appearance in the previous frame.

Third, some tracking algorithms are very fast because they perform a local search rather than a global one. We can therefore get very high throughput by running object detection only every n-th frame and tracking the objects in the intermediate frames.

So why not just track an object indefinitely after the first detection? A tracking algorithm can sometimes lose the object it is following, for example when the object's motion is too large for the tracker to keep up. It is therefore common to run object detection again after tracking for a while.
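
One common way to implement this detect-then-track scheme (it is not spelled out in the original post) is to rebuild the trackers every N frames. In the sketch below, detect_objects is a hypothetical placeholder for whatever detector you use, and N = 30 is an arbitrary choice; depending on your OpenCV build the MultiTracker constructors may live under cv2.legacy.

import cv2

N = 30  # re-detect every N frames (illustrative value)

def detect_objects(frame):
  # Hypothetical placeholder: run your detector here and return a list
  # of bounding boxes as (x, y, w, h) tuples.
  raise NotImplementedError

cap = cv2.VideoCapture("video/run.mp4")
trackers = None
frame_idx = 0

while True:
  ok, frame = cap.read()
  if not ok:
    break

  if trackers is None or frame_idx % N == 0:
    # Re-select the targets by detecting again and rebuilding the trackers.
    boxes = detect_objects(frame)
    trackers = cv2.MultiTracker_create()   # cv2.legacy.MultiTracker_create() on OpenCV >= 4.5
    for box in boxes:
      trackers.add(cv2.TrackerKCF_create(), frame, box)
  else:
    ok, boxes = trackers.update(frame)

  frame_idx += 1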

In this tutorial we focus only on the tracking part. The objects we want to track are specified by drawing bounding boxes around them.

2 Multi-object tracking with MultiTracker

OpenCV's MultiTracker class provides an implementation of multi-object tracking. It is only a rudimentary implementation, however, because it simply runs the individual trackers and does not perform any optimization across the tracked objects.

2.1 Creating a single object tracker

A multi-object tracker is simply a collection of single-object trackers. We start by defining a function that takes a tracker type as input and creates the corresponding tracker object. OpenCV has eight different tracker types: BOOSTING, MIL, KCF, TLD, MEDIANFLOW, GOTURN, MOSSE, and CSRT (GOTURN is not used in this post). The general approach is to pass the tracker class name, get back a single-tracker object, and then build the multi-tracker from those.

C++ code:

vector<string> trackerTypes = {"BOOSTING", "MIL", "KCF", "TLD", "MEDIANFLOW", "GOTURN", "MOSSE", "CSRT"};

/**
 * @brief Create a Tracker By Name object: create a tracker of the given type
 *
 * @param trackerType
 * @return Ptr<Tracker>
 */
Ptr<Tracker> createTrackerByName(string trackerType)
{
    Ptr<Tracker> tracker;
    if (trackerType == trackerTypes[0])
        tracker = TrackerBoosting::create();
    else if (trackerType == trackerTypes[1])
        tracker = TrackerMIL::create();
    else if (trackerType == trackerTypes[2])
        tracker = TrackerKCF::create();
    else if (trackerType == trackerTypes[3])
        tracker = TrackerTLD::create();
    else if (trackerType == trackerTypes[4])
        tracker = TrackerMedianFlow::create();
    else if (trackerType == trackerTypes[5])
        tracker = TrackerGOTURN::create();
    else if (trackerType == trackerTypes[6])
        tracker = TrackerMOSSE::create();
    else if (trackerType == trackerTypes[7])
        tracker = TrackerCSRT::create();
    else
    {
        cout << "Incorrect tracker name" << endl;
        cout << "Available trackers are: " << endl;
        for (vector<string>::iterator it = trackerTypes.begin(); it != trackerTypes.end(); ++it)
        {
            std::cout << " " << *it << endl;
        }
    }
    return tracker;
}

Python code:

from __future__ import print_function
import sys
import cv2
from random import randint

trackerTypes = ['BOOSTING', 'MIL', 'KCF', 'TLD', 'MEDIANFLOW', 'GOTURN', 'MOSSE', 'CSRT']

def createTrackerByName(trackerType):
  # Create a tracker based on tracker name
  if trackerType == trackerTypes[0]:
    tracker = cv2.TrackerBoosting_create()
  elif trackerType == trackerTypes[1]:
    tracker = cv2.TrackerMIL_create()
  elif trackerType == trackerTypes[2]:
    tracker = cv2.TrackerKCF_create()
  elif trackerType == trackerTypes[3]:
    tracker = cv2.TrackerTLD_create()
  elif trackerType == trackerTypes[4]:
    tracker = cv2.TrackerMedianFlow_create()
  elif trackerType == trackerTypes[5]:
    tracker = cv2.TrackerGOTURN_create()
  elif trackerType == trackerTypes[6]:
    tracker = cv2.TrackerMOSSE_create()
  elif trackerType == trackerTypes[7]:
    tracker = cv2.TrackerCSRT_create()
  else:
    tracker = None
    print('Incorrect tracker name')
    print('Available trackers are:')
    for t in trackerTypes:
      print(t)

  return tracker
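
Before moving on to the MultiTracker, a single tracker produced by createTrackerByName can be exercised on its own. The sketch below is only illustrative; the video path and the initial bounding box are assumed values.

# Minimal single-tracker usage sketch (illustrative values)
tracker = createTrackerByName('KCF')

cap = cv2.VideoCapture('video/run.mp4')
ok, frame = cap.read()
bbox = (200, 150, 80, 120)            # assumed (x, y, w, h) of the target
tracker.init(frame, bbox)

while True:
  ok, frame = cap.read()
  if not ok:
    break
  ok, bbox = tracker.update(frame)    # ok == False means the tracker lost the target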

2.2 Reading the first frame of the video

A multi-object tracker needs two inputs: a video frame and the locations (bounding boxes) of all the objects we want to track.

Given this information, the tracker follows the locations of these objects in all subsequent frames. In the code below, we first load the video with the VideoCapture class and read the first frame. It will be used later to initialize the MultiTracker.

C++ code:

    // Set tracker type. Change this to try different trackers.
    string trackerType = trackerTypes[6];

    // Set default values for the tracking algorithm and video
    string videoPath = "video/run.mp4";

    // Bounding boxes used to initialize the MultiTracker
    vector<Rect> bboxes;

    // Create a video capture object to read the video
    cv::VideoCapture cap(videoPath);
    Mat frame;

    // Quit if unable to read the video file
    if (!cap.isOpened())
    {
        cout << "Error opening video file " << videoPath << endl;
        return -1;
    }

    // Read the first frame
    cap >> frame;

Python code:

# Set video to load
videoPath = "video/run.mp4"

# Create a video capture object to read videos
cap = cv2.VideoCapture(videoPath)

# Read first frame
success, frame = cap.read()
# quit if unable to read the video file
if not success:
  print('Failed to read video')
  sys.exit(1)

2.3 Selecting the objects we want to track in the first frame

Next, we need to locate the objects we want to track in the first frame. OpenCV provides a function called selectROIs that pops up a GUI for selecting bounding boxes (also known as regions of interest, ROIs). The C++ version of selectROIs lets you select multiple bounding boxes, but the Python version only offers selectROI, which returns a single box, so in Python we need a loop to collect multiple boxes. For each object we also pick a random color for drawing its bounding box. The selection works as follows: draw a box on the image, press ENTER to confirm it and draw the next one, and press ESC to finish selecting and start the program.

代码如下所示。

C++ code:

// Get bounding boxes for the first frame
// selectROIs default behaviour is to draw a box starting from the center
// when fromCenter is set to false, you can draw a box starting from the top-left corner
bool showCrosshair = true;
bool fromCenter = false;
cout << "\n==========================================================\n";
cout << "OpenCV says press c to cancel objects selection process" << endl;
cout << "It doesn't work. Press Escape to exit selection process" << endl;
cout << "\n==========================================================\n";
cv::selectROIs("MultiTracker", frame, bboxes, showCrosshair, fromCenter);

// quit if there are no objects to track
if (bboxes.size() < 1)
  return 0;

vector<Scalar> colors;
getRandomColors(colors, bboxes.size());

// Fill the vector with random colors
void getRandomColors(vector<Scalar>& colors, int numColors)
{
  RNG rng(0);
  for (int i = 0; i < numColors; i++)
    colors.push_back(Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255)));
}

Python code:

## Select boxes
bboxes = []
colors = []

# OpenCV's selectROI function doesn't work for selecting multiple objects in Python
# So we will call this function in a loop till we are done selecting all objects
while True:
  # draw bounding boxes over objects
  # selectROIs default behaviour is to draw box starting from the center
  # when fromCenter is set to false, you can draw box starting from top left corner
  bbox = cv2.selectROI('MultiTracker', frame)
  bboxes.append(bbox)
  colors.append((randint(0, 255), randint(0, 255), randint(0, 255)))
  print("Press q to quit selecting boxes and start tracking")
  print("Press any other key to select next object")
  k = cv2.waitKey(0) & 0xFF
  if (k == 113):  # q is pressed
    break

print('Selected bounding boxes {}'.format(bboxes))

2.4 Initializing the MultiTracker

So far we have read the first frame and obtained the bounding boxes around the objects. That is all the information needed to initialize the multi-object tracker. We first create a MultiTracker object and add one single-object tracker for each target we want to track. In this example we use the CSRT single-object tracker, but you can try other tracker types by changing the trackerType variable below to one of the eight trackers mentioned at the beginning of this post. The CSRT tracker is not the fastest, but it produced the best results in many of the cases we tried.

You can also mix different tracker types within the same MultiTracker, though of course this rarely makes much sense. Only a few of them are really worth using: CSRT has the highest accuracy, KCF offers the best trade-off between speed and accuracy, and MOSSE is the fastest.
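
If you do want to mix tracker types, a sketch might look like this (assuming frame and bboxes come from the selection step above; on OpenCV 4.5 and later these constructors live under cv2.legacy):

# Illustrative only: one CSRT tracker and one KCF tracker inside the same MultiTracker
multiTracker = cv2.MultiTracker_create()
multiTracker.add(cv2.TrackerCSRT_create(), frame, bboxes[0])
multiTracker.add(cv2.TrackerKCF_create(), frame, bboxes[1])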

The MultiTracker class is just a wrapper around these single-object trackers. As we saw in the previous post, a single-object tracker is initialized with the first frame and a bounding box indicating the location of the object we want to track; the MultiTracker passes this information on to the single-object trackers it wraps internally.

C++ code:

    // Create the multitracker
    Ptr<MultiTracker> multiTracker = cv::MultiTracker::create();

    // Initialize the multitracker
    for (int i = 0; i < bboxes.size(); i++)
    {
        multiTracker->add(createTrackerByName(trackerType), frame, Rect2d(bboxes[i]));
    }

Python code:

# Specify the tracker type
trackerType = "CSRT"

# Create MultiTracker object
multiTracker = cv2.MultiTracker_create()

# Initialize MultiTracker
for bbox in bboxes:
  multiTracker.add(createTrackerByName(trackerType), frame, bbox)

2.5 Updating the MultiTracker and displaying the results

Finally, our MultiTracker is ready and we can track multiple objects in each new frame. We use the update method of the MultiTracker class to locate the objects in a new frame, and each tracked object's bounding box is drawn in a different color.

The update function returns true or false: it returns false when tracking fails. The C++ code below checks this return value; the Python code does not. Note, however, that even when update returns false it still keeps updating and returns bounding boxes, so once it returns false it is advisable to stop tracking (or re-select the targets, as sketched after the code blocks below).

C++ code:

    while (cap.isOpened())
    {
        // Get the next frame from the video
        cap >> frame;

        // Stop the program if we reached the end of the video
        if (frame.empty())
        {
            break;
        }

        // Update the tracking result with the new frame
        bool ok = multiTracker->update(frame);
        if (ok == true)
        {
            cout << "Tracking success" << endl;
        }
        else
        {
            cout << "Tracking failure" << endl;
        }

        // Draw the tracked objects
        for (unsigned i = 0; i < multiTracker->getObjects().size(); i++)
        {
            rectangle(frame, multiTracker->getObjects()[i], colors[i], 2, 1);
        }

        // Show the frame
        imshow("MultiTracker", frame);

        // Quit on ESC
        if (waitKey(1) == 27)
        {
            break;
        }
    }

Python code:

# Process video and track objects
while cap.isOpened():
  success, frame = cap.read()
  if not success:
    break

  # get updated location of objects in subsequent frames
  success, boxes = multiTracker.update(frame)

  # draw tracked objects
  for i, newbox in enumerate(boxes):
    p1 = (int(newbox[0]), int(newbox[1]))
    p2 = (int(newbox[0] + newbox[2]), int(newbox[1] + newbox[3]))
    cv2.rectangle(frame, p1, p2, colors[i], 2, 1)

  # show frame
  cv2.imshow('MultiTracker', frame)

  # quit on ESC button
  if cv2.waitKey(1) & 0xFF == 27:  # Esc pressed
    break
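
Coming back to the question in the post title, one simple way to re-select the targets (our suggestion, not part of the original article) is to watch the return value of update and, when it reports failure, call selectROI again on the current frame and rebuild the MultiTracker. Below is a minimal Python sketch under those assumptions, reusing createTrackerByName, trackerType, colors and cap from the code above.

while cap.isOpened():
  ok, frame = cap.read()
  if not ok:
    break

  ok, boxes = multiTracker.update(frame)
  if not ok:
    # Tracking failed: let the user draw new boxes and rebuild the trackers.
    bboxes = []
    while True:
      bbox = cv2.selectROI('MultiTracker', frame)
      if bbox == (0, 0, 0, 0):          # empty selection ends re-selection
        break
      bboxes.append(bbox)
    multiTracker = cv2.MultiTracker_create()  # cv2.legacy.MultiTracker_create() on OpenCV >= 4.5
    for bbox in bboxes:
      multiTracker.add(createTrackerByName(trackerType), frame, bbox)
    ok, boxes = multiTracker.update(frame)

  # Draw the (possibly re-selected) tracked objects
  for i, newbox in enumerate(boxes):
    p1 = (int(newbox[0]), int(newbox[1]))
    p2 = (int(newbox[0] + newbox[2]), int(newbox[1] + newbox[3]))
    cv2.rectangle(frame, p1, p2, colors[i % len(colors)], 2, 1)

  cv2.imshow('MultiTracker', frame)
  if cv2.waitKey(1) & 0xFF == 27:       # Esc pressed
    break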

3 Results and code

In terms of how it works, this multi-object tracking simply creates several single-object trackers, each of which tracks one object. If you want to combine it with object detection and set the object boxes yourself, just push Rect objects into the vector:

// Set object bounding boxes yourself
// x, y, width, height
//bboxes.push_back(Rect(388, 155, 30, 40));
//bboxes.push_back(Rect(492, 205, 50, 80));

Overall, the accuracy is about the same as a single-object tracker, while the runtime is roughly 5 to 7 times longer, varying with the algorithm.

Code download link:

The complete code is as follows:

C++:

// Opencv_MultiTracker.cpp : This file contains the "main" function. Program execution begins and ends there.
//

#include "pch.h"
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/tracking.hpp>

using namespace cv;
using namespace std;

vector<string> trackerTypes = {"BOOSTING", "MIL", "KCF", "TLD", "MEDIANFLOW", "GOTURN", "MOSSE", "CSRT"};

/**
 * @brief Create a Tracker By Name object: create a tracker of the given type
 *
 * @param trackerType
 * @return Ptr<Tracker>
 */
Ptr<Tracker> createTrackerByName(string trackerType)
{
    Ptr<Tracker> tracker;
    if (trackerType == trackerTypes[0])
        tracker = TrackerBoosting::create();
    else if (trackerType == trackerTypes[1])
        tracker = TrackerMIL::create();
    else if (trackerType == trackerTypes[2])
        tracker = TrackerKCF::create();
    else if (trackerType == trackerTypes[3])
        tracker = TrackerTLD::create();
    else if (trackerType == trackerTypes[4])
        tracker = TrackerMedianFlow::create();
    else if (trackerType == trackerTypes[5])
        tracker = TrackerGOTURN::create();
    else if (trackerType == trackerTypes[6])
        tracker = TrackerMOSSE::create();
    else if (trackerType == trackerTypes[7])
        tracker = TrackerCSRT::create();
    else
    {
        cout << "Incorrect tracker name" << endl;
        cout << "Available trackers are: " << endl;
        for (vector<string>::iterator it = trackerTypes.begin(); it != trackerTypes.end(); ++it)
        {
            std::cout << " " << *it << endl;
        }
    }
    return tracker;
}


/**
 * @brief Get the Random Colors object: fill the vector with random colors
 *
 * @param colors
 * @param numColors
 */
void getRandomColors(vector<Scalar> &colors, int numColors)
{
    RNG rng(0);
    for (int i = 0; i < numColors; i++)
    {
        colors.push_back(Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255)));
    }
}


int main(int argc, char *argv[])
{
    // Set tracker type. Change this to try different trackers.
    string trackerType = trackerTypes[7];

    // Set default values for the tracking algorithm and video
    string videoPath = "video/run.mp4";

    // Bounding boxes used to initialize the MultiTracker
    vector<Rect> bboxes;

    // Create a video capture object to read the video
    cv::VideoCapture cap(videoPath);
    Mat frame;

    // Quit if unable to read the video file
    if (!cap.isOpened())
    {
        cout << "Error opening video file " << videoPath << endl;
        return -1;
    }

    // Read the first frame
    cap >> frame;

    // Draw bounding boxes over the objects in the first frame
    /*
        Draw a box on the image, then press ENTER to confirm it and draw the next one.
        Press ESC to finish selecting and start the program.
    */
    cout << "\n==========================================================\n";
    cout << "OpenCV says press c to cancel objects selection process" << endl;
    cout << "It doesn't work. Press Esc to exit selection process" << endl;
    cout << "\n==========================================================\n";
    cv::selectROIs("MultiTracker", frame, bboxes, false);

    // Set object bounding boxes yourself
    // x, y, width, height
    //bboxes.push_back(Rect(388, 155, 30, 40));
    //bboxes.push_back(Rect(492, 205, 50, 80));

    // Quit if there are no objects to track
    if (bboxes.size() < 1)
    {
        return 0;
    }

    vector<Scalar> colors;
    // Assign a color to each box
    getRandomColors(colors, bboxes.size());

    // Create the multitracker
    Ptr<MultiTracker> multiTracker = cv::MultiTracker::create();

    // Initialize the multitracker
    for (int i = 0; i < bboxes.size(); i++)
    {
        multiTracker->add(createTrackerByName(trackerType), frame, Rect2d(bboxes[i]));
    }

    // Process the video and track the objects
    cout << "\n==========================================================\n";
    cout << "Started tracking, press ESC to quit." << endl;
    while (cap.isOpened())
    {
        // Get the next frame from the video
        cap >> frame;

        // Stop the program if we reached the end of the video
        if (frame.empty())
        {
            break;
        }

        // Update the tracking result with the new frame
        bool ok = multiTracker->update(frame);
        if (ok == true)
        {
            cout << "Tracking success" << endl;
        }
        else
        {
            cout << "Tracking failure" << endl;
        }

        // Draw the tracked objects
        for (unsigned i = 0; i < multiTracker->getObjects().size(); i++)
        {
            rectangle(frame, multiTracker->getObjects()[i], colors[i], 2, 1);
        }

        // Show the frame
        imshow("MultiTracker", frame);

        // Quit on ESC
        if (waitKey(1) == 27)
        {
            break;
        }
    }
    waitKey(0);
    return 0;
}

Python:


from __future__ import print_function
import sys
import cv2
from random import randint

trackerTypes = ['BOOSTING', 'MIL', 'KCF', 'TLD', 'MEDIANFLOW', 'GOTURN', 'MOSSE', 'CSRT']

def createTrackerByName(trackerType):
  # Create a tracker based on tracker name
  if trackerType == trackerTypes[0]:
    tracker = cv2.TrackerBoosting_create()
  elif trackerType == trackerTypes[1]:
    tracker = cv2.TrackerMIL_create()
  elif trackerType == trackerTypes[2]:
    tracker = cv2.TrackerKCF_create()
  elif trackerType == trackerTypes[3]:
    tracker = cv2.TrackerTLD_create()
  elif trackerType == trackerTypes[4]:
    tracker = cv2.TrackerMedianFlow_create()
  elif trackerType == trackerTypes[5]:
    tracker = cv2.TrackerGOTURN_create()
  elif trackerType == trackerTypes[6]:
    tracker = cv2.TrackerMOSSE_create()
  elif trackerType == trackerTypes[7]:
    tracker = cv2.TrackerCSRT_create()
  else:
    tracker = None
    print('Incorrect tracker name')
    print('Available trackers are:')
    for t in trackerTypes:
      print(t)

  return tracker

if __name__ == '__main__':

  print("Default tracking algoritm is CSRT \\n"
        "Available tracking algorithms are:\\n")
  for t in trackerTypes:
      print(t)

  trackerType = "CSRT"

  # Set video to load
  videoPath = "video/run.mp4"

  # Create a video capture object to read videos
  cap = cv2.VideoCapture(videoPath)

  # Read first frame
  success, frame = cap.read()
  # quit if unable to read the video file
  if not success:
    print('Failed to read video')
    sys.exit(1)

  ## Select boxes
  bboxes = []
  colors = []

  # OpenCV's selectROI function doesn't work for selecting multiple objects in Python
  # So we will call this function in a loop till we are done selecting all objects
  while True:
    # draw bounding boxes over objects
    # selectROIs default behaviour is to draw box starting from the center
    # when fromCenter is set to false, you can draw box starting from top left corner
    bbox = cv2.selectROI('MultiTracker', frame)
    bboxes.append(bbox)
    colors.append((randint(64, 255), randint(64, 255), randint(64, 255)))
    print("Press q to quit selecting boxes and start tracking")
    print("Press any other key to select next object")
    k = cv2.waitKey(0) & 0xFF
    if (k == 113):  # q is pressed
      break

  print('Selected bounding boxes {}'.format(bboxes))

  ## Initialize MultiTracker
  # There are two ways you can initialize multitracker
  # 1. tracker = cv2.MultiTracker("CSRT")
  # All the trackers added to this multitracker
  # will use CSRT algorithm as default
  # 2. tracker = cv2.MultiTracker()
  # No default algorithm specified

  # Initialize MultiTracker with tracking algo
  # Specify tracker type

  # Create MultiTracker object
  multiTracker = cv2.MultiTracker_create()

  # Initialize MultiTracker
  for bbox in bboxes:
    multiTracker.add(createTrackerByName(trackerType), frame, bbox)

  # Process video and track objects
  while cap.isOpened():
    success, frame = cap.read()
    if not success:
      break

    # get updated location of objects in subsequent frames
    success, boxes = multiTracker.update(frame)

    # draw tracked objects
    for i, newbox in enumerate(boxes):
      p1 = (int(newbox[0]), int(newbox[1]))
      p2 = (int(newbox[0] + newbox[2]), int(newbox[1] + newbox[3]))
      cv2.rectangle(frame, p1, p2, colors[i], 2, 1)

    # show frame
    cv2.imshow('MultiTracker', frame)

    # quit on ESC button
    if cv2.waitKey(1) & 0xFF == 27:  # Esc pressed
      break
