NMS in NumPy

Posted by 浩瀚之水_csdn


A simple, readable NumPy implementation of NMS

NMS in object detection. Input: boxes is an N×5 array whose rows are (x1, y1, x2, y2, score); thresh is a float IoU threshold.

The computation: first obtain order, the box indices sorted by score in descending order, and compute the area areas of every box. Then, for the current highest-scoring box boxes[i], compute the IoUs ious against the remaining lower-scoring boxes boxes[order[1:]] and discard the boxes whose IoU exceeds the threshold.

import numpy as np

def nms_plain(boxes, threshold):
    x1 = boxes[:, 0]
    y1 = boxes[:, 1]
    x2 = boxes[:, 2]
    y2 = boxes[:, 3]
    scores = boxes[:, 4]
    order = scores.argsort()[::-1]            # indices sorted by score, descending
    areas = (x2 - x1) * (y2 - y1)             # area of every box
    keep = []
    while order.size > 0:
        i = order[0]                          # current highest-scoring box
        keep.append(i)
        # intersection of box i with all remaining boxes
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        w = np.maximum(0, xx2 - xx1)
        h = np.maximum(0, yy2 - yy1)
        overlaps = w * h
        ious = overlaps / (areas[i] + areas[order[1:]] - overlaps)
        inds = np.where(ious < threshold)[0]  # boxes that survive this round
        order = order[inds + 1]               # +1 because ious was computed against order[1:]
    return keep

A concise NumPy implementation of NMS

Below is a more concise NMS implementation; computing the areas with np.prod removes several redundant lines.

def nms_neat(boxes, threshold):
    areas = np.prod(boxes[:, 2:4] - boxes[:, :2], axis=1)   # (x2-x1)*(y2-y1) in one line
    order = boxes[:, 4].argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        tl = np.maximum(boxes[i][:2], boxes[order[1:]][:, :2])    # top-left corners of the intersections
        br = np.minimum(boxes[i][2:4], boxes[order[1:]][:, 2:4])  # bottom-right corners of the intersections
        overlaps = np.prod(br - tl, axis=1) * (br > tl).all(axis=1)  # zero where the boxes do not overlap
        ious = overlaps / (areas[i] + areas[order[1:]] - overlaps)
        inds = np.where(ious < threshold)[0]
        order = order[inds + 1]
    return keep
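
As a quick sanity check (not part of the original post), the two implementations can be run on a batch of random boxes; since they compute identical IoUs, they should keep exactly the same indices:

np.random.seed(0)
xy1 = np.random.rand(50, 2) * 100              # random top-left corners
wh = np.random.rand(50, 2) * 50 + 1            # random widths and heights
rand_scores = np.random.rand(50, 1)
test_boxes = np.hstack([xy1, xy1 + wh, rand_scores])   # N x 5: (x1, y1, x2, y2, score)

# both functions return the kept indices in descending score order
assert nms_plain(test_boxes, 0.5) == nms_neat(test_boxes, 0.5)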

A concise NumPy implementation of Soft-NMS

Compared with plain NMS, Soft-NMS does not zero out the confidence of a box whose IoU exceeds the threshold; instead it multiplies the score by a weight (score * weight). The weight can be computed in several ways, commonly linear (weight = 1 - iou) or Gaussian (weight = exp(-iou²/σ)). A concise Soft-NMS implementation:

def soft_nms(boxes, method='linear', thresh=0.001, Nt=0.3, sigma=0.5):
    '''
    Soft-NMS implemented with NumPy.
    :param boxes: the predicted bboxes, N x 5 rows of (x1, y1, x2, y2, score)
    :param method: how to decay the score of an overlapping bbox ('linear', 'gaussian', otherwise hard NMS)
    :param thresh: score threshold below which a bbox is discarded
    :param Nt: IoU threshold used by the linear and hard methods
    :param sigma: variance of the Gaussian decay
    :return: the indices of the bboxes kept after Soft-NMS
    '''
    areas = np.prod(boxes[:, 2:4] - boxes[:, :2], axis=1)
    order = boxes[:, 4].argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        tl = np.maximum(boxes[i][:2], boxes[order[1:]][:, :2])
        br = np.minimum(boxes[i][2:4], boxes[order[1:]][:, 2:4])
        overlaps = np.prod(br - tl, axis=1) * (br > tl).all(axis=1)
        ious = overlaps / (areas[i] + areas[order[1:]] - overlaps)
        weight = np.ones_like(ious)
        if method == 'linear':
            weight[ious > Nt] = 1 - ious[ious > Nt]
        elif method == 'gaussian':
            weight = np.exp(-(ious * ious) / sigma)
        else:                                     # hard NMS as the fallback
            weight[ious > Nt] = 0
        boxes[order[1:], 4] *= weight             # decay the scores in place
        inds = np.where(boxes[order[1:], 4] > thresh)[0]
        order = order[inds + 1]
    return keep
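
Note that soft_nms rescales the score column of boxes in place, so pass a copy if the original scores are still needed. A minimal usage sketch (not from the original post), reusing a few of the sample boxes from the main function below:

sample = np.array([[100., 100., 210., 210., 0.72],
                   [220., 220., 320., 330., 0.92],
                   [230., 240., 325., 330., 0.81]])
work = sample.copy()                         # soft_nms modifies the score column in place
keep_soft = soft_nms(work, method='gaussian')
print(keep_soft)                             # indices surviving Soft-NMS, in order of selection
print(work[:, 4])                            # decayed scores; the original sample is untouched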

Finally, the plotting helper and a main function for verification:

import matplotlib.pyplot as plt

def plot_bbox(boxes, c='r'):
    # draw every box as four line segments
    x1 = boxes[:, 0]
    y1 = boxes[:, 1]
    x2 = boxes[:, 2]
    y2 = boxes[:, 3]
    plt.plot([x1, x2], [y1, y1], c)
    plt.plot([x1, x1], [y1, y2], c)
    plt.plot([x1, x2], [y2, y2], c)
    plt.plot([x2, x2], [y1, y2], c)
    plt.title("nms")

if __name__ == "__main__":
    boxes=np.array([[100,100,210,210,0.72],
        [250,250,420,420,0.8],
        [220,220,320,330,0.92],
        [100,100,210,210,0.72],
        [230,240,325,330,0.81],
        [220,230,315,340,0.9]])
    
    keep = nms_plain(boxes,0.5)
    plt.figure(1)
    ax1 = plt.subplot(1,2,1)
    ax2 = plt.subplot(1,2,2)
    
    plt.sca(ax1)
    plot_bbox(boxes,'k')   # before nms
    
    keep = nms_neat(boxes, 0.5)
    # keep = soft_nms(boxes,'dd')
    plt.sca(ax2)
    plot_bbox(boxes[keep], 'r')# after nms

    plt.show()

NMS in PyTorch

The PyTorch implementation of NMS is similar to the NumPy one, with only minor differences. The code is as follows:

import torch

def nms_pytorch(bboxes, scores, threshold=0.5):
    x1 = bboxes[:, 0]
    y1 = bboxes[:, 1]
    x2 = bboxes[:, 2]
    y2 = bboxes[:, 3]
    areas = (x2 - x1) * (y2 - y1)                 # [N,] area of every bbox
    _, order = scores.sort(0, descending=True)    # indices sorted by score, descending

    keep = []
    while order.numel() > 0:                      # torch.numel() returns the number of elements
        if order.numel() == 1:                    # only one box left
            i = order.item()
            keep.append(i)
            break
        else:
            i = order[0].item()                   # keep the box with the highest score
            keep.append(i)

        # compute the IoU between box[i] and every remaining box
        xx1 = x1[order[1:]].clamp(min=x1[i])      # [N-1,]
        yy1 = y1[order[1:]].clamp(min=y1[i])
        xx2 = x2[order[1:]].clamp(max=x2[i])
        yy2 = y2[order[1:]].clamp(max=y2[i])
        inter = (xx2 - xx1).clamp(min=0) * (yy2 - yy1).clamp(min=0)   # [N-1,]

        iou = inter / (areas[i] + areas[order[1:]] - inter)   # [N-1,]
        idx = (iou <= threshold).nonzero().squeeze()  # note: idx indexes order[1:], while order has N elements
        if idx.numel() == 0:
            break
        order = order[idx + 1]                    # shift by 1 to index into order instead of order[1:]
    return torch.LongTensor(keep)                 # PyTorch index tensors are LongTensor
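
A minimal usage sketch (an addition, not from the original post): the sample boxes from the NumPy main function, minus the duplicated row, converted to tensors and fed to nms_pytorch. torchvision is not used anywhere in the original post; it is only brought in here as an optional cross-check, since torchvision.ops.nms uses the same (x2-x1)*(y2-y1) area convention and also suppresses boxes whose IoU with a higher-scoring kept box exceeds the threshold.

import torchvision

boxes_np = np.array([[100, 100, 210, 210, 0.72],
                     [250, 250, 420, 420, 0.80],
                     [220, 220, 320, 330, 0.92],
                     [230, 240, 325, 330, 0.81],
                     [220, 230, 315, 340, 0.90]])
bboxes = torch.tensor(boxes_np[:, :4], dtype=torch.float32)
box_scores = torch.tensor(boxes_np[:, 4], dtype=torch.float32)

print(nms_pytorch(bboxes, box_scores, threshold=0.5))   # kept indices, descending score order
# the two results should agree here
print(torchvision.ops.nms(bboxes, box_scores, 0.5))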

NMS in C++

The C++ version here is based on libtorch (the PyTorch C++ API) and mirrors the PyTorch implementation; it takes boxes, scores, and a thresh argument as input.

#include <vector>
#include <torch/torch.h>
using std::vector;

// input  boxes: Nx4; scores: N; thresh: a float in (0, 1)
// output a vector<int> holding the indices of the kept boxes
vector<int> nms_libtorch(torch::Tensor boxes, torch::Tensor scores, float thresh); // declaration

vector<int> nms_libtorch(torch::Tensor bboxes, torch::Tensor scores, float thresh) { // definition
	auto x1 = bboxes.select(-1, 0);
	auto y1 = bboxes.select(-1, 1);
	auto x2 = bboxes.select(-1, 2);
	auto y2 = bboxes.select(-1, 3);
	auto areas = (x2 - x1) * (y2 - y1);        // [N,] area of every bbox
	auto tuple_sorted = scores.sort(0, true);  // descending order
	auto order = std::get<1>(tuple_sorted);

	vector<int> keep;
	while (order.numel() > 0) {                // numel() returns the number of elements
		if (order.numel() == 1) {              // only one box left
			auto i = order.item();
			keep.push_back(i.toInt());
			break;
		}
		else {
			auto i = order[0].item();          // keep the box with the highest score
			keep.push_back(i.toInt());
		}

		// compute the IoU between box[i] and every remaining box
		auto order_mask = order.narrow(0, 1, order.size(-1) - 1);
		auto xx1 = x1.index( order_mask ).clamp(x1[keep.back()].item().toFloat(), 1e10); // [N-1,]
		auto yy1 = y1.index( order_mask ).clamp(y1[keep.back()].item().toFloat(), 1e10);
		auto xx2 = x2.index( order_mask ).clamp(0, x2[keep.back()].item().toFloat());
		auto yy2 = y2.index( order_mask ).clamp(0, y2[keep.back()].item().toFloat());
		auto inter = (xx2 - xx1).clamp(0, 1e10) * (yy2 - yy1).clamp(0, 1e10);            // [N-1,]

		auto iou = inter / (areas[keep.back()] + areas.index( order_mask ) - inter);     // [N-1,]
		auto idx = (iou <= thresh).nonzero().squeeze(); // idx indexes order[1:], while order has N elements
		if (idx.numel() == 0) {
			break;
		}
		order = order.index( idx + 1 );        // shift by 1 to index into order instead of order[1:]
	}
	return keep;
}

In fact, besides the libtorch version, NMS can just as well be implemented by copying the libtorch tensors into other C++ data structures such as std::vector or plain arrays; that is not covered here.


If this was useful, please give it a 👍. When reposting, please credit: Allent's Blogs
