How to get the depth at pixel x,y after post processing with RealSense?

Posted: 2020-12-03 18:46:33

【Question】

Consider the following code:

// Declare pointcloud object, for calculating pointclouds and texture mappings
rs2::pointcloud pc;
// We want the points object to be persistent so we can display the last cloud when a frame drops
rs2::points points;

// Declare RealSense pipeline, encapsulating the actual device and sensors
rs2::pipeline pipe;

// Start streaming with default recommended configuration
pipe.start();

// Declare filters
rs2::decimation_filter dec_filter;  // Decimation - reduces depth frame density
rs2::threshold_filter thr_filter;   // Threshold  - removes values outside recommended range
rs2::spatial_filter spat_filter;    // Spatial    - edge-preserving spatial smoothing
rs2::temporal_filter temp_filter;   // Temporal   - reduces temporal noise
rs2::disparity_transform depth_to_disparity(true);
rs2::disparity_transform disparity_to_depth(false);

// Initialize a vector that holds filters and their options
std::vector<rs2::filter*> filters;
    
// The following order of emplacement will dictate the orders in which filters are applied
filters.emplace_back(&dec_filter);
filters.emplace_back(&thr_filter);
filters.emplace_back(&depth_to_disparity);
filters.emplace_back(&spat_filter);
filters.emplace_back(&temp_filter);
filters.emplace_back(&disparity_to_depth);


while (app) // Application still alive?
{
    // Wait for the next set of frames from the camera
    auto frames = pipe.wait_for_frames();

    rs2::video_frame color = frames.get_color_frame();

    // For cameras that don't have an RGB sensor, map the pointcloud to infrared instead of color
    if (!color)
        color = frames.get_infrared_frame();

    rs2::depth_frame depth = frames.get_depth_frame();

    int centerX = depth.get_width() / 2;
    int centerY = depth.get_height() / 2;

    // A: Pre-filtered
    float prefiltered_distance = depth.get_distance(centerX, centerY);

    // B: Filter frames
    for (auto filter : filters)
    {
        depth = filter->process(depth);
    }

    // C: Post-filtered (fails)
    float postfiltered_distance = depth.get_distance(centerX, centerY);

    // Tell pointcloud object to map to this color frame
    pc.map_to(color);
    // Generate the pointcloud and texture mappings
    points = pc.calculate(depth);

    // ...
}

Why does calling depth.get_distance(centerX, centerY); before filtering the frames work fine, while the same call after filtering fails with out of range value for argument "y"?

In short, how do I get the filtered distance (z) of the pixel at x,y?

【Comments】:

【Answer 1】:

The decimation filter reduces the image resolution, so after running the filters you should query the resolution again and update your centerX and centerY variables accordingly, so that they are no longer out of range.

【Comments】:
