Robust image segmentation in OpenCV
Posted: 2014-06-18 19:47:16

【Question】: I am trying to write an OpenCV program that counts fish eggs for someone else. On each uploaded image it currently normalizes, blurs, thresholds, dilates, distance-transforms, thresholds again, and then finds contours (as in a typical watershed tutorial).
The problem I run into is that lighting conditions vary widely, so even with my adaptive threshold the algorithm's accuracy fluctuates a lot. It seems to do especially poorly when there is a brightness gradient across the image. Sometimes the objects stand out very brightly against the background; other times they are nearly the same brightness. Is there a particularly robust way to find objects under varying illumination?
Sample images:

【Comments】:

en.wikipedia.org/wiki/Histogram_equalization at the patch level (divide the image into, say, 5x5 = 25 patches so you can estimate separate statistics under the different lighting conditions).

I suspect that would struggle with patches that straddle regions with and without eggs. I used adaptiveThreshold() with a large block size; it always picks up extraneous objects, although it does help with gradient lighting.

【Answer 1】: Since anything larger than 100 pixels is not relevant to your image, I would construct a Fourier band-pass filter to remove those structures.
Here is an implementation I use, based on the one in ImageJ. In it, the input image is mirror-padded to reduce edge artifacts.
static void GenerateBandFilter(thrust::host_vector<float>& filter, const BandPassSettings& band, const FrameSize& frame)
{
    //From https://imagej.nih.gov/ij/plugins/fft-filter.html
    if (band.do_band_pass == false)
    {
        return;
    }
    if (frame.width != frame.height)
    {
        throw std::runtime_error("Frame height and width should be the same");
    }
    auto maxN = static_cast<int>(std::max(frame.width, frame.height));//todo make sure they are the same
    auto filterLargeC = 2.0f*band.max_dx / maxN;
    auto filterSmallC = 2.0f*band.min_dx / maxN;
    auto scaleLargeC = filterLargeC*filterLargeC;
    auto scaleSmallC = filterSmallC*filterSmallC;
    auto filterLargeR = 2.0f*band.max_dy / maxN;
    auto filterSmallR = 2.0f*band.min_dy / maxN;
    auto scaleLargeR = filterLargeR*filterLargeR;
    auto scaleSmallR = filterSmallR*filterSmallR;
    // loop over rows
    for (auto j = 1; j < maxN / 2; j++)
    {
        auto row = j * maxN;
        auto backrow = (maxN - j)*maxN;
        auto rowFactLarge = exp(-(j*j) * scaleLargeR);
        auto rowFactSmall = exp(-(j*j) * scaleSmallR);
        // loop over columns
        for (auto col = 1; col < maxN / 2; col++)
        {
            auto backcol = maxN - col;
            auto colFactLarge = exp(-(col*col) * scaleLargeC);
            auto colFactSmall = exp(-(col*col) * scaleSmallC);
            auto factor = (((1 - rowFactLarge*colFactLarge) * rowFactSmall*colFactSmall));
            filter[col + row] *= factor;
            filter[col + backrow] *= factor;
            filter[backcol + row] *= factor;
            filter[backcol + backrow] *= factor;
        }
    }
    auto fixy = [&](float t) { return isinf(t) ? 0 : t; };
    auto rowmid = maxN * (maxN / 2);
    auto rowFactLarge = fixy(exp(-(maxN / 2)*(maxN / 2) * scaleLargeR));
    auto rowFactSmall = fixy(exp(-(maxN / 2)*(maxN / 2) * scaleSmallR));
    filter[maxN / 2] *= ((1 - rowFactLarge) * rowFactSmall);
    filter[rowmid] *= ((1 - rowFactLarge) * rowFactSmall);
    filter[maxN / 2 + rowmid] *= ((1 - rowFactLarge*rowFactLarge) * rowFactSmall*rowFactSmall);
    rowFactLarge = fixy(exp(-(maxN / 2)*(maxN / 2) * scaleLargeR));
    rowFactSmall = fixy(exp(-(maxN / 2)*(maxN / 2) * scaleSmallR));
    for (auto col = 1; col < maxN / 2; col++)
    {
        auto backcol = maxN - col;
        auto colFactLarge = exp(-(col*col) * scaleLargeC);
        auto colFactSmall = exp(-(col*col) * scaleSmallC);
        filter[col] *= ((1 - colFactLarge) * colFactSmall);
        filter[backcol] *= ((1 - colFactLarge) * colFactSmall);
        filter[col + rowmid] *= ((1 - colFactLarge*rowFactLarge) * colFactSmall*rowFactSmall);
        filter[backcol + rowmid] *= ((1 - colFactLarge*rowFactLarge) * colFactSmall*rowFactSmall);
    }
    // loop along column 0 and expanded_width/2
    auto colFactLarge = fixy(exp(-(maxN / 2)*(maxN / 2) * scaleLargeC));
    auto colFactSmall = fixy(exp(-(maxN / 2)*(maxN / 2) * scaleSmallC));
    for (auto j = 1; j < maxN / 2; j++)
    {
        auto row = j * maxN;
        auto backrow = (maxN - j)*maxN;
        rowFactLarge = exp(-(j*j) * scaleLargeC);
        rowFactSmall = exp(-(j*j) * scaleSmallC);
        filter[row] *= ((1 - rowFactLarge) * rowFactSmall);
        filter[backrow] *= ((1 - rowFactLarge) * rowFactSmall);
        filter[row + maxN / 2] *= ((1 - rowFactLarge*colFactLarge) * rowFactSmall*colFactSmall);
        filter[backrow + maxN / 2] *= ((1 - rowFactLarge*colFactLarge) * rowFactSmall*colFactSmall);
    }
    filter[0] = (band.remove_dc) ? 0 : filter[0];
}
You can see the code where I use this here: https://github.com/kandel3/DPM_PhaseRetrieval
【Discussion】:

While I haven't verified that this helps in this case (the interiors of the eggs may show up prominently under this form of edge detection), it is certainly interesting and sent me down a rabbit hole of band-pass filters in image processing.

【Answer 2】: Compute the alpha and beta values of the image:

image = cv::imread("F:\\Dilated.jpg");
int x, y;
int a = 0;      // variable used in the loop
int count = 0;  // variable used in the loop
for (int y = 0; y < image.rows; y++)
{
    for (int x = 0; x < image.cols; x++)
    {
        for (int c = 0; c < 3; c++)
        {
            image.at<Vec3b>(y, x)[c] =
                saturate_cast<uchar>(alpha * (image.at<Vec3b>(y, x)[c]) + beta);
        }
    }
}
【Discussion】:

How do you compute alpha and beta? It looks like you are doing some sort of linear regression fit, but I don't understand what you are fitting.