Curve Fitting in 2D Images With Missing Data

Posted: 2021-11-18 03:11:48

I have segmentation contours in an image and I want to connect them, assuming they are parameterized curves. I was considering some kind of curve fitting or curve tracing - but I don't know how to implement it.

I have segmentation contours that are exactly 1 px wide:

import itertools
import cv2

# Get all pixel directions
directions = list(itertools.product([-1, 0, 1], repeat=2))
directions.remove((0, 0))

# Load image
img = cv2.imread("image.png", cv2.IMREAD_COLOR)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # cv2.imread returns BGR, not RGB

cv2.imshow("Input", img)

# Find contours
_, thresh = cv2.threshold(gray, 25, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)

I know where the endpoints of all the contours are:

for cnt in contours:   
    ends = [] 

    # Get the ends of the contour
    # - the first pixel returned by findContours is not always one end of an open contour
    # - the last pixel is usually not the other end either

    for pix in cnt:
        pixel = tuple(pix[0])
        pixel_x, pixel_y = pixel

        total_neighbours = 0

        # Count pixel neighbours
        for plus_x, plus_y in directions:
            current_x = pixel_x + plus_x
            current_y = pixel_y + plus_y

            if current_y < 0 or current_y >= gray.shape[0]:
                continue

            if current_x < 0 or current_x >= gray.shape[1]:
                continue

            if gray[current_y][current_x]:
                total_neighbours += 1

        # If it is end pixel
        if total_neighbours == 1:
            ends.append(pixel)
            cv2.circle(img, pixel, 3, (0, 255, 255), 1)

As a human, I can see where the curves are and which contours belong to each distinct curve:

For easier human visualization, here is the original image:

But I don't know how to connect these segmentation contours programmatically. I tried making predictions with first and second derivatives, but it isn't good enough:

    # Make contour "predictions" by first derivative
    # - go from end to second end and calculate first derivative
    # - "predict" on second end contour connection

    for end in ends:
        pixel_x, pixel_y = end

        def predict(pixel_x, pixel_y, was):
            for plus_x, plus_y in directions:
                current_x = pixel_x + plus_x
                current_y = pixel_y + plus_y

                if current_y < 0 or current_y >= gray.shape[0]:
                    continue

                if current_x < 0 or current_x >= gray.shape[1]:
                    continue

                if (current_x, current_y) not in ends:
                    if (current_x, current_y) not in was:
                        if gray[current_y][current_x]:
                            was.append((current_x, current_y))
                            predict(current_x, current_y, was)

                else:
                    derivatives_x = []
                    derivatives_y = []

                    # Calculate derivative
                    for pix in range(len(was) - 3):
                        x1, y1 = was[pix]
                        x2, y2 = was[pix + 3]

                        derivatives_x.append(x1 - x2)
                        derivatives_y.append(y1 - y2)

                    if not derivatives_x:
                        continue

                    # Get last N derivatives and average them
                    avg_x = derivatives_x[-20:]
                    avg_y = derivatives_y[-20:]

                    avg_x = sum(avg_x) / len(avg_x)
                    avg_y = sum(avg_y) / len(avg_y)

                    # Predict N pixels
                    for i in range(7):
                        pos = round(x2 - avg_x * i), round(y2 - avg_y * i)
                        cv2.circle(img, pos, 1, (0, 0, 255), cv2.FILLED)

        predict(pixel_x, pixel_y, [])

cv2.imshow("First derivative", img)
cv2.waitKey(0)

Does anyone know how to fit some curves/splines/Béziers/... and identify the curves the way a human would?

Here are more binarized source images:

Thank you.

Comments:

- If you have these curves, along with the first few derivatives, I would think that's enough! What results are you getting?
- Some smoothing, or a wider radius for evaluating curvature, might help. Otherwise you're at the mercy of individual pixels. Just a guess.
- Yes, I ran into the same thing (changing how far back my derivative runs helps some cases and hurts others), so it probably needs a numerically tuned parameter. I'd suggest switching to trig-based derivatives, since they handle non-functional curves better.
- If you post your current code that takes people from your source image to the extrapolated points (in Python), and you don't accept my answer, I'll put a bounty on this question to try to get more ideas. (It took me about half my time just to catch up to where you are, so adding the code will make it much easier for others to jump in and try.)
- When I see a question like this, my first question is: how were these lines obtained, and can we do better given the original input image?

Answer 1:

Extrapolation is famously susceptible to initial conditions (in this case, the state of the spline where the cut was made). That said, I do believe that first and second derivatives are sufficient to solve these datasets reasonably well.

I think there are two main errors in your extrapolation code: (1) First- and second-derivative calculations on integer pixel data introduce extra noise tied to the discrete nature of integer space (splines are continuous). This can be solved by downsampling the integer-precision curve into a double-precision curve (using a voxel-grid filter or something similar). (2) Your method of computing the second derivative takes the Cartesian difference between two subsequent direction vectors along the spline. A better strategy is to instead take the change in angle between subsequent direction vectors. Besides being more accurate, this allows the spline to wrap back around on itself and still propagate its curvature without loss.
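Illustration only (a minimal Python sketch of both ideas, since the question is in Python; voxel_downsample and turning_angle are made-up helper names, not part of the C++ implementation below):

import numpy as np

def voxel_downsample(points, voxel_px=5):
    # Average integer contour pixels that fall into the same voxel_px x voxel_px bin,
    # producing a sparser, double-precision curve (ordering is handled separately).
    bins = {}
    for x, y in points:
        bins.setdefault((x // voxel_px, y // voxel_px), []).append((x, y))
    return [tuple(np.mean(pts, axis=0)) for pts in bins.values()]

def turning_angle(p1, p2, p3):
    # "Second derivative" as a change of direction: signed angle between the
    # segments p1->p2 and p2->p3, wrapped to [-pi, pi). Unlike a Cartesian
    # difference of directions, this still behaves when the curve wraps back
    # around on itself.
    v1 = np.subtract(p2, p1)
    v2 = np.subtract(p3, p2)
    a = np.arctan2(v2[1], v2[0]) - np.arctan2(v1[1], v1[0])
    return (a + np.pi) % (2 * np.pi) - np.pi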

The way I determine collisions is by extrapolating each curve against every other curve. After each extrapolation step, three fitness estimates are evaluated to find the local minimum (and thus the best fitness for that pair of curves): (1) the Cartesian distance between the endpoints of the two curves; (2) the angular distance between the endpoint directions of the two curves (a 180-degree offset is considered optimal); and (3) the cumulative extrapolation distance. The total fitness is a weighted sum of these three values. Extrapolation of each candidate pair continues until the total fitness stops improving. After every combination of curves has been tested, if the best fitness is below a configurable threshold, those two curves are combined and the process repeats.
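A rough Python sketch of that scoring loop, assuming the caller supplies the actual extrapolation as callables (step_both and endpoint_gap are placeholders, not functions from the code below); the default weights mirror the ones used in the C++ main below (1.0, 10.0, 0.05):

import math

def pair_fitness(dist, angle, travel, w_dist=1.0, w_angle=10.0, w_travel=0.05):
    # Weighted sum of the three estimates; opposing end directions (pi apart)
    # are ideal, so the angular term penalizes the deviation from pi.
    return w_dist * dist + w_angle * abs(math.pi - angle) + w_travel * travel

def best_pair_fitness(step_both, endpoint_gap, step_px=1):
    # step_both(): extrapolate both curves one step toward each other (caller-supplied).
    # endpoint_gap(): return (distance, angle) between the nearest endpoints (caller-supplied).
    # Keep stepping while the combined score improves; return the local minimum.
    travel = 0.0
    dist, angle = endpoint_gap()
    best = pair_fitness(dist, angle, travel)
    while True:
        step_both()
        travel += 2 * step_px
        dist, angle = endpoint_gap()
        score = pair_fitness(dist, angle, travel)
        if score > best:
            return best
        best = score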

Each time an ideal candidate pair for a spline merge is identified, both splines are extrapolated until they meet in the middle. The extrapolated points are then averaged with a sinusoidal weighting to improve the curvature/continuity of the spline. The three sets of points (spline 1, bridge, spline 2) are then reassembled and the spline is reconstructed.
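A small Python sketch of that sinusoidal blend, assuming the two extrapolated runs have already been trimmed to the same length and both ordered to run from spline 1 toward spline 2 (the C++ mergeSplines below does that bookkeeping itself):

import numpy as np

def sinusoidal_bridge(run1, run2):
    # Blend two overlapping extrapolated runs into one bridge: the weight shifts
    # from spline 1 (alpha = 0) to spline 2 (alpha near 1) along a quarter sine
    # wave, as in the sinusoidal interpolation step of mergeSplines below.
    n = len(run1)
    bridge = []
    for i in range(n):
        alpha = np.sin((np.pi / 2.0) * i / n)
        p1 = np.asarray(run1[i], dtype=float)
        p2 = np.asarray(run2[i], dtype=float)
        bridge.append(tuple((1.0 - alpha) * p1 + alpha * p2))
    return bridge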

Code (C++11, OpenCV):

#include <stdio.h>
#include <opencv2/opencv.hpp>
#include <string>
#include <vector>
#include <tuple>
#include <algorithm>
#include <cmath>

#ifndef M_PI
#define M_PI 3.14159
#endif

using namespace std;
using namespace cv;

double sqDist(Point p1, Point p2)
{
    return std::pow((double)p1.x - p2.x, 2) + std::pow((double)p1.y - p2.y, 2);
}

vector<Point2d> resampleContour(vector<Point> originalContour, int voxelWidth_px = 3)
{
    //quick and dirty voxel filter downsampling to 5px spacing
    vector<Point2d> outputCurve = vector<Point2d>();
    while (originalContour.size() > 0)
    {
        int binX = std::floor(originalContour[0].x / voxelWidth_px);
        int binY = std::floor(originalContour[0].y / voxelWidth_px);
        int sumX = originalContour[0].x;
        int sumY = originalContour[0].y;
        int count = 1;
        vector<Point> remainingPoints = vector<Point>();
        for (int i = 1; i < originalContour.size(); i++)
        {
            int tempBinX = std::floor(originalContour[i].x / voxelWidth_px);
            int tempBinY = std::floor(originalContour[i].y / voxelWidth_px);
            if (tempBinX == binX && tempBinY == binY)
            {
                sumX += originalContour[i].x;
                sumY += originalContour[i].y;
                count++;
            }
            else
            {
                remainingPoints.push_back(originalContour[i]);
            }
        }
        originalContour = remainingPoints;
        double aveX = (double)sumX / (double)count;
        double aveY = (double)sumY / (double)count;
        outputCurve.push_back(Point2d(aveX, aveY));
    }

    return outputCurve;
}

Point2d getNN(vector<Point2d> cloud, int targetIndex, int &nnIndex, double &dist)
{
    nnIndex = -1;
    double shortestDist = 0;
    for (int i = 0; i < cloud.size(); i++)
    {
        if (i == targetIndex) { continue; }
        double tempDist = sqDist(cloud[targetIndex], cloud[i]);
        if (nnIndex == -1 || tempDist < shortestDist)
        {
            nnIndex = i;
            shortestDist = tempDist;
        }
    }
    dist = shortestDist;
    return cloud[nnIndex];
}

int getNN(vector<Point2d> cloud, Point2d pt)
{
    int nnIndex = -1;
    double shortestDist = 0;
    for (int i = 0; i < cloud.size(); i++)
    {
        double tempDist = sqDist(pt, cloud[i]);
        if (nnIndex == -1 || tempDist < shortestDist)
        {
            nnIndex = i;
            shortestDist = tempDist;
        }
    }
    return nnIndex;
}

vector<Point2d> getNNs(vector<Point2d> cloud, int targetIndex, int count)
{
    Point2d tPt = cloud[targetIndex];
    sort(cloud.begin(), cloud.end(), [tPt](const Point2d& p1, const Point2d& p2) {
        return sqDist(tPt, p1) < sqDist(tPt, p2);
        });

    vector<Point2d> outCloud = vector<Point2d>();
    for (int i = 1; i <= count && i < cloud.size(); i++) //first point will be same point
    {
        outCloud.push_back(cloud[i]);
    }
    return outCloud;
}

Vec2d normalize(Vec2d inVec)
{
    double length = sqrt(pow(inVec(0), 2) + pow(inVec(1), 2));
    if (length == 0)
    {
        throw;
    }
    inVec = inVec / length;
    return inVec;
}

double angleBetween(Vec2d vec1, Vec2d vec2, bool directionalFlag = false)
{
    vec1 = normalize(vec1);
    vec2 = normalize(vec2);

    double angle = (atan2(vec1(1), vec1(0)) - atan2(vec2(1), vec2(0)));
    if (angle > M_PI) { angle -= 2 * M_PI; }
    else if (angle <= -M_PI) { angle += 2 * M_PI; }

    if (!directionalFlag)
    { angle = abs(angle); }
    return angle;
}

vector<Point> roundPoints(vector<Point2d> cloud)
{
    vector<Point> outCloud = vector<Point>();
    for (int i = 0; i < cloud.size(); i++)
    {
        outCloud.push_back(Point(round(cloud[i].x), round(cloud[i].y)));
    }
    return outCloud;
}

class PointNorm
{
public:
    PointNorm() {}

    //point at p1 with direction p1-p2
    PointNorm(Point2d p1, Point2d p2)
    {
        X = p1.x;
        Y = p1.y;
        dir = Vec2d(p1.x - p2.x, p1.y - p2.y);
        dir = normalize(dir);
    }

    PointNorm(double x, double y, double dx, double dy)
    {
        X = x;
        Y = y;
        dir = Vec2d(dx, dy);
        dir = normalize(dir);
    }

    double X = 0;
    double Y = 0;
    Vec2d dir = Vec2d();

    static void dif(PointNorm pn1, PointNorm pn2, double& distance, double& angle)
    {
        distance = sqrt(pow(pn1.X - pn2.X, 2) + pow(pn1.Y - pn2.Y, 2));

        angle = angleBetween(pn1.dir, pn2.dir, false);
    }
};

vector<Point2d> orderCurve(vector<Point2d> cloud)
{
    if (cloud.size() < 3) { return cloud; }

    vector<Point2d> outCloud = vector<Point2d>();

    int currentIndex = cloud.size() / 2;
    double lastDist = -1;
    vector<Point2d> tempCloud = cloud;
    for (int i = 0; i < cloud.size(); i++)
    {
        vector<Point2d> twoNeighbors = getNNs(cloud, i, 5); //technically u can increase this count to increase endpoint confidence
        bool endFlag = true;
        for (int ii = 0; ii < twoNeighbors.size() - 1 && endFlag; ii++)
        {
            for (int iii = 0; iii < twoNeighbors.size() - 1 && endFlag; iii++)
            {
                if (ii == iii) { continue; }
                PointNorm pn1 = PointNorm(cloud[i], twoNeighbors[ii]);
                PointNorm pn2 = PointNorm(cloud[i], twoNeighbors[iii]);
                double tempAngle = angleBetween(pn1.dir, pn2.dir);

                if (tempAngle > M_PI / 2)
                { endFlag = false; }
            }
        }

        if (endFlag)
        {
            outCloud.push_back(cloud[i]);
            break;
        }
    }

    tempCloud = cloud;
    currentIndex = getNN(cloud, outCloud[0]);

    while (tempCloud.size() > 1)
    {
        int testIndex = 0;
        double testDist;
        Point2d tempPoint = getNN(tempCloud, currentIndex, testIndex, testDist);
        outCloud.push_back(tempPoint);
        tempCloud.erase(tempCloud.begin() + currentIndex);
        if (testIndex > currentIndex) { testIndex--; }
        currentIndex = testIndex;
    }
    return outCloud;
}

class ModSpline
{
public:

    ModSpline(vector<Point2d> orderedCurve, int dirEstimationOffset, bool _comboCurve = false)
    {
        //totalCurve = orderedCurve;
        totalCurve = vector<Point2d>();
        copy(orderedCurve.begin(), orderedCurve.end(), back_inserter(totalCurve));
        int end = orderedCurve.size() - 1;
        head = PointNorm(orderedCurve[0], orderedCurve[dirEstimationOffset]);//slight offset gives better estimation of direction if last point is noisy
        tail = PointNorm(orderedCurve[end], orderedCurve[end - dirEstimationOffset]);
        comboCurve = _comboCurve;
    }

    void stepExtrapolate_Head(int stepSize_px, int derivativeLookback)
    {
        int tempLookback = derivativeLookback;
        if (tempLookback > totalCurve.size() - 1) { tempLookback = totalCurve.size() - 1; }

        head.X += head.dir(0) * (double)stepSize_px;
        head.Y += head.dir(1) * (double)stepSize_px;
        totalCurve.insert(totalCurve.begin(), 1, Point2d(head.X, head.Y));
        {
            double dirChangeAngle = computeSecondDerivative(0, tempLookback);
            head.dir = normalize(rotateByAngle(head.dir, dirChangeAngle * (double)stepSize_px));
        }
    }

    void stepExtrapolate_Tail(int stepSize_px, int derivativeLookback)
    {
        int tempLookback = derivativeLookback;
        if (tempLookback > totalCurve.size() - 1) { tempLookback = totalCurve.size() - 1; }

        tail.X += tail.dir(0) * stepSize_px;
        tail.Y += tail.dir(1) * stepSize_px;
        totalCurve.push_back(Point2d(tail.X, tail.Y));
        {
            double dirChangeAngle = computeSecondDerivative(totalCurve.size() - 1, totalCurve.size() - (1 + tempLookback));
            tail.dir = normalize(rotateByAngle(tail.dir, dirChangeAngle * (double)stepSize_px));
        }
    }

    void stepExtrapolate_Bidirectional(int stepSize_px, int derivativeLookback)
    {
        stepExtrapolate_Head(stepSize_px, derivativeLookback);
        stepExtrapolate_Tail(stepSize_px, derivativeLookback);
    }

    void stepExtrapolate_SingleDirection(int stepSize_px, int derivativeLookback, bool headFlag)
    {
        if (headFlag) { stepExtrapolate_Head(stepSize_px, derivativeLookback); }
        else { stepExtrapolate_Tail(stepSize_px, derivativeLookback); }
    }

    static double estimateSpineDistance(ModSpline spl1, ModSpline spl2, int stepSize_px, int derivativeLookback, double distWeight, double angleWeight, double travelWeight)
    {
        bool dir_1_head, dir_2_head;
        getNearestEndpoints(spl1, spl2, dir_1_head, dir_2_head);

        //todo: run multiple times with adjusted dir and derivatives

        double lastAngle, lastDist, lastComboDist;
        PointNorm::dif(spl1.getEndpointnorm(dir_1_head), spl2.getEndpointnorm(dir_2_head), lastDist, lastAngle);
        lastAngle = abs(M_PI - lastAngle);//looking for opposing vectors;
        lastComboDist = lastAngle * angleWeight + lastDist * distWeight;

        double totalExtrapolationDistance = 0;
        while (true)
        {
            spl1.stepExtrapolate_SingleDirection(stepSize_px, derivativeLookback, dir_1_head);
            spl2.stepExtrapolate_SingleDirection(stepSize_px, derivativeLookback, dir_2_head);
            totalExtrapolationDistance += (stepSize_px + stepSize_px);
            double tempAngle, tempDist, tempComboDist;
            PointNorm::dif(spl1.getEndpointnorm(dir_1_head), spl2.getEndpointnorm(dir_2_head), tempDist, tempAngle);
            tempAngle = abs(M_PI - tempAngle);//looking for opposing vectors;

            tempComboDist = tempAngle * angleWeight + tempDist * distWeight + totalExtrapolationDistance * travelWeight;
            if (tempComboDist > lastComboDist) { break; }
            else
            {
                lastAngle = tempAngle;
                lastDist = tempDist;
                lastComboDist = tempComboDist;
            }
        }
        return lastComboDist;
    }

    static ModSpline mergeSplines(ModSpline spl1, ModSpline spl2, int stepSize_px, int derivativeLookback, int dirEstimationOffset, vector<vector<Point2d>> &debugClouds)
    {
        bool dir_1_head, dir_2_head;
        getNearestEndpoints(spl1, spl2, dir_1_head, dir_2_head);

        double lastAngle, lastDist, lastComboDist;
        int extrapolationCount = 0;
        PointNorm::dif(spl1.getEndpointnorm(dir_1_head), spl2.getEndpointnorm(dir_2_head), lastDist, lastAngle);
        while (true)
        {
            extrapolationCount += 1;
            spl1.stepExtrapolate_SingleDirection(stepSize_px, derivativeLookback, dir_1_head);
            spl2.stepExtrapolate_SingleDirection(stepSize_px, derivativeLookback, dir_2_head);
            double tempAngle, tempDist, tempComboDist;
            PointNorm::dif(spl1.getEndpointnorm(dir_1_head), spl2.getEndpointnorm(dir_2_head), tempDist, tempAngle);

            if (tempDist > lastDist) { break; }
            else { lastDist = tempDist; }
        }

        vector<Point2d> outCloud = vector<Point2d>();

        vector<Point2d> spline1Cloud = spl1.getTotalCurve();
        if (dir_1_head) { reverse(spline1Cloud.begin(), spline1Cloud.end()); }
        vector<Point2d> spline2Cloud = spl2.getTotalCurve();
        if (!dir_2_head) { reverse(spline2Cloud.begin(), spline2Cloud.end()); }

        //spline 1
        {
            vector<Point2d> debug = vector<Point2d>();
            for (int i = 0; i < spline1Cloud.size() - extrapolationCount; i++)
            {
                outCloud.push_back(spline1Cloud[i]);
                debug.push_back(spline1Cloud[i]);
            }
            debugClouds.push_back(debug);
        }
        //fused region
        {
            vector<Point2d> debug = vector<Point2d>();
            double theta = 0;
            double theta_Step = (M_PI / 2.0) / (double)extrapolationCount;
            for (int i = 1; i < extrapolationCount - 1; i++)
            {
                int invI = extrapolationCount - i;
                Point2d p1 = spline1Cloud[spline1Cloud.size() - (1 + invI)];
                Point2d p2 = spline2Cloud[i];

                double alpha = sin(theta);
                theta += theta_Step;

                //sinusoidal interpolation
                Point2d fusedPt = Point2d(alpha * p2.x + (1.0 - alpha) * p1.x, alpha * p2.y + (1.0 - alpha) * p1.y);

                outCloud.push_back(fusedPt);
                debug.push_back(fusedPt);
            }
            debugClouds.push_back(debug);
        }
        //spline 2
        {
            vector<Point2d> debug = vector<Point2d>();
            for (int i = extrapolationCount; i < spline2Cloud.size(); i++)
            {
                outCloud.push_back(spline2Cloud[i]);
                debug.push_back(spline2Cloud[i]);
            }
            debugClouds.push_back(debug);
        }
        return ModSpline(outCloud, dirEstimationOffset, true);
    }

    vector<Point2d> getTotalCurve()
    {
        return totalCurve;
    }

    int getPtCount() { return totalCurve.size(); }

    vector<PointNorm> getEndpoints()
    {
        vector<PointNorm> endPoints = vector<PointNorm>();
        endPoints.push_back(head);
        endPoints.push_back(tail);
        return endPoints;
    }

    bool isComboCurve() { return comboCurve; }

    Point2d getEndpoint(bool headFlag)
    {
        if (headFlag) { return Point2d(head.X, head.Y); }
        else { return Point2d(tail.X, tail.Y); }
    }

    PointNorm getEndpointnorm(bool headFlag)
    {
        if (headFlag) { return head; }
        else { return tail; }
    }

    static void getNearestEndpoints(ModSpline spl1, ModSpline spl2, bool& headFlag1, bool& headFlag2)
    {
        double dist1 = sqDist(Point2d(spl1.head.X, spl1.head.Y), Point2d(spl2.head.X, spl2.head.Y));
        double dist2 = sqDist(Point2d(spl1.head.X, spl1.head.Y), Point2d(spl2.tail.X, spl2.tail.Y));
        double dist3 = sqDist(Point2d(spl1.tail.X, spl1.tail.Y), Point2d(spl2.head.X, spl2.head.Y));
        double dist4 = sqDist(Point2d(spl1.tail.X, spl1.tail.Y), Point2d(spl2.tail.X, spl2.tail.Y));

        double minDist = min(dist1, min(dist2, min(dist3, dist4)));
        if (minDist == dist1) { headFlag1 = true; headFlag2 = true; }
        else if (minDist == dist2) { headFlag1 = true; headFlag2 = false; }
        else if (minDist == dist3) { headFlag1 = false; headFlag2 = true; }
        else if (minDist == dist4) { headFlag1 = false; headFlag2 = false; }
    }

private:
    vector<Point2d> totalCurve;
    PointNorm head;
    PointNorm tail;
    bool comboCurve = false;

    double computeSecondDerivative(int startIndex, int endIndex)
    {
        int increment = 1;
        if (endIndex < startIndex) { increment = -1; }

        double totAngle = 0;
        double totalDistance = 0;
        int count = 0;
        for (int i = startIndex; i + 2 * increment != endIndex; i += increment)
        {
            count++;

            Point2d p1 = totalCurve[i];
            Point2d p2 = totalCurve[i + increment];
            Point2d p3 = totalCurve[i + 2 * increment];

            Vec2d v1 = Vec2d(p1.x - p2.x, p1.y - p2.y);
            Vec2d v2 = Vec2d(p2.x - p3.x, p2.y - p3.y);

            double tempAngle = angleBetween(v1, v2, true);
            double aveDist = (sqrt(sqDist(p1, p2)) + sqrt(sqDist(p2, p3))) / 2.0;
            totalDistance += aveDist;
            totAngle += tempAngle;
        }

        totAngle = totAngle / totalDistance;
        return totAngle;
    }

    static Vec2d rotateByAngle(Vec2d inVec, double angle)
    {
        Vec2d outVec = Vec2d();
        outVec(0) = inVec(0) * cos(angle) - inVec(1) * sin(angle);
        outVec(1) = inVec(0) * sin(angle) + inVec(1) * cos(angle);
        return outVec;
    }
};


vector<Scalar> colorWheel = {
    Scalar(255,0,0),
    Scalar(0,255,0),
    Scalar(0,0,255),
    Scalar(255,255,0),
    Scalar(255,0,255),
    Scalar(0,255,255) };
int colorWheelIndex = 0;
void rotateColorWheel()
{
    colorWheelIndex++;
    if (colorWheelIndex >= colorWheel.size()) { colorWheelIndex = 0; }
}


int main(int argc, char** argv)
{
    //control downsampling and extrapolation (several these could be static members of modspline instead)
    const int stepSize_px = 1;//how far each extrapolation step travels
    const int derivativeLookback = 15;//how far back in curve is considered in determining 2nd derivative
    const int dirEstimationOffset = 2;//min 1.  determines how much deviations at ends of curves effect extrapolation (lower equals more impact)
    const int voxelWidth = 5; //voxel dimension for averaging intial pixels into point doubles

    //control spline similarity calculation (each of these contributes to a weighted sum of "spline distance")
    const double distWeighting = 1.0; //how much distance between two splines after optimal extrapolation
    const double angleWeighting = 10.0; //how much angle between two splines after optimal extrapolation
    const double travelWeighting = 0.05; //how far splines have to travel to connect
    const double maxTotalSplineError = 15; //unitless weighted combination of "spline distance"

    bool debugFlag = true;// flag to enable or disable debug visualizers


    std::string path = "C:/Local Software/voyDICOM/resources/images/SparseCurves6.png";

    Mat originalImg = cv::imread(path, cv::IMREAD_GRAYSCALE);
    if (debugFlag) { cv::imshow("orignal", originalImg); }

    Mat debugImg = cv::imread(path, cv::IMREAD_COLOR);

    Mat bwImg;
    threshold(originalImg, bwImg, 150, 255, THRESH_BINARY);

    vector<vector<Point> > contours;
    findContours(bwImg, contours, RETR_LIST, CHAIN_APPROX_NONE);

    vector<vector<Point2d>> dCurves = vector<vector<Point2d>>();

    Mat initialCurves = debugImg.clone();
    for (int i = 0; i < contours.size(); i++)
    {
        vector<Point2d> tempDCurve = resampleContour(contours[i], voxelWidth);
        if (tempDCurve.size() < 7) { continue; }

        tempDCurve = orderCurve(tempDCurve);
        dCurves.push_back(tempDCurve);

        vector<Point> tempCloud = roundPoints(tempDCurve);
        cv::polylines(initialCurves, { tempCloud }, false, colorWheel[colorWheelIndex], 2);
        circle(initialCurves, tempCloud[0], 5, Scalar(0, 255, 0), 1);
        circle(initialCurves, tempCloud[tempCloud.size() - 1], 5, Scalar(0, 0, 255), 1);
        rotateColorWheel();
    }
    if (debugFlag) { cv::imshow("initialCurves", initialCurves); }


    vector<ModSpline> travCurves = vector<ModSpline>();
    vector<ModSpline> tempCurves = vector<ModSpline>();
    for (int i = 0; i < dCurves.size(); i++)
    {
        travCurves.push_back(ModSpline(dCurves[i], dirEstimationOffset));
        tempCurves.push_back(ModSpline(dCurves[i], dirEstimationOffset));
    }


    if (debugFlag)
    {
        for (int i = 0; i < 25; i++)
        {
            colorWheelIndex = 0;
            Mat extrapViewer = debugImg.clone();
            for (int ii = 0; ii < tempCurves.size(); ii++)
            {
                //if (!(ii == 7 || ii == 13)) { continue; }
                tempCurves[ii].stepExtrapolate_Bidirectional(stepSize_px, derivativeLookback);
                vector<Point2d> tempCloud = tempCurves[ii].getTotalCurve();
                for (int iii = 0; iii < tempCloud.size(); iii++)
                {
                    cv::circle(extrapViewer, tempCloud[iii], 1, colorWheel[colorWheelIndex], 1);
                }
                rotateColorWheel();
            }
            cv::imshow("extrapolation debug", extrapViewer);
            cv::waitKey(100);
        }
    }


    int fusionCounter = 1;
    while (true)
    {
        vector<tuple<int, int>> validMerges = vector<tuple<int, int>>();
        for (int i = 0; i < travCurves.size(); i++)
        {
            for (int ii = 0; ii < travCurves.size(); ii++)
            {
                if (i == ii) { continue; }
                double tempComboDist = ModSpline::estimateSpineDistance(
                    travCurves[i],
                    travCurves[ii],
                    stepSize_px,
                    derivativeLookback,
                    distWeighting,
                    angleWeighting,
                    travelWeighting);
                if (tempComboDist < maxTotalSplineError)
                {
                    validMerges.push_back(tuple<int, int>(i, ii));
                }
            }
        }

        if (validMerges.size() > 0)
        {
            vector<int> splineSizes = vector<int>();
            for (int i = 0; i < travCurves.size(); i++)
            {
                splineSizes.push_back(travCurves[i].getPtCount());
            }

            //sort valid merges by combined spline size (favor larger spline merges before smaller ones)
            sort(validMerges.begin(), validMerges.end(), [splineSizes](const tuple<int, int>& spl1, const tuple<int, int>& spl2) {
                return  splineSizes[get<0>(spl1)] + splineSizes[get<1>(spl1)] > splineSizes[get<0>(spl2)] + splineSizes[get<1>(spl2)];
                });

            int splineIndex1 = get<0>(validMerges[0]);
            int splineIndex2 = get<1>(validMerges[0]);

            vector<vector<Point2d>> debugClouds = vector<vector<Point2d>>();
            ModSpline newCurve = ModSpline::mergeSplines(
                travCurves[splineIndex1],
                travCurves[splineIndex2],
                stepSize_px,
                derivativeLookback,
                dirEstimationOffset,
                debugClouds);

            travCurves.erase(travCurves.begin() + max(splineIndex1, splineIndex2));
            travCurves.erase(travCurves.begin() + min(splineIndex1, splineIndex2));
            travCurves.push_back(newCurve);

            if (debugFlag)
            {
                Mat debugFusions = debugImg.clone();
                cv::polylines(debugFusions, { roundPoints(debugClouds[0]) }, false, Scalar(255, 0, 0), 2);
                cv::polylines(debugFusions, { roundPoints(debugClouds[1]) }, false, Scalar(0, 255, 0), 1);
                cv::polylines(debugFusions, { roundPoints(debugClouds[2]) }, false, Scalar(0, 0, 255), 2);
                cv::imshow("debugFusion" + std::to_string(fusionCounter++), debugFusions);
                cv::waitKey(500);
            }
        }
        else
        {
            break;
        }
    }

    Mat finalCurves = debugImg.clone();
    colorWheelIndex = 0;
    for (int i = 0; i < travCurves.size(); i++)
    {
        if (!travCurves[i].isComboCurve()) { continue; }
        cv::polylines(finalCurves, { roundPoints(travCurves[i].getTotalCurve()) }, false, colorWheel[colorWheelIndex], 2);
        rotateColorWheel();
    }
    cv::imshow("final curves", finalCurves);
    cv::imshow("initialCurves", initialCurves);
    cv::imshow("original", originalImg);

    cv::waitKey(0);
}

Other datasets (same parameters for all three datasets):

Final note: the variables at the top of main give full control over how strict the spline collision detection is. The next step in making this more robust would be to build up a large dataset and tweak the parameters as needed to get the best solution for each dataset. Then, by analyzing all of those individual configurations, you should be able to establish a central point and a confidence range for the settings, so that a single configuration works for the largest possible number of test images.

Comments:

- Yes, I was going to run some polynomial regression on it... but as you said, that doesn't work well for contours that can wrap back around on themselves.
- tbh the code I'm about to post is complete garbage... but I'm tired... I'll give it another pass tomorrow. Hopefully there are some bits and concepts in it that inspire or help you.
- Tomorrow I'll also try to break out the key control variables (right now they're just scattered around).
- Updated the code to expose the key variables (at the top of main). Also improved the spline merging (using sinusoidal averaging to improve curvature continuity). Collision detection has been improved as well.
- Thank you very much! Your code works amazingly well! I tried it myself on another dataset (I added a couple more images), but unfortunately I ran into some strange bugs. Could you please give it a try? I think the bug is in orderCurve, but I'm not sure.
- TIL: (Stack Overflow answers have a character limit) lol... Anyway, I found the bug in the endpoint detection and fixed it, made some improvements to the extrapolation as well, and slightly tweaked the parameters for the bug fixes and new images.
