iOS - Issues cropping an image to a detected face

Posted: 2014-07-28 20:09:06

【Question】:

I'm trying to crop a UIImage down to a face that I've detected using the built-in Core Image face detection. I seem to be detecting the face correctly, but when I crop the UIImage to the face's bounds, the result is nowhere near right. My face detection code looks like this:

-(NSArray *)facesForImage:(UIImage *)image {
    CIImage *ciImage = [CIImage imageWithCGImage:image.CGImage];

    CIContext *context = [CIContext contextWithOptions:nil];
    NSDictionary *opts = @{CIDetectorAccuracy : CIDetectorAccuracyHigh};

    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace context:context options:opts];
    NSArray *features = [detector featuresInImage:ciImage];

    return features;
}
...and the code to crop the image looks like this:

-(UIImage *)imageCroppedToFaceAtIndex:(NSInteger)index forImage:(UIImage *)image {
    NSArray *faces = [self facesForImage:image];
    if((index < 0) || (index >= faces.count)) {
        DDLogError(@"Invalid face index provided");

        return nil;
    }

    CIFaceFeature *face = [faces objectAtIndex:index];
    CGRect faceBounds = face.bounds;

    CGImageRef imageRef = CGImageCreateWithImageInRect(image.CGImage, faceBounds);
    UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];

    return croppedImage;
}
I have an image with just one face in it that I'm using for testing, and the face seems to be detected without any problem. But the crop is way off. Any idea what might be wrong with this code?

【Comments】:

【Answer 1】:

For anyone else with a similar problem (converting CGImage coordinates to UIImage coordinates): I found this great article explaining how to use CGAffineTransform to do exactly what I was looking for.
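For reference, here is a minimal sketch of that conversion, assuming an unrotated (UIImageOrientationUp) image with a scale of 1 so that points and pixels coincide. The flip over the image height is the part the article explains; the rest mirrors the cropping method from the question:

-(UIImage *)imageCroppedToFaceAtIndex:(NSInteger)index forImage:(UIImage *)image {
    NSArray *faces = [self facesForImage:image];
    if((index < 0) || (index >= faces.count)) {
        return nil;
    }

    CIFaceFeature *face = [faces objectAtIndex:index];

    // Core Image uses a bottom-left origin; CGImageCreateWithImageInRect
    // expects a top-left origin. Flip the y-axis over the image height.
    CGAffineTransform transform = CGAffineTransformMakeScale(1, -1);
    transform = CGAffineTransformTranslate(transform, 0, -image.size.height);
    CGRect faceBounds = CGRectApplyAffineTransform(face.bounds, transform);

    CGImageRef imageRef = CGImageCreateWithImageInRect(image.CGImage, faceBounds);
    UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);

    return croppedImage;
}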

【Comments】:

【Answer 2】:

The code to translate the face geometry from Core Image to UIImage coordinates is tedious. I haven't messed with it for a while, but I remember it gave me fits, especially when dealing with rotated images.
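One way to sidestep the rotation headaches, if you don't need to preserve the EXIF orientation, is to redraw the image into an UIImageOrientationUp copy before running detection. A sketch (the helper name is mine):

// Hypothetical helper: redraws an image so its orientation is
// UIImageOrientationUp, so no rotation math is needed afterwards.
UIImage *NormalizedImage(UIImage *image) {
    if (image.imageOrientation == UIImageOrientationUp) {
        return image;
    }
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    UIImage *normalized = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return normalized;
}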

I suggest looking at the demo app "SquareCam", which you can find by searching the Xcode documentation. It draws red squares around faces, which is a good start.

Note that the rectangle you get from Core Image is always a square, and it sometimes crops too tightly. You may have to make your cropping rectangle taller and wider.
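As an illustration, a padding step might look like the following, reusing the faceBounds and image variables from the question's cropping method. The 15%/30% factors are arbitrary choices, not values from this answer:

// Widen and heighten the detector's square rect by arbitrary factors,
// then clamp it to the image bounds so CGImageCreateWithImageInRect
// doesn't receive an out-of-range rectangle.
CGRect paddedBounds = CGRectInset(faceBounds,
                                  -faceBounds.size.width * 0.15,
                                  -faceBounds.size.height * 0.30);
paddedBounds = CGRectIntersection(paddedBounds,
                                  CGRectMake(0, 0, image.size.width, image.size.height));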

【Comments】:

【Answer 3】:

This class will do the trick! It's a very flexible and handy addition to UIImage. https://github.com/kylestew/KSMagicalCrop

【Comments】:

【Answer 4】:

Doing it with this code worked for me.

CIImage* image = [CIImage imageWithCGImage:facePicture.image.CGImage];

// Container for the face attributes
UIView* faceContainer = [[UIView alloc] initWithFrame:facePicture.frame];

// flip faceContainer on y-axis to match coordinate system used by core image
[faceContainer setTransform:CGAffineTransformMakeScale(1, -1)];

// create a face detector - since speed is not an issue we'll use a high accuracy
// detector
CIDetector* detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:nil
                                          options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh
                                                                              forKey:CIDetectorAccuracy]];

// create an array containing all the detected faces from the detector
NSArray* features = [detector featuresInImage:image];

for(CIFaceFeature* faceFeature in features)
{
    // get the width of the face
    CGFloat faceWidth = faceFeature.bounds.size.width;

    // create a UIView using the bounds of the face
    UIView* faceView = [[UIView alloc] initWithFrame:faceFeature.bounds];

    if(faceFeature.hasLeftEyePosition)
    {
        // create a UIView with a size based on the width of the face
        leftEyeView = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.leftEyePosition.x - faceWidth*0.15,
                                                               faceFeature.leftEyePosition.y - faceWidth*0.15,
                                                               faceWidth*0.3,
                                                               faceWidth*0.3)];
    }

    if(faceFeature.hasRightEyePosition)
    {
        // create a UIView with a size based on the width of the face
        RightEyeView = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.rightEyePosition.x - faceWidth*0.15,
                                                                faceFeature.rightEyePosition.y - faceWidth*0.15,
                                                                faceWidth*0.3,
                                                                faceWidth*0.3)];
    }

    if(faceFeature.hasMouthPosition)
    {
        // create a UIView with a size based on the width of the face
        mouth = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.mouthPosition.x - faceWidth*0.2,
                                                         faceFeature.mouthPosition.y - faceWidth*0.2,
                                                         faceWidth*0.4,
                                                         faceWidth*0.4)];
    }

    [view addSubview:faceContainer];

    // convert the face rect from Core Image's bottom-left origin to the
    // top-left origin expected by CGImageCreateWithImageInRect
    CGFloat y = view.frame.size.height - (faceView.frame.origin.y + faceView.frame.size.height);

    CGRect rect = CGRectMake(faceView.frame.origin.x, y, faceView.frame.size.width, faceView.frame.size.height);

    CGImageRef imageRef = CGImageCreateWithImageInRect([<Original Image> CGImage], rect);
    croppedImage = [UIImage imageWithCGImage:imageRef];

    CGImageRelease(imageRef);

    //----cropped Image-------//

    UIImageView *img = [[UIImageView alloc] initWithFrame:CGRectMake(faceView.frame.origin.x, y,
                                                                     faceView.frame.size.width,
                                                                     faceView.frame.size.height)];

    img.image = croppedImage;
}

【Comments】:

Please let me know if there are any problems with this code.
