Objective C: How to remove a white background from a JPG image without losing edge quality

Posted: 2016-08-07 21:19:39

Question:

I have a number of jpg images with white backgrounds that I load into my app, and I want to remove the white background programmatically. I have a function that does this, but it leaves slightly jagged edges around each image. Is there a way to blend those edges so they come out smooth?

My current method:

- (UIImage *)changeWhiteColorTransparent:(UIImage *)image
{
    CGImageRef rawImageRef = image.CGImage;

    const CGFloat colorMasking[6] = {222, 255, 222, 255, 222, 255};

    UIGraphicsBeginImageContextWithOptions(image.size, NO, [UIScreen mainScreen].scale);
    CGImageRef maskedImageRef = CGImageCreateWithMaskingColors(rawImageRef, colorMasking);

    CGContextTranslateCTM(UIGraphicsGetCurrentContext(), 0.0, image.size.height);
    CGContextScaleCTM(UIGraphicsGetCurrentContext(), 1.0, -1.0);

    CGContextDrawImage(UIGraphicsGetCurrentContext(), CGRectMake(0, 0, image.size.width, image.size.height), maskedImageRef);
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    CGImageRelease(maskedImageRef);
    UIGraphicsEndImageContext();
    return result;
}

I know I could change the color masking values, but I don't think any combination of them will produce a smooth picture without the white background.

Here is an example:

The method also strips out extra near-white pixels inside the image itself:

I think the ideal approach would be to change the alpha of white pixels based on how close they are to pure white, rather than simply removing them all outright. Any ideas would be appreciated.
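For what it's worth, here is a minimal sketch of that "graded alpha" idea. It is my own illustration, not the accepted answer: the method name is made up, and the 222-255 fade range simply mirrors the masking values used above.

- (UIImage *)imageByFadingWhite:(UIImage *)image
{
    CGImageRef cgImage = image.CGImage;
    size_t width  = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    // Draw the image into an RGBA bitmap we can edit pixel by pixel.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    uint8_t *pixels = (uint8_t *)calloc(width * height * 4, sizeof(uint8_t));
    CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
                                             colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);

    for (size_t i = 0; i < width * height; i++) {
        uint8_t *p = pixels + i * 4;                    // bytes are R, G, B, A
        uint8_t r = p[0], g = p[1], b = p[2];
        uint8_t whiteness = MIN(r, MIN(g, b));          // how close the pixel is to pure white
        if (whiteness > 222) {
            // Fade from fully opaque at 222 down to fully transparent at 255.
            CGFloat alpha = 1.0 - (whiteness - 222) / 33.0;
            p[0] = (uint8_t)(r * alpha);                // premultiply the color channels
            p[1] = (uint8_t)(g * alpha);
            p[2] = (uint8_t)(b * alpha);
            p[3] = (uint8_t)(alpha * 255);
        }
    }

    CGImageRef outCG = CGBitmapContextCreateImage(ctx);
    UIImage *result = [UIImage imageWithCGImage:outCG scale:image.scale orientation:image.imageOrientation];
    CGImageRelease(outCG);
    CGContextRelease(ctx);
    free(pixels);
    return result;
}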

Comments:

This is called chroma key compositing. Processing it with a kernel is probably simpler than working it out pixel by pixel. The CIFilter documentation has an example, and GPUImage has examples as well.
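That chroma-key suggestion can be sketched with Core Image's CIColorCube filter, which remaps colors through a lookup table. The cube below fades alpha as a pixel approaches pure white; the 0.9 cutoff and the method name are only illustrative and are not from this thread.

- (UIImage *)changeWhiteToTransparentWithColorCube:(UIImage *)image
{
    const unsigned int size = 64;
    size_t cubeDataSize = size * size * size * 4 * sizeof(float);
    float *cubeData = (float *)malloc(cubeDataSize);
    float *c = cubeData;

    // Cube layout: red varies fastest, then green, then blue (RGBA, premultiplied).
    for (unsigned int b = 0; b < size; b++) {
        for (unsigned int g = 0; g < size; g++) {
            for (unsigned int r = 0; r < size; r++) {
                float rf = (float)r / (size - 1);
                float gf = (float)g / (size - 1);
                float bf = (float)b / (size - 1);
                // Treat "whiteness" as the minimum channel value and fade alpha
                // out as the color approaches pure white.
                float whiteness = MIN(rf, MIN(gf, bf));
                float alpha = (whiteness > 0.9f) ? (1.0f - (whiteness - 0.9f) / 0.1f) : 1.0f;
                *c++ = rf * alpha;
                *c++ = gf * alpha;
                *c++ = bf * alpha;
                *c++ = alpha;
            }
        }
    }

    NSData *data = [NSData dataWithBytesNoCopy:cubeData length:cubeDataSize freeWhenDone:YES];
    CIFilter *cube = [CIFilter filterWithName:@"CIColorCube"];
    [cube setValue:@(size) forKey:@"inputCubeDimension"];
    [cube setValue:data forKey:@"inputCubeData"];
    [cube setValue:[CIImage imageWithCGImage:image.CGImage] forKey:kCIInputImageKey];

    CIImage *output = cube.outputImage;
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgImage = [context createCGImage:output fromRect:output.extent];
    UIImage *result = [UIImage imageWithCGImage:cgImage scale:image.scale orientation:image.imageOrientation];
    CGImageRelease(cgImage);
    return result;
}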

#import "UIImage+FloodFill.h"
//https://github.com/Chintan-Dave/UIImageScanlineFloodfill

#define Mask8(x) ( (x) & 0xFF )
#define R(x) ( Mask8(x) )
#define G(x) ( Mask8(x >> 8 ) )
#define B(x) ( Mask8(x >> 16) )
#define A(x) ( Mask8(x >> 24) )
#define RGBAMake(r, g, b, a) ( Mask8(r) | Mask8(g) << 8 | Mask8(b) << 16 | Mask8(a) << 24 )

@interface UIImage (BackgroundRemoval)
// Simple removal (flat white background)
- (UIImage *)floodFillRemove;
// Removal for images with a gradient background (uses GPUImage)
- (UIImage *)complexReoveBackground;
@end

@implementation UIImage (BackgroundRemoval)
- (UIImage *)maskImageWithMask:(UIImage *)maskImage
{
  CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

  CGImageRef maskImageRef = [maskImage CGImage];

  // create a bitmap graphics context the size of the mask image
  CGContextRef mainViewContentContext = CGBitmapContextCreate(NULL, maskImage.size.width, maskImage.size.height, 8, 0, colorSpace, (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
  CGColorSpaceRelease(colorSpace);

  if (mainViewContentContext == NULL) {
    return nil;
  }

  CGFloat ratio = 0;

  ratio = maskImage.size.width / self.size.width;

  if (ratio * self.size.height < maskImage.size.height) {
    ratio = maskImage.size.height / self.size.height;
  }

  CGRect rect1 = CGRectMake(0, 0, maskImage.size.width, maskImage.size.height);
  CGRect rect2 = CGRectMake(-((self.size.width * ratio) - maskImage.size.width) / 2,
                            -((self.size.height * ratio) - maskImage.size.height) / 2,
                            self.size.width * ratio,
                            self.size.height * ratio);

  CGContextClipToMask(mainViewContentContext, rect1, maskImageRef);
  CGContextDrawImage(mainViewContentContext, rect2, self.CGImage);

  // Create a CGImageRef of the bitmap content, and then
  // release that bitmap context
  CGImageRef newImage = CGBitmapContextCreateImage(mainViewContentContext);
  CGContextRelease(mainViewContentContext);

  UIImage *theImage = [UIImage imageWithCGImage:newImage];

  CGImageRelease(newImage);

  // return the image
  return theImage;
}
- (UIImage *)floodFillRemove
{
  //1 flood-fill the white background from the top-left corner with magenta
  UIImage *processedImage = [self floodFillFromPoint:CGPointMake(0, 0) withColor:[UIColor magentaColor] andTolerance:0];

  CGImageRef inputCGImage = processedImage.CGImage;
  UInt32 *inputPixels;
  NSUInteger inputWidth = CGImageGetWidth(inputCGImage);
  NSUInteger inputHeight = CGImageGetHeight(inputCGImage);

  CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

  NSUInteger bytesPerPixel = 4;
  NSUInteger bitsPerComponent = 8;

  NSUInteger inputBytesPerRow = bytesPerPixel * inputWidth;

  inputPixels = (UInt32 *)calloc(inputHeight * inputWidth, sizeof(UInt32));

  CGContextRef context = CGBitmapContextCreate(inputPixels, inputWidth, inputHeight, bitsPerComponent, inputBytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);

  CGContextDrawImage(context, CGRectMake(0, 0, inputWidth, inputHeight), inputCGImage);

  //2 make the magenta background pixels transparent, keep everything else as-is
  for (NSUInteger j = 0; j < inputHeight; j++) {
    for (NSUInteger i = 0; i < inputWidth; i++) {
      UInt32 *currentPixel = inputPixels + (j * inputWidth) + i;
      UInt32 color = *currentPixel;

      if (R(color) == 255 && G(color) == 0 && B(color) == 255) {
        *currentPixel = RGBAMake(0, 0, 0, 0);
      } else {
        *currentPixel = RGBAMake(R(color), G(color), B(color), A(color));
      }
    }
  }
  CGImageRef newCGImage = CGBitmapContextCreateImage(context);

  //3 use the result as a mask over the original image
  UIImage *maskImage = [UIImage imageWithCGImage:newCGImage];
  CGImageRelease(newCGImage);
  CGColorSpaceRelease(colorSpace);
  CGContextRelease(context);
  free(inputPixels);

  UIImage *result = [self maskImageWithMask:maskImage];

  //4
  return result;
}

@end

What if your image has a gradient background? Use the code below.

// Add this method to the same UIImage (BackgroundRemoval) category.
// Requires the GPUImage framework (e.g. #import "GPUImage.h").
- (UIImage *)complexReoveBackground
{
  GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:self];

  // Run edge detection so the flood fill has a boundary to stop at.
  GPUImagePrewittEdgeDetectionFilter *filter = [[GPUImagePrewittEdgeDetectionFilter alloc] init];
  [filter setEdgeStrength:0.04];

  [stillImageSource addTarget:filter];
  [filter useNextFrameForImageCapture];
  [stillImageSource processImage];

  UIImage *resultImage = [filter imageFromCurrentFramebuffer];

  UIImage *processedImage = [resultImage floodFillFromPoint:CGPointMake(0, 0) withColor:[UIColor magentaColor] andTolerance:0];

  CGImageRef inputCGImage = processedImage.CGImage;
  UInt32 *inputPixels;
  NSUInteger inputWidth = CGImageGetWidth(inputCGImage);
  NSUInteger inputHeight = CGImageGetHeight(inputCGImage);

  CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

  NSUInteger bytesPerPixel = 4;
  NSUInteger bitsPerComponent = 8;

  NSUInteger inputBytesPerRow = bytesPerPixel * inputWidth;

  inputPixels = (UInt32 *)calloc(inputHeight * inputWidth, sizeof(UInt32));

  CGContextRef context = CGBitmapContextCreate(inputPixels, inputWidth, inputHeight, bitsPerComponent, inputBytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);

  CGContextDrawImage(context, CGRectMake(0, 0, inputWidth, inputHeight), inputCGImage);

  // Build a black-and-transparent mask: magenta (background) becomes transparent,
  // everything else becomes opaque black.
  for (NSUInteger j = 0; j < inputHeight; j++) {
    for (NSUInteger i = 0; i < inputWidth; i++) {
      UInt32 *currentPixel = inputPixels + (j * inputWidth) + i;
      UInt32 color = *currentPixel;

      if (R(color) == 255 && G(color) == 0 && B(color) == 255) {
        *currentPixel = RGBAMake(0, 0, 0, 0);
      } else {
        *currentPixel = RGBAMake(0, 0, 0, 255);
      }
    }
  }
  CGImageRef newCGImage = CGBitmapContextCreateImage(context);
  UIImage *maskImage = [UIImage imageWithCGImage:newCGImage];
  CGImageRelease(newCGImage);
  CGColorSpaceRelease(colorSpace);
  CGContextRelease(context);
  free(inputPixels);

  // Soften the mask edges slightly before applying it.
  GPUImagePicture *maskImageSource = [[GPUImagePicture alloc] initWithImage:maskImage];

  GPUImageGaussianBlurFilter *blurFilter = [[GPUImageGaussianBlurFilter alloc] init];
  [blurFilter setBlurRadiusInPixels:0.7];
  [maskImageSource addTarget:blurFilter];
  [blurFilter useNextFrameForImageCapture];
  [maskImageSource processImage];

  UIImage *blurMaskImage = [blurFilter imageFromCurrentFramebuffer];
  //return blurMaskImage;
  UIImage *result = [self maskImageWithMask:blurMaskImage];

  return result;
}

You can download the sample code from here: sample code
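As a rough usage sketch (the header name and image name below are hypothetical; floodFillRemove and complexReoveBackground are the category methods defined above):

#import "UIImage+BackgroundRemoval.h"   // hypothetical header containing the category above

UIImage *photo  = [UIImage imageNamed:@"product_on_white"];   // example asset
UIImage *simple = [photo floodFillRemove];                    // flat white background
UIImage *graded = [photo complexReoveBackground];             // gradient background (requires GPUImage)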

Comments:

I know this is late, but thanks! The first code snippet worked great after using a flood-fill tolerance and adding some extra code to remove the leftover magenta.

The correct link to the sample is here
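The comment doesn't show that extra cleanup code, but one plausible version is to relax the exact-equality magenta test inside the answer's pixel loop into a tolerance check (the 30-level margin here is only a guess):

// Hypothetical replacement for the strict equality check in the pixel loop above:
// treat anything close to pure magenta as background and clear it.
if (R(color) > 225 && G(color) < 30 && B(color) > 225) {
  *currentPixel = RGBAMake(0, 0, 0, 0);
} else {
  *currentPixel = RGBAMake(R(color), G(color), B(color), A(color));
}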
