Convert UIImage to grayscale keeping image quality

Posted: 2017-03-03 21:28:42

Question:

I have this extension (found in Obj-C and converted by me to Swift 3) to get the same UIImage but in grayscale:

public func getGrayScale() -> UIImage {

    let imgRect = CGRect(x: 0, y: 0, width: width, height: height)

    let colorSpace = CGColorSpaceCreateDeviceGray()

    let context = CGContext(data: nil, width: Int(width), height: Int(height), bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue).rawValue)
    context?.draw(self.cgImage!, in: imgRect)

    let imageRef = context!.makeImage()
    let newImg = UIImage(cgImage: imageRef!)

    return newImg
}
I can see the gray image, but its quality is pretty bad... The only quality-related thing I can see is bitsPerComponent: 8 in the context constructor. But looking at Apple's documentation, this is what I found:

It shows that iOS only supports 8 bpc... so why can't I improve the quality?

Comments:

Check your width and height to make sure the original isn't 2x size.

Solution 1:

I would use Core Image, which maintains the quality.

func convertImageToBW(image: UIImage) -> UIImage {

    let filter = CIFilter(name: "CIPhotoEffectMono")

    // convert UIImage to CIImage and set as input

    let ciInput = CIImage(image: image)
    filter?.setValue(ciInput, forKey: "inputImage")

    // get output CIImage, render as CGImage first to retain proper UIImage scale

    let ciOutput = filter?.outputImage
    let ciContext = CIContext()
    let cgImage = ciContext.createCGImage(ciOutput!, from: (ciOutput?.extent)!)

    return UIImage(cgImage: cgImage!)
}

Depending on how you use this code, you may want to create the CIContext outside of it for performance reasons.
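As a hedged sketch of that suggestion (the names sharedCIContext and convertImageToBW are illustrative, not from the answer), the context could be hoisted out so it is created only once:

```swift
import CoreImage
import UIKit

// Sketch: create one CIContext up front and reuse it for every conversion,
// instead of allocating a new context per call (context creation is expensive).
let sharedCIContext = CIContext()

func convertImageToBW(image: UIImage) -> UIImage? {
    guard let filter = CIFilter(name: "CIPhotoEffectMono"),
          let ciInput = CIImage(image: image) else { return nil }
    filter.setValue(ciInput, forKey: kCIInputImageKey)

    // Render through the shared context; going via CGImage keeps scale/orientation intact.
    guard let ciOutput = filter.outputImage,
          let cgImage = sharedCIContext.createCGImage(ciOutput, from: ciOutput.extent) else { return nil }
    return UIImage(cgImage: cgImage, scale: image.scale, orientation: image.imageOrientation)
}
```

This variant also returns an optional instead of force-unwrapping, so a failed filter lookup degrades gracefully.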

Comments:

Solution 2:

Try the code below:

Note: the code has been updated and the bug fixed...

Code tested in Swift 3. originalImage is the image you are trying to convert.

Answer 1:

     var context = CIContext(options: nil)

Update: CIContext is the Core Image component that handles rendering; all Core Image processing is done in a CIContext. It is somewhat similar to a Core Graphics or OpenGL context. For more information, see the Apple Doc.

     func Noir() {
        let currentFilter = CIFilter(name: "CIPhotoEffectNoir")
        currentFilter!.setValue(CIImage(image: originalImage.image!), forKey: kCIInputImageKey)
        let output = currentFilter!.outputImage
        let cgimg = context.createCGImage(output!, from: output!.extent)
        let processedImage = UIImage(cgImage: cgimg!)
        originalImage.image = processedImage
     }

You may also want to consider the following filters, which can produce a similar effect:

CIPhotoEffectMono, CIPhotoEffectTonal

Output of Answer 1:

Output of Answer 2:

Improved answer:

Answer 2: auto-adjust the input image before applying the Core Image filter

var context = CIContext(options: nil)

func Noir() {

    // Auto adjustment of the input image
    var inputImage = CIImage(image: originalImage.image!)
    let options: [String: AnyObject] = [CIDetectorImageOrientation: 1 as AnyObject]
    let filters = inputImage!.autoAdjustmentFilters(options: options)

    for filter: CIFilter in filters {
        filter.setValue(inputImage, forKey: kCIInputImageKey)
        inputImage = filter.outputImage
    }
    let cgImage = context.createCGImage(inputImage!, from: inputImage!.extent)
    self.originalImage.image = UIImage(cgImage: cgImage!)

    // Apply the noir filter
    let currentFilter = CIFilter(name: "CIPhotoEffectTonal")
    currentFilter!.setValue(CIImage(image: UIImage(cgImage: cgImage!)), forKey: kCIInputImageKey)

    let output = currentFilter!.outputImage
    let cgimg = context.createCGImage(output!, from: output!.extent)
    let processedImage = UIImage(cgImage: cgimg!)
    originalImage.image = processedImage
}
Note: if you want to see better results, you should test your code on a real device rather than the simulator...

Comments:

Solution 3:

Here is a category in Objective-C. Note that, crucially, this version takes scale into account.

- (UIImage *)grayscaleImage {
    return [self imageWithCIFilter:@"CIPhotoEffectMono"];
}

- (UIImage *)imageWithCIFilter:(NSString *)filterName {
    CIImage *unfiltered = [CIImage imageWithCGImage:self.CGImage];
    CIFilter *filter = [CIFilter filterWithName:filterName];
    [filter setValue:unfiltered forKey:kCIInputImageKey];
    CIImage *filtered = [filter outputImage];
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgimage = [context createCGImage:filtered fromRect:CGRectMake(0, 0, self.size.width * self.scale, self.size.height * self.scale)];
    // Do not use initWithCIImage because that renders the filter each time the image is displayed.  This causes slow scrolling in tableviews.
    UIImage *image = [[UIImage alloc] initWithCGImage:cgimage scale:self.scale orientation:self.imageOrientation];
    CGImageRelease(cgimage);
    return image;
}
Comments:

But he's not the only one there :), thanks @arsenius. Yes, that's exactly what I needed :)

Solution 4:

Based on Joe's answer, it's easy to convert the original to B&W. But to restore the original image, refer to the following code:

var context = CIContext(options: nil)
var startingImage : UIImage = UIImage()

func Noir() {
    startingImage = imgView.image!
    var inputImage = CIImage(image: imgView.image!)!
    let options: [String: AnyObject] = [CIDetectorImageOrientation: 1 as AnyObject]
    let filters = inputImage.autoAdjustmentFilters(options: options)

    for filter: CIFilter in filters {
        filter.setValue(inputImage, forKey: kCIInputImageKey)
        inputImage = filter.outputImage!
    }
    let cgImage = context.createCGImage(inputImage, from: inputImage.extent)
    self.imgView.image = UIImage(cgImage: cgImage!)

    // Filter logic
    let currentFilter = CIFilter(name: "CIPhotoEffectNoir")
    currentFilter!.setValue(CIImage(image: UIImage(cgImage: cgImage!)), forKey: kCIInputImageKey)

    let output = currentFilter!.outputImage
    let cgimg = context.createCGImage(output!, from: output!.extent)
    let processedImage = UIImage(cgImage: cgimg!)
    imgView.image = processedImage
}

func Original() {
    imgView.image = startingImage
}
Comments:

This is not the way to restore the original photo. What if you add the filter twice and want to revert to the original photo? This solution won't work.

Solution 5:

Joe's answer as a UIImage extension for Swift 4, working correctly for different scales:

extension UIImage {
    var noir: UIImage {
        let context = CIContext(options: nil)
        let currentFilter = CIFilter(name: "CIPhotoEffectNoir")!
        currentFilter.setValue(CIImage(image: self), forKey: kCIInputImageKey)
        let output = currentFilter.outputImage!
        let cgImage = context.createCGImage(output, from: output.extent)!
        let processedImage = UIImage(cgImage: cgImage, scale: scale, orientation: imageOrientation)

        return processedImage
    }
}
Comments:

And how do you get the original image back? @SasukeUchiha See CodeBender's answer. I'm afraid there is no way to restore the original image, since most of the color information is lost after applying the noir filter.

Solution 6:

A Swift 4.0 extension that returns an optional UIImage to avoid any potential crashes.

import UIKit

extension UIImage {
    var noir: UIImage? {
        let context = CIContext(options: nil)
        guard let currentFilter = CIFilter(name: "CIPhotoEffectNoir") else { return nil }
        currentFilter.setValue(CIImage(image: self), forKey: kCIInputImageKey)
        if let output = currentFilter.outputImage,
            let cgImage = context.createCGImage(output, from: output.extent) {
            return UIImage(cgImage: cgImage, scale: scale, orientation: imageOrientation)
        }
        return nil
    }
}
To use it:

let image = UIImage(...)
let noirImage = image.noir // noirImage is an optional UIImage (UIImage?)

Comments:

How do I use this in code? Can you give an example? Works perfectly in Swift 5.1.

Solution 7:

All of the solutions above rely on CIImage, while UIImage will usually have a CGImage as its underlying image, not a CIImage. So it means you have to convert your underlying image to a CIImage at the beginning, and convert it back to a CGImage at the end (if you don't, constructing a UIImage with a CIImage will effectively do it for you).

Although it might be OK for many use cases, the conversion between CGImage and CIImage is not free: it can be slow, and it can create a big memory spike while converting.

So I would like to mention a completely different solution that doesn't require converting the image back and forth. It uses Accelerate, and it's perfectly described by Apple here.

Here is a playground example demonstrating both approaches.

import UIKit
import Accelerate

extension CIImage {

    func toGrayscale() -> CIImage? {

        guard let output = CIFilter(name: "CIPhotoEffectNoir", parameters: [kCIInputImageKey: self])?.outputImage else {
            return nil
        }

        return output
    }
}

extension CGImage {

    func toGrayscale() -> CGImage {

        guard let format = vImage_CGImageFormat(cgImage: self),
              // The source image buffer
              var sourceBuffer = try? vImage_Buffer(
                cgImage: self,
                format: format
              ),
              // The 1-channel, 8-bit vImage buffer used as the operation destination.
              var destinationBuffer = try? vImage_Buffer(
                width: Int(sourceBuffer.width),
                height: Int(sourceBuffer.height),
                bitsPerPixel: 8
              ) else {
            return self
        }

        // Declare the three coefficients that model the eye's sensitivity
        // to color.
        let redCoefficient: Float = 0.2126
        let greenCoefficient: Float = 0.7152
        let blueCoefficient: Float = 0.0722

        // Create a 1D matrix containing the three luma coefficients that
        // specify the color-to-grayscale conversion.
        let divisor: Int32 = 0x1000
        let fDivisor = Float(divisor)

        var coefficientsMatrix = [
            Int16(redCoefficient * fDivisor),
            Int16(greenCoefficient * fDivisor),
            Int16(blueCoefficient * fDivisor)
        ]

        // Use the matrix of coefficients to compute the scalar luminance by
        // returning the dot product of each RGB pixel and the coefficients
        // matrix.
        let preBias: [Int16] = [0, 0, 0, 0]
        let postBias: Int32 = 0

        vImageMatrixMultiply_ARGB8888ToPlanar8(
            &sourceBuffer,
            &destinationBuffer,
            &coefficientsMatrix,
            divisor,
            preBias,
            postBias,
            vImage_Flags(kvImageNoFlags)
        )

        // Create a 1-channel, 8-bit grayscale format that's used to
        // generate a displayable image.
        guard let monoFormat = vImage_CGImageFormat(
            bitsPerComponent: 8,
            bitsPerPixel: 8,
            colorSpace: CGColorSpaceCreateDeviceGray(),
            bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue),
            renderingIntent: .defaultIntent
        ) else {
            return self
        }

        // Create a Core Graphics image from the grayscale destination buffer.
        guard let result = try? destinationBuffer.createCGImage(format: monoFormat) else {
            return self
        }

        return result
    }
}
For testing, I used this image at full size.

let start = Date()
var prev = start.timeIntervalSinceNow * -1

func info(_ id: String) {
    print("\(id)\t: \(start.timeIntervalSinceNow * -1 - prev)")
    prev = start.timeIntervalSinceNow * -1
}

info("started")
let original = UIImage(named: "Golden_Gate_Bridge_2021.jpg")!
info("loaded UIImage(named)")

let cgImage = original.cgImage!
info("original.cgImage")
let cgImageToGreyscale = cgImage.toGrayscale()
info("cgImage.toGrayscale()")
let uiImageFromCGImage = UIImage(cgImage: cgImageToGreyscale, scale: original.scale, orientation: original.imageOrientation)
info("UIImage(cgImage)")

let ciImage = CIImage(image: original)!
info("CIImage(image: original)!")
let ciImageToGreyscale = ciImage.toGrayscale()!
info("ciImage.toGrayscale()")
let uiImageFromCIImage = UIImage(ciImage: ciImageToGreyscale, scale: original.scale, orientation: original.imageOrientation)
info("UIImage(ciImage)")

Results (in seconds):

The CGImage approach took about 1 second in total:

original.cgImage        : 0.5257829427719116
cgImage.toGrayscale()   : 0.46222901344299316
UIImage(cgImage)        : 0.1819549798965454

The CIImage approach took about 7 seconds in total:

CIImage(image: original)!   : 0.6055610179901123
ciImage.toGrayscale()       : 4.969912052154541
UIImage(ciImage)            : 2.395193934440613

When saving the images to disk as JPEG, the one created with CGImage was also 3 times smaller than the one created with CIImage (5 MB vs. 17 MB). The quality was fine on both images. Here is a small version that fits within SO's limits:
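The file-size comparison could be reproduced roughly like this (a hedged sketch building on the playground variables above; 0.9 is an arbitrary quality value, not the one used by the author):

```swift
import UIKit

// Sketch: encode both results as JPEG and compare the byte counts
// (uiImageFromCGImage / uiImageFromCIImage come from the playground above).
if let cgData = uiImageFromCGImage.jpegData(compressionQuality: 0.9),
   let ciData = uiImageFromCIImage.jpegData(compressionQuality: 0.9) {
    print("CGImage route: \(cgData.count) bytes")
    print("CIImage route: \(ciData.count) bytes")
}
```

Note that jpegData(compressionQuality:) renders the CIImage-backed UIImage on the fly, so the second call is also where the slower CIImage path shows up.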

Comments:
