Getting CGImage to aspect fit in Swift

Posted 2015-05-11

Question:

I'm working on an app that takes a background image from the camera roll, overlays a PNG on top of it, lets the user manipulate the PNG (pinch, zoom, rotate, etc.), and returns the composited image on the following view controller.

No matter what I try, I'm having trouble figuring out how to get my code to aspect-fit the background image. The overlay in the code below (topLayerImage.image) at least keeps its appearance when transformed. The background image, however, gets forced into shape. For example, the UIImageView is by default a vertical image sized for the phone's portrait view, and the resulting image comes back in that shape regardless of the original photo's aspect ratio.

I've seen a few Objective-C approaches to this, but after spending an entire afternoon on it I've had no luck. Does anyone know where I'm going wrong? Here's my code:

import UIKit

class TwoLayerViewController: UIViewController {

    @IBOutlet weak var ghostHolderView: UIView!
    @IBOutlet weak var bottomLayerImage: UIImageView!
    @IBOutlet weak var topLayerImage: UIImageView!
    @IBOutlet weak var amountSlider: UISlider!
    @IBOutlet weak var nextPageButton: UIButton!

    var originalPhoto: UIImage?
    var chosenGhostPhoto: UIImage?
    var newImage: UIImage?
    var newBGImage: UIImage?

    override func viewDidLoad() {
        super.viewDidLoad()

        // Do any additional setup after loading the view.

        bottomLayerImage.image = originalPhoto
        bottomLayerImage.contentMode = UIViewContentMode.ScaleAspectFit

        topLayerImage.image = chosenGhostPhoto
        topLayerImage.alpha = 1.0
        topLayerImage.contentMode = UIViewContentMode.ScaleAspectFit
    }

    @IBAction func handlePan(recognizer: UIPanGestureRecognizer) {
        let translation = recognizer.translationInView(self.view)
        if let view = recognizer.view {
            view.center = CGPoint(x: view.center.x + translation.x,
                y: view.center.y + translation.y)
        }
        recognizer.setTranslation(CGPointZero, inView: self.view)
    }

    @IBAction func handlePinch(recognizer: UIPinchGestureRecognizer) {
        if let view = recognizer.view {
            view.transform = CGAffineTransformScale(view.transform,
                recognizer.scale, recognizer.scale)
            recognizer.scale = 1
        }
    }

    @IBAction func handleRotate(recognizer: UIRotationGestureRecognizer) {
        if let view = recognizer.view {
            view.transform = CGAffineTransformRotate(view.transform, recognizer.rotation)
            var transform: CGAffineTransform = view.transform
            var angle: CGFloat = atan2(transform.b, transform.a)
            println(angle)
        }
    }

    @IBAction func sliderChangeAmount(sender: UISlider) {

        //        let sliderValue = CGFloat(sender.value)
        //
        //        topLayerImage.alpha = sliderValue

    }

    @IBAction func combineImagesButton(sender: AnyObject) {

        let originalWidth = originalPhoto!.size.width
        let originalHeight = originalPhoto!.size.height
        let finalWidth = bottomLayerImage.frame.size.width
        let finalHeight = bottomLayerImage.frame.size.height
        let finalSize: CGSize = CGSizeMake(finalWidth, finalHeight)
        UIGraphicsBeginImageContext(finalSize)

        let size = CGSizeApplyAffineTransform(originalPhoto!.size, CGAffineTransformMakeScale(finalWidth / originalWidth, finalHeight / originalHeight))

        originalPhoto!.drawInRect(CGRect(origin: CGPointZero, size: size))

        // originalPhoto!.drawInRect(bottomLayerImage.frame)

        let imgCon = UIGraphicsGetCurrentContext()
        CGContextTranslateCTM(imgCon, 0.5 * finalSize.width, 0.5 * finalSize.height)
        CGContextRotateCTM(imgCon, atan2(topLayerImage.transform.b, topLayerImage.transform.a))

        chosenGhostPhoto!.drawInRect(topLayerImage.frame, blendMode: kCGBlendModeNormal, alpha: 0.8)

        newImage = UIGraphicsGetImageFromCurrentImageContext()

        UIGraphicsEndImageContext()
        println(newImage)
        println(finalSize)
        println(originalPhoto!.size.width, originalPhoto!.size.height)
    }

    func getBoundingRectAfterRotation(rect: CGRect, angle: Float) -> CGRect {
        let newWidth: Float = Float(rect.size.width) * abs(cosf(angle)) + Float(rect.size.height) * fabs(sinf(angle))

        let newHeight: Float = Float(rect.size.height) * fabs(cosf(angle)) + Float(rect.size.width) * fabs(sinf(angle))

        let newX: Float = Float(rect.origin.x) + ((Float(rect.size.width) - newWidth) / 2)
        let newY: Float = Float(rect.origin.y) + ((Float(rect.size.height) - newHeight) / 2)
        let rotatedRect: CGRect = CGRectMake(CGFloat(newX), CGFloat(newY), CGFloat(newWidth), CGFloat(newHeight))
        return rotatedRect
    }

    override func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject?) {
        if let vc = segue.destinationViewController as? CombinedLayerViewController {
            vc.fullImage = self.newImage
        }
    }
}


UPDATE: Timothy's suggestion, after some minor tweaks, works for the background problem, but I'm running into trouble with the overlay: apparently with the scale, and, I think, the origin. Here's my sloppy code:

import UIKit

class TwoLayerViewController: UIViewController {

    @IBOutlet weak var ghostHolderView: UIView!
    @IBOutlet weak var bottomLayerImage: UIImageView!
    @IBOutlet weak var topLayerImage: UIImageView!
    @IBOutlet weak var amountSlider: UISlider!
    @IBOutlet weak var nextPageButton: UIButton!

    var originalPhoto: UIImage?
    var chosenGhostPhoto: UIImage?
    var newImage: UIImage?
    var newBGImage: UIImage?

    override func viewDidLoad() {
        super.viewDidLoad()

        // Do any additional setup after loading the view.

        bottomLayerImage.image = originalPhoto
        bottomLayerImage.contentMode = UIViewContentMode.ScaleAspectFit

        topLayerImage.image = chosenGhostPhoto
//        topLayerImage.alpha = 1.0
//        topLayerImage.contentMode = UIViewContentMode.ScaleAspectFit
    }

    @IBAction func handlePan(recognizer: UIPanGestureRecognizer) {
        let translation = recognizer.translationInView(self.view)
        if let view = recognizer.view {
            view.center = CGPoint(x: view.center.x + translation.x,
                y: view.center.y + translation.y)
        }
        recognizer.setTranslation(CGPointZero, inView: self.view)
    }

    @IBAction func handlePinch(recognizer: UIPinchGestureRecognizer) {
        if let view = recognizer.view {
            view.transform = CGAffineTransformScale(view.transform,
                recognizer.scale, recognizer.scale)
            recognizer.scale = 1
        }
    }

    @IBAction func handleRotate(recognizer: UIRotationGestureRecognizer) {
        if let view = recognizer.view {
            view.transform = CGAffineTransformRotate(view.transform, recognizer.rotation)
            var transform: CGAffineTransform = view.transform
            var angle: CGFloat = atan2(transform.b, transform.a)
            println(angle)
        }
    }

    @IBAction func sliderChangeAmount(sender: UISlider) {

        //        let sliderValue = CGFloat(sender.value)
        //
        //        topLayerImage.alpha = sliderValue

    }

    @IBAction func combineImagesButton(sender: AnyObject) {

        var originalPhotoFrame: CGRect?
        var backgroundLayerFrame: CGRect?
        var ghostOriginalPhotoFrame: CGRect?
        var ghostBackgroundLayerFrame: CGRect?

        if bottomLayerImage.image!.size.width > bottomLayerImage.image!.size.height {

            originalPhotoFrame = CGRect(x: 0, y: 0, width: bottomLayerImage.image!.size.width, height: bottomLayerImage.image!.size.height)
            backgroundLayerFrame = CGRect(x: 0, y: 0, width: bottomLayerImage.frame.size.width, height: bottomLayerImage.frame.size.height)

        } else {

            originalPhotoFrame = CGRect(x: 0, y: 0, width: bottomLayerImage.frame.size.width, height: bottomLayerImage.frame.height)
            backgroundLayerFrame = CGRect(x: 0, y: 0, width: bottomLayerImage.frame.size.width, height: bottomLayerImage.frame.size.height)
        }

        // Now figure out whether the ScaleAspectFit was horizontally or vertically bound.
        let horizScale = backgroundLayerFrame!.width / originalPhotoFrame!.width
        let vertScale = backgroundLayerFrame!.height / originalPhotoFrame!.height
        let myScale = min(horizScale, vertScale)

        // So we don't need to do each of these calculations on a separate line, but for ease of explanation…
        // Now we can calculate the size to scale originalPhoto
        let scaledSize = CGSize(width: originalPhotoFrame!.size.width * myScale,
            height: originalPhotoFrame!.size.height * myScale)
        // And now we need to center originalPhoto inside backgroundLayerFrame
        let scaledOrigin = CGPoint(x: (backgroundLayerFrame!.width - scaledSize.width) / 2,
            y: (backgroundLayerFrame!.height - scaledSize.height) / 2)

        // Put it all together
        let scaledPhotoRect = CGRect(origin: scaledOrigin, size: scaledSize)

        //////

        if topLayerImage.image!.size.width > topLayerImage.image!.size.height {

            ghostOriginalPhotoFrame = CGRect(x: 0, y: 0, width: topLayerImage.image!.size.width, height: topLayerImage.image!.size.height)
            ghostBackgroundLayerFrame = CGRect(x: topLayerImage.frame.origin.x, y: topLayerImage.frame.origin.y, width: topLayerImage.frame.size.width, height: topLayerImage.frame.size.height)

        } else {

            ghostOriginalPhotoFrame = CGRect(x: topLayerImage.frame.origin.x, y: topLayerImage.frame.origin.y, width: topLayerImage.frame.size.width, height: topLayerImage.frame.height)
            ghostBackgroundLayerFrame = CGRect(x: topLayerImage.frame.origin.x, y: topLayerImage.frame.origin.y, width: topLayerImage.frame.size.width, height: topLayerImage.frame.size.height)
        }

        // Now figure out whether the ScaleAspectFit was horizontally or vertically bound.
        let ghostHorizScale = ghostBackgroundLayerFrame!.width / ghostOriginalPhotoFrame!.width
        let ghostVertScale = ghostBackgroundLayerFrame!.height / ghostOriginalPhotoFrame!.height
        let ghostMyScale = min(ghostHorizScale, ghostVertScale)

        // So we don't need to do each of these calculations on a separate line, but for ease of explanation…
        // Now we can calculate the size to scale originalPhoto
        let ghostScaledSize = CGSize(width: ghostOriginalPhotoFrame!.size.width * ghostMyScale,
            height: ghostOriginalPhotoFrame!.size.height * ghostMyScale)
        // And now we need to center originalPhoto inside backgroundLayerFrame
        let ghostScaledOrigin = CGPoint(x: (ghostBackgroundLayerFrame!.width - ghostScaledSize.width) / 2,
            y: (ghostBackgroundLayerFrame!.height - ghostScaledSize.height) / 2)

        // Put it all together
        let ghostScaledPhotoRect = CGRect(origin: ghostScaledOrigin, size: ghostScaledSize)

        //////

//        UIGraphicsBeginImageContext(scaledSize)
        UIGraphicsBeginImageContextWithOptions(scaledSize, false, 0)

        originalPhoto!.drawInRect(CGRect(origin: CGPointZero, size: scaledSize))

        let imgCon = UIGraphicsGetCurrentContext()
//        CGContextTranslateCTM( imgCon, 0.5 * finalSize.width, 0.5 * finalSize.height )
        CGContextRotateCTM(imgCon, atan2(topLayerImage.transform.b, topLayerImage.transform.a))

//        chosenGhostPhoto!.drawInRect(CGRect(origin: topLayerImage.frame.origin, size: ghostScaledSize))

        chosenGhostPhoto!.drawInRect(ghostScaledPhotoRect, blendMode: kCGBlendModeNormal, alpha: 0.8)

//        chosenGhostPhoto!.drawInRect(CGRect(origin: chosenGhostPhoto, size: ghostScaledSize), blendMode: kCGBlendModeNormal, alpha: 0.8)

        newImage = UIGraphicsGetImageFromCurrentImageContext()

        UIGraphicsEndImageContext()
        println(newImage)
//        println(finalSize)
        println(originalPhoto!.size.width, originalPhoto!.size.height)
    }

//    func getBoundingRectAfterRotation(rect: CGRect, angle: Float) -> CGRect {
//        let newWidth : Float = Float(rect.size.width) * abs(cosf(angle)) + Float(rect.size.height) * fabs(sinf(angle))
//
//        let newHeight : Float = Float(rect.size.height) * fabs(cosf(angle)) + Float(rect.size.width) * fabs(sinf(angle))
//
//        let newX : Float = Float(rect.origin.x) + ((Float(rect.size.width) - newWidth) / 2)
//        let newY : Float = Float(rect.origin.y) + ((Float(rect.size.height) - newHeight) / 2)
//        let rotatedRect : CGRect = CGRectMake(CGFloat(newX), CGFloat(newY), CGFloat(newWidth), CGFloat(newHeight))
//        return rotatedRect
//    }

    override func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject?) {
        if let vc = segue.destinationViewController as? CombinedLayerViewController {
            vc.fullImage = self.newImage
        }
    }
}

Answer 1:

The screenshots make this much clearer. The first screenshot shows exactly what ScaleAspectFit does: the landscape image is scaled without distortion to fill the full, narrower width of bottomLayerImage, and is centered vertically within the larger height. But: nothing about bottomLayerImage.frame changes in this code. One thing that makes this easier to visualize is setting bottomLayerImage.backgroundColor = .greenColor(), so you can see that bottomLayerImage takes up the same amount of screen space and simply centers originalPhoto vertically.

The transforms set on the parent view in handlePinch and handleRotate will adjust bottomLayerImage.frame, since it's stored in the parent view's coordinates, but the work done by ScaleAspectFit is not applied to the frame.

This means that in combineImagesButton, the math computing size is wrong, and it distorts originalPhoto when you draw it into the rect. To get the effect you're looking for, you need to calculate the scale that ScaleAspectFit applied. I whipped up a playground to show the math. There are optimizations to be made and so on, but it should convey the gist of what you need.

//: Playground - noun: a place where people can play

import UIKit

// Just some test values. Adjust them to see different results
let originalPhotoFrame = CGRect(x: 0,y: 0, width: 3000, height: 2000)
let backgroundLayerFrame = CGRect(x: 0, y: 0, width: 640, height: 960)

// Now figure out whether the ScaleAspectFit was horizontally or vertically bound.
let horizScale = backgroundLayerFrame.width / originalPhotoFrame.width
let vertScale = backgroundLayerFrame.height / originalPhotoFrame.height
let myScale = min(horizScale, vertScale)

// So we don't need to do each of these calculations on a separate line, but for ease of explanation…
// Now we can calculate the size to scale originalPhoto
let scaledSize = CGSize(width: originalPhotoFrame.size.width * myScale,
    height: originalPhotoFrame.size.height * myScale)
// And now we need to center originalPhoto inside backgroundLayerFrame
let scaledOrigin = CGPoint(x: (backgroundLayerFrame.width - scaledSize.width) / 2,
    y: (backgroundLayerFrame.height - scaledSize.height) / 2)

// Put it all together
let scaledPhotoRect = CGRect(origin: scaledOrigin, size: scaledSize)

You'll want to do the same thing for chosenGhostPhoto and topLayerImage, and combine it with the math you're already doing for the ghost, but that should be straightforward given this example.
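One way to reuse the playground math for both layers is to wrap it in a small helper. This is only a sketch in the same Swift 1.x style as the rest of the thread; the function name aspectFitRect is made up for illustration, and nothing here is part of the original answer. (As an aside, AVFoundation ships an equivalent, AVMakeRectWithAspectRatioInsideRect, if you're willing to import it.)

```swift
import UIKit

// Hypothetical helper: returns the rect a given content size occupies inside
// a bounding rect under .ScaleAspectFit (scaled to fit without distortion,
// then centered).
func aspectFitRect(contentSize: CGSize, boundingRect: CGRect) -> CGRect {
    // Whichever axis is more constrained determines the scale.
    let horizScale = boundingRect.width / contentSize.width
    let vertScale = boundingRect.height / contentSize.height
    let scale = min(horizScale, vertScale)

    let scaledSize = CGSize(width: contentSize.width * scale,
        height: contentSize.height * scale)
    // Center the scaled content inside the bounding rect.
    let scaledOrigin = CGPoint(
        x: boundingRect.origin.x + (boundingRect.width - scaledSize.width) / 2,
        y: boundingRect.origin.y + (boundingRect.height - scaledSize.height) / 2)
    return CGRect(origin: scaledOrigin, size: scaledSize)
}

// With the playground's sample numbers: a 3000×2000 photo inside a 640×960
// frame is width-bound, so it scales to 640 wide and is centered vertically.
let fitted = aspectFitRect(CGSize(width: 3000, height: 2000),
    CGRect(x: 0, y: 0, width: 640, height: 960))
```

The same call then serves both the background (originalPhoto in bottomLayerImage.frame) and the ghost, rather than duplicating the min-scale math twice in combineImagesButton.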

Update 2015-06-03

OK, I finally had a chance to look at your updated code. A few things stand out.

1. Just to confirm: all three gesture recognizers have topLayerImage as recognizer.view, right? So they're only adjusting the ghost?
2. You commented out topLayerImage.contentMode = UIViewContentMode.ScaleAspectFit, which means that if the ghost's aspect ratio differs from topLayerImage's, it will be stretched. I suspect you want to uncomment that.
3. Setting originalPhotoFrame and backgroundLayerFrame in my playground was only because I didn't have all of your code to work with; they were meant as test/sample data. I'm not sure why you're doing what you're doing in combineImagesButton, where you check whether the image is wider than it is tall, but I think it's wrong. When computing myScale, you can use originalPhoto.size.[width|height] instead of originalPhotoFrame.[width|height], and instead of backgroundLayerFrame you can use bottomLayerImage.frame.
4. The scaledSize and scaledOrigin calculations in my playground only replicate what bottomLayerImage does because of .ScaleAspectFit: it stretches the image as much as possible without distortion, then centers the scaled image in the frame. Replicating that for the ghost doesn't work, because the recognizers adjust topLayerImage.transform, which means you can't use topLayerImage.frame (per the warning on the transform property in the UIView documentation). My first thought was to cache the original topLayerImage.frame in viewDidLoad and use the original values in the final math, but see #6 below for a caveat about rotation.
5. If you create the context at scaledSize and draw at the origin as you do, you don't need scaledOrigin. scaledOrigin is only used to compute scaledPhotoRect, which you're not using (so throw that out too).
6. Are you handling device rotation? If the user rotates from portrait to landscape, you'll need to adjust any of the ghost math. If you don't handle rotation, I'd suggest caching topLayerImage.frame in viewDidLoad, but note that if you support landscape you'll have to update it. I'm not sure what happens when you rotate the device while your view has a non-identity transform applied; I think the answer differs across iOS versions. Offhand, I think you'll need to stash your transform, set it back to identity, and read the frame out again, but that causes a visual pop. Maybe you can get away with calling layoutSubviews, grabbing your values, putting the transform back, and calling layoutSubviews again, so it all happens without an intervening render, but I'm not sure. In any case, be aware that you need to test what happens during rotation.
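The caching idea in point 4 above might look something like this. This is only a sketch I'm adding for illustration (the property name cachedGhostFrame is made up), and, per point 6, it assumes the interface never rotates:

```swift
import UIKit

class TwoLayerViewController: UIViewController {

    @IBOutlet weak var topLayerImage: UIImageView!

    // Hypothetical: the ghost's frame before any gesture transform is applied.
    // frame is only meaningful while transform is the identity, so capture it
    // after layout but before the user can touch anything.
    var cachedGhostFrame = CGRectZero

    override func viewDidLayoutSubviews() {
        super.viewDidLayoutSubviews()
        // Only cache while the transform is still untouched.
        if CGAffineTransformIsIdentity(topLayerImage.transform) {
            cachedGhostFrame = topLayerImage.frame
        }
    }
}
```

Then the ghost math in combineImagesButton reads cachedGhostFrame instead of topLayerImage.frame, which becomes unreliable the moment a pinch or rotate fires.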

What to do about the ghost

1. You need to apply two different transforms. The first replicates what .ScaleAspectFit did for you initially, and the second is the combined change the user made with the gesture recognizers.
2. If you change the ghostScaledSize code to work like the scaledSize I listed above, but using the cached topLayerImage.frame values, and then call chosenGhostPhoto!.drawInRect(CGRect(origin: CGPointZero, size: scaledSize)), you should get a ghost at the original image's size in the top-left corner of the final image. I would do that first and confirm it works.
3. Once you've gotten that far, things get pretty nice. Now you can take bottomLayerImage.transform and apply it to the context with CGContextConcatCTM before drawing the ghost. (Or you may need the inverse of the transform? I'm not sure; it should be obvious in testing.) Doing this as a single operation is important, because if you apply the rotation & translation separately, you have to do them in the same order iOS did. Besides, at this point you already have the transform, so there's no reason to decompose it and then apply each operation separately. You want to apply both parts at once anyway.
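Step 3 might be sketched roughly like this (untested, and not from the original answer). It assumes a ghostFitRect computed from the cached, untransformed frame as described above, and, as noted, you may need the transform's inverse instead:

```swift
// Sketch: draw the ghost with the user's combined gesture transform applied
// to the context in one CGContextConcatCTM call, using the same Swift 1.x /
// iOS 8-era CoreGraphics API as the rest of this thread.
let ctx = UIGraphicsGetCurrentContext()
CGContextSaveGState(ctx)

// Concatenate about the ghost's center rather than the context origin, so
// rotation and scaling pivot where the on-screen gestures did.
let center = CGPoint(x: CGRectGetMidX(ghostFitRect), y: CGRectGetMidY(ghostFitRect))
CGContextTranslateCTM(ctx, center.x, center.y)
CGContextConcatCTM(ctx, topLayerImage.transform)  // or CGAffineTransformInvert(topLayerImage.transform) if the result comes out mirrored
CGContextTranslateCTM(ctx, -center.x, -center.y)

chosenGhostPhoto!.drawInRect(ghostFitRect, blendMode: kCGBlendModeNormal, alpha: 0.8)

// Restore the context so later drawing is unaffected by the ghost's transform.
CGContextRestoreGState(ctx)
```

The save/restore pair keeps the concatenated transform from leaking into any drawing done after the ghost.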

Comments:

OK, here are some screenshots. First you can see how someone might adjust topLayerImage (the ghost), while bottomLayerImage is just one chosen from the camera roll. The second image is the result. The third is the ghost photo repositioned so you can see it better. That image appears to keep its aspect ratio, which can't be said for the background image. imgur.com/7ymXJG7,6OJdwVW,jD9byza imgur.com/7ymXJG7,6OJdwVW,jD9byza#1 imgur.com/7ymXJG7,6OJdwVW,jD9byza#2

On closer inspection, I stand corrected: the ghost does show some distortion.

@Chris Did the code in my edited answer help with your problem? If so, can you accept the answer? If not, can you tell me more about the remaining difficulties?

I just gave it a try tonight. It seems to fix the background problem, but I can't seem to work out how to apply it correctly to the ghost. I'll update the code above and post some screenshots (please excuse that I haven't cleaned it up much). These are in before/after order: imgur.com/fP5ATEv,ppHfE6v,2aNkZ0r,NKaDfOt,Z9Chefg,0UxZs7G#0

Timothy, any thoughts on the update I posted? I clearly can't figure out how to apply it.
