Huge memory peak - CGContextDrawImage

Posted 2012-02-21 16:14:54

I use this code to scale and rotate images taken with the camera. When I run it I see a huge memory peak of about 20 MB, and in Instruments this line:

CGContextDrawImage(ctxt, orig, self.CGImage);

holds the 20 MB. Is that normal for a full-resolution photo? The iPhone 4S can cope with it, but older devices crash because of this code.

After rescaling the image I need it as NSData, so I use UIImageJPEGRepresentation(). Together these push the memory peak even higher - it hits 70 MB of memory use within a few seconds.

And yes, I have read almost every iOS camera-related question about memory usage, but none of them had an answer.

// WBImage.mm -- extra UIImage methods
// by allen brunson  march 29 2009

#include "WBImage.h"

static inline CGFloat degreesToRadians(CGFloat degrees)
{
    return M_PI * (degrees / 180.0);
}

static inline CGSize swapWidthAndHeight(CGSize size)
{
    CGFloat swap = size.width;

    size.width  = size.height;
    size.height = swap;

    return size;
}

@implementation UIImage (WBImage)

// rotate an image to any 90-degree orientation, with or without mirroring.
// original code by kevin lohman, heavily modified by yours truly.
// http://blog.logichigh.com/2008/06/05/uiimage-fix/

-(UIImage*)rotate:(UIImageOrientation)orient
{
    CGRect             bnds = CGRectZero;
    UIImage*           copy = nil;
    CGContextRef       ctxt = nil;
    CGRect             rect = CGRectZero;
    CGAffineTransform  tran = CGAffineTransformIdentity;

    bnds.size = self.size;
    rect.size = self.size;

    switch (orient)
    {
        case UIImageOrientationUp:
            return self;

        case UIImageOrientationUpMirrored:
            tran = CGAffineTransformMakeTranslation(rect.size.width, 0.0);
            tran = CGAffineTransformScale(tran, -1.0, 1.0);
            break;

        case UIImageOrientationDown:
            tran = CGAffineTransformMakeTranslation(rect.size.width,
                                                    rect.size.height);
            tran = CGAffineTransformRotate(tran, degreesToRadians(180.0));
            break;

        case UIImageOrientationDownMirrored:
            tran = CGAffineTransformMakeTranslation(0.0, rect.size.height);
            tran = CGAffineTransformScale(tran, 1.0, -1.0);
            break;

        case UIImageOrientationLeft:
            bnds.size = swapWidthAndHeight(bnds.size);
            tran = CGAffineTransformMakeTranslation(0.0, rect.size.width);
            tran = CGAffineTransformRotate(tran, degreesToRadians(-90.0));
            break;

        case UIImageOrientationLeftMirrored:
            bnds.size = swapWidthAndHeight(bnds.size);
            tran = CGAffineTransformMakeTranslation(rect.size.height,
                                                    rect.size.width);
            tran = CGAffineTransformScale(tran, -1.0, 1.0);
            tran = CGAffineTransformRotate(tran, degreesToRadians(-90.0));
            break;

        case UIImageOrientationRight:
            bnds.size = swapWidthAndHeight(bnds.size);
            tran = CGAffineTransformMakeTranslation(rect.size.height, 0.0);
            tran = CGAffineTransformRotate(tran, degreesToRadians(90.0));
            break;

        case UIImageOrientationRightMirrored:
            bnds.size = swapWidthAndHeight(bnds.size);
            tran = CGAffineTransformMakeScale(-1.0, 1.0);
            tran = CGAffineTransformRotate(tran, degreesToRadians(90.0));
            break;

        default:
            // orientation value supplied is invalid
            assert(false);
            return nil;
    }

    UIGraphicsBeginImageContext(rect.size);
    ctxt = UIGraphicsGetCurrentContext();

    switch (orient)
    {
        case UIImageOrientationLeft:
        case UIImageOrientationLeftMirrored:
        case UIImageOrientationRight:
        case UIImageOrientationRightMirrored:
            CGContextScaleCTM(ctxt, -1.0, 1.0);
            CGContextTranslateCTM(ctxt, -rect.size.height, 0.0);
            break;

        default:
            CGContextScaleCTM(ctxt, 1.0, -1.0);
            CGContextTranslateCTM(ctxt, 0.0, -rect.size.height);
            break;
    }

    CGContextConcatCTM(ctxt, tran);
    CGContextDrawImage(ctxt, bnds, self.CGImage);

    copy = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return copy;
}

-(UIImage*)rotateAndScaleFromCameraWithMaxSize:(CGFloat)maxSize
{
    UIImage*  imag = self;

    imag = [imag rotate:imag.imageOrientation];
    imag = [imag scaleWithMaxSize:maxSize];

    return imag;
}

-(UIImage*)scaleWithMaxSize:(CGFloat)maxSize
{
    return [self scaleWithMaxSize:maxSize quality:kCGInterpolationHigh];
}

-(UIImage*)scaleWithMaxSize:(CGFloat)maxSize
                    quality:(CGInterpolationQuality)quality
{
    CGRect        bnds = CGRectZero;
    UIImage*      copy = nil;
    CGContextRef  ctxt = nil;
    CGRect        orig = CGRectZero;
    CGFloat       rtio = 0.0;
    CGFloat       scal = 1.0;

    bnds.size = self.size;
    orig.size = self.size;
    rtio = orig.size.width / orig.size.height;

    if ((orig.size.width <= maxSize) && (orig.size.height <= maxSize))
    {
        return self;
    }

    if (rtio > 1.0)
    {
        bnds.size.width  = maxSize;
        bnds.size.height = maxSize / rtio;
    }
    else
    {
        bnds.size.width  = maxSize * rtio;
        bnds.size.height = maxSize;
    }

    UIGraphicsBeginImageContext(bnds.size);
    ctxt = UIGraphicsGetCurrentContext();

    scal = bnds.size.width / orig.size.width;

    CGContextSetInterpolationQuality(ctxt, quality);
    CGContextScaleCTM(ctxt, scal, -scal);
    CGContextTranslateCTM(ctxt, 0.0, -orig.size.height);

    CGContextDrawImage(ctxt, orig, self.CGImage);

    copy = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return copy;
}

@end


Answer 1:

I ended up using ImageIO - much less memory!

-(UIImage *)resizeImageToMaxDimension:(float)dimension withPath:(NSString *)path
{
    NSURL *imageUrl = [NSURL fileURLWithPath:path];
    CGImageSourceRef imageSource = CGImageSourceCreateWithURL((__bridge CFURLRef)imageUrl, NULL);

    NSDictionary *thumbnailOptions = [NSDictionary dictionaryWithObjectsAndKeys:
                                      (id)kCFBooleanTrue, (id)kCGImageSourceCreateThumbnailWithTransform,
                                      (id)kCFBooleanTrue, (id)kCGImageSourceCreateThumbnailFromImageAlways,
                                      [NSNumber numberWithFloat:dimension], (id)kCGImageSourceThumbnailMaxPixelSize,
                                      nil];
    CGImageRef thumbnail = CGImageSourceCreateThumbnailAtIndex(imageSource, 0, (__bridge CFDictionaryRef)thumbnailOptions);

    UIImage *resizedImage = [UIImage imageWithCGImage:thumbnail];

    CFRelease(thumbnail);
    CFRelease(imageSource);

    return resizedImage;
}


Answer 2:

That's right, it comes from the photos you take with the camera. Older devices have lower-resolution cameras, which means an image taken with an iPhone 3G is smaller (in resolution, and therefore in size) than one taken on an iPhone 4S. Images are usually stored compressed, but when they are opened in memory for any kind of manipulation they have to be decompressed, and then they really need more space than the file does: if I remember correctly, number_of_pixels_per_row * number_of_pixels_per_column * bytes_per_pixel.
Bye,
Andrea

Comments:

Thanks for your answer. But is there a solution for reducing the memory usage? You mentioned compression - maybe you can point me in the right direction.

Compression applies to files; when you operate on an image you need to decompress it. I still don't understand whether you are getting a crash. I suggest you read this article (link) and download the LargeImageDownsizing sample code from the Apple website.

Maybe one solution would be to memory-map the image onto a file with the mmap Unix function, but that could be very difficult, and it might also be slow.

Yes, my app crashes because of this. I looked at the method Apple provides, imageWithCGImage:scale:orientation:, but strangely it only changes the size property and not the image itself - the byte count stays essentially the same. Doesn't Apple offer a way to resize a UIImage? Maybe one could draw the image in a while loop inside an autorelease pool, reading only 1024 bytes at a time. But again, I think I'm missing a default Apple method - am I not?

Answer 3:

At the end of the method, right before return copy;, insert:

CGContextRelease(ctxt);

Comments:

I just tried that, but it doesn't seem to have any effect. I'm using ARC, so maybe ARC is handling this.

ARC does not handle "old" Core Foundation objects.
