Convert image to grayscale
【Posted】2009-08-19 09:55:57

【Question】I'm trying to do it the following way:
#define bytesPerPixel 4
#define bitsPerComponent 8
-(unsigned char*) getBytesForImage: (UIImage*)pImage
{
    CGImageRef image = [pImage CGImage];
    NSUInteger width = CGImageGetWidth(image);
    NSUInteger height = CGImageGetHeight(image);
    NSUInteger bytesPerRow = bytesPerPixel * width;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = malloc(height * width * 4);
    CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
    CGContextRelease(context);
    return rawData;
}
-(UIImage*) processImage: (UIImage*)pImage
{
    DebugLog(@"processing image");
    unsigned char *rawData = [self getBytesForImage: pImage];

    NSUInteger width = pImage.size.width;
    NSUInteger height = pImage.size.height;
    DebugLog(@"width: %d", width);
    DebugLog(@"height: %d", height);
    NSUInteger bytesPerRow = bytesPerPixel * width;

    for (int xCoordinate = 0; xCoordinate < width; xCoordinate++)
    {
        for (int yCoordinate = 0; yCoordinate < height; yCoordinate++)
        {
            int byteIndex = (bytesPerRow * yCoordinate) + xCoordinate * bytesPerPixel;

            //Getting original colors
            float red = ( rawData[byteIndex] / 255.f );
            float green = ( rawData[byteIndex + 1] / 255.f );
            float blue = ( rawData[byteIndex + 2] / 255.f );

            //Processing pixel data
            float averageColor = (red + green + blue) / 3.0f;
            red = averageColor;
            green = averageColor;
            blue = averageColor;

            //Assigning new color components
            rawData[byteIndex] = (unsigned char) red * 255;
            rawData[byteIndex + 1] = (unsigned char) green * 255;
            rawData[byteIndex + 2] = (unsigned char) blue * 255;
        }
    }

    NSData* newPixelData = [NSData dataWithBytes: rawData length: height * width * 4];
    UIImage* newImage = [UIImage imageWithData: newPixelData];
    free(rawData);

    DebugLog(@"image processed");
    return newImage;
}
So when I want to convert an image, I just call processImage:
imageToDisplay.image = [self processImage: image];
But imageToDisplay doesn't display. What could be the problem?
Thanks.
【Comments】:
Which cheeky monkey favorited this without upvoting? Such a complete lack of generosity!

【Solution 1】I needed a version that preserves the alpha channel, so I modified the code posted by Dutchie432:
@implementation UIImage (grayscale)

typedef enum {
    ALPHA = 0,
    BLUE = 1,
    GREEN = 2,
    RED = 3
} PIXELS;

- (UIImage *)convertToGrayscale
{
    CGSize size = [self size];
    int width = size.width;
    int height = size.height;

    // the pixels will be painted to this array
    uint32_t *pixels = (uint32_t *) malloc(width * height * sizeof(uint32_t));

    // clear the pixels so any transparency is preserved
    memset(pixels, 0, width * height * sizeof(uint32_t));

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // create a context with RGBA pixels
    CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * sizeof(uint32_t), colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);

    // paint the bitmap to our context which will fill in the pixels array
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), [self CGImage]);

    for(int y = 0; y < height; y++) {
        for(int x = 0; x < width; x++) {
            uint8_t *rgbaPixel = (uint8_t *) &pixels[y * width + x];

            // convert to grayscale using recommended method: http://en.wikipedia.org/wiki/Grayscale#Converting_color_to_grayscale
            uint32_t gray = 0.3 * rgbaPixel[RED] + 0.59 * rgbaPixel[GREEN] + 0.11 * rgbaPixel[BLUE];

            // set the pixels to gray
            rgbaPixel[RED] = gray;
            rgbaPixel[GREEN] = gray;
            rgbaPixel[BLUE] = gray;
        }
    }

    // create a new CGImageRef from our context with the modified pixels
    CGImageRef image = CGBitmapContextCreateImage(context);

    // we're done with the context, color space, and pixels
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(pixels);

    // make a new UIImage to return
    UIImage *resultUIImage = [UIImage imageWithCGImage:image];

    // we're done with image now too
    CGImageRelease(image);

    return resultUIImage;
}

@end
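Since the method above is a category on UIImage, a call site is a single message send. A minimal illustrative example (the variable name is hypothetical):

// `photo` is any UIImage; the grayscale category must be imported at the call site.
UIImage *grayPhoto = [photo convertToGrayscale];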
【Comments】:
It works, but it doesn't support Retina displays; ruralcoder (below) updated it for that.

【Solution 2】Here's some code using only UIKit and the luminosity blend mode. It's a bit of a hack, but it works well.
// Transform the image in grayscale.
- (UIImage*) grayishImage: (UIImage*) inputImage
{
    // Create a graphic context.
    UIGraphicsBeginImageContextWithOptions(inputImage.size, YES, 1.0);
    CGRect imageRect = CGRectMake(0, 0, inputImage.size.width, inputImage.size.height);

    // Draw the image with the luminosity blend mode.
    // On top of a white background, this will give a black and white image.
    [inputImage drawInRect:imageRect blendMode:kCGBlendModeLuminosity alpha:1.0];

    // Get the resulting image.
    UIImage *filteredImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return filteredImage;
}
To preserve transparency, maybe you could set the opaque mode parameter of UIGraphicsBeginImageContextWithOptions to NO. Needs checking; a sketch of that variant follows.
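A minimal sketch of that suggested variant (untested, and note the comment below reports that opaque = NO does not in fact preserve alpha; the method name is illustrative):

// Variant of grayishImage: with a non-opaque context, as suggested above.
// NOTE: per the comment below, this reportedly does NOT preserve alpha.
- (UIImage *) grayishImageKeepingAlpha: (UIImage *) inputImage
{
    UIGraphicsBeginImageContextWithOptions(inputImage.size, NO, 1.0);   // opaque = NO
    CGRect imageRect = CGRectMake(0, 0, inputImage.size.width, inputImage.size.height);
    [inputImage drawInRect:imageRect blendMode:kCGBlendModeLuminosity alpha:1.0];
    UIImage *filteredImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return filteredImage;
}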
【Comments】:
For readers: UIGraphicsBeginImageContextWithOptions is only available on iOS 4+. Setting opaque to NO does not preserve alpha.
This method took 10 ms on my image, versus 2 ms for the accepted answer.

【Solution 3】
Based on Cam's code, with the ability to handle the scale of Retina displays.
- (UIImage *) toGrayscale
{
    const int RED = 1;
    const int GREEN = 2;
    const int BLUE = 3;

    // Create image rectangle with current image width/height
    CGRect imageRect = CGRectMake(0, 0, self.size.width * self.scale, self.size.height * self.scale);

    int width = imageRect.size.width;
    int height = imageRect.size.height;

    // the pixels will be painted to this array
    uint32_t *pixels = (uint32_t *) malloc(width * height * sizeof(uint32_t));

    // clear the pixels so any transparency is preserved
    memset(pixels, 0, width * height * sizeof(uint32_t));

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // create a context with RGBA pixels
    CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * sizeof(uint32_t), colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);

    // paint the bitmap to our context which will fill in the pixels array
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), [self CGImage]);

    for(int y = 0; y < height; y++) {
        for(int x = 0; x < width; x++) {
            uint8_t *rgbaPixel = (uint8_t *) &pixels[y * width + x];

            // convert to grayscale using recommended method: http://en.wikipedia.org/wiki/Grayscale#Converting_color_to_grayscale
            uint8_t gray = (uint8_t) ((30 * rgbaPixel[RED] + 59 * rgbaPixel[GREEN] + 11 * rgbaPixel[BLUE]) / 100);

            // set the pixels to gray
            rgbaPixel[RED] = gray;
            rgbaPixel[GREEN] = gray;
            rgbaPixel[BLUE] = gray;
        }
    }

    // create a new CGImageRef from our context with the modified pixels
    CGImageRef image = CGBitmapContextCreateImage(context);

    // we're done with the context, color space, and pixels
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(pixels);

    // make a new UIImage to return
    UIImage *resultUIImage = [UIImage imageWithCGImage:image
                                                 scale:self.scale
                                           orientation:UIImageOrientationUp];

    // we're done with image now too
    CGImageRelease(image);

    return resultUIImage;
}
【Comments】:
Works great, including on Retina. Thanks Ivan!
Works well even on PNGs with transparent backgrounds! Thanks!
Thank you so much! I needed a way to auto-generate "glyphs" based on constantly changing images, and this is perfect!
Awesome! Thank you very much! Runs well with no (worrying) performance hit.
I improved performance slightly by replacing the floats with integers. Using just one for loop and adding 4 to the rgbaPixel pointer, instead of computing its position on every iteration, can improve performance further (see the sketch below).
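A rough sketch of that optimization, assuming the RGBA buffer layout of the answer above (this rewrite is not from the original thread):

// Hypothetical single-loop replacement for the nested x/y loops in toGrayscale.
// Walks one uint8_t pointer across the whole buffer, advancing by 4 bytes per
// pixel instead of recomputing &pixels[y * width + x] on every iteration.
uint8_t *rgbaPixel = (uint8_t *) pixels;
for (int i = 0; i < width * height; i++, rgbaPixel += 4)
{
    uint8_t gray = (uint8_t) ((30 * rgbaPixel[RED] + 59 * rgbaPixel[GREEN] + 11 * rgbaPixel[BLUE]) / 100);
    rgbaPixel[RED] = gray;
    rgbaPixel[GREEN] = gray;
    rgbaPixel[BLUE] = gray;
}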
【Solution 4】
I like Mathieu Godart's answer, but it didn't seem to work for Retina or alpha images. Here's an updated version that seems to work for both for me:
- (UIImage*)convertToGrayscale
{
    UIGraphicsBeginImageContextWithOptions(self.size, NO, self.scale);
    CGRect imageRect = CGRectMake(0.0f, 0.0f, self.size.width, self.size.height);
    CGContextRef ctx = UIGraphicsGetCurrentContext();

    // Draw a white background
    CGContextSetRGBFillColor(ctx, 1.0f, 1.0f, 1.0f, 1.0f);
    CGContextFillRect(ctx, imageRect);

    // Draw the luminosity on top of the white background to get grayscale
    [self drawInRect:imageRect blendMode:kCGBlendModeLuminosity alpha:1.0f];

    // Apply the source image's alpha
    [self drawInRect:imageRect blendMode:kCGBlendModeDestinationIn alpha:1.0f];

    UIImage* grayscaleImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return grayscaleImage;
}
【Comments】:
Made my semi-transparent color images too bright.

【Solution 5】What exactly happens when you use this function? Is the function returning an invalid image, or is the display just not showing it correctly?
Here's the method I use to convert to grayscale.
- (UIImage *) convertToGreyscale:(UIImage *)i
{
    int kRed = 1;
    int kGreen = 2;
    int kBlue = 4;

    int colors = kGreen | kBlue | kRed;
    int m_width = i.size.width;
    int m_height = i.size.height;

    uint32_t *rgbImage = (uint32_t *) malloc(m_width * m_height * sizeof(uint32_t));
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(rgbImage, m_width, m_height, 8, m_width * 4, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipLast);
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
    CGContextSetShouldAntialias(context, NO);
    CGContextDrawImage(context, CGRectMake(0, 0, m_width, m_height), [i CGImage]);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // now convert to grayscale
    uint8_t *m_imageData = (uint8_t *) malloc(m_width * m_height);
    for(int y = 0; y < m_height; y++) {
        for(int x = 0; x < m_width; x++) {
            uint32_t rgbPixel = rgbImage[y * m_width + x];
            uint32_t sum = 0, count = 0;
            if (colors & kRed)   { sum += (rgbPixel >> 24) & 255; count++; }
            if (colors & kGreen) { sum += (rgbPixel >> 16) & 255; count++; }
            if (colors & kBlue)  { sum += (rgbPixel >> 8)  & 255; count++; }
            m_imageData[y * m_width + x] = sum / count;
        }
    }
    free(rgbImage);

    // convert from a gray scale image back into a UIImage
    uint8_t *result = (uint8_t *) calloc(m_width * m_height * sizeof(uint32_t), 1);

    // process the image back to rgb
    for(int i = 0; i < m_height * m_width; i++) {
        result[i * 4] = 0;
        int val = m_imageData[i];
        result[i * 4 + 1] = val;
        result[i * 4 + 2] = val;
        result[i * 4 + 3] = val;
    }

    // create a UIImage
    colorSpace = CGColorSpaceCreateDeviceRGB();
    context = CGBitmapContextCreate(result, m_width, m_height, 8, m_width * sizeof(uint32_t), colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipLast);
    CGImageRef image = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    UIImage *resultUIImage = [UIImage imageWithCGImage:image];
    CGImageRelease(image);
    free(m_imageData);

    // make sure the data will be released by giving it to an autoreleased NSData
    [NSData dataWithBytesNoCopy:result length:m_width * m_height];

    return resultUIImage;
}
【Comments】:
Thanks, this works, but the image I get back is rotated 90 degrees from the original. How do I fix that?
Wikipedia and others seem to suggest the correct distribution is 0.3·RED + 0.59·GREEN + 0.11·BLUE, rather than just averaging the sum of the three colors.
FYI, there's a memory leak in Dutchie432's answer: uint8_t *m_imageData = (uint8_t *) malloc(m_width * m_height); ... it's never freed and should be.
This code doesn't seem to convert the rgb image data to the equivalent gray levels correctly. It's worse than what @mahboudz describes in his otherwise valid comment, because it doesn't even average the three color components. Instead, due to what appears to be some kind of bug, it effectively just takes the green component of each pixel and uses it as the gray value. Since the eye is more sensitive to green than to the other two components, the answerer (and others) probably thought everything looked fine...
I think the problem is on the int colors = kGreen; line, which appears to force only the green component of the pixel to be processed. To correct it, try int colors = kGreen | kBlue | kRed;.

【Solution 6】
A different approach using CIFilter. It preserves the alpha channel and works with transparent backgrounds:
+ (UIImage *)convertImageToGrayScale:(UIImage *)image
{
    CIImage *inputImage = [CIImage imageWithCGImage:image.CGImage];
    CIContext *context = [CIContext contextWithOptions:nil];

    CIFilter *filter = [CIFilter filterWithName:@"CIColorControls"];
    [filter setValue:inputImage forKey:kCIInputImageKey];
    [filter setValue:@(0.0) forKey:kCIInputSaturationKey];

    CIImage *outputImage = filter.outputImage;
    CGImageRef cgImageRef = [context createCGImage:outputImage fromRect:outputImage.extent];

    UIImage *result = [UIImage imageWithCGImage:cgImageRef];
    CGImageRelease(cgImageRef);

    return result;
}
【Comments】:
This doesn't seem to preserve the size correctly; using it scaled my image up 2x. Since I can't edit my comment: use [UIImage imageWithCGImage:cgImageRef scale:self.scale orientation:self.imageOrientation]; to ensure Retina support. A sketch of that fix follows.
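A minimal sketch of that fix applied to the class method above (substituting image for the comment's self, since here the source image is a parameter rather than the receiver):

// Hypothetical Retina-aware replacement for the last lines of convertImageToGrayScale:
UIImage *result = [UIImage imageWithCGImage:cgImageRef
                                      scale:image.scale
                                orientation:image.imageOrientation];
CGImageRelease(cgImageRef);
return result;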
【Solution 7】A quick Swift extension on UIImage that preserves alpha:
extension UIImage {
    private func convertToGrayScaleNoAlpha() -> CGImageRef {
        let colorSpace = CGColorSpaceCreateDeviceGray();
        let bitmapInfo = CGBitmapInfo(CGImageAlphaInfo.None.rawValue)
        let context = CGBitmapContextCreate(nil, UInt(size.width), UInt(size.height), 8, 0, colorSpace, bitmapInfo)
        CGContextDrawImage(context, CGRectMake(0, 0, size.width, size.height), self.CGImage)
        return CGBitmapContextCreateImage(context)
    }

    /**
     Return a new image in shades of gray + alpha
     */
    func convertToGrayScale() -> UIImage {
        let bitmapInfo = CGBitmapInfo(CGImageAlphaInfo.Only.rawValue)
        let context = CGBitmapContextCreate(nil, UInt(size.width), UInt(size.height), 8, 0, nil, bitmapInfo)
        CGContextDrawImage(context, CGRectMake(0, 0, size.width, size.height), self.CGImage);
        let mask = CGBitmapContextCreateImage(context)
        return UIImage(CGImage: CGImageCreateWithMask(convertToGrayScaleNoAlpha(), mask), scale: scale, orientation: imageOrientation)!
    }
}
【Comments】:

【Solution 8】Here's another good solution, written as a category method on UIImage. It's based on this blog post and its comments, but I've fixed a memory issue here:
- (UIImage *)grayScaleImage
{
    // Create image rectangle with current image width/height
    CGRect imageRect = CGRectMake(0, 0, self.size.width * self.scale, self.size.height * self.scale);

    // Grayscale color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();

    // Create bitmap content with current image size and grayscale colorspace
    CGContextRef context = CGBitmapContextCreate(nil, self.size.width * self.scale, self.size.height * self.scale, 8, 0, colorSpace, kCGImageAlphaNone);

    // Draw image into current context, with specified rectangle
    // using previously defined context (with grayscale colorspace)
    CGContextDrawImage(context, imageRect, [self CGImage]);

    // Create bitmap image info from pixel data in current context
    CGImageRef grayImage = CGBitmapContextCreateImage(context);

    // release the colorspace and graphics context
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);

    // make a new alpha-only graphics context
    context = CGBitmapContextCreate(nil, self.size.width * self.scale, self.size.height * self.scale, 8, 0, nil, kCGImageAlphaOnly);

    // draw image into context with no colorspace
    CGContextDrawImage(context, imageRect, [self CGImage]);

    // create alpha bitmap mask from current context
    CGImageRef mask = CGBitmapContextCreateImage(context);

    // release graphics context
    CGContextRelease(context);

    // make UIImage from grayscale image with alpha mask
    CGImageRef cgImage = CGImageCreateWithMask(grayImage, mask);
    UIImage *grayScaleImage = [UIImage imageWithCGImage:cgImage scale:self.scale orientation:self.imageOrientation];

    // release the CG images
    CGImageRelease(cgImage);
    CGImageRelease(grayImage);
    CGImageRelease(mask);

    // return the new grayscale image
    return grayScaleImage;
}
【Comments】:

【Solution 9】A fast, efficient Swift 3 implementation for iOS 9/10. I've now tried every image-filtering method I could find for processing 100 images at a time (applied via AlamofireImage's ImageFilter option on download). I settled on this method because it beat everything else I tried (for my use case) in both memory and speed.
func convertToGrayscale() -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(self.size, false, self.scale)
    let imageRect = CGRect(x: 0.0, y: 0.0, width: self.size.width, height: self.size.height)
    let context = UIGraphicsGetCurrentContext()

    // Draw a white background
    context!.setFillColor(red: 1.0, green: 1.0, blue: 1.0, alpha: 1.0)
    context!.fill(imageRect)

    // optional: increase contrast with colorDodge before applying luminosity
    // (my images were too dark when using just luminosity - you may not need this)
    self.draw(in: imageRect, blendMode: CGBlendMode.colorDodge, alpha: 0.7)

    // Draw the luminosity on top of the white background to get grayscale of original image
    self.draw(in: imageRect, blendMode: CGBlendMode.luminosity, alpha: 0.90)

    // optional: re-apply alpha if your image has transparency - based on user1978534's answer
    // (I haven't tested this as I didn't have transparency - I just know this would be the syntax)
    // self.draw(in: imageRect, blendMode: CGBlendMode.destinationIn, alpha: 1.0)

    let grayscaleImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return grayscaleImage
}
Re: using colorDodge: I initially had trouble getting my images bright enough to match the grayscale tinting produced by CIFilter("CIPhotoEffectTonal") - my results were too dark. By applying CGBlendMode.colorDodge at ~0.7 alpha I was able to get a decent match; it seems to increase the overall contrast.

Other color blend effects might work too, but I think you want to apply them before luminosity, which is the grayscale-filtering effect. I found this page very helpful to reference about the different BlendModes.

Re: efficiency: I needed to process 100 thumbnails loaded from a server (using AlamofireImage for async loading, caching, and applying filters). I started hitting crashes when the total size of my images exceeded the cache size, so I experimented with other methods.

The CPU-based Core Image CIFilter approach was the first one I tried, but it wasn't memory-efficient enough for the number of images I was processing.

I also tried applying the CIFilter via the GPU using EAGLContext(api: .openGLES3), which was actually even more memory-intensive - I got memory warnings at 450+ MB used while loading 200+ images.

I tried bitmap processing (i.e. CGContext(data: nil, width: width, height: height, bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: CGImageAlphaInfo.none.rawValue))... which worked well, except I couldn't get a high enough resolution for modern Retina devices. The images were very grainy even after I added context.scaleBy(x: scaleFactor, y: scaleFactor).

So of all the methods I tried, this one (drawing with a UIGraphics context) was vastly better in speed and memory efficiency when applied as a filter to AlamofireImage. I'm seeing less than 70 MB of memory while processing my 200+ images, and they load basically instantly rather than the ~35 seconds the OpenGL ES approach took. I know these aren't very scientific benchmarks; I'll instrument it if anyone is curious :)

Finally, if you do need to pass this or another grayscale filter to AlamofireImage, this is how to do it (note you must import AlamofireImage into your class to use ImageFilter):
public struct GrayScaleFilter: ImageFilter {
    public init() {}

    public var filter: (UIImage) -> UIImage {
        return { image in
            return image.convertToGrayscale() ?? image
        }
    }
}
To use it, create the filter and pass it to af_setImage like this:
let filter = GrayScaleFilter()
imageView.af_setImage(withURL: url, filter: filter)
【Comments】:
Really fast! Tested!

【Solution 10】
@interface UIImageView (Settings)
- (void)convertImageToGrayScale;
@end
@implementation UIImageView (Settings)

- (void)convertImageToGrayScale
{
    // Create image rectangle with current image width/height
    CGRect imageRect = CGRectMake(0, 0, self.image.size.width, self.image.size.height);

    // Grayscale color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();

    // Create bitmap content with current image size and grayscale colorspace
    CGContextRef context = CGBitmapContextCreate(nil, self.image.size.width, self.image.size.height, 8, 0, colorSpace, kCGImageAlphaNone);

    // Draw image into current context, with specified rectangle
    // using previously defined context (with grayscale colorspace)
    CGContextDrawImage(context, imageRect, [self.image CGImage]);

    // Create bitmap image info from pixel data in current context
    CGImageRef imageRef = CGBitmapContextCreateImage(context);

    // Create a new UIImage object
    UIImage *newImage = [UIImage imageWithCGImage:imageRef];

    // Release colorspace, context and bitmap information
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    CFRelease(imageRef);

    // Return the new grayscale image
    self.image = newImage;
}

@end
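Because this category mutates the image view in place, a call site is a single message send; a minimal illustrative example (the outlet name is hypothetical):

// `photoView` is a UIImageView whose current image should be desaturated.
[photoView convertImageToGrayScale];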
【Comments】:
This answer deserves more upvotes. It's almost as concise and elegant as using blend modes, and it performs better. I just posted a similar answer that adds more detail about opacity and Retina support.

【Solution 11】I have yet another answer. This one performs very well and handles Retina graphics as well as transparency. It extends Sargis Gevorgyan's approach:
+ (UIImage*) grayScaleFromImage:(UIImage*)image opaque:(BOOL)opaque
{
    // NSTimeInterval start = [NSDate timeIntervalSinceReferenceDate];

    CGSize size = image.size;
    CGRect bounds = CGRectMake(0, 0, size.width, size.height);

    // Create bitmap content with current image size and grayscale colorspace
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    size_t bitsPerComponent = 8;
    size_t bytesPerPixel = opaque ? 1 : 2;
    size_t bytesPerRow = bytesPerPixel * size.width * image.scale;
    CGContextRef context = CGBitmapContextCreate(nil, size.width, size.height, bitsPerComponent, bytesPerRow, colorSpace, opaque ? kCGImageAlphaNone : kCGImageAlphaPremultipliedLast);

    // create image from bitmap
    CGContextDrawImage(context, bounds, image.CGImage);
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    UIImage* result = [[UIImage alloc] initWithCGImage:cgImage scale:image.scale orientation:UIImageOrientationUp];
    CGImageRelease(cgImage);
    CGContextRelease(context);

    // performance results on iPhone 6S+ in Release mode.
    // Results are in photo pixels, not device pixels:
    // ~ 5ms for 500px x 600px
    // ~ 15ms for 2200px x 600px
    // NSLog(@"generating %d x %d @ %dx grayscale took %f seconds", (int)size.width, (int)size.height, (int)image.scale, [NSDate timeIntervalSinceReferenceDate] - start);

    return result;
}
Using blend modes instead is elegant, but copying into a grayscale bitmap performs better, because you only work with one or two color channels instead of four. The opaque bool is meant to take your UIView's opaque flag, so you can opt out of using an alpha channel if you know you won't need one.
I haven't tried the Core Image-based solutions in this answer thread, but I'd be very cautious about using Core Image if performance matters.
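For illustration, assuming the class method above lives in a UIImage category (the answer doesn't show its @interface), feeding it a view's opaque flag might look like this:

// Hypothetical call site: reuse the view's opaque flag so the alpha
// channel is skipped whenever the view guarantees it isn't needed.
UIImage *gray = [UIImage grayScaleFromImage:imageView.image
                                     opaque:imageView.opaque];
imageView.image = gray;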
【Comments】:
How does this compare to Natalia's solution?

【Solution 12】This is my attempt at a fast conversion by drawing directly into a grayscale color space, without enumerating every pixel. It runs 10 times faster than the CIImageFilter solutions.
@implementation UIImage (Grayscale)

static UIImage *grayscaleImageFromCIImage(CIImage *image, CGFloat scale)
{
    CIImage *blackAndWhite = [CIFilter filterWithName:@"CIColorControls" keysAndValues:kCIInputImageKey, image, kCIInputBrightnessKey, @0.0, kCIInputContrastKey, @1.1, kCIInputSaturationKey, @0.0, nil].outputImage;
    CIImage *output = [CIFilter filterWithName:@"CIExposureAdjust" keysAndValues:kCIInputImageKey, blackAndWhite, kCIInputEVKey, @0.7, nil].outputImage;
    CGImageRef ref = [[CIContext contextWithOptions:nil] createCGImage:output fromRect:output.extent];
    UIImage *result = [UIImage imageWithCGImage:ref scale:scale orientation:UIImageOrientationUp];
    CGImageRelease(ref);
    return result;
}

static UIImage *grayscaleImageFromCGImage(CGImageRef imageRef, CGFloat scale)
{
    NSInteger width = CGImageGetWidth(imageRef) * scale;
    NSInteger height = CGImageGetHeight(imageRef) * scale;

    NSMutableData *pixels = [NSMutableData dataWithLength:width * height];
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGContextRef context = CGBitmapContextCreate(pixels.mutableBytes, width, height, 8, width, colorSpace, 0);

    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGImageRef ref = CGBitmapContextCreateImage(context);
    UIImage *result = [UIImage imageWithCGImage:ref scale:scale orientation:UIImageOrientationUp];

    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CGImageRelease(ref);

    return result;
}

- (UIImage *)grayscaleImage
{
    if (self.CIImage) {
        return grayscaleImageFromCIImage(self.CIImage, self.scale);
    } else if (self.CGImage) {
        return grayscaleImageFromCGImage(self.CGImage, self.scale);
    }
    return nil;
}

@end
【Comments】:
Converts transparent backgrounds to black.
@SHN It looks like an additional image mask could help: incurlybraces.com/…