How to change the color of a single pixel of a UIImage / UIImageView
I have a UIImageView to which I have applied the filter:
testImageView.layer.magnificationFilter = kCAFilterNearest;
This makes the individual pixels visible. The UIImageView sits inside a UIScrollView, and the image itself is 1000x1000. I use the following code to detect which pixel has been tapped:
I first set up a tap gesture recognizer:
UITapGestureRecognizer *scrollTap = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(singleTapGestureCaptured:)];
scrollTap.numberOfTapsRequired = 1;
[mainScrollView addGestureRecognizer:scrollTap];
Then I use the location of the tap to work out the coordinates of the tapped pixel of the UIImageView:
- (void)singleTapGestureCaptured:(UITapGestureRecognizer *)gesture
{
CGPoint touchPoint = [gesture locationInView:testImageView];
NSLog(@"%f is X pixel num, %f is Y pixel num ; %f is width of imageview", (touchPoint.x/testImageView.bounds.size.width)*1000, (touchPoint.y/testImageView.bounds.size.width)*1000, testImageView.bounds.size.width);
}
I would like to be able to tap a pixel and change its color. However, none of the StackOverflow posts I've found have answers that work or that aren't outdated. For a skilled coder, though, you could either help me decipher one of the older posts into something that works, or come up with a simple fix of your own, using my code above for detecting which pixel of the UIImageView has been tapped.
All help is appreciated.
Edit for originaluser2:
After following originaluser2's post, the code worked beautifully when I ran his example GitHub project on my physical device. However, when I ran the same code in my own app, the image was replaced by blank space and I got the following errors:
<Error>: Unsupported pixel description - 3 components, 16 bits-per-component, 64 bits-per-pixel
<Error>: CGBitmapContextCreateWithData: failed to create delegate.
<Error>: CGContextDrawImage: invalid context 0x0. If you want to see the backtrace, please set CG_CONTEXT_SHOW_BACKTRACE environmental variable.
<Error>: CGBitmapContextCreateImage: invalid context 0x0. If you want to see the backtrace, please set CG_CONTEXT_SHOW_BACKTRACE environmental variable.
The code clearly works, as proven by my testing it on my phone. However, the same code produces these problems in my own project, although I suspect they all stem from one or two simple underlying issues. How can I resolve these errors?
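The errors above suggest the source image asset uses 16 bits per component (64 bits per pixel), a pixel description the bitmap context creation rejects. One possible workaround, purely a sketch and not part of either answer below (the method name normalizedImageFromImage: is just illustrative), is to first redraw the image into a standard 8-bit RGBA bitmap and run the pixel code on that copy:
// Sketch only, not from the answers below: normalize a 16-bits-per-component
// image to a standard 8-bit RGBA bitmap before any pixel manipulation.
- (UIImage *)normalizedImageFromImage:(UIImage *)image {
    // Re-render the image into a default (8-bit RGBA) image context
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    UIImage *normalized = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return normalized;
}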
You'll want to break this problem up into multiple steps.
- Get the coordinates of the touch point in the image's coordinate system
- Get the x and y positions of the pixel you want to change
- Create a bitmap context and replace that pixel's components with the components of your new color.
First, to get the coordinates of the touch point in the image's coordinate system, you can use a category method I wrote on UIImageView. This returns a CGAffineTransform that maps a point from view coordinates to image coordinates, depending on the content mode of the view.
@interface UIImageView (PointConversionCatagory)
@property (nonatomic, readonly) CGAffineTransform viewToImageTransform;
@property (nonatomic, readonly) CGAffineTransform imageToViewTransform;
@end
@implementation UIImageView (PointConversionCatagory)
-(CGAffineTransform) viewToImageTransform {
UIViewContentMode contentMode = self.contentMode;
// failure conditions. If any of these are met – return the identity transform
if (!self.image || self.frame.size.width == 0 || self.frame.size.height == 0 ||
(contentMode != UIViewContentModeScaleToFill && contentMode != UIViewContentModeScaleAspectFill && contentMode != UIViewContentModeScaleAspectFit)) {
return CGAffineTransformIdentity;
}
// the width and height ratios
CGFloat rWidth = self.image.size.width/self.frame.size.width;
CGFloat rHeight = self.image.size.height/self.frame.size.height;
// whether the image will be scaled according to width
BOOL imageWiderThanView = rWidth > rHeight;
if (contentMode == UIViewContentModeScaleAspectFit || contentMode == UIViewContentModeScaleAspectFill) {
// The ratio to scale both the x and y axis by
CGFloat ratio = ((imageWiderThanView && contentMode == UIViewContentModeScaleAspectFit) || (!imageWiderThanView && contentMode == UIViewContentModeScaleAspectFill)) ? rWidth:rHeight;
// The x-offset of the inner rect as it gets centered
CGFloat xOffset = (self.image.size.width-(self.frame.size.width*ratio))*0.5;
// The y-offset of the inner rect as it gets centered
CGFloat yOffset = (self.image.size.height-(self.frame.size.height*ratio))*0.5;
return CGAffineTransformConcat(CGAffineTransformMakeScale(ratio, ratio), CGAffineTransformMakeTranslation(xOffset, yOffset));
} else {
return CGAffineTransformMakeScale(rWidth, rHeight);
}
}
-(CGAffineTransform) imageToViewTransform {
return CGAffineTransformInvert(self.viewToImageTransform);
}
@end
Nothing too complicated here, just some extra logic for the aspect fit/fill scaling modes to make sure the centering of the image is taken into account. You can skip this step entirely if you're displaying your image 1:1 on screen.
Next, you'll want to get the x and y positions of the pixel to change. This is fairly simple: you just want to use the viewToImageTransform category property above to get the pixel in the image's coordinate system, and then use floor to make the values integral.
UITapGestureRecognizer *tapGesture = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(imageViewWasTapped:)];
tapGesture.numberOfTapsRequired = 1;
[imageView addGestureRecognizer:tapGesture];
...
-(void) imageViewWasTapped:(UIGestureRecognizer*)tapGesture {
if (!imageView.image) {
return;
}
// get the pixel position
CGPoint pt = CGPointApplyAffineTransform([tapGesture locationInView:imageView], imageView.viewToImageTransform);
PixelPosition pixelPos = {(NSInteger)floor(pt.x), (NSInteger)floor(pt.y)};
// replace image with new image, with the pixel replaced
imageView.image = [imageView.image imageWithPixel:pixelPos replacedByColor:[UIColor colorWithRed:0 green:1 blue:1 alpha:1.0]];
}
Finally, you'll want to use my other category method, imageWithPixel:replacedByColor:, to get a new image with the given pixel replaced with a given color.
/// A simple struct to represent the position of a pixel
struct PixelPosition {
NSInteger x;
NSInteger y;
};
typedef struct PixelPosition PixelPosition;
@interface UIImage (UIImagePixelManipulationCatagory)
@end
@implementation UIImage (UIImagePixelManipulationCatagory)
-(UIImage*) imageWithPixel:(PixelPosition)pixelPosition replacedByColor:(UIColor*)color {
// components of replacement color – in a 255 UInt8 format (fairly standard bitmap format)
const CGFloat* colorComponents = CGColorGetComponents(color.CGColor);
UInt8* color255Components = calloc(sizeof(UInt8), 4);
for (int i = 0; i < 4; i++) color255Components[i] = (UInt8)round(colorComponents[i]*255.0);
// raw image reference
CGImageRef rawImage = self.CGImage;
// image attributes
size_t width = CGImageGetWidth(rawImage);
size_t height = CGImageGetHeight(rawImage);
CGRect rect = {CGPointZero, {width, height}};
// image format
size_t bitsPerComponent = 8;
size_t bytesPerRow = width*4;
// the bitmap info
CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big;
// data pointer – stores an array of the pixel components. For example (r0, g0, b0, a0, r1, g1, b1, a1 .... rn, gn, bn, an)
UInt8* data = calloc(bytesPerRow, height);
// get new RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// create bitmap context
CGContextRef ctx = CGBitmapContextCreate(data, width, height, bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo);
// draw image into context (populating the data array while doing so)
CGContextDrawImage(ctx, rect, rawImage);
// get the index of the pixel (4 components times the x position plus the y position times the row width)
NSInteger pixelIndex = 4*(pixelPosition.x+(pixelPosition.y*width));
// set the pixel components to the color components
data[pixelIndex] = color255Components[0]; // r
data[pixelIndex+1] = color255Components[1]; // g
data[pixelIndex+2] = color255Components[2]; // b
data[pixelIndex+3] = color255Components[3]; // a
// get image from context
CGImageRef img = CGBitmapContextCreateImage(ctx);
// clean up
free(color255Components);
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);
free(data);
UIImage* returnImage = [UIImage imageWithCGImage:img];
CGImageRelease(img);
return returnImage;
}
@end
This first gets the components of the color you want to write to one of the pixels, in a 0-255 UInt8 format (a fairly standard bitmap format). Next, it creates a new bitmap context with the given attributes of the input image.
The important part of this method is:
// get the index of the pixel (4 components times the x position plus the y position times the row width)
NSInteger pixelIndex = 4*(pixelPosition.x+(pixelPosition.y*width));
// set the pixel components to the color components
data[pixelIndex] = color255Components[0]; // r
data[pixelIndex+1] = color255Components[1]; // g
data[pixelIndex+2] = color255Components[2]; // b
data[pixelIndex+3] = color255Components[3]; // a
What this does is work out the index of a given pixel (based on its x and y coordinates), and then use that index to replace that pixel's component data with the color components of the replacement color.
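For example, with the 1000x1000 image from the question, tapping pixel (x = 250, y = 400) gives pixelIndex = 4 * (250 + 400 * 1000) = 1,601,000, so data[1601000] through data[1601003] hold that pixel's red, green, blue and alpha bytes.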
Finally, we get an image out of the bitmap context and perform some cleanup.
Finished result:
Full project: https://github.com/hamishknight/Pixel-Color-Changing
You could try something like:
UIImage *originalImage = [UIImage imageNamed:@"something"];
CGSize size = originalImage.size;
UIGraphicsBeginImageContext(size);
[originalImage drawInRect:CGRectMake(0, 0, size.width, size.height)];
// myColor is an instance of UIColor
[myColor setFill];
UIRectFill(CGRectMake(someX, someY, 1, 1));
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
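For example, here is roughly how that snippet could be wired into the tap handler from the question. This is only a sketch: testImageView, the 1000x1000 pixel size, and the cyan fill color are assumptions carried over from the question, not something this answer specifies.
- (void)singleTapGestureCaptured:(UITapGestureRecognizer *)gesture
{
    // Sketch only: testImageView and the 1000x1000 size come from the question,
    // and cyanColor stands in for myColor.
    CGPoint touchPoint = [gesture locationInView:testImageView];
    CGFloat someX = floor((touchPoint.x / testImageView.bounds.size.width) * 1000);
    CGFloat someY = floor((touchPoint.y / testImageView.bounds.size.height) * 1000);
    UIImage *originalImage = testImageView.image;
    CGSize size = originalImage.size;
    UIGraphicsBeginImageContext(size);
    [originalImage drawInRect:CGRectMake(0, 0, size.width, size.height)];
    [[UIColor cyanColor] setFill];
    UIRectFill(CGRectMake(someX, someY, 1, 1));
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    testImageView.image = newImage;
}
Note that this maps the touch straight through the view's bounds, so it assumes the image fills the view exactly (as in the question); for other content modes, the viewToImageTransform category from the first answer would be the better tool.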