Suppose we have the following two images; the background may be a completely different image, i.e. not just a plain color.
[Image number one]

[Image number two]
Basically, I want to get the diff image of these two images, i.e.:

[Diffed image]
The diff image of two images is an image of the same size in which the pixels that haven't changed are set to be transparent; the changed pixels keep the color from the second image.
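In other words, something like this per-pixel rule (a minimal sketch, assuming three 8-bit RGBA buffers of identical dimensions; the function and buffer names are hypothetical and not part of the solution below):

#include <stddef.h>
#include <stdint.h>
#include <string.h>

// Sketch of the per-pixel rule described above (hypothetical helper).
static void computeDiffPixels(const uint8_t *oldPixels, const uint8_t *newPixels,
                              uint8_t *diffPixels, size_t width, size_t height)
{
    for (size_t i = 0; i < width * height; i++) {
        const uint8_t *oldPx  = oldPixels  + i * 4;
        const uint8_t *newPx  = newPixels  + i * 4;
        uint8_t       *diffPx = diffPixels + i * 4;
        if (memcmp(oldPx, newPx, 4) == 0) {
            memset(diffPx, 0, 4);     // unchanged pixel -> fully transparent
        } else {
            memcpy(diffPx, newPx, 4); // changed pixel -> color from the second image
        }
    }
}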
Using the difference blending mode
Using the difference blending mode alone doesn't solve the problem, because it doesn't keep the right colors of the changed pixels. If we apply the difference blend mode to the two images above, we get the following:
[Difference blended image]
This appears to have inverted colors, so after inverting the colors we get:
[Inverted diff blended image]
So this method alone doesn't solve the issue; what we need is to use the difference only as a mask and take the actual pixel colors from the second image.
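For reference, the raw difference-blended image shown above can be produced with just the blend-mode step, e.g. like this (a minimal UIKit-based sketch; the method name is hypothetical and it returns only the raw difference, not the final diff image):

// Sketch: render the raw "difference" of two same-sized UIImages.
- (UIImage *)differenceBlendOfImage:(UIImage *)oldImage withImage:(UIImage *)newImage
{
    CGRect rect = CGRectMake(0, 0, oldImage.size.width, oldImage.size.height);
    UIGraphicsBeginImageContextWithOptions(rect.size, YES, oldImage.scale);
    [oldImage drawInRect:rect];                                      // normal blend mode
    [newImage drawInRect:rect blendMode:kCGBlendModeDifference alpha:1.0];
    UIImage *blended = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return blended;
}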
Solution implemented in Objective-C, tested on iPhone/iPad
- (CGContextRef)createCGContextFromCGImage:(CGImageRef)img
{
    size_t width = CGImageGetWidth(img);
    size_t height = CGImageGetHeight(img);
    size_t bitsPerComponent = CGImageGetBitsPerComponent(img);
    size_t bytesPerRow = CGImageGetBytesPerRow(img);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray(); //CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, // Let CG allocate the buffer for us
                                             width,
                                             height,
                                             bitsPerComponent,
                                             bytesPerRow,
                                             colorSpace,
                                             kCGImageAlphaNone); // grayscale, no alpha
    CGColorSpaceRelease(colorSpace);
    NSAssert(ctx, @"CGContext creation failed");
    return ctx;
}
- (UIImage *)computeDifferenceOfImage:(CGImageRef)oldImage withImage:(CGImageRef)newImage
{
    // Return the old image if the newImage is nil
    if (newImage == nil) {
        return [UIImage imageWithCGImage:oldImage];
    }

    // We assume both images are the same size; otherwise it's just a matter of finding
    // the biggest CGRect that contains both image sizes and creating the CGContext with that size
    CGRect imageRect = CGRectMake(0, 0,
                                  CGImageGetWidth(oldImage),
                                  CGImageGetHeight(oldImage));

    // Create our context based on the old image
    CGContextRef ctx = [self createCGContextFromCGImage:oldImage];

    // Draw the old image with the default (normal) blend mode
    CGContextDrawImage(ctx, imageRect, oldImage);
    // Change the blend mode for the remaining drawing operations
    CGContextSetBlendMode(ctx, kCGBlendModeDifference);
    // Draw the new image "on top" of the old one
    CGContextDrawImage(ctx, imageRect, newImage);

    // Grab the composed CGImage
    CGImageRef diffed = CGBitmapContextCreateImage(ctx);

    // Make the grayscale image black and white: every value from 1 to 255
    // (i.e. every changed pixel) is masked out, leaving only pure black
    const CGFloat myMaskingColors[6] = { 1, 255, 1, 255, 1, 255 };
    // Get the masked image consisting of black and transparent pixels
    CGImageRef myColorMaskedImage = CGImageCreateWithMaskingColors(diffed, myMaskingColors);

    // Clear the context
    CGContextClearRect(ctx, imageRect);
    // Fill the context with white
    CGContextSetFillColorWithColor(ctx, [[UIColor whiteColor] CGColor]);
    CGContextFillRect(ctx, imageRect);
    CGContextDrawImage(ctx, imageRect, myColorMaskedImage);

    // Memory cleanup
    CGImageRelease(diffed);
    CGImageRelease(myColorMaskedImage);

    // Grab the composed CGImage
    diffed = CGBitmapContextCreateImage(ctx);
    // Release the context
    CGContextRelease(ctx);

    // Apply the constructed diff mask to newImage
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(diffed),
                                        CGImageGetHeight(diffed),
                                        CGImageGetBitsPerComponent(diffed),
                                        CGImageGetBitsPerPixel(diffed),
                                        CGImageGetBytesPerRow(diffed),
                                        CGImageGetDataProvider(diffed), NULL, false);
    CGImageRef masked = CGImageCreateWithMask(newImage, mask);
    UIImage *finalDiffedImage = [UIImage imageWithCGImage:masked];

    // Memory cleanup
    CGImageRelease(mask);
    CGImageRelease(masked);
    CGImageRelease(diffed);
    return finalDiffedImage;
}
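A possible call site (the image names and the imageView property are placeholders for illustration):

// Example usage (image names and the imageView outlet are hypothetical).
UIImage *oldImage = [UIImage imageNamed:@"image1"];
UIImage *newImage = [UIImage imageNamed:@"image2"];
UIImage *diff = [self computeDifferenceOfImage:oldImage.CGImage
                                     withImage:newImage.CGImage];
self.imageView.image = diff; // only the changed pixels remain visible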