- (UIImage *)pspdf_preloadedImage {
    CGImageRef image = self.CGImage;
    // make a bitmap context of a suitable size to draw to, forcing decode
    size_t width = CGImageGetWidth(image);
    size_t height = CGImageGetHeight(image);
    CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef imageContext = CGBitmapContextCreate(NULL, width, height, 8, width * 4, colourSpace,
                                                      kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little);
    CGColorSpaceRelease(colourSpace);
    // draw the image to the context, release it
    CGContextDrawImage(imageContext, CGRectMake(0, 0, width, height), image);
    // now get an image ref from the context
    CGImageRef outputImage = CGBitmapContextCreateImage(imageContext);
    UIImage *cachedImage = [UIImage imageWithCGImage:outputImage];
    // clean up
    CGImageRelease(outputImage);
    CGContextRelease(imageContext);
    return cachedImage;
}
Hmm, I already thought about switching to JPG in my current project. However, some of my resources need an alpha channel. What compression level are you using?
Sweet! I'll look for it after school. Thanks.
What exactly does this code do? It's not clear to me how it's used. Are you supposed to have already loaded a CGImage? Is this supposed to be called on a background thread? Any sample usage would be much appreciated. Thanks in advance!
Same questions here. Could anyone share sample code for the background thread? Thanks a lot!
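For what it's worth, here is a minimal sketch of how the category method might be called off the main thread. The resource name and imageView are hypothetical placeholders; only pspdf_preloadedImage comes from the gist:
NSString *path = [[NSBundle mainBundle] pathForResource:@"photo" ofType:@"jpg"]; // hypothetical resource
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    UIImage *image = [UIImage imageWithContentsOfFile:path];  // loads, but does not decode yet
    UIImage *decodedImage = [image pspdf_preloadedImage];     // forces the expensive decode off the main thread
    dispatch_async(dispatch_get_main_queue(), ^{
        imageView.image = decodedImage; // imageView: a hypothetical UIImageView; main thread only assigns, no decode stall
    });
});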
Is there a reason to use a bitmap context instead of an image context like the following implementation?
- (UIImage *)pspdf_preloadedImage {
    UIGraphicsBeginImageContextWithOptions(self.size, NO, 0.0f);
    CGRect rect = (CGRect){CGPointZero, self.size};
    [self drawInRect:rect];
    UIImage *preloadedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return preloadedImage;
}
Using an image context is a little easier to set up, and as far as I understand, there is no need for the low-level control that a bitmap context provides over an image context.
AFAIK, UIGraphicsBeginImageContextWithOptions doesn't set up the bitmap context in the same way as the snippet above (kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little), so when you then draw the image, some post-processing still happens on the main thread. But it's an implementation detail and might change in future iOS releases.
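If you want to check this yourself, one way is to inspect the bitmap info of the decoded CGImage and compare it against the BGRA, premultiplied-first layout the gist requests. A sketch; preloadedImage is assumed to come from either method above:
CGImageRef decoded = preloadedImage.CGImage;
CGBitmapInfo info = CGImageGetBitmapInfo(decoded);
// mask out the alpha-info and byte-order bits and compare against the gist's layout
BOOL premultipliedFirst = (info & kCGBitmapAlphaInfoMask) == kCGImageAlphaPremultipliedFirst;
BOOL littleEndian32 = (info & kCGBitmapByteOrderMask) == kCGBitmapByteOrder32Little;
NSLog(@"matches fast path: %d", premultipliedFirst && littleEndian32);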
Alright, I see. Thank you for the clarification.
Just giving a 👍 here... went looking for a fix to my main-thread stuttering issue and this solved it. Thanks man!
Using it in my open source project here: https://github.com/pj4533/OpenPics. Have a look if you get a second; would love any feedback.
Thank you so much! This snippet is very helpful.
Here is a Swift version of it.
extension UIImage {
    func preloadedImage() -> UIImage {
        // CGImage is optional, so bail out early if there is no backing CGImage
        guard let imageRef = CGImage else { return self }
        // make a bitmap context of a suitable size to draw to, forcing decode
        let width = CGImageGetWidth(imageRef)
        let height = CGImageGetHeight(imageRef)
        let colourSpace = CGColorSpaceCreateDeviceRGB()
        let imageContext = CGBitmapContextCreate(nil,
                                                 width,
                                                 height,
                                                 8,
                                                 width * 4,
                                                 colourSpace,
                                                 CGImageAlphaInfo.PremultipliedFirst.rawValue | CGBitmapInfo.ByteOrder32Little.rawValue)
        // draw the image to the context, forcing decode
        CGContextDrawImage(imageContext, CGRect(x: 0, y: 0, width: width, height: height), imageRef)
        // now get an image ref from the context
        if let outputImage = CGBitmapContextCreateImage(imageContext) {
            let cachedImage = UIImage(CGImage: outputImage)
            return cachedImage
        }
        print("Failed to preload the image")
        return self
    }
}
I think there is a bug where it adds an alpha channel to an image which did not previously have one, which I believe slows down rendering slightly. Could this be fixed by checking the original alpha info with CGImageGetAlphaInfo? However, I am not sure whether kCGImageAlphaPremultipliedFirst was picked because it is generic or because it is optimal for iOS, and whether using a different alpha info would have other performance implications.
Does this generate a smaller version of the original UIImage?
@rasaunders100: Using kCGImageAlphaNoneSkipFirst instead of kCGImageAlphaPremultipliedFirst gets you an image without alpha.
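Concretely, only the bitmap info of the context changes; a sketch of the opaque variant of the context creation from the gist:
// same as the gist, but with no alpha channel (the unused first byte is skipped)
CGContextRef imageContext = CGBitmapContextCreate(NULL, width, height, 8, width * 4, colourSpace,
                                                  kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little);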
@steipete: I have seen a similar snippet floating around which does a few additional checks, such as:
CGImageRef imageRef = self.CGImage;
// images that already have an alpha channel are returned untouched
CGImageAlphaInfo alpha = CGImageGetAlphaInfo(imageRef);
BOOL anyAlpha = (alpha == kCGImageAlphaFirst ||
                 alpha == kCGImageAlphaLast ||
                 alpha == kCGImageAlphaPremultipliedFirst ||
                 alpha == kCGImageAlphaPremultipliedLast);
if (anyAlpha) {
    return self;
}

size_t width = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);

// fall back to device RGB when the current color space can't back a bitmap context
CGColorSpaceModel imageColorSpaceModel = CGColorSpaceGetModel(CGImageGetColorSpace(imageRef));
CGColorSpaceRef colorspaceRef = CGImageGetColorSpace(imageRef);
bool unsupportedColorSpace = (imageColorSpaceModel == kCGColorSpaceModelMonochrome ||
                              imageColorSpaceModel == kCGColorSpaceModelUnknown ||
                              imageColorSpaceModel == kCGColorSpaceModelIndexed);
if (unsupportedColorSpace) {
    colorspaceRef = CGColorSpaceCreateDeviceRGB();
}
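For completeness, the remainder of that snippet would presumably mirror the gist above, drawing into a context built from colorspaceRef and releasing the color space only when it was created locally. A sketch of how it might continue (not the literal original):
CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, width * 4, colorspaceRef,
                                             kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGImageRef decodedRef = CGBitmapContextCreateImage(context);
UIImage *decodedImage = [UIImage imageWithCGImage:decodedRef];
// clean up; release the color space only if we created it above
CGImageRelease(decodedRef);
CGContextRelease(context);
if (unsupportedColorSpace) {
    CGColorSpaceRelease(colorspaceRef);
}
return decodedImage;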
What do you think about these – are they unnecessary?
Any idea whether this applies to watchOS as well?
ah... why doesn't Apple let us do:
let image = UIImage(...)
image.prepareForRenderingAsynchronously = true
Thanks for the snippet!
Swift 3 version:
import UIKit

extension UIImage {
    func forceLoad() -> UIImage {
        guard let imageRef = self.cgImage else {
            return self // failed
        }
        let width = imageRef.width
        let height = imageRef.height
        let colourSpace = CGColorSpaceCreateDeviceRGB()
        let bitmapInfo: UInt32 = CGImageAlphaInfo.premultipliedFirst.rawValue | CGBitmapInfo.byteOrder32Little.rawValue
        guard let imageContext = CGContext(data: nil, width: width, height: height, bitsPerComponent: 8, bytesPerRow: width * 4, space: colourSpace, bitmapInfo: bitmapInfo) else {
            return self // failed
        }
        let rect = CGRect(x: 0, y: 0, width: width, height: height)
        imageContext.draw(imageRef, in: rect)
        if let outputImage = imageContext.makeImage() {
            let cachedImage = UIImage(cgImage: outputImage)
            return cachedImage
        }
        return self // failed
    }
}
Please don't forget to preserve the original scale and orientation with:
let cachedImage = UIImage(cgImage: outputImage, scale: scale, orientation: imageOrientation)
According to this article, it's faster to use a JPG than a PNG, even a crushed one. Do you know why the PNGs were faster in your tests?