Programming with Quartz

David Gelphman, Bunny Laden

An in-depth overview of the sophisticated graphics system of Mac OS X Tiger, explaining how to exploit the state-of-the-art graphics capabilities of OS X in a variety of applications. Provides an introduction to 2D graphics concepts and tips on working with PDF documents, bitmap graphics, and text.

I have a 32-bit NSBitmapImageRep which has an alpha channel with essentially 1-bit values (the pixels are either on or off).

I want to save this bitmap to an 8-bit PNG file with transparency. If I use the -representationUsingType:properties: method of NSBitmapImageRep and pass in NSPNGFileType, a 32-bit PNG is created, which is not what I want.

I know that 8-bit PNGs can be read (they open in Preview with no problems), but is it possible to write this type of PNG file using any built-in Mac OS X APIs? I'm happy to drop down to Core Image or even QuickTime if necessary. A cursory examination of the CGImage docs didn't reveal anything obvious.

EDIT: I've started a bounty on this question. If someone can provide working source code that takes a 32-bit NSBitmapImageRep and writes a 256-color PNG with 1-bit transparency, it's yours.

A great reference for working with the lower-level APIs is Programming With Quartz.

Some of the code below is based on examples from that book.

Note: This is untested code meant to be a starting point only.

- (NSBitmapImageRep *)convertImageRep:(NSBitmapImageRep *)startingImage
{
    CGImageRef anImage = [startingImage CGImage];

    size_t width = CGImageGetWidth(anImage);
    size_t height = CGImageGetHeight(anImage);
    CGRect ctxRect = CGRectMake(0.0, 0.0, width, height);

    CGContextRef bitmapContext = createRGBBitmapContext(width, height, true);
    if (bitmapContext == NULL) {
        return nil;
    }
    CGContextDrawImage(bitmapContext, ctxRect, anImage);

    // Now extract the image from the context.
    CGImageRef bitmapImage = CGBitmapContextCreateImage(bitmapContext);
    if (bitmapImage == NULL) {
        fprintf(stderr, "Couldn't create the image!\n");
        CGContextRelease(bitmapContext);
        return nil;
    }

    NSBitmapImageRep *newImage = [[[NSBitmapImageRep alloc] initWithCGImage:bitmapImage] autorelease];
    CGImageRelease(bitmapImage);
    CGContextRelease(bitmapContext);
    // Note: the raster buffer allocated in createRGBBitmapContext is not freed
    // here because the extracted image may still reference it; a real
    // implementation needs to manage that buffer's lifetime.
    return newImage;
}

Context Creation Function:

CGContextRef createRGBBitmapContext(size_t width, size_t height, Boolean needsTransparentBitmap)
{
    CGContextRef context;
    CGColorSpaceRef colorSpace;
    size_t bytesPerRow;
    unsigned char *rasterData;

    // Minimum bytes per row is 4 bytes per pixel * number of pixels per row.
    bytesPerRow = width * 4;
    // Round up to the nearest multiple of 16 bytes for better performance.
    // COMPUTE_BEST_BYTES_PER_ROW comes from Programming with Quartz, e.g.:
    // #define COMPUTE_BEST_BYTES_PER_ROW(bpr) (((bpr) + 15) & ~15)
    bytesPerRow = COMPUTE_BEST_BYTES_PER_ROW(bytesPerRow);

    // 2 bits per component * 4 components (RGBA) = 8 bits per pixel, i.e. 256
    // possible values. Note: CGBitmapContextCreate supports only a limited set
    // of pixel formats, and 2 bits per component may be rejected; if context
    // creation fails, fall back to 8 bits per component.
    int bitsPerComponent = 2;

    // Use 'calloc' so the memory is initialized to 0.
    rasterData = calloc(1, bytesPerRow * height);
    if(rasterData == NULL){
        fprintf(stderr, "Couldn't allocate the needed amount of memory!\n");
        return NULL;
    }

    // Uses the generic RGB color space.
    colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
    context = CGBitmapContextCreate(rasterData, width, height, bitsPerComponent, bytesPerRow,
                                    colorSpace,
                                    (needsTransparentBitmap ? kCGImageAlphaPremultipliedFirst :
                                     kCGImageAlphaNoneSkipFirst)
                                    );
    CGColorSpaceRelease(colorSpace); // the context retains the color space
    if(context == NULL){
        free(rasterData);
        fprintf(stderr, "Couldn't create the context!\n");
        return NULL;
    }

    // Either clear the rect or paint it with opaque white.
    // (getRGBOpaqueWhiteColor is a helper from Programming with Quartz.)
    if(needsTransparentBitmap){
        CGContextClearRect(context, CGRectMake(0, 0, width, height));
    }else{
        CGContextSaveGState(context);
        CGContextSetFillColorWithColor(context, getRGBOpaqueWhiteColor());
        CGContextFillRect(context, CGRectMake(0, 0, width, height));
        CGContextRestoreGState(context);
    }
    return context;
}

Usage would be:

NSBitmapImageRep *startingImage;  // assumed to be previously set.
NSBitmapImageRep *endingImageRep = [self convertImageRep:startingImage];
// Write out as data
NSData *outputData = [endingImageRep representationUsingType:NSPNGFileType properties:nil];
// somePath is set elsewhere
[outputData writeToFile:somePath atomically:YES];

What are some suggested "paths" for getting better at drawing in code in Cocoa? I think at this point, that's my biggest weakness. Is drawing in code something general, or Cocoa-specific?

Thanks! - Jason

I'm in the same boat as you; I'd like to learn more about drawing code.

It's a big document, but the Quartz 2D Programming Guide on the developer website seems like a good place to start. It introduces graphics contexts and paths and includes plenty of images.

There's also a book referenced in that document, Programming With Quartz: 2D and PDF Graphics in Mac OS X, which looks good. The APIs for iPhone and Mac OS X are almost identical, so there's no problem using a Mac OS X book.

So I'd suggest starting with the Apple documentation (you don't need to read past the section on CGLayer drawing), trying some sample code, and figuring out how it works. Then move on to either that book or more sample code on the web. Good luck!

I am working with some stuff in Core Graphics and I am looking for some additional clarification regarding a couple of topics.

drawRect: I have an understanding of this and know it is where all of the drawing aspects of a UIView go, but am just unclear as to what is happening behind the scenes. What happens when I create a UIView and fill out drawRect, then set another object's UIView to be that custom view? When is drawRect being called?

CGContext: I know what the purpose of this is and understand the concept, but I can't see exactly how it is working. For example:

CGContextSaveGState(context);
CGContextAddRect(context, rect);
CGContextClip(context);
CGContextDrawLinearGradient(context, gradient, startPoint, endPoint, 0);
CGContextRestoreGState(context);

The code above is in my app and works correctly. The thing that confuses me is how it works. The idea of saving/restoring a context makes sense, but it appears that I am literally saving a context, using that exact same context to make changes, then restoring the same context once again. It just seems like I am saving a context and then writing on top of that context, only to restore it. How is it getting saved to a point where, when you restore it, it is a different instance of the context than what was just used to make changes? You use the same context variable in every situation.

Lastly, I would appreciate any resources for practice projects or examples of using Core Graphics. I am looking to improve my skills in this area, since I obviously don't have much at the current time.

What happens when I create a UIView and fill out drawRect, then set another object's UIView to be that custom view? When is drawRect being called?

Adding a view to a 'live' view graph marks the view's frame as in need of display. The main run loop then creates and coalesces invalid rects and soon returns to invoke drawing. It does not draw immediately upon invalidation. This is a good thing because resizing, for example, would result in significant overdrawing -- redundant work which would kill many apps' drawing performance. When drawing, a context is created to render to -- which ultimately outputs to its destination.
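
To illustrate the deferral, here's a minimal sketch (BadgeView and its progress property are hypothetical names): setting the property only invalidates the view; the system calls drawRect: later.

@interface BadgeView : UIView
@property (nonatomic, assign) CGFloat progress;
@end

@implementation BadgeView
- (void)setProgress:(CGFloat)progress
{
    _progress = progress;
    // Only marks the view invalid; drawRect: is NOT called here. The run
    // loop coalesces invalid rects and invokes drawing later in this turn.
    [self setNeedsDisplay];
}

- (void)drawRect:(CGRect)rect
{
    // Called by the system, with a context already set up for the destination.
    [[UIColor blackColor] setFill];
    UIRectFill(CGRectMake(0, 0, self.bounds.size.width * self.progress, 4));
}
@end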

Graphics contexts are abstractions which are free to work optimally for their destination -- a destination could be a device/screen, bitmap, PDF, etc. However, a context handle (CGContextRef) itself refers to a destination and holds a set of parameters regarding its state (these parameters are all documented in the CGContext reference). These parameter sets operate like stacks: push = CGContextSaveGState, pop = CGContextRestoreGState. Although the context pointer isn't changing, the stack of parameter sets is changing.
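
To see the stack in action (a small sketch; the line widths are illustrative):

CGContextRef ctx = UIGraphicsGetCurrentContext(); // e.g. inside drawRect:

CGContextSetLineWidth(ctx, 1.0);  // current parameter set: line width 1.0
CGContextSaveGState(ctx);         // push: snapshot the current parameter set
CGContextSetLineWidth(ctx, 4.0);  // mutate the same context's current set
// ... draw something 4.0 wide ...
CGContextRestoreGState(ctx);      // pop: line width is 1.0 again
// Same CGContextRef throughout; only the stack of parameter sets changed.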

As far as resources, see Programming with Quartz. It's 8 years old now, and was originally written for OS X -- but that ultimately doesn't matter a whole lot because the fundamentals of the drawing system and APIs really haven't evolved significantly since then -- and that is what you intend to focus on. The APIs have been extended, so it would be good to review which APIs were introduced since 10.4 and see what problems they solve, but it's secretly a good thing for you because it helps maintain focus on the fundamental operation of the drawing system. Note that some functionality was excluded from iOS (often due to floating point performance and memory constraints, I figure), so not every example is usable on iOS, but I know of no better guide.

Tip: Your drawing code can be easily reused on OS X and iOS if you use Quartz rather than AppKit/UIKit. Plus, the Quartz APIs have a lower update frequency (i.e. the APIs tend to be longer lived).
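
For instance, a hedged sketch of that reuse pattern (DrawBadge is a hypothetical name):

// Shared, pure-Quartz drawing code:
static void DrawBadge(CGContextRef ctx, CGRect bounds)
{
    CGContextSetRGBFillColor(ctx, 0.2, 0.4, 0.8, 1.0);
    CGContextFillEllipseInRect(ctx, CGRectInset(bounds, 4.0, 4.0));
}

// iOS (UIKit) view:
// - (void)drawRect:(CGRect)rect {
//     DrawBadge(UIGraphicsGetCurrentContext(), self.bounds);
// }

// OS X (AppKit) view:
// - (void)drawRect:(NSRect)rect {
//     DrawBadge((CGContextRef)[[NSGraphicsContext currentContext] graphicsPort],
//               self.bounds);
// }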

I have two images. One of them is plain white and has some areas with alpha transparency; it is intended to be a mask for making another image transparent. The other image, full color and also a PNG, has no alpha applied anywhere.

So I want to add the alpha values of the mask image to the alpha values of the other. Both images have the exact same size. It would be just a matter of looping through the pixels, I guess. Any idea how that would look in detail?

The quickest way I can think of is to get a pointer to the buffer for each image and combine them in a third buffer looping through all of the pixels, like you mentioned.

Or, if the source (color) image has an alpha channel but it is set to 1, just replace that channel with the alpha from the second image.
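
A rough sketch of the per-pixel approach (assuming both buffers are non-premultiplied 8-bit RGBA with identical dimensions; CombineAlpha is a hypothetical helper):

#include <stdint.h>
#include <stddef.h>

// Copy RGB from the color image and take alpha from the mask image.
// (If the color data were premultiplied, the RGB values would also need
// to be rescaled by the new alpha.)
void CombineAlpha(const uint8_t *color, const uint8_t *mask,
                  uint8_t *dest, size_t width, size_t height)
{
    size_t count = width * height;
    for (size_t i = 0; i < count; i++) {
        dest[i * 4 + 0] = color[i * 4 + 0]; // R
        dest[i * 4 + 1] = color[i * 4 + 1]; // G
        dest[i * 4 + 2] = color[i * 4 + 2]; // B
        dest[i * 4 + 3] = mask[i * 4 + 3];  // alpha from the mask
    }
}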

Starting in Panther, Quartz has an alpha-only bitmap context that can be used to mask other images. The excellent book Programming with Quartz: 2D and PDF Graphics in Mac OS X has a section on alpha-only bitmap contexts.
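
A minimal sketch of creating such a context (the size is illustrative):

size_t width = 256, height = 256;
void *alphaData = calloc(width * height, 1);   // one byte per pixel, zeroed
CGContextRef alphaCtx = CGBitmapContextCreate(alphaData, width, height,
                                              8,      // bits per component
                                              width,  // bytes per row
                                              NULL,   // no color space for alpha-only
                                              kCGImageAlphaOnly);
// Draw the mask shapes into alphaCtx, then use the resulting raster (or an
// image made from it) as the alpha source when masking or combining.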

To get the alpha from a crunched PNG file you can do something similar to the following:


CGImageRef myImage = [self CGImage];
CGSize newSize = CGSizeMake(CGImageGetWidth(myImage), CGImageGetHeight(myImage));

// Round the texture size up to a power of 2.
// (isPowerOf2 and nextPowerOf2 are assumed helpers.)
if(!isPowerOf2(newSize.width))
    newSize.width = nextPowerOf2(newSize.width);
if(!isPowerOf2(newSize.height))
    newSize.height = nextPowerOf2(newSize.height);

const size_t picSize = newSize.height * newSize.width * 4;

// The bitmap info provided by the CGImageRef isn't supported by the bitmap
// context, so make a conforming bitmap info.
CGBitmapInfo myInfo = kCGImageAlphaPremultipliedLast;

// calloc so the power-of-2 padding starts out transparent.
unsigned char *actual_bytes = calloc(picSize, 1);
CGContextRef imageContext = CGBitmapContextCreate(
                                                  actual_bytes,
                                                  newSize.width,
                                                  newSize.height,
                                                  CGImageGetBitsPerComponent(myImage),
                                                  newSize.width * 4,
                                                  CGImageGetColorSpace(myImage),
                                                  myInfo);

CGContextSaveGState(imageContext);
// Draw at the image's own size (a UIImage has no -bounds).
CGContextDrawImage(imageContext,
                   CGRectMake(0, 0, CGImageGetWidth(myImage), CGImageGetHeight(myImage)),
                   myImage);
CGContextRestoreGState(imageContext);

At this point actual_bytes has the RGBA data. Don't forget to free the memory for actual_bytes (and release imageContext) when you're done. This is a category on UIImage, so self is a UIImage that has already been loaded from a PNG file.

I need to draw some shapes on a retina display, so I convert logical points for the retina display like this:

self.inverseScale = (CGFloat)1.0/[UIScreen mainScreen].scale;
CGContextRef context=UIGraphicsGetCurrentContext();
CGContextScaleCTM(context, inverseScale, inverseScale);

and after that I define the following:

CGContextSetShouldAntialias(context, NO);
CGContextSetLineWidth(context, 1.0);

OK, now I get the 640x960 retina display size, but I'm a little confused about the coordinates. I thought that if I needed to draw a rectangle frame, I would do the following:

CGContextSetFillColorWithColor(context, someBlackColor);
CGContextFillRect(context, displaySizeRect);
CGContextSetStrokeColorWithColor(context, white);
CGContextAddRect(context, CGRectMake(0, 0, 639, 959));
CGContextStrokePath(context);

But instead, I have found that I need to write:

CGContextAddRect(context, CGRectMake(0, 1, 639, 959));

instead of:

CGContextAddRect(context, CGRectMake(0, 0, 639, 959));

Why???

Is my conversion wrong, or what?

EDITS/ADDITIONS AT BOTTOM

It sounds like you might be confused about how the coordinate system maps to the pixel grid. When you're drawing into a CGContext, you're drawing into a "continuous" floating-point-based plane. This continuous plane is mapped onto the pixel-grid of the display such that integer values fall on the lines between screen pixels.

In theory, at default scale for any given device (so 1.0 for non-retina, 2.0 for retina), if you drew a rect from 0,0 -> 320,480, with a 100% opacity black stroke with a width of 1.0pt, you could expect the following results:

  • On non-retina, you would have a 1 pixel wide rect around the outside of the screen with 50% opacity.
  • On retina, you would have a 1 pixel wide rect around the outside of the screen with 100% opacity.

This stems from the fact that the zero line is right at the edge of the display, between the first row/col of pixels in the display and the first row/col pixels NOT in the display. When you draw a 1pt line at default scale in that situation, it draws along the center of the path, thus half of it will be drawn off the display, and in your continuous plane, you'll have a line whose width extends from -0.5pt to 0.5pt. On a non-retina display, that will become a 1px line at 50% opacity, and on retina display that will be rendered as a 1px line with 100% opacity.
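
To make the theory concrete, a hedged sketch (assuming this view fills a 320x480-point screen):

- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetStrokeColorWithColor(ctx, [[UIColor blackColor] CGColor]);
    CGContextSetLineWidth(ctx, 1.0);
    // The path runs along the zero lines at the screen edges, so half of the
    // 1pt stroke is clipped off-screen: the remaining half renders as a 1px
    // rect at 50% opacity on non-retina, and at 100% opacity on retina.
    CGContextStrokeRect(ctx, CGRectMake(0, 0, 320, 480));
}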

EDITS

OP said:

My goal is to draw some shapes on retina display and that each line will be 1 pixel width and 100% opacity.

There is nothing about this goal that requires antialiasing to be turned off. If you are seeing alpha blending on horizontal and vertical lines with antialiasing turned on, then you are not drawing the lines in the right places or at the right sizes.

And in that case in function: CGContextMoveToPoint(context, X, Y); between 2 pixel on X-axis the engine will choose the right one, and on Y-axis it will choose the higher one. But in the function: CGContextFillRect(context, someRect); It will fill like it maps on pixel grid (1 to 1).

The reason you are seeing this confusing behavior is that you have antialiasing turned off. The key to being able to see and understand what's going on here is to leave antialiasing on, and then make the necessary changes until you get it just right. The easiest way to get started doing this is to leave the CGContext's transform unchanged from the default and change the values you're passing to the draw routines. It's true that you can also do some of this work by transforming the CGContext, but that adds a step of math that's done on every coordinate you pass in to any CG routine, which you can't step into in the debugger, so I highly recommend that you start with the standard transform and with AA left on. You will want to develop a full understanding of how CG drawing works before attempting to mess with the context transform.

Is there some Apple defined method to map graphics on pixel and not on lines between pixels?

In a word, "no," because CGContexts are not universally bitmaps (for instance, you could be drawing into a PDF -- you can't know from asking the CGContext). The plane you're drawing into is intrinsically a floating-point plane. That's just how it works. It is completely possible to achieve 100% pixel-accurate bitmap drawing using the floating point plane, but in order to do so, you have to understand how this stuff actually works.
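
For instance, the exact same Quartz calls can render into a PDF context, where there are no pixels at all (a sketch; the output path and page size are illustrative):

CGRect mediaBox = CGRectMake(0, 0, 612, 792); // 8.5x11 inches at 72 units/inch
CFURLRef url = CFURLCreateWithFileSystemPath(NULL, CFSTR("/tmp/out.pdf"),
                                             kCFURLPOSIXPathStyle, false);
CGContextRef pdfCtx = CGPDFContextCreateWithURL(url, &mediaBox, NULL);
CFRelease(url);
CGPDFContextBeginPage(pdfCtx, NULL);
CGContextSetRGBFillColor(pdfCtx, 0.0, 0.0, 0.0, 1.0);
CGContextFillRect(pdfCtx, CGRectMake(10.25, 10.25, 100.0, 100.0)); // fractional coordinates are perfectly valid
CGPDFContextEndPage(pdfCtx);
CGPDFContextClose(pdfCtx);
CGContextRelease(pdfCtx);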

You might be able to get bootstrapped faster by taking the default CGContext that you're given and making this call:

CGContextTranslateCTM(context, 0.5, 0.5); 

What this will do is add (0.5, 0.5) to every point you ever pass in to all subsequent drawing calls (until you call CGContextRestoreGState). If you make no other changes to the context, that should make it such that when you draw a line from 0,0 -> 10,0 it will, even with antialiasing on, be perfectly pixel aligned. (See my initial answer above to understand why this is the case.)

Using this 0.5, 0.5 trick may get you started faster, but if you want to work with pixel-accurate CG drawing, you really need to get your head around how floating-point-based graphics contexts work, and how they relate to the bitmaps that may (or may not) back them.

Turning AA off and then nudging values around until they're "right" is just asking for trouble later. For instance, say some day UIKit passes you a context that has a different flip in its transform? In that case, one of your values might round down, where it used to round up (because now it's being multiplied by -1.0 when the flip is applied). The same problem can happen with contexts that have been translated into a negative quadrant. Furthermore, you don't know (and can't strictly rely on, version to version) what rounding rule CoreGraphics is going to use when you turn AA off, so if you end up being handed contexts with different CTMs for whatever reason you'll get inconsistent results.

For the record, the time when you might want to turn antialiasing off would be when you're drawing non-rectilinear paths and you don't want CoreGraphics to alpha blend the edges. You may indeed want that effect, but turning off AA makes it much harder to learn, understand and be sure of what's going on, so I highly recommend leaving it on until you fully understand this stuff, and have gotten all your rectilinear drawing perfectly pixel aligned. Then flip the switch to get rid of AA on non-rectilinear paths.

Turning off antialiasing does NOT magically allow a CGContext to be addressed as a bitmap. It takes a floating point plane and adds a rounding step that you can't see (code-wise), can't control, and which is hard to understand and predict. In the end, you still have a floating point plane.

Just a quick example to help clarify:

- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSaveGState(ctx);

    BOOL doAtDefaultScale = YES;
    if (doAtDefaultScale)
    {
        // Do it by using the right values for the default context
        CGContextSetStrokeColorWithColor(ctx, [[UIColor blackColor] CGColor]);
        CGContextSetLineWidth(ctx, 0.5); // We're working in scaled pixels: 0.5pt => 1.0px
        CGContextStrokeRect(ctx, CGRectMake(25.25, 25.25, 50, 50));    
    }
    else
    {
        // Do it by transforming the context
        CGContextSetStrokeColorWithColor(ctx, [[UIColor blackColor] CGColor]);
        CGContextScaleCTM(ctx, 0.5, 0.5); // Back out the default scale
        CGContextTranslateCTM(ctx, 0.5, 0.5); // Offset from edges of pixels to centers of pixels
        CGContextSetLineWidth(ctx, 1.0); // we're working in device pixels now, having backed out the scale.
        CGContextStrokeRect(ctx, CGRectMake(50, 50, 100, 100));    
    }
    CGContextRestoreGState(ctx);
}

Make a new single-view application, add a custom view subclass with this drawRect: method, and set the default view in the .xib file to use your custom class. Both sides of this if statement produce the same results on retina display: a 100x100 device-pixel, non-alpha-blended square. The first side does it by using the "right" values for the default scale. The else side of it does it by backing out the 2x scale, and then translating the plane from being aligned to the edges of device pixels to being aligned with the centers of device pixels. Note how the stroke widths are different (scale factors apply to them, too). Hope this helps.

OP replied:

But one note, there is some alpha blend, a little bit. This is the screenshot with 3200x zoom:

No, really. Trust me. There's a reason for this, and it's NOT anti-aliasing being turned on in the context. (Also, I think you mean 3200% zoom, not 3200x zoom -- at 3200x zoom a single pixel wouldn't fit on a 30" display.)

In the example I gave, we were drawing a rect, so we didn't need to think about line endings, since it's a closed path -- the line is continuous. Now that you're drawing a single segment, you do have to think about line endings to avoid alpha blending. This is the "edge of the pixel" vs. "center of the pixel" thing coming back around. The default line cap style is kCGLineCapButt. kCGLineCapButt means that the end of the line starts exactly where you start drawing. If you want it to behave more like a pen -- that is to say, if you put a felt-tip pen down, intending to draw a line 10 units to the right, some amount of ink is going to bleed out to the left of the exact point you pointed the pen at -- you might consider using kCGLineCapSquare (or kCGLineCapRound for a rounded end, but for single-pixel-level drawing that will just drive you mad with alpha-blending, since it will calculate the alpha as 1.0 - 0.5/pi). I've overlaid hypothetical pixel grids on this illustration from Apple's Quartz 2D Programming Guide to illustrate how line endings relate to pixels:

Line endings with hypothetical pixel grids overlaid
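
To make the pen analogy concrete, a hedged sketch using square caps (same setup as the other examples in this answer):

CGContextSetStrokeColorWithColor(ctx, [[UIColor blackColor] CGColor]);
CGContextSetLineCap(ctx, kCGLineCapSquare); // ends extend by half the line width
CGContextSetLineWidth(ctx, 0.5);            // one device pixel at 2x
CGContextMoveToPoint(ctx, 2.25, 2.25);      // centers of pixels in both axes
CGContextAddLineToPoint(ctx, 6.75, 2.25);
CGContextStrokePath(ctx);
// The square caps add 0.25pt at each end, so the stroke covers exactly
// x = 2.0 to x = 7.0: ten device pixels, with no alpha blending.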

But, I digress. Here's an example. Consider the following code:

- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSaveGState(ctx);    
    CGContextSetStrokeColorWithColor(ctx, [[UIColor blackColor] CGColor]);
    CGContextSetLineWidth(ctx, 0.5); // One device pixel at default (2x) scale
    CGContextMoveToPoint(ctx, 2.0, 2.25); // Edge of pixel in X, center of pixel in Y
    CGContextAddLineToPoint(ctx, 7.0, 2.25); // Draw a line of 5 scaled points (10 device pixels)
    CGContextStrokePath(ctx); // Stroke it
    CGContextRestoreGState(ctx);
}

Notice here that instead of moving to 2.25, 2.25, I move to 2.0, 2.25. This means I'm at the edge of the pixel in the X dimension, and the center of the pixel in the Y dimension. Therefore, I won't get alpha blending at the ends of the line. Indeed, zoomed to 3200% in Acorn, I see the following:

Non-alpha-blended 10 device pixel line at 3200%

Now, yes, at some point, way beyond the point of caring (for most folks) you may run into accumulated floating point error if you're transforming values or working with a long pipeline of calculations. I've seen error as significant as 0.000001 creep up in very complex situations, and even an error like that can bite you if you're talking about situations like the difference between 1.999999 vs 2.000001, but that is NOT what you're seeing here.

If your goal is only to draw pixel accurate bitmaps, consisting of axis-aligned/rectilinear elements only, there is NO reason to turn off context anti-aliasing. At the zoom levels and bitmap densities we're talking about in this situation, you should easily remain free of almost all problems caused by 1e-6 or smaller magnitude floating point errors. (In fact in this case, anything smaller than 1/256th of one pixel will not have any effect on alpha blending in an 8-bit context, since such error will effectively be zero when quantized to 8 bits.)

Let me take this opportunity to recommend a book: Programming with Quartz: 2D and PDF Graphics in Mac OS X It's a great read, and covers all this sort of stuff in great detail.

Is Quartz programming consistent between iOS and OS X? I'm looking for a good book on Quartz and came across one for Mac OS X but am only building for iOS. Since some APIs are different, such as OpenGL, I'm wondering if the same applies to Quartz?

No, that difference does not apply to Quartz; it seems to be very similar, if not identical, on the two platforms. Coordinate systems are the same, with the origin in the lower left.

I'm not aware of any differences. Even bugs are identical.

I'm new to iOS programming. I would like to have a class to draw and manage rectangles/circles in my view, so I tried to use the solution from here. It works fine when I want to draw one rectangle, but I would like to be able to draw and manage more rectangles. So I added this to my CircleManager.h file:

- (void)AddRectangle:(CGRect)rect
{
    //[self drawRect:rect];
    [[UIColor blueColor] setFill];
    UIRectFill(CGRectInset(self.bounds, 150, 150));
}

And in my ViewControler.m I added this code:

- (void)viewDidLoad {
    CircelManager *view = [[CircelManager alloc] initWithFrame:CGRectMake(10, 30, 300, 400)];
    view.backgroundColor = [UIColor blackColor];
    [view AddRectangle:CGRectMake(100, 100, 100, 100)];
    [self.view addSubview:view];
    [super viewDidLoad];
}

And when I try to run this code I get this in my output:

Jul 11 13:37:38 SG-MacBook-Air.local NavTry1[1730] <Error>: CGContextSetFillColorWithColor: invalid context 0x0. This is a serious error. This application, or a library it uses, is using an invalid context  and is thereby contributing to an overall degradation of system stability and reliability. This notice is a courtesy: please fix this problem. It will become a fatal error in an upcoming update.
Jul 11 13:37:38 SG-MacBook-Air.local NavTry1[1730] <Error>: CGContextGetCompositeOperation: invalid context 0x0. This is a serious error. This application, or a library it uses, is using an invalid context  and is thereby contributing to an overall degradation of system stability and reliability. This notice is a courtesy: please fix this problem. It will become a fatal error in an upcoming update.
Jul 11 13:37:38 SG-MacBook-Air.local NavTry1[1730] <Error>: CGContextSetCompositeOperation: invalid context 0x0. This is a serious error. This application, or a library it uses, is using an invalid context  and is thereby contributing to an overall degradation of system stability and reliability. This notice is a courtesy: please fix this problem. It will become a fatal error in an upcoming update.
Jul 11 13:37:38 SG-MacBook-Air.local NavTry1[1730] <Error>: CGContextFillRects: invalid context 0x0. This is a serious error. This application, or a library it uses, is using an invalid context  and is thereby contributing to an overall degradation of system stability and reliability. This notice is a courtesy: please fix this problem. It will become a fatal error in an upcoming update.
Jul 11 13:37:38 SG-MacBook-Air.local NavTry1[1730] <Error>: CGContextSetCompositeOperation: invalid context 0x0. This is a serious error. This application, or a library it uses, is using an invalid context  and is thereby contributing to an overall degradation of system stability and reliability. This notice is a courtesy: please fix this problem. It will become a fatal error in an upcoming update.

I noticed that if I comment out the line [self.view addSubview:view]; there is no error.

Any help will be appreciated.

The reason you are getting the error is that you are drawing in viewDidLoad, where no drawing context is set up. When you call AddRectangle: you are attempting to draw, not just storing the coordinates of the rectangle. That method would be better named drawBlueRect: or something like that.

To get this to work:

  1. Move AddRectangle: into your UIView subclass
  2. Call AddRectangle: from drawRect: in your UIView subclass
  3. Remove the AddRectangle: call from your view controller subclass

The second and third parameters to CGRectInset will cause the rectangle not to be drawn, because the result is a rectangle with zero width. You are also ignoring the rect parameter in AddRectangle: and always drawing relative to self.bounds.
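
Putting the three steps above together, a minimal sketch (assuming ARC; the class and method names follow the question's code):

@interface CircelManager : UIView
@property (nonatomic, strong) NSMutableArray *rects; // boxed CGRects
- (void)AddRectangle:(CGRect)rect;
@end

@implementation CircelManager
- (void)AddRectangle:(CGRect)rect
{
    if (!self.rects) self.rects = [NSMutableArray array];
    [self.rects addObject:[NSValue valueWithCGRect:rect]]; // just store it...
    [self setNeedsDisplay];                                // ...and request a redraw
}

- (void)drawRect:(CGRect)rect
{
    [[UIColor blueColor] setFill];
    for (NSValue *value in self.rects)
        UIRectFill([value CGRectValue]); // draw here, where a context exists
}
@end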

You should never call drawRect: yourself. Instead, call setNeedsDisplay, and the graphics context will be configured for you by the time your drawRect: override is called. The best practice on iOS is to use CALayers instead of drawRect:. Layers are much faster than drawRect:, especially for animations.
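
For example, the rectangle from the question could be added without any drawRect: at all (a hedged sketch using CAShapeLayer; remember to link QuartzCore):

#import <QuartzCore/QuartzCore.h>

CAShapeLayer *shape = [CAShapeLayer layer];
shape.path = [UIBezierPath bezierPathWithRect:CGRectMake(100, 100, 100, 100)].CGPath;
shape.fillColor = [UIColor blueColor].CGColor;
[view.layer addSublayer:shape]; // 'view' is the CircelManager instance from viewDidLoad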

I concur with Joride that you should read what he suggested, as well as the Quartz 2D Programming Guide for iOS. Depending on the depth you need, Programming with Quartz: 2D and PDF Graphics in Mac OS X by David Gelphman and Bunny Laden, while a bit dated, is the definitive guide to Core Graphics.