
Analog Kid

macrumors G3
Original poster
Mar 4, 2003
I have some processing code that works on double precision arrays that I'd like to visualize using an NSImage. I thought I could be clever and just use the NSBitmapImageRep data buffer directly, but it doesn't seem to support double precision values. I can get it to accept 32 bit floats, but not 64 bit doubles.

Now, I could keep a separate buffer of doubles, and then convert them all down to floats for display, but I'd like to find a better way.

I think the answer is a custom NSImageRep, but I'm having a hard time finding an example of one that draws bitmaps. In particular, how to implement the -draw method. Do I draw a colored rectangle for each pixel or is there a better way to transfer bitmap data?

Anybody have a better idea of how to do this?
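
For reference, the float path that does work looks roughly like this (a sketch, assuming a single-channel grayscale buffer; the format flags would need adjusting for RGBA data):

```objc
#import <Cocoa/Cocoa.h>

// Sketch: a 32-bit-float, single-channel bitmap rep.  NSBitmapImageRep
// accepts NSFloatingPointSamplesBitmapFormat with 32-bit samples, but
// there is no corresponding 64-bit double format.
NSInteger width = 512, height = 512;  // example dimensions
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:NULL     // let the rep allocate its buffer
                  pixelsWide:width
                  pixelsHigh:height
               bitsPerSample:32
             samplesPerPixel:1
                    hasAlpha:NO
                    isPlanar:NO
              colorSpaceName:NSCalibratedWhiteColorSpace
                bitmapFormat:NSFloatingPointSamplesBitmapFormat
                 bytesPerRow:width * sizeof(float)
                bitsPerPixel:32];
float *pixels = (float *)[rep bitmapData];  // write converted samples here
```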
 

You don't draw an NSImageRep. You create an NSImage containing the NSImageRep instead and draw it.
 


Isn't -draw one of the methods in NSImageRep I need to implement?

The only references I've seen are in the Cocoa Drawing Guide:

If you want to add support for new image formats or generate images from other types of source information, you may want to subclass NSImageRep. Although Cocoa supports many image formats directly, and many more indirectly through the Image IO framework, subclassing NSImageRep gives you control over the handling of image data while at the same time maintaining a tight integration with the NSImage class. If you decide to subclass, you should provide implementations for the following methods:

imageUnfilteredTypes
canInitWithData:
initWithData:
draw

And the NSImageRep Class Reference:

Subclasses override this method to draw the image using the image data. By the time this method is called, the graphics state is already configured for you to draw the image at location (0.0, 0.0) in the current coordinate system.

The standard Application Kit subclasses all draw the image using the NSCompositeCopy composite operation defined in the “Constants” section of NSImage. Using the copy operator, the image data overwrites the destination without any blending effects. Transparent (alpha) regions in the source image appear black. To use other composite operations, you must place the representation into an NSImage object and use its drawAtPoint: fromRect: operation: fraction: or drawInRect: fromRect: operation: fraction: methods.

That doesn't sound like drawing rectangles, but I can't figure out which more primitive method to call from within my custom representation that I can give the NSCompositeCopy operator to.
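
For what it's worth, one more primitive route a custom rep's -draw could take is Quartz: wrap the sample buffer in a CGImage and draw it into the current context. A sketch only, where `_samples`, `_width`, and `_height` are hypothetical ivars holding a float grayscale buffer (the byte-order flag may need adjusting for your data):

```objc
- (BOOL)draw
{
    // Wrap the existing float buffer in a CGImage; the data provider
    // references the bytes rather than copying them.
    CGDataProviderRef provider = CGDataProviderCreateWithData(
        NULL, _samples, _width * _height * sizeof(float), NULL);
    CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
    CGImageRef image = CGImageCreate(
        _width, _height,
        32,                       // bits per component (float)
        32,                       // bits per pixel (one component)
        _width * sizeof(float),   // bytes per row
        gray,
        kCGImageAlphaNone | kCGBitmapFloatComponents | kCGBitmapByteOrder32Host,
        provider, NULL, NO, kCGRenderingIntentDefault);
    BOOL ok = (image != NULL);

    if (ok) {
        // The graphics state is already set up to draw at (0.0, 0.0).
        CGContextRef ctx =
            (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];
        CGContextDrawImage(ctx, CGRectMake(0, 0, _width, _height), image);
    }

    CGImageRelease(image);   // the CG release functions are NULL-safe
    CGColorSpaceRelease(gray);
    CGDataProviderRelease(provider);
    return ok;
}
```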
 

Drawing a whole bunch of tiny rectangles is naive. Rethink the problem in terms of using an existing drawable representation, combined with a way to maintain coherence between a double-component representation and the drawable representation (hint: look up cache coherence).

There's already an implementation for float components. So internally convert the double-component representation to float-component representation, and then draw that. Your code only converts between double and float components (one-way only; never float to double), and the existing float-drawing code is used as-is.

Or better still, try several different component representations, find the fastest-drawing one, and use that representation for drawing. I think it's unlikely that display hardware will show a discernible difference for float-rep vs. 8-bit rep.

When the double-component representation is changed, it invalidates some part of the cached drawable representation. Reconvert only that part of the double-component representation into the drawable representation. The naive approach would be to invalidate the entire drawable representation for any change to the double-component one. A bit of thought should show why that could be a performance-killing approach.
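
The partial-invalidation idea can be sketched in plain C (the names and the row-granularity dirty region here are my own, not from the thread):

```c
#include <stddef.h>

/* Sketch of partial cache maintenance: the double buffer is the source
   of truth; the float buffer is a drawable cache.  When rows
   [first, last) are modified, reconvert only those rows instead of
   invalidating the whole image. */
typedef struct {
    const double *src;    /* authoritative double samples */
    float        *cache;  /* drawable float cache */
    size_t        width;  /* samples per row */
} ImageCache;

void cache_update_rows(ImageCache *c, size_t first, size_t last)
{
    for (size_t row = first; row < last; row++) {
        size_t base = row * c->width;
        for (size_t x = 0; x < c->width; x++)
            c->cache[base + x] = (float)c->src[base + x]; /* double -> float, one-way */
    }
}
```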
 

I think you're getting at the first approach I listed-- keep a separate buffer of doubles and convert them through to a standard NSBitmapImageRep. I could, for example, create a CustomImageRep with an encapsulated NSBitmapImageRep, do a data conversion and transfer into that rep on each update, and then route the draw call to it. I'm not sure that's less naive though... For one thing I've now got two copies of each large data set hanging around.

How does NSBitmapImageRep do its drawing? I'm sure it's not going straight to hardware, it must be using some Quartz calls, no? I mention drawing rectangles because I've seen that suggested in other places for transferring bitmap data-- I agree it feels a bit much, but when I think about what a bitmap really is it certainly isn't a sparse set of points-- it's an array of color patches.

Apple has intentionally not gone with the class cluster model on ImageReps to make it easier to subclass, so it seems like this is what they have in mind-- the only worked examples I can find though are vector formats.
 
I think you're getting at the first approach I listed-- keep a separate buffer of doubles and convert them through to a standard NSBitmapImageRep.

I don't know what you mean by "a separate buffer of doubles". There should be exactly one representation of the data using doubles. If there are multiple double representations, you're doing it wrong.

I could, for example, create a CustomImageRep with an encapsulated NSBitmapImageRep, do a data conversion and transfer into that rep on each update, and then route the draw call to it. I'm not sure that's less naive though... For one thing I've now got two copies of each large data set hanging around.
How big are the underlying double-component images (height & width)?

If the underlying images are larger than the screen, then you don't have to produce a drawable representation for off-screen pixels. Refer to NSScrollView, and how it draws only a clipped portion of another view.

If the underlying images are fairly small, then calculate the max memory needed for a drawable representation with 8-bit components. What is that number, in bytes?

If the underlying images are much larger than the screen, then you may have to come up with a strategy for virtualizing the double representation. What is the size of the largest underlying image, in bytes? Or just a typical one?

Calculate how many tiny rectangles you'd have to draw to produce a full-sized drawing of your double representation. Measure the speed of drawing that many tiny rectangles from double color-components (write a test). Calculate the time it would take to draw a full-sized double representation using a "draw tiny rectangles" strategy. Compare to the time/space tradeoff of keeping a directly drawable representation, e.g. with 8-bit components.

If you don't have numbers, or even estimates, you should probably work those up. If you don't know what the max sizes are, you should work those up. If you don't know what the slowest acceptable speed is, estimate 1/10 sec.
 
Data sets can be of varying sizes, some much larger than a screen size.

I could certainly treat the data as a custom view and do all the windowing and whatnot that you're suggesting. It seems more natural to me, however, to treat it as an image. I agree it's odd that the -draw method is the base method to overload, rather than drawInRect: fromRect: operation: fraction: respectFlipped: hints:, but I'll have to see if overriding the latter helps once I figure out how to override the former. I'm really not even to the point of optimizing yet, I'm just looking for where to start.

I feel like this is drifting away from my question: How do I implement the draw method of a custom NSImageRep representing a bitmap image?

(Content clipped for brevity, but quote left for notification)
 