
sfurlani

macrumors newbie
Original poster
Feb 4, 2010
10
0
I am working on a project similar to the one described here:
http://cocoadev.com/forums/comments.php?DiscussionID=1220&page=1#Item_0

And while my problem is memory management (being a "crazy scientist"), it's not exactly the same kind. The program I'm working with (OsiriX) loads massive amounts of bitmap data into an NSData, byte-copied from a float* malloc(). Then it creates an NSArray of FauxImageObjects which point to specific chunks of the entire NSData set of images. Upon rendering, it creates a rectangle and texture-maps the chunk of bitmap pointed to by the FauxImageObject.
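
Roughly, the shape of it (my own bare-bones sketch with made-up names, not OsiriX's actual classes) is an object that just records where its pixels sit inside the one shared NSData, so nothing is copied per image:

#import <Foundation/Foundation.h>

// A lightweight stand-in for the "faux image" idea: it holds no pixels of its
// own, only an offset/length into the big shared buffer.
@interface FauxImage : NSObject {
    NSData    *backingStore;   // the one big NSData holding every frame
    NSUInteger offset;         // byte offset of this frame's pixels
    NSUInteger length;         // width * height * sizeof(float)
}
- (id)initWithStore:(NSData *)store offset:(NSUInteger)off length:(NSUInteger)len;
- (const float *)pixels;       // pointer straight into the shared buffer, no copy
@end

@implementation FauxImage
- (id)initWithStore:(NSData *)store offset:(NSUInteger)off length:(NSUInteger)len
{
    if ((self = [super init])) {
        backingStore = [store retain];
        offset = off;
        length = len;
    }
    return self;
}
- (const float *)pixels
{
    return (const float *)((const char *)[backingStore bytes] + offset);
}
- (void)dealloc { [backingStore release]; [super dealloc]; }
@end

The rendering side then texture-maps whatever -pixels points at. The question is what happens to that pointer arrangement once the NSData lives on a different machine.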

I want to render & process this information from multiple computers (CUDA/Clustering).
Instead of copying the NSData onto each machine, I want it to sit on one machine and have each of the other machines point to it via an array of FauxImageObjects.

  • Since an NSData appears to be easily distributed (per the DO documentation on the Apple site), is DO an appropriate method for doing this? (See the sketch after this list for what I mean.)
  • The drawRect method re-textures every frame. Should I instead grab the current image's texture and store it locally, but keep the entire NSData on DO?
  • When will Gigabit Ethernet become a bottleneck if every machine is grabbing a 1 MP bitmap from a single machine each screen refresh? (1 MP image at 32 bits/pixel = 32 Mbit; at 1000 Mbps that's ~30 images/second. Screen refresh at 60 Hz -> 1 image per 2 screen refreshes!? Math is hard!)
  • Is there a lossless compression capability for NSData or DO?
  • The FauxImageObjects will also need to be distributed, since they contain per-image information such as a CLUT, and if one machine changes it, then all the others need to pick up the change as well.
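
For the first bullet, this is the sort of DO plumbing I had in mind from the docs. It's only a sketch: the ImageVending protocol, service name, and host are all made up.

#import <Foundation/Foundation.h>

// Made-up protocol: the return is declared bycopy so the pixel bytes actually
// ship across the wire instead of coming back as a proxy.
@protocol ImageVending
- (bycopy NSData *)imageDataAtIndex:(unsigned)idx;
@end

// --- On the machine that owns the big NSData (the vendor) ---
void vendImageServer(id <ImageVending> imageServer)
{
    NSSocketPort *receivePort = [[[NSSocketPort alloc] init] autorelease];  // TCP port, reachable from other machines
    NSConnection *conn = [NSConnection connectionWithReceivePort:receivePort sendPort:nil];
    [conn setRootObject:imageServer];
    if (![conn registerName:@"ImageServer"
             withNameServer:[NSSocketPortNameServer sharedInstance]]) {
        NSLog(@"Could not register the DO service name");
    }
}

// --- On each rendering node (the client) ---
id <ImageVending> connectToImageServer(NSString *hostName)
{
    id proxy = [NSConnection rootProxyForConnectionWithRegisteredName:@"ImageServer"
                                                                 host:hostName
                                                      usingNameServer:[NSSocketPortNameServer sharedInstance]];
    [proxy setProtocolForProxy:@protocol(ImageVending)];   // saves a round trip per new selector
    return proxy;
}

Every -imageDataAtIndex: call is a network round trip, which is really what the bandwidth question above is about.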

I'm not looking for a solution, but if anyone here thinks this is not feasible, please let me know!
Are there any tutorials, guides, books, or examples that might be of use to me?

Thank you in advance for your help.

-Stephen
 

Cromulent

macrumors 604
Oct 2, 2006
6,802
1,096
The Land of Hope and Glory
Well, 1000 Mbps is 125 megabytes a second. But that is the theoretical maximum; you won't see that once TCP overhead and your own app's overhead are factored in. So you want to transmit thirty 32 MB images a second? That's 960 MB/s. You're looking at 10GigE for that, which is not cheap.

Plus you'll need a large RAID array for the hard drives to even cope with that data rate. The average SATA hard drive is lucky if it can push 100MB/s consistently.
 

sfurlani

macrumors newbie
Original poster
Feb 4, 2010
10
0
Thanks Cromulent

That's... what I was afraid of. With 125 MB/s of bandwidth, I can likely get 4 images per second... not what I need for plan A.

Plan B - I was looking into it, and the software copies the relevant portion of the float* buffer into a GLuint* buffer for rendering. If I kept the GLuint* buffer local (for every screen refresh), then the only time I would need the network is when the display changed.
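
For what it's worth, this is roughly the plan B I'm picturing inside the view's drawRect:. The proxy, applyCLUTToFloats(), and the instance variables are all assumed names, and error handling is left out.

// Plan B sketch: only touch the network when the displayed image (or its CLUT)
// has actually changed; every other refresh just re-binds the cached texture.
- (void)drawRect:(NSRect)dirtyRect
{
    if (textureIsStale) {
        // Network hit happens here, and only here.
        NSData *raw = [proxy imageDataAtIndex:currentIndex];

        GLuint *rgba = malloc(imageWidth * imageHeight * sizeof(GLuint));
        applyCLUTToFloats([raw bytes], rgba, imageWidth * imageHeight, currentCLUT);

        glBindTexture(GL_TEXTURE_2D, textureID);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, imageWidth, imageHeight,
                     0, GL_RGBA, GL_UNSIGNED_BYTE, rgba);
        free(rgba);
        textureIsStale = NO;
    }

    // No network traffic on this path: just draw the quad with the cached texture.
    glBindTexture(GL_TEXTURE_2D, textureID);
    // ... draw the textured quad and flush ...
}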

As for loading the data, it will likely originate from the network or a CD and be stored in the RAM of the front node, then pushed to whichever cluster node has space. Lag on loading is perfectly fine - lag when scrolling through an image series is not.

But it can be done with DO? That's the appropriate framework?

-S!
 

sfurlani

macrumors newbie
Original poster
Feb 4, 2010
10
0
Well, 1000 Mbps is 125 megabytes a second. But that is the theoretical maximum; you won't see that once TCP overhead and your own app's overhead are factored in. So you want to transmit thirty 32 MB images a second? That's 960 MB/s. You're looking at 10GigE for that, which is not cheap.

Plus you'll need a large RAID array for the hard drives to even cope with that data rate. The average SATA hard drive is lucky if it can push 100MB/s consistently.

Oh no, wait. A 1 MP image is a 32 Mbit file. So my initial math of 1000 Mbps / 32 Mbit per image = ~30 images/second stands.

Does no one else have any comments on this, or any experience with OpenGL or DO?
-S!
 

sfurlani

macrumors newbie
Original poster
Feb 4, 2010
10
0
DO is unlikely to be what you want for this. It's really not designed as a bulk transfer mechanism.

Is there a framework that will allow me to have a single copy of a large image database in memory and access it from multiple machines instead of having copies on every machine? I'm restricted in the way I write my plugin because of how the base software operates.

-S!
 

chown33

Moderator
Staff member
Aug 9, 2009
10,706
8,346
A sea of green
Is there a framework that will allow me to have a single copy of a large image database in memory and access it from multiple machines instead of having copies on every machine? I'm restricted in the way I write my plugin because of how the base software operates.

I'm not sure your request makes sense. Any image you hope to access from another machine must ultimately reside in the memory of that other machine. It can get loaded into the other machine's memory on-demand, only as needed, but there is no way to avoid transferring the data across the network and into the resident memory of the other machine.

If you're asking how to store and then transfer only the needed portions of large images, then any database would work. You simply tile the large images into a set of smaller images, and return the smallest set of tiles (sub-images) that completely covers the requester's range.

For example, split any image into 9 parts, 3 rows and 3 columns. Now, any sub-section of the total image can be represented by a minimum of 1 sub-image, and at most 9 sub-images.

You can apply the 9-part division recursively, if the first division's sub-images are too large.
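
In code, finding the covering set is just integer math. A rough sketch (plain C, callable from your Objective-C code; the struct and sizes are only for illustration):

// Which of the 3x3 tiles are needed to cover a requested sub-rectangle?
// Only those tiles have to travel over the wire.
typedef struct { int col, row; } TileIndex;

// Fills `out` (capacity 9) with the covering tiles, returns how many.
int tilesCoveringRect(int imgW, int imgH,          // full image size in pixels
                      int x, int y, int w, int h,  // requested sub-rectangle
                      TileIndex *out)
{
    int tileW = (imgW + 2) / 3;                    // ceil(imgW / 3)
    int tileH = (imgH + 2) / 3;
    int firstCol = x / tileW,  lastCol = (x + w - 1) / tileW;
    int firstRow = y / tileH,  lastRow = (y + h - 1) / tileH;
    int n = 0;
    for (int r = firstRow; r <= lastRow; r++)
        for (int c = firstCol; c <= lastCol; c++)
            out[n++] = (TileIndex){ c, r };
    return n;                                      // always between 1 and 9
}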
 

sfurlani

macrumors newbie
Original poster
Feb 4, 2010
10
0
Yes, I apologize @chown33, thanks for replying. My original question was born out of ignorance of OpenGL and DO, which I hopefully have corrected.

My original question was: can I grab image data over DO every frame to generate a texture, bind it to a quad, and draw?

I know now that's not feasible (or practical, or anything but stupid), but I'm wondering if there are any problems with this: if I store the current texture locally and only regenerate it from the network when the image changes, can I do that with DO or some other distributed structure?

For example - the user loads a series of images from a CD. Let's assume for the sake of argument it's an extremely large file - 4 GB uncompressed, with overhead. That image series resides on and is vended by node 1. Nodes 2 and 3 in the cluster are notified of the image series. Each node has 4 graphics cards and 4 monitors, and each monitor displays four 1 MP images (the monitors are 2560x1600).

At the start, the cluster loads 16 images per node (except node 1) from the network, so that's 32 images loaded up front. That's a 1-2 second delay on a 1000 Mbps link as nodes 2 and 3 saturate the network asking for the 1 MP images to generate their textures. On the next and subsequent frame draws, each node draws the textures stored locally on that machine.

At some point the user changes the CLUT for one of the images, and that graphics pipe has to rebuild the texture: go back out to the network for the original image data, transform it, and generate a new texture.
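
For the CLUT step, this is the kind of transform I mean. The CLUT struct and the window/level rescale are my own guesses rather than OsiriX's actual code, but it's the applyCLUTToFloats() I hand-waved earlier:

#include <stdint.h>
#include <stddef.h>

// Assumed CLUT layout: a display window plus a 256-entry packed-RGBA table.
typedef struct {
    float    windowMin, windowMax;
    uint32_t table[256];
} CLUT;

// Rescale each float sample into 0..255 and look up its display colour.
// This runs on whichever node owns the monitor, after the raw floats arrive.
void applyCLUTToFloats(const float *src, uint32_t *dst, size_t count, const CLUT *clut)
{
    float scale = 255.0f / (clut->windowMax - clut->windowMin);
    for (size_t i = 0; i < count; i++) {
        int idx = (int)((src[i] - clut->windowMin) * scale);
        if (idx < 0)   idx = 0;       // clamp values outside the window
        if (idx > 255) idx = 255;
        dst[i] = clut->table[idx];    // packed RGBA out, ready for glTexImage2D
    }
}

So when the user changes a CLUT, the displaying node re-pulls the raw floats, runs this, and re-uploads the texture; the rest of the cluster only needs to hear about the new CLUT values, not the pixels.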

I'm not doing GIS, which is what @chown33 seems to be suggesting, but I do need really quick access to this large database, since I don't know which portions of it the user will want to look at, or transform into a volume-rendered image instead of a series of 2D images.

I'm trying to get a grasp of how feasible this might be and where I could run into problems with network speeds, memory access, rendering, and so on. I know little about DO, OpenGL, etc., and I'm afraid there's some pitfall out there that I can't see.

-S!
 

lazydog

macrumors 6502a
Sep 3, 2005
709
6
Cramlington, UK
Hi

Sounds like you have your server and nodes all on the same network. Is it possible to put additional network cards in your server?

b e n
 

sfurlani

macrumors newbie
Original poster
Feb 4, 2010
10
0
@lazydog, yes. All the new Mac Pros come with two Ethernet ports for that reason, I think.

@Cromulent, the images come in DICOM format, which supports various pixel formats, but the data I retrieve from the program is stored as a float* buffer in memory (the NSData).

-S!
 