This hurts. SO. MUCH. Because I can't get one anymore.

[Image: NVIDIA Tesla C1060, three-quarter view (Tesla_c1060_3qtr_low.png)]


No video out means that this would be used in conjunction with Snow Leopard ONLY, as you would need another card to display the image rendered by...

Wait for it...

Eight Gainestown Xeon cores and the 240 cores of the Tesla GPU.

Do you want to see the whole report? You can dump the same I/O Registry info on your comp too. Have a gander.

/usr/sbin/ioreg -l -w80
 
The Tesla info comes up under the nVidia 8xxx driver sub... thingy for me.

I don't know if that's just because I'm using an 8600M, or if that's the same for all computers, but...

Okay, I really don't know what that means. :p

We need to find out if there are Tesla kexts in Snow Leopard; that'll tell us for sure.
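Until someone checks, here's a rough C sketch using IOKit that walks the same registry tree that ioreg dumps and flags any entry with "Tesla" in its name; the substring scan is just my quick-and-dirty approach, not how Apple does anything:

/* build: cc tesla_scan.c -framework IOKit (filename is arbitrary) */
#include <stdio.h>
#include <string.h>
#include <IOKit/IOKitLib.h>

int main(void)
{
    io_iterator_t iter;
    /* Recursively walk the service plane of the I/O Registry,
       i.e. the same tree `ioreg -l` prints. */
    if (IORegistryCreateIterator(kIOMasterPortDefault, kIOServicePlane,
                                 kIORegistryIterateRecursively, &iter) != KERN_SUCCESS)
        return 1;

    io_object_t entry;
    while ((entry = IOIteratorNext(iter)) != 0) {
        io_name_t name;
        if (IORegistryEntryGetName(entry, name) == KERN_SUCCESS &&
            strstr(name, "Tesla") != NULL)
            printf("found: %s\n", name);
        IOObjectRelease(entry);
    }
    IOObjectRelease(iter);
    return 0;
}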
 
Yeah, apparently we ran it on the same computer. I don't think it really means anything. Would be cool, though, if Apple used Tesla and SLI tech.
 
You do realize that Tesla is nothing but a GT200/GT200b chip (GeForce GTX 280/285) with a premium price tag?

Personally, it's nothing to be overly impressed by. Granted, it will be the fastest graphics card ever available in Mac OS X.
 
Bring on options for the 260 55nm, 285, and 295, as well as ATI models. *chirkles* I wish.
 
when they get here, how much of a performance difference should i expect between the top model and the next best processor? i keep reading that clock speed won't matter as much as architecture in the next generation of models, but i don't know if that means we should expect a lot more bang per Hz.


for a wee bit o' background, i'm thinking that i'd like the capability to work in uncompressed HD, which means i'll need at least one SAS drive, which in turn means i'll need the RAID card. the combined cost of these items means two things for me... not much additional RAM at the time of purchase (probably only 6GB or less), and opting for the second-best processor option.

then again... i suppose that i could just stick a thousand bucks worth of high speed drive stuff on my credit card if the need ever actually arises to work in 4:4:4 uncompressed 10-bit HD at home. by then i'll either be making decent money or the drives will be cheaper.

... no matter what, i'll be able to work in uncompressed SD and ProRes HD, but without fast drives uncompressed 10-bit is not a possibility, let alone 2K or 4K three or four years down the road (for which i'd need way more RAM anyway, not to mention that extra processor power i'm potentially turning down now).

I'm a bit late to this party, and I'm no expert in NLE, but I can possibly help you understand where some of the bottlenecks will be...

SAS and SATA both refer to transfer interface technologies, which currently support up to 3Gb/s transfer speeds (6Gb/s is on the way).

The physical hard drive itself is usually the limiting factor, and its max sustained data transfer rate (STR) is dependent on platter RPM and cache size. A 15k RPM drive will max out around 135MB/s (roughly 1Gb/s), while a typical 7200 RPM desktop drive will max out around 70MB/s. As you can see, the bottleneck in your storage system is not the interface but the physical media.

The latest solid state disks are starting to advertise speeds well above physical disks now... even 15k RPM disks.

To get amazing read performance from your HDs you really need RAID 0 or RAID 10, which provide parallelism... With RAID 0, STR scales with the number of disks.

Cache makes an enormous difference as well (particularly on write operations). Cache can be implemented at three levels, and the more you have of each the better: 1) on the drive itself, 2) on the RAID card, and 3) using system memory (only used for read-ahead cache and managed by the OS; I'm not sure if OS X uses main memory for this purpose).

Based on a quick search, it seems like uncompressed HD is about 400MB/s... :eek: This is going to require a monster system with as much HD throughput, RAM, and CPU as you can afford. Even then, I'm not sure you will be pleased with the performance when working on such material.

I think that these days, with CPU power being relatively abundant and cheap relative to solid state disks and high-performance memory, you are better off getting the fastest storage system you can afford... 8 solid state disks in RAID 10 would be nirvana, or two arrays of 4 disks each for different tasks might be even better depending on the application... but even a pair of SSDs in RAID 0 would be better than spending money on more CPU. Then I would buy as much RAM as I could possibly justify... the less swapping to disk you need to do, obviously the better. At 400MB/s you need a gig of RAM to store just a couple of seconds of video! :eek:
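Out of curiosity, here's a quick back-of-envelope check of those numbers in C (a sketch built on the rough per-disk STR figures above, not on benchmarks):

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double target_mb_s = 400.0; /* uncompressed HD estimate above  */
    const double str_15k     = 135.0; /* rough STR of one 15k RPM disk   */
    const double str_7200    =  70.0; /* rough STR of one 7200 RPM disk  */

    /* RAID 0 STR scales roughly linearly with the number of disks. */
    printf("15k RPM disks needed in RAID 0:  %.0f\n", ceil(target_mb_s / str_15k));
    printf("7200 RPM disks needed in RAID 0: %.0f\n", ceil(target_mb_s / str_7200));

    /* The RAM point: 1GB buffers only a couple of seconds at this rate. */
    printf("Seconds of video per GB of RAM:  %.1f\n", 1024.0 / target_mb_s);
    return 0;
}

That works out to 3 of the 15k drives (or 6 desktop drives) striped just to hit the target, and about 2.6 seconds of footage per gigabyte of buffer.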

I'm curious what kinds of systems are used to edit uncompressed HD video! :confused:
 
I work at a post facility which does not work with uncompressed HD; we do uncompressed SD just fine on dual 3GHz Xeons with Kona cards (these provide hardware acceleration for ProRes, HDV, etc.).
We work with ProRes 4:2:2 HQ for finishing HD material... most people can't tell the difference.

Working with 4:4:4 uncompressed 10-bit HD is an expensive undertaking... usually they have a fiber network that can do 1700Mb/s.
The systems that usually work with this stuff are Discreet Smoke systems, da Vinci R400s, and Discreet Infernos... if someone wants to see what specs they are sporting, I'd be curious to know...

Anyway... those systems are $300,000+.
 
There are no personal-use programs that can take advantage of those 8 cores.

"Grand Central" in Snow Leopard makes all apps, even single-threaded, take advantage of multiple cores. Grand Central virtualizes the cores into general compute resources, as OpenCL does with GPU cores. Togther they allow the OS to farm out chunks of executable code (well-defined loops for example) to be handled by generalized compute resources. The OS wraps that with scheduling and semaphores. :apple:
 
"Grand Central" in Snow Leopard makes all apps, even single-threaded, take advantage of multiple cores. Grand Central virtualizes the cores into general compute resources, as OpenCL does with GPU cores. Togther they allow the OS to farm out chunks of executable code (well-defined loops for example) to be handled by generalized compute resources. The OS wraps that with scheduling and semaphores. :apple:

Something that isn't clear to me: is Grand Central like OpenCL/GL (an API), or is it something the kernel runs that would affect all applications? Because the way some people talk about it, they make it seem like it's running in the background making magic happen without any programmer interaction required.
 
There isn't any programmer action required :rolleyes:
 
So it isn't an API, but something that is a part of the kernel. That is cool. Then even current applications would see speed increases from running under Snow Leopard.

Has Apple said how many cores Snow Leopard will be able to support? Win2k8R2 is supposed to support up to 256 cores.


Is the TeslaGLContext stuff remnants of CUDA?
 
"Grand Central" in Snow Leopard makes all apps, even single-threaded, take advantage of multiple cores. Grand Central virtualizes the cores into general compute resources, as OpenCL does with GPU cores. Togther they allow the OS to farm out chunks of executable code (well-defined loops for example) to be handled by generalized compute resources. The OS wraps that with scheduling and semaphores. :apple:

Wait, say that again? You mean, Grand Central does a JIT recompilation of apps, so that they can run in parallel? Or something like that? That sounds very exciting, and a bit complicated (for Apple, at least).

I thought Grand Central was just some re-branded threading model or something, and Snow Leopard was going to be all about the OpenCL, but this Grand Central thing sounds really cool.
 
lol, I don't actually know the correct answer. Judging by what I have read and my knowledge, it seems obvious that "Grand Central" is an automated thing and will handle processing by itself.
 
From Apple's own Snow Leopard Page:

"“Grand Central,” a new set of technologies built into Snow Leopard, brings unrivaled support for multicore systems to Mac OS X. More cores, not faster clock speeds, drive performance increases in today’s processors. Grand Central takes full advantage by making all of Mac OS X multicore aware and optimizing it for allocating tasks across multiple cores and processors. Grand Central also makes it much easier for developers to create programs that squeeze every last drop of power from multicore systems."

So it takes developer effort: it's not automatic at an application level.
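For concreteness, here's a hedged sketch in C of the kind of opt-in code that implies, using the libdispatch C API; render_tile is a hypothetical stand-in for an app's real work:

#include <stdio.h>
#include <dispatch/dispatch.h>

/* Hypothetical stand-in for an expensive, independent unit of work. */
static double render_tile(int tile) { return tile * 3.14; }

int main(void)
{
    dispatch_queue_t bg = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_group_t group = dispatch_group_create();

    /* The opt-in part: the developer explicitly submits independent
       chunks of work; GCD spreads them across the available cores.
       Code that stays on one thread gets no automatic speedup. */
    for (int tile = 0; tile < 4; tile++) {
        dispatch_group_async(group, bg, ^{
            printf("tile %d -> %f\n", tile, render_tile(tile));
        });
    }

    dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
    dispatch_release(group);
    return 0;
}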
 
Oh. :(

Well, I guess that's more work for developers then. :)

Developer jobs that require a Mac, no less.
 
While it will require dev effort to get the most out of it, by optimizing the OS for allocating tasks we should probably see performance gains at the application level, since the OS is better at handing apps resources.
 

Thanks... I also found this, which helps plan storage for editing video...

The storage and data rates for uncompressed video are listed below.


720p HDTV uncompressed:
8-bit @ 1280 x 720 @ 59.94 fields = 105 MB/s, or 370 GB/hr.
10-bit @ 1280 x 720 @ 59.94 fields = 140 MB/s, or 494 GB/hr.

1080i and 1080p HDTV uncompressed:
8-bit @ 1920 x 1080 @ 24fps = 95 MB/s, or 334 GB/hr.
10-bit @ 1920 x 1080 @ 24fps = 127 MB/s, or 445 GB/hr.

8-bit @ 1920 x 1080 @ 25fps = 99 MB/s, or 348 GB/hr.
10-bit @ 1920 x 1080 @ 25fps = 132 MB/s, or 463 GB/hr.

8-bit @ 1920 x 1080 @ 29.97fps = 119 MB/s, or 417 GB/hr.
10-bit @ 1920 x 1080 @ 29.97fps = 158 MB/s, or 556 GB/hr.

720p and 1080i/1080p HDTV RGB (4:4:4) uncompressed:
10-bit @ 1280 x 720 @ 60fps = 211 MB/s, or 742 GB/hr.
10-bit @ 1920 x 1080 @ 24PsF = 190 MB/s, or 667 GB/hr.
10-bit @ 1920 x 1080 @ 50i = 198 MB/s, or 695 GB/hr.
10-bit @ 1920 x 1080 @ 60i = 237 MB/s, or 834 GB/hr.

I guess the figure I had previously found (400MB/s) for uncompressed HD must have been for 1080p at 60fps.
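If anyone wants to sanity-check the table, here's a rough C sketch that reproduces a few rows from first principles. The bytes-per-pixel packing figures are my assumptions, picked to match the table (2 bytes for 8-bit 4:2:2; 16 bytes per 6 pixels for 10-bit 4:2:2, v210-style; 30 bits padded to 32 for 10-bit 4:4:4), not something the source states:

#include <stdio.h>

/* Data rate for one uncompressed format. MB = 2^20 bytes and
   GB = 2^30 bytes, which is how the table above appears to count. */
static void rate(const char *label, int w, int h, double fps, double bytes_per_px)
{
    double bps = (double)w * h * fps * bytes_per_px; /* bytes per second */
    printf("%-24s %4.0f MB/s  %4.0f GB/hr\n", label,
           bps / (1 << 20), bps * 3600.0 / (1u << 30));
}

int main(void)
{
    rate("720p60 8-bit 4:2:2",   1280,  720, 59.94, 2.0);        /* ~105 MB/s, ~370 GB/hr */
    rate("1080p24 8-bit 4:2:2",  1920, 1080, 24.0,  2.0);        /* ~95 MB/s,  ~334 GB/hr */
    rate("1080p24 10-bit 4:2:2", 1920, 1080, 24.0,  16.0 / 6.0); /* ~127 MB/s, ~445 GB/hr */
    rate("1080p24 10-bit 4:4:4", 1920, 1080, 24.0,  4.0);        /* ~190 MB/s, ~667 GB/hr */
    return 0;
}

The computed figures land within a megabyte or so of the table's, which suggests those are the packing conventions behind it.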
 

great links. thanks!

for now, i think it's best to concentrate on processor power and RAM after all, so that i can do what i'm already doing as fast as i can (FCP 1080p24 in DVCPRO or ProRes, and After Effects for the same projects). 15,000 RPM drives would be nice if i could swing it, but i can't.

as it is, i plan to get the 2nd-best processor and a 3rd-party RAID card, and stripe my existing 7200 RPM internal drives. then i'll be SAS 15,000 RPM drive-ready if i ever need to take that route, only requiring the new drives and an enclosure for my current drives. these enclosures ( http://www.macgurus.com/productpages/sata/BurlyPortMultiEncl.php ) are intriguing.
 