This is awesome.

I am getting more excited for Snow Leopard by the day. Great to see Apple focusing on the "under the hood" tech... rather than trying to distract me with new features.
 
I'm surprised nobody has realized this, but it's very UNlikely that this has anything to do with the OS handling video rendering or decoding differently.

This difference is much more likely the result of the entire bus speed and RAM speed on the new machines being 40% faster than on their predecessors.

Everyone gets caught up in processor speeds and ports, but the really important jump here (albeit the geekiest and the most difficult for consumers to understand) is that the throughput on the entire board is drastically improved with the new chipset. RAM is faster, the bus is faster, and the L2 cache is doubled. Those things have a dramatic effect on speed across the board.

Hate to break it to you, but those have absolutely no impact on the CPU time an app takes. If an app is blocked on I/O, it isn't using CPU. And RAM fetch improvements alone don't account for a drop from 100% CPU to 20% CPU. Quite simply, the code on the CPU is doing much, much less work at the exact same 'speed'.

Odds are very much that there is hardware decoding going on. And per another post, it looks like someone found the kext, too. ;)
 
I am getting more excited for Snow Leopard by the day. Great to see Apple focusing on the "under the hood" tech... rather than trying to distract me with new features.

Me too. Yet there are so many people out there who don't understand that there are things under the hood. They think that if something changes, it will always be big and flashy. If it's all under-the-hood stuff, they'd just say "how is this any different from 10.5?" and not buy Snow Leopard.
 
It would be nice if Snow Leopard, or even just a new driver update, improved H.264 playback on the 8000-series GPUs in older equipment.

On my 2.4GHz early 2008 MBP, playing back 1920x1080 content at ~2400kbps takes 115% of my CPU power (I'm guessing both cores maxed out would be 200%), but it still seems to play smoothly, as best I can tell.
 
So, for the user who doesn't know most of this lingo (not saying it's me, :rolleyes:), what does H.264 GPU encoding mean?

H.264 = A modern standard for video compression. It provides great quality and a relatively small file size.
GPU = Graphics Processing Unit (The processor on your graphics-card).
Encoding = In this context, converting, say, a DVD Video file (MPEG-2 format), into the H.264 format.

So, H.264 GPU encoding means that the GPU can take care of converting, say, an MPEG-2 video to H.264 (which is a far superior format to MPEG-2).

Video encoding will normally use all of your CPU time, leaving your machine nearly unusable (very, very slow). That's why GPU encoding would be a very useful feature for a lot of people.
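To make that concrete, here's a minimal sketch using today's macOS AVFoundation API (which didn't exist in this form back in this thread's era) of an H.264 re-encode that the system can hand off to a hardware encoder when one is present. The file paths are placeholders, and whether the hardware encoder actually gets used depends on the machine.

```swift
import Foundation
import AVFoundation

// Sketch only: re-encode a source movie to H.264 with AVAssetExportSession.
// On Macs with a hardware encoder, the framework can offload the encode
// instead of burning CPU time on it. Paths below are placeholders.
let source = AVAsset(url: URL(fileURLWithPath: "/path/to/input.mov"))
guard let export = AVAssetExportSession(asset: source,
                                        presetName: AVAssetExportPreset1920x1080) else {
    fatalError("No export session available for this preset")
}
export.outputURL = URL(fileURLWithPath: "/path/to/output.mp4")
export.outputFileType = .mp4

// Export runs asynchronously; block until it finishes so a command-line
// tool doesn't exit early.
let done = DispatchSemaphore(value: 0)
export.exportAsynchronously {
    print("Export finished, completed: \(export.status == .completed)")
    done.signal()
}
done.wait()
```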
 
So, H.264 GPU encoding means that the GPU can take care of converting, say, an MPEG-2 video to H.264 (which is a far superior format to MPEG-2).

Video encoding will normally use all of your CPU time, leaving your machine nearly unusable (very, very slow). That's why GPU encoding would be a very useful feature for a lot of people.

Although this topic is about the GPU decoding H.264 video, not encoding it.
 
Hate to break it to you, but those have absolutely no impact on the CPU time an app takes. If an app is blocked on I/O, it isn't using CPU. And RAM fetch improvements alone don't account for a drop from 100% CPU to 20% CPU. Quite simply, the code on the CPU is doing much, much less work at the exact same 'speed'.

Odds are very much that there is hardware decoding going on. And per another post, it looks like someone found the kext, too. ;)

That's a huge oversimplification. Sure, the faster video card is helping the process along quite a bit, but the fact is that RAM speeds, bus speeds, and cache all have a dramatic impact on CPU cycles. It's undeniable. You can watch it in real time by placing older machines side by side with newer ones and watching the CPU usage spike on the older machines during even simple tasks, let alone more intensive ones.

It's harder to quantify, which is why it isn't talked about as often as processor speeds, which can be clocked, but it's a fact that faster RAM and faster throughput equal a faster experience and lower CPU usage.
 
That's a huge oversimplification. Sure, the faster video card is helping the process along quite a bit, but the fact is that RAM speeds, bus speeds, and cache all have a dramatic impact on CPU cycles. It's undeniable. You can watch it in real time by placing older machines side by side with newer ones and watching the CPU usage spike on the older machines during even simple tasks, let alone more intensive ones.

It's harder to quantify, which is why it isn't talked about as often as processor speeds, which can be clocked, but it's a fact that faster RAM and faster throughput equal a faster experience and lower CPU usage.

I'd like some proof of this, please. Jumping to the 1333 MHz FSB and 6 MB of cache on Penryn over Conroe desktop processors didn't amount to much.
 
I'd like some proof of this, please. Jumping to the 1333 MHz FSB and 6 MB of cache on Penryn over Conroe desktop processors didn't amount to much.

In addition, when I ran Geekbench & Xbench on my new MacBook and MacBook Pro, the scores for CPU and memory were similar between the two, indicating that the DDR3 and 1066MHz FSB do not make much difference (at least not enough to account for almost 5X better CPU utilization). The same has been shown before for the move to the Santa Rosa platform (800MHz FSB vs. 667MHz FSB).
 
Seeing how long GPU-accelerated MPEG-2 decode has been around, I'm assuming Apple has long supported it. And now they have hardware-accelerated H.264 in their arsenal.

GPUs also have hardware-accelerated support for DivX/Xvid and VC-1/WMV9. I wonder whether Apple already supports those or is planning to. I presume they would need to cooperate with DivX and Flip4Mac, but Apple probably needs to lead by tweaking the low-level interfaces to the GPU and making them available to codecs.
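For what it's worth, on a present-day system you can ask VideoToolbox directly whether hardware decode is available for a given codec. The API below (VTIsHardwareDecodeSupported) arrived years after this thread, in macOS 10.13, so treat this as a modern-day sketch rather than anything these machines shipped with; CoreMedia has no VC-1 constant, so that one isn't queried here.

```swift
import CoreMedia
import VideoToolbox

// Sketch: ask whether the GPU/media engine can decode these codecs in hardware.
// (VTIsHardwareDecodeSupported requires macOS 10.13 or later.)
let codecs: [(name: String, type: CMVideoCodecType)] = [
    ("H.264", kCMVideoCodecType_H264),
    ("MPEG-2", kCMVideoCodecType_MPEG2Video),
    ("MPEG-4 Part 2 (DivX/Xvid family)", kCMVideoCodecType_MPEG4Video),
]

for codec in codecs {
    let hw = VTIsHardwareDecodeSupported(codec.type)
    print("\(codec.name): hardware decode \(hw ? "available" : "not available")")
}
```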
 
Seeing how long GPU-accelerated MPEG-2 decode has been around, I'm assuming Apple has long supported it. And now they have hardware-accelerated H.264 in their arsenal.

GPUs also have hardware-accelerated support for DivX/Xvid and VC-1/WMV9. I wonder whether Apple already supports those or is planning to. I presume they would need to cooperate with DivX and Flip4Mac, but Apple probably needs to lead by tweaking the low-level interfaces to the GPU and making them available to codecs.

DVD Player uses software-only decoding. Yes, really.
 
DVD Player does use an assist from the graphics card, but most of the work is done on the CPU.
 
That's a huge oversimplification. Sure, the faster video card is helping the process along quite a bit, but the fact is that RAM speeds, bus speeds, and cache all have a dramatic impact on CPU cycles. It's undeniable. You can watch it in real time by placing older machines side by side with newer ones and watching the CPU usage spike on the older machines during even simple tasks, let alone more intensive ones.

It's harder to quantify, which is why it isn't talked about as often as processor speeds, which can be clocked, but it's a fact that faster RAM and faster throughput equal a faster experience and lower CPU usage.

I would think that CPU spike would be largely down to it being an older machine, with a slower, less capable CPU. You'd have to get the same chipset, same everything else EXCEPT the bus speed. The scientific method: only one variable, please.

If a processor is delivering HD video to you at 30fps, then the processor differences are the processor differences and not related to the bus. If one machine can only cough up 15fps while the other produces 30fps, then you need to look at the interaction of bus and CPU (and HDD) to determine which is the bottleneck.
 
I can verify this is true, too. On my Mac mini and my old MacBook Pro, the 1080p Quantum of Solace trailer ate a ton of CPU cycles (120% on the Mac mini, as I recall, out of 200% across two cores). On my new 2.53GHz MacBook Pro, the same trailer eats around 12%!
 
Can those with the new MB/MBP tell us the versions and builds of your QuickTime Player app and QuickTime itself (also shown in the QTP "About" window)? Just want to make sure Apple isn't shipping a special version of QT. Mine are QTP 7.5.5 (249.13) and QT 7.5.5 (990.7).

The build of QuickTime is newer (QTP 7.5.5 (249.24) and QT 7.5.5 (995.22.3)), but I have tried copying this newer version into my older MacBook Pro and it made no difference (I think the actual decoder is somewhere else on the system).
 
That's a huge oversimplification. Sure, the faster video card is helping the process along quite a bit, but the fact is that RAM speeds, bus speeds, and cache all have a dramatic impact on CPU cycles. It's undeniable. You can watch it in real time by placing older machines side by side with newer ones and watching the CPU usage spike on the older machines during even simple tasks, let alone more intensive ones.

CPU is the same in both cases, so we can scratch that.
Bus speed is 25% faster in perfectly ideal conditions.
Unfortunately, streaming video from disk tends to be bottlenecked more by disk I/O than by RAM. You will see much, much smaller gains in the time it takes to decode a single frame.

And to give you a comparison of the difference we are seeing here: 400% faster. The change in northbridge simply /cannot/ account for that sort of gain, even if we assume ideal conditions.

Even if we assume the devs have been optimizing, and ideal performance gains from the faster chipset, 375% worth of optimization is just too much to reasonably account for in the time they had.

It's harder to quantify, which is why it isn't talked about as often as processor speeds, which can be clocked, but it's a fact that faster RAM and faster throughput equal a faster experience and lower CPU usage.

Which is true, but again, the changes in the architecture cannot account for this perf gain. Period. It simply doesn't come close to adding up.

The only logical explanation in this case is that the hardware decoding feature of the GPU is being used. Not that the GPU being faster is the reason, but that a feature that was previously unused is now being used.

Now, if we see similar perf gains in other apps without code changes, you have a point... but we aren't seeing them.
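A quick back-of-envelope check of that arithmetic (with my own illustrative numbers, not anything from an Apple spec): even granting the chipset argument its absolute best case, it falls far short of the observed drop.

```swift
// Rough Amdahl's-law sketch with assumed numbers: only the memory/bus-bound
// fraction of the decode gets faster, and the new bus is at most ~25% faster.
func speedup(memoryBoundFraction f: Double, busSpeedup s: Double) -> Double {
    1.0 / ((1.0 - f) + f / s)
}

// Best case: decode is 100% memory-bound -> 1.25x, nowhere near the ~5x
// implied by CPU usage falling from roughly 100% to 20%.
print(speedup(memoryBoundFraction: 1.0, busSpeedup: 1.25))  // 1.25
// More realistic: ~30% memory-bound -> about 1.06x.
print(speedup(memoryBoundFraction: 0.3, busSpeedup: 1.25))  // ~1.06
```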
 
The build of QuickTime is newer (QTP 7.5.5 (249.24) and QT 7.5.5 (995.22.3)), but I have tried copying this newer version into my older MacBook Pro and it made no difference (I think the actual decoder is somewhere else on the system).


Can you confirm you do not have Perian installed on the older MacBook?
 
So, a little more data.

The latest version of Snow Leopard seeded to certain individuals does not yet cause similar improvements on older Macs.

arn
 