This is a waste of time. OpenCL will allow GPUs to accelerate h.264 encoding. And even without OpenCL, ATI has already partnered with Cyberlink PowerProducer to allow GPU-accelerated encoding, and nVidia is doing the same with the BadaBOOM software.
*snip* And of course, all discrete GPUs in the current Apple lineup already accelerate h.264 decode with Blu-ray support, although *snip*
Yep, you are totally correct. I agree with you and the others here that this rumor doesn't make any sense. At this point, full H.264/VC-1 decoding is done by all relatively new discrete GPUs, and now finally on the Intel "Montevina" X4500 integrated graphics chipset.
Encoding on GPUs is still in its infancy, but nVidia and AMD both support H.264 encoding on their newer cards. As you mentioned, software support is very limited at the moment, although I know there are a few different companies creating updates for their software, or plugins for commercial products like Adobe Premiere, After Effects, etc., that enable H.264 encoding on different GPUs.
So although a small, dedicated H.264 encoding chip would probably outperform all but the most powerful GPUs at the task, there is really no reason to include one. The MacBook and Mini just need a low-end discrete GPU from nVidia or ATI, and they would be capable of accelerating both H.264 decoding and encoding.
*Adding* video hardware? Don't many Macs already have video hardware that at least does hardware encoding, but it sits there unused because Apple doesn't provide the software drivers for it? Wouldn't adding software support for the stuff they're already shipping be the first step?
Video cards are not made to encode and decode H.264. The processor still does that before passing it on to the video card.
Yes they are. ATI/AMD have UVD, NVIDIA has PureVideo, and Intel now has a unit in their latest chipset. This isn't new stuff either; UVD and PureVideo have been around for a couple of years, and they provide a real, noticeable benefit and reduce power consumption on mobile platforms too.
Hattig is correct. Here is a chart I made to show video DEcoding acceleration support among ATI, Nvidia, and Intel graphics chipsets:
* All systems below offer MPEG2 decoding support (DVD)
Intel
GMA 950 (pre-Santa Rosa) - NO support for H.264/VC1 decoding.
GMA X3100 (Santa Rosa) - NO support for H.264. Limited support for VC1 decoding.
GMA X4500 (Montevina/future) - Full H.264, Full VC1 decoding.
Nvidia
PureVideo HD 1 - Very limited H.264, Very limited VC1 decoding.
Available on Geforce 7900 and older "G80" based versions of 8800
PureVideo HD 2 - Full H.264 / Partial VC1 decoding.
Available on Geforce 8300, 8400, 8600, 8700, and newer "G92" based 8800 (this includes 8800GT for Mac Pro, and all 8800 laptop cards)
PureVideo HD 3 - Full H.264, Full VC1 decoding.
Introduced with GTX 260/280 (and some newer 9-series models like the Geforce 9600GT)
ATI/AMD
UVD/UVD+ - Full H.264, Full VC1 decoding. Greater offloading for both codecs than Nvidia's PureVideo 2.
Available on all Radeon HD 2xxx and 3xxx series
UVD2 - Full H.264/VC1/MPEG2. Adds support for Blu-ray Profile 2.0 / BD-Live, and Picture-in-Picture.
Available on Radeon HD 4800 series
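(If it's useful, the chart above boils down to a small lookup table; here's a rough sketch of the same data in Python. The key and field names are my own, not from any driver or API.)

```python
# Rough lookup table mirroring the decode-support chart above; names are mine.
DECODE_SUPPORT = {
    "Intel GMA 950":   {"h264": "none",    "vc1": "none"},
    "Intel GMA X3100": {"h264": "none",    "vc1": "limited"},
    "Intel GMA X4500": {"h264": "full",    "vc1": "full"},
    "PureVideo HD 1":  {"h264": "limited", "vc1": "limited"},
    "PureVideo HD 2":  {"h264": "full",    "vc1": "partial"},
    "PureVideo HD 3":  {"h264": "full",    "vc1": "full"},
    "ATI UVD/UVD+":    {"h264": "full",    "vc1": "full"},
    "ATI UVD2":        {"h264": "full",    "vc1": "full"},  # adds BD-Live, PiP
}

def fully_decodes(chip: str, codec: str) -> bool:
    """True if the chart lists full hardware decode for that chip/codec."""
    return DECODE_SUPPORT.get(chip, {}).get(codec) == "full"

print(fully_decodes("PureVideo HD 2", "vc1"))  # False -- only partial VC1
```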
h.264 encoding is absolutely a compromise between processing power used and quality. There are two main reasons for this.
The first reason is that a lot of the improvements in h.264 compared to MPEG-2, for example, come from the fact that h.264 allows the use of a variety of different algorithms. Some algorithms work better for some scenes (or parts of some scenes), some work better in others. To make use of this, an encoder needs to try, say, 16 different methods of encoding the video data, and then pick the one that gives the best/smallest result. Of course, trying 16 different methods takes longer than trying only four or only one. ...*snip*
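(To illustrate the point with a toy sketch: the snippet below uses zlib's compression levels as a stand-in for an encoder's candidate modes, since the principle is the same. Try several methods on the same data, keep whichever result is smallest, and pay roughly N times the CPU for trying N candidates. The function names and numbers are mine, not from any real H.264 encoder.)

```python
import zlib

# Toy stand-in for H.264 mode decision: zlib's compression levels play the
# role of the encoder's candidate prediction modes. Try each one, keep the
# smallest output -- N candidates means roughly N times the work.

def best_encoding(block: bytes, levels=range(1, 10)):
    best_level, best_out = None, None
    for level in levels:                       # "try 16 different methods..."
        out = zlib.compress(block, level)
        if best_out is None or len(out) < len(best_out):
            best_level, best_out = level, out  # "...pick the smallest result"
    return best_level, best_out

if __name__ == "__main__":
    sample = b"some repetitive sample frame data " * 200
    level, out = best_encoding(sample)
    print(f"best level {level}: {len(sample)} -> {len(out)} bytes")
```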
Thanks for the information!
You're not mistaken and they don't trace vectors on-the-fly, at differing graphics depths, w/ or w/o anti-aliasing in independent views, etc.
People are also forgetting that Resolution Independence would need this dedicated chipset [vector pipelines on steroids, with a DSP on a separate chip to handle other aspects of encoding/decoding] to do the heavy lifting of the matrix transforms without taxing the CPU and lagging the WindowServer.
*snip*
What? All modern GPUs offer 2D hardware acceleration, which should provide plenty of power for a resolution-independent vector GUI. There is no conceivable reason why you would need a dedicated "vector art" accelerating DSP... Am I misinterpreting your point here?
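(Just to put a rough number on how light that math actually is: a resolution-independent UI scale is basically an affine transform applied to geometry before rasterization. Here's a tiny illustrative sketch in plain Python; the scale factor and button coordinates are made up.)

```python
# A resolution-independent scale is just an affine transform on 2D points
# (a 3x3 matrix in homogeneous coordinates). A couple of multiplies and adds
# per vertex -- well within what ordinary 2D-accelerated hardware handles.

SCALE = 1.5  # hypothetical user-space points -> device pixels factor

def scale_matrix(s):
    return [[s, 0, 0],
            [0, s, 0],
            [0, 0, 1]]

def transform(point, m):
    x, y = point
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

# A 100x20 pt button becomes 150x30 px.
button = [(0, 0), (100, 0), (100, 20), (0, 20)]
print([transform(p, scale_matrix(SCALE)) for p in button])
```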