
pohl

macrumors regular
Oct 18, 2005
176
53
Lincoln, Nebraska
Apple is just saying that users of its (now) very old PowerMac line will have to make do with Leopard instead of Snow Leopard.

AFAIK, Apple has said no such thing. They've been entirely silent on the subject. The notion that SL won't run on PPC was invented by people trying to read the tea leaves, and now it is taken as gospel. So, a better response to HyperZboy would have been [citation needed].
 

deconstruct60

macrumors G5
Mar 10, 2009
12,286
3,882
By the time Snow Leopard drops later this summer it will have been over three years since Apple sold their last G5. There is not a reason to continue to develop for a legacy machine.

3-year-old machines are "legacy"? That's ridiculous. AppleCare runs for 3 years. So you're saying that after AppleCare runs out, Apple won't support the product anymore? Really?

Mac OS X is coming out on roughly 2-year cycles. If Apple chops off support, then they can only sell one follow-on upgrade instead of two. Isn't 2x $129 a larger amount than $129? The profit margin on software is substantially higher than on hardware. There is extremely little customer value in forcing people off a platform when you are not offering something with substantially more value than what you are trying to replace. The hardware should sell itself, not be propped up by screwing around with the software to sell more hardware. That shows a trend of slacking off on offered value.


These decisions will not haunt Apple. The company is still making money, and lots of it in comparison to Microsoft, which posted their first ever loss.

What alternative universe did that happen in???? Microsoft made $2.98B
in profits last quarter. Apple made $1.05B in profits. 2.98 is a positive number (bigger than zero) and almost 3x as big as 1.05.

If Microsoft's profits dropped 32% for two more quarters, they would still be in the black. They have already axed folks to help negate that drop (which is part of the reason the profit decline is much larger than the revenue decline, which was only 6%). The news about Microsoft's profits was that they aligned with the economy and were not impervious to it. They were not on some magical growth carpet anymore. Not sure why that is news; the signs have been there for a long time.


If Apple can do no wrong, how come they canned folks at their existing stores

http://news.cnet.com/8301-13579_3-10226486-37.html

and are going backwards in year-over-year same-store sales?



Bad corporate governance would be to dedicate developers' time to working on software for a ppc machine in a bad economy.

Bad corporate governance would be not allocating money to cover the overhead of switching platforms. Also somewhat dubious would be to use one part of the company to prop up the other side. The product divisions should make money on their own.

It is perfectly fine to work on software that will make money. Apple has $25B in funds to "loan" against an investment, so the "bad economy" argument is moot. Besides, the decision to limit PowerPC targeting in Snow Leopard likely happened over a year ago.

People tend to hold and run Apple boxes longer, not shorter. Certainly, there are folks who "have to have" the newest, fastest, prettiest Mac. But they are not the bulk of folks out there. That was one reason folks bought Macs: because they had some confidence they were not being hustled onto a "must buy" treadmill.
 

SydneyDev

macrumors 6502
Sep 15, 2008
346
0
The notion that SL won't run on PPC was invented by people trying to read the tea leaves, and now it is taken as gospel.

It's a bit better supported than that - the betas have all required Intel boxes (or so I have heard).
 

pohl

macrumors regular
Oct 18, 2005
176
53
Lincoln, Nebraska
It's a bit better supported than that - the betas have all required Intel boxes (or so I have heard).

Yeah, those are the 'tea leaves' I mentioned. There's no way to know what this says about the final product. It's no more sensible than claiming that Safari 4 beta won't work in SL because it was broken in the last seed.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,286
3,882
Hi all! I've been lurking for a while, but this is my first post.

I was wondering if implementation of open CL in 3rd party applications and plugins is determined by OSX or by the 3rd party developer?

Will I be able to take advantage of the technology with existing 3rd party software, or do I have to wait until the developer has made some kind of update?

Thanks.

Most likely yes, you will need to wait until the 3rd party applications rewrite their code to explicitly call OpenCL routines.

There may be some cases where Apple has changed parts of the libraries that 3rd party applications already call, so that the framework itself calls OpenCL. An example might be an app that calls Core Animation, where perhaps some aspects of it get farmed out via OpenCL. (Somewhat unlikely, since the graphics stuff was already pretty well hooked into the GPU anyway, but perhaps some of the physics (how to model object movement) could be farmed out.)


The catch-22 that many folks are overlooking is that the GPU is a finite resource. So if you fire up some heavy-duty, realistic 3-D game/program, there will not be as many "spare", unused cycles on the GPU for your heavy number crunching. However, most of the time folks aren't running such workloads, so applications can get more out of those spare cycles.
 

HailToTheVictor

macrumors regular
Feb 1, 2007
179
0
The error in your argument is that people paying lots of money for Macs expect them to last for a long time, so when the person who paid a lot of money for a G5 finally buys a new machine, they will evaluate how much value they got for their money the last time they bought a Mac. I am sure Apple can count on people eventually spending $2,500 on a new tower, but they can't force people to spend $2,500 on a new Macintosh tower.

You didn't mention any error, just a difference of opinion. I consider the three years and two months that I have had my Yonah-based MBP to be a very good lifetime for a machine. Yes, I understand that my machine will have a longer lifespan than a PPC, but Apple announced they were switching to Intel over a full year before the Mac Pro was released. The person who bought the machine in question also made it sound as if they purchased the machine very recently (the current economy was mentioned), which would make your argument not a very good one. I am willing to bet that nearly anyone who purchased a PM or an MP would buy one again. They are great machines, but nearly four years since it was revealed that the technology was dead to Apple is long enough to stop developing for future OSs.
 

snowmoon

macrumors 6502a
Oct 6, 2005
900
119
Albany, NY
3-year-old machines are "legacy"? That's ridiculous. AppleCare runs for 3 years. So you're saying that after AppleCare runs out, Apple won't support the product anymore? Really?

From Apple support http://support.apple.com/kb/HT1752

Vintage products are those that were discontinued more than five and less than seven years ago. Apple has discontinued hardware service for vintage products with the following exception:

* Products purchased in the state of California, United States, as required by statute. Owners of these products may obtain service and parts from Apple Service Providers within the state of California, United States.

Obsolete products are those that were discontinued more than seven years ago. Apple has discontinued all hardware service for obsolete products with no exceptions. Service providers cannot order parts for obsolete products.

The most recently discontinued PPC hardware is about 3 years old (fall 2006). I see no reason why Apple would not support Leopard for another 24 months while the entire PPC line slips into vintage status. There is tons of even obsolete PPC hardware that is supported by Leopard right now.

Right now only the G4 and G5 towers even have the possibility of the necessary hardware and software to be able to even think about OpenCL. It's safe to say that probably won't happen. The fact that SL betas have only been seeded for Intel is another huge sign. If their intention is to support PPC, they have sure waited until the last minute to do more general hardware testing.
 

gnasher729

Suspended
Nov 25, 2005
17,980
5,565
Yeah, those are the 'tea leaves' I mentioned. There's no way to know what this says about the final product. It's no more sensible than claiming that Safari 4 beta won't work in SL because it was broken in the last seed.

Didn't they have betas that didn't run on laptops? So Snow Leopard will only run on desktop machines? :p
 

deconstruct60

macrumors G5
Mar 10, 2009
12,286
3,882
You're right. It's relatively easy to code the OpenCL API as Universal and make it available to PowerPC computers. However, it's a pointless waste of time. Why? Because there are basically no GPUs on PowerPC computers that would support GPGPU operation or support it with sufficient performance.

If OpenCL were solely targeted at GPGPUs, then you'd have a better point. The fact that there are no GPGPUs for the G5 class of hardware should make the port even easier (i.e., less expensive) to do. They only have to do it for the G5 CPU. That's it.

When the OpenCL calls are made and it comes time to decide where to farm out the work, it will always pick the G5.

In the big picture, it is uniformity as well as performance that is being strived for here: folks do not have to write GPGPU/CPU-specific code for the myriad of CPU/coprocessor combos out there. That software will just work the "best it can" when coded against a stable, more universally available API.

Similar to Core Image and any other Core XXXX API. On better machines you get better performance. But on slower machines you can run the same software... just slower. Since the better machines cost more, no big deal; the folks in each slot are probably getting the value they paid for.


I don't think anybody on a G5 is asking for performance better than what their hardware can do. They are asking not to be left out in the cold/code. It is up to them to decide whether running the code at a "slower" rate is acceptable or not. It is about user choice. If the newer hardware is spectacularly faster and that speed would pay for itself in their specific situation, then they will buy new hardware. Otherwise they will just make do for a couple more years.
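
The dispatch argument can be sketched in a few lines. This is hedged pseudocode in Python, with invented device records and scores rather than the real OpenCL device-query API: the runtime prefers a GPU when one is present and otherwise falls back to the CPU, so the same program runs everywhere.

```python
# Sketch of runtime device selection. Device entries are illustrative,
# not output of a real OpenCL platform/device query.

def pick_device(devices):
    """Prefer the fastest GPU if any is present; otherwise use the CPU."""
    gpus = [d for d in devices if d["type"] == "gpu"]
    if gpus:
        return max(gpus, key=lambda d: d["gflops"])
    return next(d for d in devices if d["type"] == "cpu")

intel_box = [{"name": "8800 GT", "type": "gpu", "gflops": 336},
             {"name": "Core 2 Duo", "type": "cpu", "gflops": 20}]
g5_box = [{"name": "PowerPC 970", "type": "cpu", "gflops": 8}]

print(pick_device(intel_box)["name"])  # -> 8800 GT
print(pick_device(g5_box)["name"])     # -> PowerPC 970
```

On a G5 with no usable GPGPU, the only candidate is the CPU, so the work lands there: slower, but the code still runs.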
 

deconstruct60

macrumors G5
Mar 10, 2009
12,286
3,882
So then, if this is true, why is it that the Core i7 can perform this task much faster than an older CPU, or even than a GPU doing it by itself? It would make sense that software that cuts out the middleman (the CPU) and goes straight to the genius (the GPU) would be faster and more efficient. But this does not seem to be the case...

The Core i7 has improvements to the SSE engine and also to its heavy math pipeline processing. The bandwidth improvements help stream matrix elements in and out faster.

If, to get the data to process, you have to make calls to the operating system, those calls run on the host processor. For example, if you are converting a movie on disk to another format, you need to get all the bits out of that file. That means making a bunch of OS calls to get the data out of that 'bits container'. The graphics card isn't running an OS, so it isn't going to access the disk (or other abstracted parts of the computer) itself. Memory-to-memory transfers are primarily going to be the communication medium.
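
A rough back-of-envelope model of that staging shows why the host-side work can dominate. All throughput numbers below are invented for illustration, not measurements.

```python
# Toy model: the GPU never touches the file. The host does the OS calls
# (disk read) and both memory copies; only the kernel runs on the GPU.

def gpu_offload_time(nbytes, disk_mb_s=60, pcie_mb_s=4000,
                     gpu_gflop_s=300, flops_per_byte=10):
    mb = nbytes / 1e6
    t_read = mb / disk_mb_s              # host: OS / file-system calls
    t_copy = 2 * mb / pcie_mb_s          # host <-> device, both directions
    t_kernel = nbytes * flops_per_byte / (gpu_gflop_s * 1e9)
    return t_read + t_copy + t_kernel

# For a 600 MB movie, the host-side disk I/O (10 s) dwarfs the transfers
# (0.3 s) and the kernel itself (0.02 s).
print(round(gpu_offload_time(600e6), 2))  # -> 10.32
```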
 

dernhelm

macrumors 68000
May 20, 2002
1,649
137
middle earth
It wasn't mine, it was Anand's.

And yes, there can be dramatic differences in the quality of the output (see link above). If I write a crappy h.264 encoder and then compare the output of my encoder vs. the output of x264 at the same bitrate, it will be obvious that their encoder is superior to mine. It's painfully obvious you don't follow video encoding software development; I suggest you read this thread.

Likewise, if I write a crappy encoder and then port it to CUDA and it speeds up 5x, I still have a crappy encode when it's done. You just wasted 5x less time, and it still says nothing as to the speed and quality vs. x264; it's just a comparison of Nero's own software encoder vs. their CUDA-enhanced encoder.

I'm not denying there are benefits to OpenCL; rather, the video encoding community has looked into CUDA and it's a huge pain in the ass because of CUDA's threading model. Maybe OpenCL changes that, but you're probably going to need a recent, high-end video card to make it faster than your CPU. Even offloading certain tasks like motion estimation requires a fair amount of overhead that could partially negate the benefits.

No question that CUDAs threading model is different than what you would use otherwise. But I also think it is a stretch to say that the entire video encoding community has declared CUDA a failure. Perhaps some people have decided it wasn't worth their time, that doesn't imply everyone thinks that way.

So what you essentially admitted was that you were not comparing apples to apples (which was my original point). It is painfully obvious that you didn't get what I was trying to say, so I'll say it again: you cannot compare a crappily written encoder that used the CUDA model against a well-written encoder that doesn't and declare CUDA a failure. That's just moronic. An encoder using CUDA could absolutely be written that produced the exact same result as an encoder that wasn't using CUDA. It might be really hard to do, but it could be done. It may even run slower than a standard encoder, because the multi-threading overhead might be higher than the benefit you get from the threads themselves.

I only dabble in video encoding (mostly for medical images - DICOM, etc). I'm familiar with the VC-1 codec, as well as h.264, and that's about it, so I am by no means an expert. But I can tell you that if the quality of the images I produced changed drastically because I switched to using CUDA I would assume that I did something wrong, not that there was something wrong with CUDA itself.
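
For what it's worth, the apples-to-apples comparison being argued for here is usually done with an objective quality metric at a fixed bitrate; PSNR is the crudest such metric. A tiny sketch with toy pixel lists (not a real codec):

```python
import math

def psnr(original, decoded, peak=255):
    """Peak signal-to-noise ratio in dB between two equal-length pixel lists."""
    mse = sum((a - b) ** 2 for a, b in zip(original, decoded)) / len(original)
    if mse == 0:
        return float("inf")  # identical output
    return 10 * math.log10(peak ** 2 / mse)

src  = [100, 120, 140, 160]
good = [101, 119, 141, 160]   # small errors -> high PSNR
bad  = [90, 135, 120, 180]    # big errors  -> low PSNR

print(round(psnr(src, good), 1))  # -> 49.4
print(round(psnr(src, bad), 1))   # -> 23.6
```

Two encoders compared at the same bitrate but very different PSNR (or a perceptual metric) are simply not doing the same job, regardless of which one runs on a GPU.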
 

snowmoon

macrumors 6502a
Oct 6, 2005
900
119
Albany, NY
I only dabble in video encoding (mostly for medical images - DICOM, etc). I'm familiar with the VC-1 codec, as well as h.264, and that's about it, so I am by no means an expert. But I can tell you that if the quality of the images I produced changed drastically because I switched to using CUDA I would assume that I did something wrong, not that there was something wrong with CUDA itself.

Then would you accept that, for now, the CUDA programming and threading model does not lend itself to producing fast, high-quality h.264 output, due to the limitations and overhead of the tool?

Sure, it can probably do some things faster, but you have to trade off some of the fine optimizations that x264 has implemented, which are faster on the CPU.

Will this change one day... I'm sure.
 

MrCrowbar

macrumors 68020
Jan 12, 2006
2,232
519
VisualHub > Handbrake, seriously handbrake takes twice as long to encode stuff.

Agreed. Handbrake has more options, though, and is great for high-quality DVD ripping. VisualHub kicks ass for iPhone encoding, though. I use the iPhone setting, then set the video bitrate to 512 kbit, audio to 96 kbit, and leave everything else as it is. 1-pass results are better than what QuickTime can give you.
 

mrgreen4242

macrumors 601
Feb 10, 2004
4,377
9
VisualHub > Handbrake, seriously handbrake takes twice as long to encode stuff.

Speed isn't everything. Handbrake is more customizable and can produce far superior results. Can VH even pass along 5.1 audio streams (as well as encode a DPL compliant 2-channel)? Does it support any of the advanced x264 encoder features? Is it even supported any more?
 

dernhelm

macrumors 68000
May 20, 2002
1,649
137
middle earth
Then would you accept that, for now, the CUDA programming and threading model does not lend itself to producing fast, high-quality h.264 output, due to the limitations and overhead of the tool?

Sure, it can probably do some things faster, but you have to trade off some of the fine optimizations that x264 has implemented, which are faster on the CPU.

Will this change one day... I'm sure.

I said in my original post that coding against CUDA is hard. Go back and read it. I continue to stand by that statement. So much so that I chose not to use it for what I do either (and we have a situation where the hardware this needs to run on is very controlled)!

OpenCL will hopefully make it easier. It looks to me like it will, but I haven't done anything more than read an advance copy of the spec, so I won't pretend I have anything to say from experience.

The real problem is what you said earlier: app developers will have to adopt (and/or adapt to) the OpenCL programming model for their apps. This will be easier and more beneficial than adopting CUDA, and there may be a lot of potential performance gains looming in the future if they do. But even still, the effort won't be free (or even easy), and a lot of developers simply will not bother.
 

Durendal

macrumors 6502
Apr 12, 2003
287
1
All current Macs, except the Mac Pro, ship with the 9400m chipset whether you get an ATI card or not.
Not to nitpick, but that's not quite true. The higher-end iMacs do not have a 9400M in them. They do have the Nvidia chipset, but the 9400M GPU is not included.
 

michelepri

macrumors 6502a
May 27, 2007
511
61
Rome, Paris, Berlin
Apple is an excellent platform for many uses, and I love it. But video professionals are going to have to migrate to Windows: better hardware, software, and drivers. Even Apple doesn't show interest in supporting video professionals. FCP is old and even primitive, the DVD software is gone, iMovie is a joke, Compressor is flawed, and Premiere and Avid are just better on Windows.
 

Analog Kid

macrumors G3
Mar 4, 2003
8,865
11,405
Aside from the published spec, are there any books on OpenCL programming? I've been looking, but can't find anything yet. This could be huge for some of the projects we're working on.

It doesn't make sense that the quality would be worse. It isn't as if the graphics card would multiply bits differently than a CPU.

Not sure about OpenCL, but I believe GLSL (or at least the CoreImage subset of it) only supports single-precision floats. So there may be a difference in bit multiplies...

Or I could be misremembering a hunch about a completely different technology that I only know a little about.

Does that mean that Apple might be thinking about using more powerful video cards in their desktop and mobile products? Or just trying to take advantage of what is already there? Because unless you own a Mac Pro with a honkin' video card there isn't much to take advantage of.

There is a huge amount of power even in the current mobile GPUs.

But the problem is that it can't be done in zero time. There are huge (processing-wise) delays in getting the data to the GPU and offloading the data. Between overhead and a completely different programming model where many optimizations are impossible (or not cost-effective), there are serious drawbacks.

Maybe OpenCL will break down some of these drawbacks and allow more hybrid applications where CUDA failed, but I'm not holding my breath.

Enter: Grand Central.

OpenCL could easily be implemented for the last generation of PowerPC Macs.

Here's a great business idea for you: write an OpenCL framework for PowerPC. Turn Apple's oversight into easy money!
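
The transfer-delay complaint a few posts up boils down to a simple break-even inequality: offloading only pays when the kernel-time saved exceeds the copy overhead. A sketch with invented timings:

```python
# Break-even check for GPU offload. All timings are made-up examples.

def offload_pays_off(cpu_s, gpu_kernel_s, copy_s):
    """True if GPU kernel time plus transfer overhead beats staying on the CPU."""
    return gpu_kernel_s + copy_s < cpu_s

# Big job: 10 s on the CPU, 1 s kernel, 2 s of transfers -> worth it.
print(offload_pays_off(10.0, 1.0, 2.0))   # -> True
# Tiny job: 0.05 s on the CPU, 0.01 s kernel, 0.2 s transfers -> not worth it.
print(offload_pays_off(0.05, 0.01, 0.2))  # -> False
```

This is why small or chatty workloads often run faster left on the CPU even when the GPU kernel itself is much quicker.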
 

commander.data

macrumors 65816
Nov 10, 2006
1,057
183
If OpenCL were solely targeted at GPGPUs, then you'd have a better point. The fact that there are no GPGPUs for the G5 class of hardware should make the port even easier (i.e., less expensive) to do. They only have to do it for the G5 CPU. That's it.

When the OpenCL calls are made and it comes time to decide where to farm out the work, it will always pick the G5.

In the big picture, it is uniformity as well as performance that is being strived for here: folks do not have to write GPGPU/CPU-specific code for the myriad of CPU/coprocessor combos out there. That software will just work the "best it can" when coded against a stable, more universally available API.

Similar to Core Image and any other Core XXXX API. On better machines you get better performance. But on slower machines you can run the same software... just slower. Since the better machines cost more, no big deal; the folks in each slot are probably getting the value they paid for.


I don't think anybody on a G5 is asking for performance better than what their hardware can do. They are asking not to be left out in the cold/code. It is up to them to decide whether running the code at a "slower" rate is acceptable or not. It is about user choice. If the newer hardware is spectacularly faster and that speed would pay for itself in their specific situation, then they will buy new hardware. Otherwise they will just make do for a couple more years.
OpenCL is only suitable for very specific tasks, mainly floating-point intensive ones, and is supposed to supplement existing programming methods, not necessarily replace them. I just don't see GPGPU revolutionizing and miraculously speeding everything up overnight.

There are currently no OpenCL drivers. There aren't even beta drivers. nVidia has only just started having developers sign up for a pre-beta driver to get feedback for their future beta driver. I believe the Khronos Group hasn't even finalized the OpenCL compliance tests yet, so nothing can even be labeled OpenCL yet.

In the first few years of OpenCL, the early adopters will probably still keep their existing CPU-based methods in their programs, so OpenCL won't likely be a requirement, either software- or hardware-based. Why would they want to throw out fully optimized CPU code for an immature OpenCL implementation and drivers overnight? Besides, pure CPU code should always be faster than software OpenCL anyway, since there is less overhead. By 1-2 years after OpenCL drivers are made available, perhaps OpenCL programs will become more mainstream and at least software OpenCL will be a requirement, but by that time even the newest G5s will be close to 5 years old, if not more. I just don't see the lack of software OpenCL drivers for the G5 accelerating its EOL any more than time naturally does.

There is a huge amount of power even in the current mobile GPUs.
http://www.anandtech.com/video/showdoc.aspx?i=3374&p=5

GPGPU isn't as fast as people make it out to be. A 3 GHz dual-core can still definitively beat one assisted by an nVidia 8500GT, which is probably equivalent to the 9400M. No doubt synchronization overhead makes weak GPUs not worthwhile. Similarly, an old quad-core like the Q6600 still beats a desktop 9500GT and 8600GTS, which are faster than the mobile 8600M GT and probably the mobile 9600M GT too. I believe the GT120 in the new iMacs and Mac Pro is actually a rebranded 9500GT. You really need a fairly decent GPU like the 8800 series to see definitive speed-ups over CPUs and make the effort worthwhile.

A dual-core MBP could see some benefit with the 9600M GT, and an iMac with a GT120, but really the iMac needs a GT130 and up, and the Mac Pro needs an 8800GT or HD4870, to make a real difference.
 

hiimamac

macrumors 6502a
Jun 7, 2007
610
0
Boston
We'll soon see how much OpenCL power Apple has integrated into the OS and its applications, too. Can't knock a 5x speed increase; I'd imagine we'll see more and more benefits as we move towards Snow Leopard's release.

Remember when AMD had great chips and Intel needed 1.0 GHz more just to keep up? One can only hope that AMD can release cards that allow great upgrades. Think about it: if Apple, we hope, has no say, then you have a Mac Pro and can upgrade via new cards. Hope so, anyway.

Would be nice to see AMD do well.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,286
3,882
Easily? Have you ever tried programming the underlying architecture of an operating system? By your naive post, I would think not. The truth of the matter is that getting this stuff to work is extremely difficult, and trying to have it work on two different chips would add to the work enormously.

P-Worm

Eh?? OpenCL should only work on some small number of chips? Here is the objective of OpenCL (from http://www.khronos.org/opencl/):

OpenCL provides a uniform programming environment for software developers to write efficient, portable code for high-performance compute servers, desktop computer systems and handheld devices using a diverse mix of multi-core CPUs, GPUs, Cell-type architectures and other parallel processors such as DSPs.
....
OpenCL is being created by the Khronos Group with the participation of many industry-leading companies and institutions including 3DLABS, Activision Blizzard, AMD, Apple, ARM, Barco, Broadcom, Codeplay, Electronic Arts, Ericsson, Freescale, HI, IBM, Intel, Imagination Technologies, Kestrel Institute, Motorola, Movidia, Nokia, NVIDIA, QNX, RapidMind, Samsung, Seaweed, Takumi, Texas Instruments and Umeå University.

See as many "other" CPU folks on that list as GPGPU vendors. For instance IBM and Freescale: they'd have zero interest in a PowerPC OpenCL engine, right?

When lots of people split up and do the work, even hard tasks become easier. That is true not only of applications but also of implementing lower-level constructs.

OpenCL can't possibly be Mac OS X specific. Most GPGPUs don't even run an operating system in the normal sense. There may be some lightweight abstraction, but OpenCL just provides a very straightforward computational model.
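
That computational model is easy to emulate: a kernel body is invoked once per work-item index over an N-dimensional range, whatever device sits underneath. A minimal one-dimensional sketch in Python (an analogy, not the actual OpenCL C API):

```python
# Emulate a 1-D data-parallel kernel launch with a plain loop. On a real
# device the work-items would run in parallel; the programming model is
# the same either way.

def run_kernel(kernel, global_size, *buffers):
    """Minimal stand-in for enqueueing a 1-D kernel over global_size items."""
    for gid in range(global_size):
        kernel(gid, *buffers)

def vec_add(gid, a, b, out):
    # Kernel body: each work-item handles exactly one element.
    out[gid] = a[gid] + b[gid]

a, b = [1, 2, 3, 4], [10, 20, 30, 40]
out = [0] * 4
run_kernel(vec_add, 4, a, b, out)
print(out)  # -> [11, 22, 33, 44]
```

The same kernel source could be compiled for a GPU, a multi-core CPU, or a Cell-style accelerator; only the backend that executes the work-items changes.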

So it isn't a trivial amount of work, but as long as there isn't tons of duplication (multiple folks writing code generators for a single hardware platform), it is a doable amount of work.

Lots of duplicated work is what would make this all prohibitively expensive.



For example, there is no reason why IBM (or a vendor) might not want to come out with a Cell PCI card that plugs into a machine to give it more OpenCL "horsepower". [Yes, most likely one would want a PCIe card, but the point goes beyond the GPGPU cards that happen to be there already: there are many more systems with empty PCI slots than with bleeding-edge, modern GPGPUs installed in them.] Right now there is a reason such cards don't exist: it is 'hard' to put that extra horsepower to work. With an OpenCL system it becomes much easier.


So in short, this could be a way of making an older G5 box with just 8 CPUs into something with more "parallel processor" units. It depends on whether there is a big enough market among folks with the older-style slots.

Instead of artificially limiting value, this would increase the value proposition.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,286
3,882
Sounds great! Five-fold says a lot, and I wonder if Apple will use this anytime soon. Also, how will the battery be affected? I figure we'll see this technology at the end of this year or next.

The impact on battery life is similar to playing some heavy-duty 3D game that stresses the GPU. If you leverage the GPU with OpenCL, it will consume more power the more you use it. Likewise, if the CPU stays just as busy (it farms out work but picks up more work to do), then it will not "give back" any power either. So the net power draw would go up.

If you turn on all the subsystems of a laptop (spin the disk, run the processors full blast, etc.), the battery lasts a shorter amount of time.
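
As rough arithmetic (wattage and capacity figures invented for illustration): battery life is capacity divided by draw, so adding GPU draw without the job finishing any sooner just shortens the runtime.

```python
# Toy energy model: runtime = capacity / total draw. Numbers are made up.

def battery_hours(capacity_wh, draw_w):
    return capacity_wh / draw_w

capacity = 50.0                # Wh, illustrative laptop battery
cpu_only = 15.0                # W, CPU-centric workload
cpu_plus_gpu = 15.0 + 10.0     # W, GPU busy too while the CPU stays loaded

print(round(battery_hours(capacity, cpu_only), 2))      # -> 3.33
print(round(battery_hours(capacity, cpu_plus_gpu), 2))  # -> 2.0
```

If, on the other hand, the GPU lets the whole job finish much sooner and the machine then idles, total energy used can still go down; it is the draw-per-unit-time that always goes up.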
 