If a standard emerged, I could see this going somewhere in the short term-- until Intel woke up and built their own Cell. In the GPU space, the video cards are largely abstracted through either OpenGL or DirectX. If nVidia or ATI disappeared, as so many have before them, the OS and application vendors wouldn't blink. They'd keep writing to the same interface, and new hardware would pick up the load.
If Nvidia or AMD (ATI) disappeared, we would be in for a world of hurt. Man, there hasn't been a true third player in the GPU business for a long while. Any other company that makes GPUs nowadays is gunning for the low-end sector (the same sector Intel dominates). Very few places can do what Nvidia and AMD (ATI) do best.
This is exactly like AltiVec. It's bigger and it's off chip, but it's the same idea: fast, dedicated, vector processing.
SSE is closer to AltiVec than GPUs are. How many AltiVec units are there? I thought there was only one. In a GPU you have waaaay more than one (otherwise it would be slow at drawing pixels).
Careful, I've brought this up in other threads, and people get their panties in a bunch about it... Because Apple went Intel rather than Cell they think Cell is a dead concept. Cell is exactly where we're all heading. Because of inertia, we may see this wasteful division of labor for a while before the cost of doing it this way is too prohibitive, but eventually it'll all get brought to the motherboard and then into the chipset, then into the main processor.

Mac Pros have 8 cores on them now-- and each core is wasting a huge amount of logic. Each CPU is handling a single purpose thread, but carrying all the logic necessary for any type of thread that might be thrown at it. I've got a whole core handling a stream of integers coming off the network, but I've got all the floating point and SSE logic sitting there idle. Meanwhile I'm compressing a video stream and only have one SSE unit available to that thread.

And through all this my high powered GPU is being taxed with nothing more than a progress bar...
Cell is faster than people give it credit for. The problem is getting coders to change how they code. Larrabee isn't something Intel cooked up because they thought the Cell architecture was going away.
I wonder if Macs will support the 9000 series when it comes out.
They will support the new GPUs as fast as they supported the 3x00 and 8x00 series.
 
Wow, talk about being clueless there. AltiVec and SSE are the same crap: Single Instruction, Multiple Data. Each implementation has its own strengths and weaknesses, but overall they provide the capability to do high-performance vector calculation-- the stuff that used to require multimillion-dollar Crays.

Where did I say AltiVec was nothing like SSE? I never mentioned SSE in my post. I said GPGPUs were nothing like AltiVec.
 
If Nvidia or AMD (ATI) disappeared, we would be in for a world of hurt. Man, there hasn't been a true third player in the GPU business for a long while. Any other company that makes GPUs nowadays is gunning for the low-end sector (the same sector Intel dominates). Very few places can do what Nvidia and AMD (ATI) do best.
Oh, I seem to remember names like 3Dfx and S3 getting a fair amount of attention not too long ago... Companies come and go, especially in markets like this. No one would be in a world of hurt if they went out of business, because for that to happen someone else would have had to unseat them first. I think we're reaching a time in the market where a disruptive technology could come in out of nowhere and unseat the biggies. That was my lead-in point above: Nvidia knows their existing business model is under pressure and this is how they're looking to adapt.

My point was merely that relying on a single vendor to make your stuff work isn't a good idea-- and that concern will play a role in slowing the adoption of a technology like this. Apple could hide the CUDA interface by rolling it into something like their Accelerate framework, but unless they could get similar performance out of other GPU vendors, then they would be forced to rely on nVidia (Nvidia? I hate company names in all caps...) as their sole supplier of video cards.
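
Roughly what I'm picturing, as a toy sketch in C (the names here are all made up-- nothing Apple actually ships): the app calls a neutral function, and the framework decides at runtime whether CUDA, some other vendor backend, or a plain CPU loop does the work.

#include <stdio.h>
#include <stddef.h>

/* Hypothetical vendor-neutral "accelerated multiply-add" entry point.
 * A framework would route this to a CUDA backend on Nvidia hardware,
 * a different backend on ATI hardware, or a plain CPU loop as a fallback. */
typedef void (*vmadd_fn)(float *dst, const float *a, const float *b, size_t n);

/* CPU fallback: always available, no vendor dependency. */
static void vmadd_cpu(float *dst, const float *a, const float *b, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] += a[i] * b[i];
}

/* In a real framework this would probe the installed GPU and return the
 * vendor implementation when one is present. */
static vmadd_fn select_backend(void)
{
    return vmadd_cpu;   /* sketch: only the fallback exists here */
}

int main(void)
{
    float a[4] = {1, 2, 3, 4}, b[4] = {5, 6, 7, 8}, dst[4] = {0};
    vmadd_fn vmadd = select_backend();
    vmadd(dst, a, b, 4);
    printf("%g %g %g %g\n", dst[0], dst[1], dst[2], dst[3]);
    return 0;
}

The application never sees CUDA; only the backend selection does, which is exactly why a second vendor would need to hit comparable performance for the abstraction to be worth anything.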
SSE is closer to AltiVec than GPUs are. How many AltiVec units are there? I thought there was only one. In a GPU you have waaaay more than one (otherwise it would be slow at drawing pixels).
I don't understand your reasoning here... When someone makes lasagna with chicken, then someone else makes lasagna with beef, and a third person says "that's kind of the same concept", I don't follow the "turkey is more like chicken" argument...

Yes, SSE is a vector processor too, and in the grand continuum of vector processors it is probably closer to AltiVec than a GPU is. I wasn't talking about SSE; I was talking about this being an extension of the same concept-- offload vector computations to specialized hardware.

As far as unit counts, I think you can look at AltiVec as four units rather than one: it handles four 32-bit operands at a time. It all depends on where you draw your "unit" boundaries...
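
To make the "four operands per instruction" point concrete, here's a tiny C example using SSE intrinsics (SSE rather than AltiVec only because that's what the Intel boxes in this thread run)-- a single _mm_add_ps adds four 32-bit floats at once:

#include <stdio.h>
#include <xmmintrin.h>   /* SSE intrinsics */

int main(void)
{
    float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    float b[4] = {10.0f, 20.0f, 30.0f, 40.0f};
    float r[4];

    __m128 va = _mm_loadu_ps(a);      /* load four 32-bit floats */
    __m128 vb = _mm_loadu_ps(b);
    __m128 vr = _mm_add_ps(va, vb);   /* one instruction, four adds */
    _mm_storeu_ps(r, vr);

    printf("%g %g %g %g\n", r[0], r[1], r[2], r[3]);
    return 0;
}

Whether you call that one unit doing four-wide math or four units is exactly the boundary-drawing question.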
Cell is faster than people give it credit for. The problem is getting coders to change how they code. Larrabee isn't something Intel cooked up because they thought the Cell architecture was going away.
Yeah, I think that's why we'll see incremental movement towards non-symmetric multiprocessing. Have to ease into it. AltiVec/SSE is kind of a first step in that direction, but eventually the scheduler is going to have to figure out how to hand out threads based on the resources of individual execution units. Or they'll give up on that and let the compiler handle the complexity of it.

That's the first I've seen of Larrabee, but you're right-- it seems to be designed a lot like Cell, but with slimmed-down x86 cores rather than PPC. For now though, it seems to be targeted primarily as a GPU, not a CPU. It'll be interesting to see if I'm right in guessing where we go from there-- onto the motherboard, then the chipset, then the CPU. Intel seems to have designed this architecture to be able to do that quite easily.

Thanks for pointing me at it-- I've got a new place to pin my hopes for the future.
 
Oh, I seem to remember names like 3Dfx and S3 getting a fair amount of attention not too long ago... Companies come and go, especially in markets like this. No one would be in a world of hurt if they went out of business, because for that to happen someone else would have had to unseat them first. I think we're reaching a time in the market where a disruptive technology could come in out of nowhere and unseat the biggies. That was my lead-in point above: Nvidia knows their existing business model is under pressure and this is how they're looking to adapt.
Ah, okay. I seem to remember 3Dfx being bought by Nvidia in 2000 or so. S3 is still around, but as far as performance is concerned, they are a non-player. But I can see where you were going.
My point was merely that relying on a single vendor to make your stuff work isn't a good idea-- and that concern will play a role in slowing the adoption of a technology like this. Apple could hide the CUDA interface by rolling it into something like their Accelerate framework, but unless they could get similar performance out of other GPU vendors, then they would be forced to rely on nVidia (Nvidia? I hate company names in all caps...) as their sole supplier of video cards.
True, that is always the downside. Think of it this way: the cards ATI and nV make have their own extensions in OpenGL. There are things the ATI card can do that the nV can't, and vice versa. So Apple only uses ARB commands; otherwise they would have to write workarounds for both ATI and nV. So yeah, I understand the reluctance toward adding something like CUDA-- Apple would have to hope ATI would adopt it. Of course, maybe Apple could get CUDA added to OpenGL (ARB-ized).
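
Something like this is what I mean by sticking to the ARB path-- a rough C fragment (it assumes a GL context is already current, and the extension name is just one example) that only takes the vendor-specific route when the driver actually advertises it:

#include <string.h>
#include <stdio.h>
#include <OpenGL/gl.h>   /* <GL/gl.h> on non-Apple platforms */

/* Hypothetical renderer setup: prefer a vendor extension if present,
 * otherwise fall back to the portable ARB/core path. Assumes a valid
 * OpenGL context has already been made current. */
void choose_render_path(void)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);

    if (ext && strstr(ext, "GL_NV_fragment_program")) {
        printf("Using the Nvidia-specific path\n");
        /* ...set up NV-specific state here... */
    } else {
        printf("Using the ARB path\n");
        /* ...portable setup that works on ATI and Nvidia alike... */
    }
}

Every vendor-specific branch like that is one more workaround to maintain, which is why Apple avoids them.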
I don't understand your reasoning here... When someone makes lasagna with chicken, then someone else makes lasagna with beef, and a third person says "that's kind of the same concept", I don't follow the "turkey is more like chicken" argument...

Yes, SSE is a vector processor too, and in the grand continuum of vector processors it is probably closer to AltiVec than a GPU is. I wasn't talking about SSE; I was talking about this being an extension of the same concept-- offload vector computations to specialized hardware.

As far as unit counts, I think you can look at AltiVec as four units rather than one: it handles four 32-bit operands at a time. It all depends on where you draw your "unit" boundaries...
Okay, I may have misunderstood where you were going with your comparison. My bad. I agree with what you were saying though.
Yeah, I think that's why we'll see incremental movement towards non-symmetric multiprocessing. Have to ease into it. AltiVec/SSE is kind of a first step in that direction, but eventually the scheduler is going to have to figure out how to hand out threads based on the resources of individual execution units. Or they'll give up on that and let the compiler handle the complexity of it.
Yup. It would behoove coders to take the Cell approach to existing code. Split as much as possible. Make everything as small and lean as possible. It is tedious, but that kind of code would be way more portable than it looks.
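
Here's the kind of splitting I mean, sketched in C: pull the math out into a tiny, self-contained kernel that only sees a pointer and a count, so the same code could be handed to an SPE, a GPU, or just another core without a rewrite.

#include <stddef.h>
#include <stdio.h>

/* Small, self-contained kernel: no globals, no I/O, just a chunk of
 * data in and a chunk of data out. That isolation is what makes it
 * portable to an SPE, a GPU kernel, or a plain worker thread. */
static void scale_chunk(float *data, size_t n, float gain)
{
    for (size_t i = 0; i < n; i++)
        data[i] *= gain;
}

int main(void)
{
    float samples[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    size_t half = 4;

    /* The caller decides how to split the work; here it's two
     * sequential calls, but each call could go to a different core. */
    scale_chunk(samples, half, 0.5f);
    scale_chunk(samples + half, 8 - half, 0.5f);

    for (int i = 0; i < 8; i++)
        printf("%g ", samples[i]);
    printf("\n");
    return 0;
}

Tedious, like I said, but once the code looks like that it hardly matters what hardware ends up running each chunk.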
 
Aperture

So, I'm looking at a mac pro dual quad, and I want to run MCAD applications on the side. Currently, Apple only has the Quadro 5600 that would be "certified" for these applications, and that still might only be under Boot Camp.

So now that I have that card ($3K!), I can see that FC Studio will get some love, but, what might I expect from Aperture? Is this the card that might bring Aperture the performance that makes it the killer app?
 
So, I'm looking at a mac pro dual quad, and I want to run MCAD applications on the side. Currently, Apple only has the Quadro 5600 that would be "certified" for these applications, and that still might only be under Boot Camp.

So now that I have that card ($3K!), I can see that FC Studio will get some love, but, what might I expect from Aperture? Is this the card that might bring Aperture the performance that makes it the killer app?

The Quadro is overkill for Aperture.
 
Why oh why can't Apple cut a deal with Nvidia like it did with Intel, so I don't have to buy a Mac Pro? AMD/ATI for graphics..... pleeeeeese
 