If Nvidia or AMD (ATI) disappeared, we would be in for a world of hurt. There hasn't been a true third player in the GPU business for a long while. Any other company that makes GPUs nowadays is gunning for the low-end sector (the same sector Intel dominates). Very few places can do what Nvidia and AMD (ATI) do best.
Oh I seem to remember names like 3Dfx and S3 getting a fair amount of attention not too long ago... Companies come and go, especially in markets like this. No one would be in a world of hurt if they went out of business because someone else would have had to unseat them. I think we're reaching a time in the market where a disruptive technology could come in out of nowhere and unseat the biggies. That was my lead-in point above: Nvidia knows their existing business model is under pressure and this is how they're looking to adapt.
My point was merely that relying on a single vendor to make your stuff work isn't a good idea-- and that concern will play a role in slowing the adoption of a technology like this. Apple could hide the CUDA interface by rolling it into something like their Accelerate framework, but unless they could get similar performance out of other GPU vendors, then they would be forced to rely on nVidia (Nvidia? I hate company names in all caps...) as their sole supplier of video cards.
SSE is closer to AltiVec than GPUs are. How many AltiVec units are there? I thought there was only one. In a GPU you have waaaay more than one (otherwise it would be slow at drawing pixels).
I don't understand your reasoning here... When someone makes lasagna with chicken, then someone else makes lasagna with beef, and a third person says "that's kind of the same concept", I don't follow the "turkey is more like chicken" argument...
Yes, SSE is a vector processor too, and in the grand continuum of vector processors it is probably closer to AltiVec than a GPU is. But I wasn't talking about SSE; I was talking about this being an extension of the same concept-- offload vector computations to specialized hardware.
As far as unit counts go, I think you can look at AltiVec as four units rather than one: it handles four 32-bit operands at a time. It all depends on where you draw your "unit" boundaries...
Cell is faster than people give it credit for. The problem is getting coders to change how they code. Larrabee isn't something Intel cooked up because they thought the Cell architecture was going away.
Yeah, I think that's why we'll see incremental movement towards non-symmetric multiprocessing. Have to ease into it. AltiVec/SSE is kind of a first step in that direction, but eventually the scheduler is going to have to figure out how to hand out threads based on the resources of individual execution units. Or they'll give up on that and let the compiler handle the complexity of it.
That's the first I've seen of Larrabee, but you're right-- it seems to be designed a lot like Cell, but with slimmed-down x86 cores rather than PPC. For now though, it seems to be targeted primarily as a GPU, not a CPU. It'll be interesting to see if I'm right in guessing where we go from there-- onto the motherboard, then the chipset, then the CPU. Intel seems to have designed this architecture to be able to do that quite easily.
Thanks for pointing me at it-- I've got a new place to pin my hopes for the future.