What's this about Apple making their own GPUs??:rolleyes:

For iPhones and Touches and any other iPhoneOS device.
The licensed GPU IP from Imagination Technologies (PowerVR) joins the licensed ARM tech for CPUs. Eventually that will all collapse onto one chip. If you're going to sell 20-30 million iPhone OS devices per year, it makes sense.
The Touch (and Phone) in part is competing with handheld gaming devices. The screen is smaller, so it's easier to compete with the more hardcore graphics providers (fewer pixels, less work).

Note that Intel licenses IMG tech for the GMA500 while at the same time doing work on a different line of graphics processors for desktops/laptops. Same thing, different market constraints and objectives.


Apple might be able to target homegrown CPUs/GPUs at Apple TV and AirPort devices to boost the run rate a bit more as they grow in horsepower.


This probably has little to do with the Mac side of the house and this issue.

Fact is that the Arrandale chip package will include an integrated graphics processor. Why would Apple buy the integrated graphics from Intel and then throw another whole R&D effort into not using what they just bought?
 

Which is clocked about 4x slower than most of the 9400M alternatives being bandied about. Different objective: very low power. You can't necessarily just crank the clock up 4x and still have functioning circuits (timing issues).

Besides, the 9400M (ION: Atom + 9400M) is crushing the all-Intel Atom offerings on netbooks too.
 
about 4x slower

You mean "1/4th the speed."

That's a pet peeve of mine. TV commercials often say "10 times faster!" but that's not really grammatically correct, or even logical when you think about it. My car can't be "10 times faster" than yours, but it can be "10 times as fast."
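To put the quibble in symbols (a quick worked example, with $v$ standing in for the slower car's speed):

$$\text{``10 times as fast''} = 10v, \qquad \text{``10 times faster''} \overset{\text{read literally}}{=} v + 10v = 11v.$$

By the same token "4x slower" is ambiguous, while "1/4 the speed", i.e. $v/4$, isn't.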
 
You mean "1/4th the speed."

That's a pet peeve of mine. TV commercials often say "10 times faster!" but that's not really grammatically correct, or even logical when you think about it. My car can't be "10 times faster" than yours, but it can be "10 times as fast."

At least it didn't "go missing".
 
The major impediment to nVidia chipset adoption post-Nehalem really isn't reliability or licensing issues, but the fact that mainstream Nehalems don't have QPI links, only a low-speed DMI link to a southbridge. ... Even quad core Clarksfield, which doesn't include an Intel IGP, has no QPI links, only DMI, making it hard to make an IGP viable.

Clarksfield is targeted at higher power budget devices. So yeah, discrete graphics and even more power dissipation aren't as much of a problem. The other problem Nvidia will have is that the memory controller is off in the CPU package. However, most of that traffic should be in the other direction from the other I/O: the IGP asking for memory, as opposed to sending data to memory. Nvidia could make it work... it is just extremely awkward. It has a round-peg-in-a-square-hole feel to it. Those usually don't turn out well.
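For rough scale (peak theoretical figures, so treat them as approximate):

$$BW_{\text{QPI @ 6.4 GT/s}} = 6.4\ \text{GT/s} \times 2\ \text{B/transfer} = 12.8\ \text{GB/s per direction}, \qquad BW_{\text{DMI}} \approx 4 \times 250\ \text{MB/s} = 1\ \text{GB/s per direction}$$

So an IGP hanging off DMI would see roughly a tenth of the bandwidth a QPI-attached chipset gets, and it would share that with all the other I/O.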





In terms of Intel's IGP in Arrandale, there is some hope that at least it won't be a downgrade from the 9400M. The GMA X4500MHD is about half the speed of the 9400M. Intel never took full advantage of the die shrink from 90nm in the X3100 to 65nm in the X4500, since they could have doubled the shader count but only increased it from 8 to 10. Arrandale's IGP will be further shrunk from 65nm to 45nm, so Intel could easily put in 16 shaders or more, which, coupled with higher clock speeds, would at least match the 9400M.
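To make that concrete, here is a back-of-envelope version of the scaling argument. This is a sketch only: the shader counts come from the paragraph above, while the clock figures and the perf ≈ shaders × clock model are rough placeholder assumptions of mine.

```c
/* Crude IGP scaling estimate, assuming performance ~ shaders x clock.
 * Shader counts are from the discussion above; clock figures are
 * placeholder guesses, NOT published specs. */
#include <stdio.h>

int main(void) {
    const double x4500_shaders = 10.0, x4500_clock = 533e6; /* assumed */
    const double next_shaders  = 16.0, next_clock  = 800e6; /* assumed */

    double scale = (next_shaders * next_clock) /
                   (x4500_shaders * x4500_clock);

    /* The X4500 is taken to be ~0.5x a 9400M, per the post above. */
    printf("Hypothetical Arrandale IGP vs 9400M: %.2fx\n", 0.5 * scale);
    return 0;
}
```

With those made-up clocks the estimate lands around 1.2x a 9400M, but nudge the assumptions and it swings either way, which is exactly why "easily" is doing a lot of work in that sentence.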

That isn't as "easy" as you make it out to be. Before, the integrated graphics were off in their own chip package, and that chip package probably had its own heatsink/heatpipe/etc. With Arrandale, Intel is building a multi-chip module (MCM). If the dies are stacked vertically, the distance between them is now less than a millimeter. If placed horizontally, the chip package itself will be slightly bigger (pin-outs and spacing), but again they are millimeters (or less) apart. Two heat sources packed millimeters apart have bigger problems than two heat sources packed centimeters apart.



I doubt Apple would go back to Intel IGPs unless they were competitive with existing solutions, although Apple did go from the Mobility Radeon 9550 in the last iBook to the GMA950, so it isn't unheard of. Admittedly, the problem with Intel IGPs hasn't been the raw theoretical hardware capabilities, but poor driver support that takes a long time to take full advantage of the hardware, or never does.


I agree they can boost the X4500's performance. Doing it easily, without a meltdown, is a bit tougher than just adding graphics cores until they match. So it wouldn't be surprising if they don't match the 9400M. But if Intel gets within 10-15% and most games play as they did before, it is probably a compromise Apple is willing to take.
 
Hehehe, let's see...

The ONLY concern I had when I bought my MacBook 13" was the graphics. I bought it and I was extremely happy with it; however, when I decided to give games a shot, it was a total catastrophe. I HATE INTEL GRAPHICS!!!

If they are going to drop nVidia, why not adopt ATi? It seems to me that a fallback to Intel would be the worst option of all, total nonsense.
 
Hehehe, let's see...

The ONLY concern I had when I bought my MacBook 13" was the graphics. I bought it and I was extremely happy with it; however, when I decided to give games a shot, it was a total catastrophe. I HATE INTEL GRAPHICS!!!

If they are going to drop nVidia, why not adopt ATi? It seems to me that a fallback to Intel would be the worst option of all, total nonsense.

In the long term neither Intel nor AMD is going to give you a choice about integrated graphics. By moving the integrated graphics into the CPU's chip package they can eliminate the competition. The PC chip business is increasingly oriented that way: collapse as much functionality as possible into the chip package that you control.

The long term problem for Nvidia is that the CPU is a bigger "black hole" than the GPU is. That's why Nvidia now has an ARM/GPU option. For x86 they are out in the cold (unless perhaps they merged with VIA somehow). They can continue with the discrete GPU options, but that's increasingly going to become a niche area.

Also, as other comments have put forth, the Intel offerings have been less than spectacular because Intel wasn't fully focused. If they get focused they can come up with better offerings. They may not take the speed or price/performance crown, but they'll suck less. Intel saw Apple switch practically their whole line over to the 9400. If it hadn't sunk in that they were not competitive before, it certainly had by the time they saw Apple steadily tossing them from design wins over a year ago.
 
In the long term neither Intel nor AMD is going to give you a choice about integrated graphics. By moving the integrated graphics into the CPU's chip package they can eliminate the competition. The PC chip business is increasingly oriented that way: collapse as much functionality as possible into the chip package that you control.

The long term problem for Nvidia is that the CPU is a bigger "black hole" than the GPU is. That's why Nvidia now has an ARM/GPU option. For x86 they are out in the cold (unless perhaps they merged with VIA somehow). They can continue with the discrete GPU options, but that's increasingly going to become a niche area.

Also, as other comments have put forth, the Intel offerings have been less than spectacular because Intel wasn't fully focused. If they get focused they can come up with better offerings. They may not take the speed or price/performance crown, but they'll suck less. Intel saw Apple switch practically their whole line over to the 9400. If it hadn't sunk in that they were not competitive before, it certainly had by the time they saw Apple steadily tossing them from design wins over a year ago.
ION does support VIA Nano for what it's worth. :rolleyes:

Clarksfield is targeted at higher power budget devices. So yeah, discrete graphics and even more power dissipation aren't as much of a problem. The other problem Nvidia will have is that the memory controller is off in the CPU package. However, most of that traffic should be in the other direction from the other I/O: the IGP asking for memory, as opposed to sending data to memory. Nvidia could make it work... it is just extremely awkward. It has a round-peg-in-a-square-hole feel to it. Those usually don't turn out well.
That does feel like a very roundabout way of requesting system RAM for the IGP's use. It's possible, but it does feel awkward. nVidia is going to need to push a budget solution that takes care of the "IGP" side and the video memory on one chip if possible. Otherwise you have to solder on side-port memory for the IGP's use. Are you still technically an IGP then? You have to take into account the additional cost of adding another IGP to Arrandale/Clarkdale, and the board space.

AMD is living the high life right now since they control both the processor and the GPU side in their market. Not to mention they don't have all these sockets to support.

nVidia is getting kicked out, and they struck gold with the 9400's chipset, I/O controller, and IGP all on one package. AMD/ATI is never touching Intel again for an IGP, but they're quite friendly with discrete solutions and Crossfire. Getting SLI is still like pulling teeth, while every board with dual x16 slots for an Intel processor gets Crossfire.
 
ION does support VIA Nano for what it's worth. :rolleyes:

Inside the same chip package? No.
I don't see Nvidia licensing their tech, and VIA can't since they're entangled with Intel and AMD. Getting into a single chip package or die, barring a huge shift in Nvidia's approach, would take a merger/acquisition/etc.


That does feel like a very roundabout way of requesting system RAM for the IGP's use. It's possible, but it does feel awkward. nVidia is going to need to push a budget solution that takes care of the "IGP" side and the video memory on one chip if possible. Otherwise you have to solder on side-port memory for the IGP's use. Are you still technically an IGP then? You have to take into account the additional cost of adding another IGP to Arrandale/Clarkdale, and the board space.

As the poster I was responding to outlined, you could attach local memory to the IGP (which is nice, because then the IGP isn't pinching main memory), but at that point you have something that looks more like a discrete GPU with "video RAM".

AMD is living the high life right now since they control both the processor and the GPU side in their market.

Not really. Their market is not really separate from Intel's. They are still a step slower in getting to 32nm, and their integrated CPU/GPU product got pushed back. In 20/20 hindsight it was kind of goofy on Nvidia's part, because AMD/ATI has the more "open" approach (HyperTransport). Running away from AMD because they jumped deeper into graphics (with ATI), and toward Intel (who was doing EXACTLY the same thing, just through internal development), didn't really make sense.


nVidia is getting kicked out, and they struck gold with the 9400's chipset, I/O controller, and IGP all on one package.

It is/was a big winner. Just strategically not the long term winner.

Getting SLI is still like pulling teeth, while every board with dual x16 slots for an Intel processor gets Crossfire.

Another longer term dead end for the general market. Both SLI (in general) and Crossfire need to network the cards. As GPUs get small and cool enough to put two on one card (or motherboard), you just hook them together with a standard, fast enough interconnect.
 
Terrible? By what metric? Are you trying to run Crysis or something on there? It's a laptop; it does laptoppy things. If you want something to run games you're better off with a console, a high end iMac, or a PC.

First off, you seem to be defending Intel a lot :( Is this because you are upset that you are stuck with some crap Intel graphics? The X3100 in my MacBook will not even play San Andreas, which requires 64!!! MB of VRAM, and the X3100 has 144!! After people have had a taste of something good (better performing GPUs) they will not want to go back to Intel. Even if they are more reliable, it's only because they can't do s**t hard enough to make them break.

I think Apple should make their own GPUs and processors!!!
 
Inside the same chip package? No.
I don't see Nvidia licensing their tech, and VIA can't since they're entangled with Intel and AMD. Getting into a single chip package or die, barring a huge shift in Nvidia's approach, would take a merger/acquisition/etc.


As the poster I was responding to outlined, you could attach local memory to the IGP (which is nice, because then the IGP isn't pinching main memory), but at that point you have something that looks more like a discrete GPU with "video RAM".
True, nVidia can't go anywhere for an IGP once AMD and Intel put everything onto the processor package via MCM or even on-die. Tegra seems to be a small way out right now, but it's not a big market.

Discrete desktop and mobile solutions are a different matter though.

Not really. Their market is not really separate from Intel's. They are still a step slower in getting to 32nm, and their integrated CPU/GPU product got pushed back. In 20/20 hindsight it was kind of goofy on Nvidia's part, because AMD/ATI has the more "open" approach (HyperTransport). Running away from AMD because they jumped deeper into graphics (with ATI), and toward Intel (who was doing EXACTLY the same thing, just through internal development), didn't really make sense.
I was talking about AMD's chipset/IGP and socket market compared to Intel's, not Fusion.

Another longer term dead end for the general market. Both SLI (in general) and Crossfire need to network the cards. As GPUs get small and cool enough to put two on one card (or motherboard), you just hook them together with a standard, fast enough interconnect.
If you go onto the motherboard or on-die, you're going to end up with crippled hardware without that space for cooling.

SLI/Crossfire is going to be something for discrete solutions even with the limited market and utility for the masses.
 
Yep, that GPU is why my MacBook Pro broke (and was denied repair).

Really... thanks Apple...
And more so, thanks nVidia for making defective GPUs.
 
SemiAccurate is an interesting name for a website... in this case I think they're NotatallAccurate.

I'm pretty sure Apple will be sticking with nVIDIA.

Agreed. Why would Apple go backwards in graphics? Perhaps they are going back to ATI, but even that would be a step in the wrong direction.
 
Actually, it was Motorola's PPC - and "the day" didn't last very long. Except for some AltiVec-sweet applications, Intel chips held their own in most tests.

I only meant that those were the chips Apple used while the others used Intel. I don't remember all the tests, so...
 
I think I heard mention of discrete graphics available on the G4 mini... a couple of historical comments to share about that mess:

1. 32MB of VRAM.
2. Not all features of the ATI chipset were enabled, and it was not running the latest firmware available for the 9200.
3. UNDERPOWERED - literally, likely to cut heat... which meant all sorts of grief using some DVI-D displays.

A lot of corners were cut, and it was really the biggest disappointment of the original mini. It was touted as a vastly superior offering to the integrated graphics of the time... only to be superseded by GMA.
 
I updated my post after I found an Ars article that suggests Intel will launch Larrabee earlier and with a mobile GPU component. The computers that will benefit the most from this are the underpowered machines such as the Mini - with Grand Central and OpenCL, their machines will be screamers. Mac Pros have exchangeable GPUs, so Apple only needs to add a BTO option for the PCIe card for the Mac Pro.
Yes. The iMac and Mac Mini are a perfect fit for OpenCL, and it would be very easy to add a video option, but the problem is the power supply, which isn't "top of its class" – I had my 2009 Mac Pro replaced after the PSU burned out (with two video adapters).

The greatest thing about Larrabee is the software pipeline. You can upgrade to DirectX 11 or 12 or support OpenCL or Shader Model 6.0 through a software update. You can also delegate OS processes such as Spotlight to one of those cores when you're not in a graphics-intensive app.
Also, I think that either OpenCL is currently overestimated, or it will come at an additional price somehow, because why would Apple give Mac Pro users more raw power... without them paying for it first? By requiring a more expensive video adapter, maybe?

And why would anyone purchase a new Mac Pro when replacing or adding a new video adapter would do the trick?
 
First off, you seem to be defending Intel a lot :( Is this because you are upset that you are stuck with some crap Intel graphics? The X3100 in my MacBook will not even play San Andreas, which requires 64!!! MB of VRAM, and the X3100 has 144!! After people have had a taste of something good (better performing GPUs) they will not want to go back to Intel. Even if they are more reliable, it's only because they can't do s**t hard enough to make them break.

I think Apple should make their own GPUs and processors!!!

The amount of video RAM has nothing to do with performance, considering it is all pooled from the same resource - main memory. The X3100 was never designed for gaming, and to state that a custom CPU and GPU would solve the problem is ignorance of economies of scale versus complexity versus the business case to justify it.

Oh, and I'm not defending Intel, I'm correcting the record because of the number of people who make claims and yet have nothing to back them up. They complain about the X3100 and yet ignore the fact that its design is for low power, low cost, and basic tasks - not for gaming. If you want gaming, firstly, doing it on a laptop is ridiculous, and secondly, expecting it from something that uses shared memory and is designed to be low powered and low cost is crazy.
 
The amount of video RAM has nothing to do with performance, considering it is all pooled from the same resource - main memory. The X3100 was never designed for gaming, and to state that a custom CPU and GPU would solve the problem is ignorance of economies of scale versus complexity versus the business case to justify it.

Oh, and I'm not defending Intel, I'm correcting the record because of the number of people who make claims and yet have nothing to back them up. They complain about the X3100 and yet ignore the fact that its design is for low power, low cost, and basic tasks - not for gaming. If you want gaming, firstly, doing it on a laptop is ridiculous, and secondly, expecting it from something that uses shared memory and is designed to be low powered and low cost is crazy.

Yes, but my main point is that once Apple has moved to something that is capable of some gaming (9400M), it would be absurd to move to something that can't do it one bit. You stated it yourself: the X3100 is not capable of gaming, and I'm sure its successor would still be awful. Most people saying the X3100 sucks are just voicing the fact that they don't want Apple to go back to Intel graphics. And gaming on a laptop is not absurd: I have a 17-inch Gateway FX with a 1900x1200 screen, 1GB of dedicated memory, and the 9800GTS, and it's a dream to game on. You just have to have the right laptop :D
 
Slightly different take..

I do a fair bit of graphics programming. Some observations:

- nvidia are good for OpenGL, but actually pretty crap for Core Image. Presumably ATI have much better driver support for that. Core Image is pretty important for a lot of apps.

- nvidia might be the current champion for performance or whatever else, but that can change with the next GPU.

- intel GPUs are slow, and the drivers aren't much good. They don't support much in the way of features either.

- apple are going to want decent OpenCL support when Snow Leopard is released. The current machines they sell all support it well; an intel chip won't. I don't know how driver support is working out for OpenCL - on the one hand, nvidia want to support CUDA instead of OpenCL. On the other, adapting CUDA to work with OpenCL might be easier than starting from scratch, so maybe they'll have an advantage (see the kernel sketch below).
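
For anyone curious what "decent OpenCL support" actually involves on the driver side: the kernel language itself is just C with a few qualifiers. A minimal sketch (illustrative names only; whether it runs well, or at all, is entirely down to the vendor's driver, which is the point above):

```c
/* Minimal OpenCL kernel: one work-item per array element,
 * conceptually the same job a CUDA thread would do. */
__kernel void vec_add(__global const float *a,
                      __global const float *b,
                      __global float *out,
                      const unsigned int n)
{
    size_t i = get_global_id(0);  /* this work-item's global index */
    if (i < n)                    /* guard for padded launch sizes */
        out[i] = a[i] + b[i];
}
```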

I reckon that if apple are dropping nvidia, it's for something new from ati (remember that this won't happen until the next refresh at the earliest, and new chips will probably be out by then!). If that's the case, you'll most likely get more speed than the current stuff, MUCH more speed in Core Image applications, and good OpenCL support. Personally, I don't have much of a problem with that!
 
I do a fair bit of graphics programming. Some observations:
......

If that's the case, you'll most likely get more speed than the current stuff, MUCH more speed in Core Image applications, and good OpenCL support. Personally, I don't have much of a problem with that!

Agreed, if it is fast and reliable the name on the chip is not really relevant :)

Edwin
 
Yup, but it is so much better than the Intel X3100 integrated solution that people forget it is actually an integrated card, not a dedicated one :)

Edwin

Mate, if you throw a GPU on a machine that has access to DDR3, compared to DDR2 clocked at 667MHz - what do you think the result is going to be?
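To put rough numbers on that (peak theoretical bandwidth of a single 64-bit channel; I'm assuming DDR3-1066 since no DDR3 speed was given, and sustained figures are lower, with an IGP sharing the pool with the CPU):

```c
/* Peak bandwidth of one 64-bit memory channel: transfers/s x 8 bytes. */
#include <stdio.h>

int main(void) {
    double ddr2_667  = 667e6  * 8 / 1e9;  /* ~5.3 GB/s */
    double ddr3_1066 = 1066e6 * 8 / 1e9;  /* ~8.5 GB/s (assumed speed) */
    printf("DDR2-667 : %.1f GB/s per channel\n", ddr2_667);
    printf("DDR3-1066: %.1f GB/s per channel\n", ddr3_1066);
    return 0;
}
```

That's roughly 60% more headroom for a shared-memory GPU before you even touch the GPU core itself.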
 