Are you joking? Do you not see that Intel pushed Nvidia out because Intel couldn't compete with the performance of Nvidia chipsets/GPUs?

By volume, Intel was selling more graphics solutions than Nvidia was.
Similarly, there have been numerous System-on-a-Chip tracks at chip conferences for the last 10-12 years. The collapsing of the CPU, GPU, and most supporting chips has been predicted, and happening, for a very long time now. The notion that Intel woke up one morning, found the 9400 eating their shorts, and decided to purposely nuke Nvidia isn't backed by any factual observation of the industry over the last 10 years.

Most system vendors have been asking for solutions with lower chip counts for a very long time. This is already extremely pervasive in the smaller computer markets (embedded, handhelds, etc.). Desktops and laptops have been heading in the same direction at a slightly slower pace, but have nonetheless been traveling that way for a long time.


In the end, Intel will lose and Nvidia will be able to make chipsets for Intel CPUs again. However, until Intel is slapped on the wrists and told it must compete fairly, Intel will be forcing its chipsets on PC manufacturers. Unfortunately, CONSUMERS LIKE US ARE THE LOSERS! We are forced to use inferior products because Intel has the most money.

It is unlikely that Intel will lose to Nvidia. The CPU and memory controller will be the core "black hole" that the components collapse into (at least on the computational logic side; off-board communication has other logical "black hole" points, e.g., radios and their associated processing).

Nvidia tried to defer the collapse by trying to get things to coalesce around the GPU. That's doomed because applications and operating systems run on CPUs. People primarily buy computers to run applications (and perhaps the associated OS), not to push data through GPUs.

Nvidia will get into a market where they can provide a CPU. There will be an increasingly smaller business that sells discrete GPUs. The discrete GPU will eventually go the way of the discrete floating-point coprocessor in mainstream computing.


Where AMD and Via (the only others with viable x86 licenses) deliver solutions, there will be some competition and consumer benefit.



The problem with all of these scenarios is that the DOJ doesn't get involved early enough,

The DoJ has little leverage when the collapsing of the technology into fewer units is something that has been an industry practice for a very long time.
Intel doesn't "have to" license all their interfaces. The buses that collapse into the chip packages will in many cases stop being open and widely implemented (at least in the x86-specific case).



In the long run, I believe Mac computers will be using an alternative "APPLE" CPU.

No way Apple is getting an x86 license. Frankly, the industry's slavish devotion to the x86 instruction set is what gives Intel most of its power.
There won't be a classic personal computer, Mac included, with anything but an x86 in it. That war is over. All the cleaner solutions lost.

Apple may eventually kill off Mac OS X, but whatever replaces it won't be a Macintosh.
 
Are you joking? Do you not see that Intel pushed Nvidia out because Intel couldn't compete with the performance of Nvidia chipsets/GPUs? What happened was that Nvidia had a license and had been winning the battle, with big contracts from Apple and other computer manufacturers. Since Intel couldn't compete, it claimed Nvidia's license wasn't valid for Nehalem-based CPUs. Nvidia certainly didn't expect this, as it was developing two new chipsets to compete with and support Core i-series CPUs like mobile Arrandale. Intel played BULLY and sued Nvidia to stop it, solely because INTEL COULD NOT COMPETE! Since it couldn't compete fairly, it pushed Nvidia out. It was OBVIOUS to everyone except Intel stakeholders that Intel was acting anti-competitively.
I'll agree that Intel had issues competing against nVidia when it came to GPUs (although in reality, Intel's sales for IGPs are still tremendous, and in regard to the discrete GPU market, Intel still has more of a take-it-slow approach). However, I think it's hard to argue that in areas such as the desktop chipset market, Intel's chipset performance was worse than nVidia's. The only reason people really considered nVidia's Intel chipset products was for SLI support. Otherwise, in terms of general performance, stability, etc., Intel's offerings were generally the more highly regarded.

As the former owner of both 600- and 700-series nVidia chipset products (they were the only option at the time for SLI with the Core 2 series), I could go on and on about the stability issues I experienced with them. And I know quite a few individuals who had similar problems. nVidia's best chipset solutions were the old nForce 2 and 3 series for the Athlons. Anyone remember SoundStorm? It was amazing.

In the end, Intel will lose and Nvidia will be able to make chipsets for Intel CPUs again. However, until Intel is slapped on the wrists and told it must compete fairly, Intel will be forcing its chipsets on PC manufacturers. Unfortunately, CONSUMERS LIKE US ARE THE LOSERS! We are forced to use inferior products because Intel has the most money.
Honestly, I'm leaning more towards Intel likely winning the argument in regards to licensing for designing chipsets. Intel's original agreement, from 2004, basically covered DMI and support for external memory controllers. With Nehalem (and ultimately Atom revisions), Intel moved away from DMI and over to QPI, with integrated memory controllers, and QPI is different enough from DMI that I wouldn't expect it to be covered by the original agreement. It seems more that nVidia is trying to claim that its agreement is a blanket licensing arrangement, but I don't know enough about that to say whether it is or not. cmaier would be a good one to comment in response to that :D

Now, that having been said, I do fully support there being as many available options in the chipset business as possible, whether good or bad, as it helps to provide competition. X58 is an excellent chipset though (albeit one that runs fairly hot).

The problem with all of these scenarios is that the DOJ doesn't get involved early enough, and the bully has enough time to catch up with the real innovation of the smaller company that's forced out of the game. By the time Intel loses in court, Nvidia will have been out of the chipset business long enough to no longer compete as Intel will have caught up with the Nvidia chipsets. Nvidia will not have had the money for R&D during the period when it couldn't manufacture the chipsets, and it will be incredibly difficult to compete again when the cycle completes.
Well, in this regard, I don't think the DoJ/FTC is getting involved over the licensing agreement, but rather over how Intel has been trying to sell Atom to vendors only when bundled with its own chipsets (at least in regard to the nVidia portion; AMD is a whole different matter, although with the Intel-AMD agreement it's unlikely AMD will offer much help to investigators).

This is a serious problem. I can hardly believe it when people act as if Intel has every right to force its products on people. In the CPU business, Intel has competed well. When it comes to chipsets and graphics, Intel is crap! So Intel just forces its chipsets and graphics by playing the big bully and acting extremely anti-competitively. Most of these people acting like Intel has every right to deny competition are STAKEHOLDERS... for the rest of us it's common sense that Intel is in the wrong.
Intel Chipsets =/= crap. Intel GPUs are what they are - basic offerings ideally suited for non-power users who simply do basic office productivity and light media usage (and honestly, that's the majority of users, whether on a Mac or a PC).

Is Intel in the wrong for trying to force product bundling on vendors so as to thwart competition? Yes, most definitely. Is Intel wrong for threatening to withdraw financial support from vendors who wanted to offer AMD products? Yes (although a bit murkier, I believe). Is Intel wrong for not necessarily licensing its IP so other companies can develop chipsets? Honestly, I don't think so, but I'm not an expert in that area, so I can't say for sure. People often point out that there's nothing wrong with Apple restricting access to its technologies, yet somehow Intel is "bad" or "evil" for wanting to do the same with its tech portfolio. However, Apple does have another option for processors/chipsets: AMD. And outside of the Mac Pros or i5/i7-based iMacs, Apple still relies heavily on the Core 2 lineup, which Phenom could fairly easily replace. There is choice in this regard. It's simply that everyone's favorite company chooses to stick with Intel.

Apple will be best off eliminating Intel from its Macs. In the long run, I believe Mac computers will be using an alternative "APPLE" CPU. It will take some time, but I believe Apple understands that partnering with Intel is not in the best interest of Apple stakeholders, which include its customers!
The problem is that Intel provides both the top-performing CPUs (Nehalem-based) and the greatest options for both timely updates and cost. Apple's acquisition of PA Semi was a good move in terms of getting themselves into designing low-power offerings and packages, but ultimately you're kidding yourself if you honestly think Apple is anywhere close to being able to provide desktop products equivalent to what Intel or even AMD provide. I honestly don't see them ever entering that area. Apple's total net income for 2009 was what, around $8 billion? Intel is planning to spend almost $7 billion on research and development alone this year. There's just no way Apple can compete with that.

The real constraints in today's computers have nothing to do with the CPUs. We would be fine with the C2D for years. We can speed up the user experience by improving software, graphics hardware, and drive controller bandwidth. It's just that Intel has everyone believing the only way to increase speed for the end user is by making faster CPUs...
This all depends on what individuals are using their systems for. If all they're using them for is basic internet usage, office productivity, etc., then yes, that's perfectly true. But in reality, a lot of those users aren't upgrading that regularly anyway. What's the average upgrade rate for the standard consumer, like once every 3-4 years? Technology evolves considerably in that time, and only helps to improve one's experience. And the reality is, processors are able to execute data faster, more reliably, with fewer "misses" during execution. Cache has improved, etc. That all results in faster and more stable performance that is directly experienced by the user. Is the Core 2 series more than adequate for most users? Yes, but those users aren't likely to upgrade for a number of years anyway, and by the time they do, advancements to the OS, applications, etc., will provide a noticeable increase in the overall user experience.

It's really backwards if one thinks about the constraints of today's computers. It's just too bad people don't understand this and just accept that Intel must be correct and we all need a Core i7 CPU. As long as people remain out of the loop, they're going to keep believing that buying a computer with a new CPU is the best way to speed up their performance experience.
But all companies advertise their products in such a way as to make users want to upgrade, not just Intel. Look at Apple: every time a product refresh occurs, you see them talking about the performance increases, the reduction in time it takes to complete tasks, and how "this is the fastest Mac yet", and how much you want it. Do most Mac users need to upgrade? Of course not. But Apple makes it sound like you should. The iPhone is another example - for most iPhone users, I'd honestly believe the original model would suffice, or otherwise the 3G. Yet Apple wants you to upgrade to the latest model, not because in reality you'll experience that much more, but because they want you to buy their product so they make money. That's the name of the game, and every company does it. :)
 
Apple bought a large fraction of PowerPC chips (at least in the non-embedded market) and worked with the arch team. PowerPC is and was not a custom chip. Neither is Itanium. You folks have a seriously non-technical and grossly skewed definition of "custom chip" if buying 60% of them qualifies as "custom".
Well, I think you're a little lacking in the history of PowerPC. There were two companies working "together" on PowerPC: IBM and Motorola (which eventually spun off the chip division as Freescale). Apple worked with Motorola on PowerPC design, with occasional cross-work with IBM. For the most part though, for the G3s and G4s, it was Apple and Motorola. Motorola was basically the consumer-branch for PowerPC, while IBM handled the market for workstations, servers, etc. As such, their product lines were basically divergent to some extent.

Eventually, Motorola/Freescale decided that the consumer market for PowerPC was just too limited, especially since Apple was basically their only customer, and exited the business. Apple then relied on IBM for the G5, but IBM was slow to release faster/upgraded versions, and along with heat issues, Apple eventually decided that Intel was the better way to go (fewer thermal management concerns, more regularly updated product lines, better cost to performance, etc.).

Now, in regards to Itanium, it was actually originally HP's design and development process that got it started, but HP decided it wasn't cost-effective for them to design and produce their own processors and chipsets, so they partnered with Intel to continue with it. It very much did begin as a project of HP though, and for a long time, Intel basically continued to design it with regard to what HP desired. Thus, it can be considered a "custom" design per HP's specifications. With time, HP had less influence and Intel basically exerted all of its own requirements upon it.

Itanium was targeted at killing off SPARC, Power, Alpha, MIPS, PA-RISC, Cray vector CPUs, etc. in the enterprise and high-performance computing market. It has killed off Alpha, MIPS, PA-RISC, and the custom supercomputer CPUs. SPARC... well, it at least killed off Sun as an independent company. Power... well, IBM has deeeeeeeeeeeep pockets. That was a long shot.
Itanium had nothing to do with Sun's downfall. SPARC was (and in some circles still is) highly regarded. When it came to SPARC, Sun banked on the technology sector boom of the late 90s continuing, with companies still wildly placing orders for its products. When the bubble collapsed, demand dried up almost instantly as tech companies went under, and Sun was left with a large amount of unsold inventory. That began its fall from grace.


Many smaller and/or less stable vendors who bet the farm on Itanium (Platform Solutions, SGI, etc.) lost because the versions didn't roll out in a timely fashion and the tech didn't perform as expected. None of that makes it "custom".
Well, Intel had hoped that Itanium would completely replace its X86 architecture across the board, but when it did finally debut, yes, performance was fairly... off the mark, and its emulation performance of X86 was really bad. Both CISC and RISC still outperformed it, and given the investment requirement to port to, or develop for, Itanium, it just didn't really succeed.

A lot of lessons from Itanium, both good and bad, ultimately helped Intel when working on the Core 2 and Nehalem architectures, so at least not all was lost.

HP isn't doing much on core design work anymore. Intel is practically doing all of that now.
HP doesn't do any core design work anymore as far as I know (and hasn't for quite some time). They basically go to Intel and say "Hey, we'd like to see these improvements", as do many other companies.

Compared to Intel's other workstation/server offerings, I don't recall Itanium being that prevalent in regards to sales. HP still accounts for something like 90% of all Itanium-based sales though, so yeah, they still have quite a bit of influence.
 
Just stick a powerful GPU in there and have it be smart enough to throttle based upon needs. No need for two GPUs.

Though not the same as a laptop, ATI does just that, and even allows easy overclocking, which begs the question: why all the Nvidia cards? I can understand it if you really need PhysX for those few games that get something out of it, but in MacBooks?

I'd like to have seen the iMac and MacBook Pro go to ATI 5000-series video chips in the next revision. :D
 
No it's not. AMD was working on a GPGPU for a very long time. Intel responded with Clarkdale to take away the possible advantage AMD was going to get with it.
I think you're misusing the term "GPGPU" for the products you're referring to. If you're referring to AMD's Fusion and Intel's Clarkdale, those simply combine a GPU and a CPU within the same package (with some sharing of similar architectural units between them). They're still generally discrete, however.

GPGPU, or general-purpose computing on a GPU, is the practice of carrying out application computations on the GPU instead of the CPU, generally because a GPU can process them more efficiently and quickly, which also reduces the load on the CPU. nVidia's GeForce/Quadro line is the best known for its GPGPU functionality.

Will Fusion and Clarkdale/Arrandale be capable of GPGPU? Yeah, because traditional CPU computation can be offloaded to the GPU, but a GPU/CPU integration into the same package does not a GPGPU make.
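To make the distinction concrete, here's roughly what GPGPU looks like in practice: a data-parallel kernel handed off to whatever OpenCL device is present (discrete card, IGP, or even the CPU). A minimal sketch, assuming PyOpenCL is installed; the kernel and variable names are just illustrative, not any particular vendor's API.

```python
# Minimal GPGPU sketch via OpenCL (PyOpenCL): the CPU hands a data-parallel
# job (vector add) to whatever OpenCL device the runtime picks.
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()      # picks a GPU if one is available
queue = cl.CommandQueue(ctx)

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# The kernel body runs once per element, in parallel, on the device.
program = cl.Program(ctx, """
__kernel void vec_add(__global const float *a,
                      __global const float *b,
                      __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()

program.vec_add(queue, a.shape, None, a_buf, b_buf, out_buf)

out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)   # copy the result back to host RAM
```

That's all GPGPU is: general-purpose work expressed as kernels. Whether the device sits on the same package as the CPU (Fusion/Clarkdale style) or on a discrete card is orthogonal to it.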
 
Sony found a way to fit a discrete Nvidia GT 330M with 1GB dedicated vRAM (using Optimus), dual SSDs, and a Blu-Ray drive in its Vaio Z. While it isn't a hybrid graphics selection,

Hmm, sub-1-inch thick, 7 hr, sub-$1,500: MBP 13"

http://www.apple.com/macbookpro/specs-13inch.html

Vaio Z: 1 inch thick, approx. 4 hr, over $1,800

http://www.engadget.com/2010/02/11/sony-vaio-z-series-vpcz114gx-s-review/
http://www.engadget.com/photos/sony-vaio-z-series-review/#2695115


Not sure Apple is going to go the direction where the battery life drops.


P.S.
From the Engadget review:
"While the integrated Intel GMA HD graphics were fine for basic everyday tasks, the 1GB NVIDIA GT330M was better suited for handling high-def video and 3D games."

Given there isn't going to be blu-ray HD video either... that just leaves 3D games.
 
Not sure Apple is going to go the direction where the battery life drops.
That's the amusing thing though. I'm all for battery life, but the MacBook Pros should not compromise performance/functionality for extended battery life. They're meant to be professional tools, not chic fashion icons (and in reality, they could still have an amazingly nice design, just slightly larger or with slightly less battery time).

The former MacBooks, before being rebranded as MBPs, were a perfect example of nice-looking notebooks that were perfectly suited to the casual user market.


"While the integrated Intel GMA HD graphics were fine for basic everyday tasks, the 1GB NVIDIA GT330M was better suited for handling high-def video and 3D games."

Given there isn't going to be blu-ray HD video either... that just leaves 3D games.
So it has a Blu-Ray drive in the device that is not capable of playing back Blu-Ray movies? Uh, ok... I'll quote Sony's own store profile of its features list:

With pristine HD resolution and stunning clarity, Blu-ray Disc™ technology is the ultimate way to enjoy your entertainment.
http://www.sonystyle.com/webapp/wcs/stores/servlet/CategoryDisplay?catalogId=10551&storeId=10151&langId=-1&categoryId=8198552921644570897&parentCategoryId=16154#features
 
It seems more that nVidia is trying to claim that its agreement is a blanket licensing arrangement, but I don't know enough about that to say whether it is or not. cmaier would be a good one to comment in response to that :D


As I'm not privy to the licensing agreement, I can't guess. But I wouldn't be at all surprised if the feds spank Intel in antitrust because of this.
 
No it's not. AMD was working on a GPGPU for a very long time. Intel responded with Clarkdale to take away the possible advantage AMD was going to get with it.

Just because Intel released theirs first doesn't mean they were first with the idea; AMD had been planning it much earlier. ;)

You need to learn how to read.
 
It is not CPU/GPU on die and not GPU switching.

Fine, it is not on die, but Intel still stole the idea.

If it's not GPU switching, then what is it?

http://www.anandtech.com/mobile/showdoc.aspx?i=3737&p=3

Optimus does not "shut down" the IGP. The IGP's connection to the display(s) is used no matter which GPU does the grunt work for a specific application. It is also not "automatic". Applications are looked up in a database, and if listed they will use the discrete processor when available. The default is to use the IGP. If you don't use a "shared" framebuffer, you need hardware switching, which complicates the design. Your application is not "switched" from one GPU to another; it always runs on one specific GPU until you update the database. Likewise, the "output" logic/circuits of the discrete GPU aren't connected to anything; again, nothing is switched.


Likewise, it is not just a "driver" solution, since it needs a hardware framebuffer copy engine that can run in parallel with the graphics pipelines.

Your application mix will dictate what kind of battery life you get. It will be worse than IGP-only mode, but likely better than discrete-only mode. If you run a mix of applications, there will be times when you are in IGP-only mode with the discrete unit shut down (unless you're continuously running some complicated 3D app in the background, which is a corner case).
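To spell out what "looked up in a database" means, here's a conceptual sketch of the selection logic as I understand it. These are made-up names and stubbed functions, not Nvidia's driver code; the point is just that the default is the IGP, the choice is per-application, and the IGP always drives the display.

```python
# Conceptual sketch of Optimus-style per-application GPU selection.
# Hypothetical names and stubbed hardware calls, purely illustrative.

profile_db = {
    "game.exe": "discrete",       # vendor-supplied profile database
    "cad_app.exe": "discrete",
    # anything not listed falls through to the IGP
}

def discrete_gpu_render(frame): return f"dGPU({frame})"           # stub
def copy_to_igp_framebuffer(data): print("copy engine ->", data)  # stub
def igp_render(frame): print("IGP renders", frame)                # stub
def igp_scanout(): print("IGP display pipeline scans out")        # stub

def render_frame(app_name, frame):
    gpu = profile_db.get(app_name, "igp")   # default: IGP
    if gpu == "discrete":
        rendered = discrete_gpu_render(frame)
        copy_to_igp_framebuffer(rendered)   # needs the hardware copy engine
    else:
        igp_render(frame)
    # Either way, the IGP's display pipeline does the scan-out; the
    # discrete GPU's display outputs aren't wired to anything.
    igp_scanout()

render_frame("game.exe", "frame 1")   # runs on the discrete GPU
render_frame("textedit", "frame 2")   # not in the database, stays on the IGP
```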


With Arrandale, the IGP is on a separate die. The i3/i5/i7 mobile processor packages contain two dies (CPU at 32nm, IGP at 45nm): one with the CPU/memory controller (/PCI?) and the other with the IGP (/PCI?), connected by an internal QPI link. It is OK to separate the IGP from the memory controller, since QPI is what you would use to link memory controllers on different CPU packages anyway.

[From some other comments here: it is very unlikely that the memory controller is on the die with the IGP, since other Nehalem-class processors have the memory controller built into the CPU die. No reason to change that just for the mobile versions.]

This isn't Intel's merged solution yet. By the time Intel shrinks the Arrandale IGP down to 32 nm (or 28 nm), there will be a big enough power and transistor budget to equal the 9400. And then it becomes a question of what is "good enough". The current Arrandale IGP has OpenGL 2.1, hardware video, and enough horsepower to do all of the mainstream Core Animation stuff Apple provides through standard APIs. It is only when you push into stuff like mid-to-high-range 3D games/apps that there is an issue.

http://techreport.com/articles.x/18218/8

The Arrandale IGP is 2.5 times faster than the old Intel IGP that folks moan about. That is a significant jump. No, it can't play Call of Duty 4 at 30fps, but that never was Apple's targeted market with MacBook Pros. Apple has never targeted gamer laptops.

The IGP is doing about 60-70% of what the 9400 does. Folks whose graphics usage only stressed 50% of the 9400 will see no difference. Once the gap shrinks to 15-20%, the market for discrete graphics will shrink even more.


It hasn't been so much a case of "low quality" GPUs with Intel, but more a question of what price, power, and transistor budget they have allocated to their IGPs. Intel's IGPs have been less expensive, lower power, and smaller than the other solutions.



The significant difference between the AMD version and the Nvidia Optimus solution is that Optimus works across heterogeneous GPU implementations (Intel & Nvidia as opposed to AMD/ATI & AMD/ATI). Conceptually, the Optimus technology would work equally well with an AMD CPU/GPU, as long as Nvidia updated the abstract driver interface to ship IGP work off to the ATI GPU. (As noted, right now the Nvidia discrete stuff is running hotter than the ATI competition, so it's not a highly likely market at the moment, but it could be done.)

There is a similarity in the copying between GPU framebuffers (similar to what CrossFire/SLI do for high-end gaming graphics), only in this case the two GPUs are not equal or compatible.


I'm not getting into this argument; it's based on a misread statement and you have some of your words confused.

BTW, you'd be surprised how much technology is actually implemented in software rather than hardware. For example, CrossFireX and SLI have been enabled on competitors' chipsets through driver mods.
 
Just stick a powerful GPU in there and have it be smart enough to throttle based upon needs. No need for two GPUs.

There is also the same reason why we don't do this with CPUs:
Powerful GPUs cost more money.

This is almost equivalent to saying "just stick a 12-16 core CPU package in there and just turn off cores when not using them." Conceptually you could, but:
i. you're going to get charged substantially more for that silicon and package.
ii. what happens when you go to max performance and still have to stay within the thermal envelope of the laptop's constraints?

The first is partly an issue of getting high-volume, high-yield, low-cost access to production runs of something with many more transistors than a smaller-transistor-budget solution.

Idling the cores on a Xeon processor isn't going to get you a low power/thermal alternative.

With the "integrated" aspect of a IGP part of the issue is that some portion of the die must be dedicated to the "other stuff" sharing the die with. Sharing also mean cost cutting. So with shared memory controller don't have duplicate memory controllers or RAM ( use the same ones that the CPU uses). However, if have large number of execution pipeplines in your monster GPU ... will there still be bandwidth to share with the CPU? Remember discrete solutions put much of the memory/pipeline pressure on the "back end" away from the CPU memory bandwidth.

With the large growth in CPU cores has also come growth in memory controllers. That would only get worse if you piled a significant number of GPU cores onto the same relatively small set of memory controllers.
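Some rough back-of-the-envelope numbers on why that sharing matters. Peak bandwidth is roughly transfer rate times bus width; the clocks below are illustrative assumptions for parts of that era, not checked spec-sheet figures.

```python
# Back-of-the-envelope memory bandwidth: transfer rate (MT/s) x bus width.
# The example figures are illustrative assumptions, not exact specs.

def peak_bw_gb_s(mtps, bus_width_bits, channels=1):
    return mtps * 1e6 * (bus_width_bits / 8) * channels / 1e9

shared_system_ram = peak_bw_gb_s(1066, 64, channels=2)  # dual-channel DDR3-1066
discrete_gddr     = peak_bw_gb_s(1600, 128)             # e.g. a 128-bit GDDR3 card

print(f"Shared system RAM (CPU and IGP together): ~{shared_system_ram:.1f} GB/s")
print(f"Discrete card's private memory:           ~{discrete_gddr:.1f} GB/s")
# The IGP has to carve its share out of the first number; a discrete GPU
# gets the second number to itself, on top of whatever the CPU already has.
```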
 
The "GPU" in the Arrandale CPU also includes the memory controller for the CPU. There is no easy way to not include it. Not to mention it's been benchmarked to be about equal to the 9400M, and supports full screen video acceleration and other good features such as an audio output!

I read online somewhere, and can kinda believe it, that the fastest integrated offering from Intel is still half as fast as the 9400M in terms of gaming performance and video render times, lol. Glad I got my MBP at Xmas, 'cos these GPU chips could ruin everything. Take it from me: I have 7 Macs, 4 of which use Intel graphics. My 3 Mac minis use the GMA 950 and my old white MacBook uses an X3100, which is faster than a GMA 950 but still ultra slow compared to a 9400M; even the new HD-series Intel ones are slow. Also, OpenCL in Snow Leopard currently doesn't support any Intel cards, and with 10.6.3 in testing I'm sure we'd have seen some new drivers by now. To be honest, this update doesn't look likely for quite a while, perhaps April/May/June time.
 
This is almost equivalent to saying "just stick a 12-16 core CPU package in there and just turn off cores when not using them." Conceptually you could, but:
i. you're going to get charged substantially more for that silicon and package.
ii. what happens when you go to max performance and still have to stay within the thermal envelope of the laptop's constraints?
You mean like how the i7 "shuts down" (in fact they're idling, but at almost 0 power draw) cores that aren't in use and increases the clock rate of the non-idling cores via TurboBoost?

The whole design philosophy behind TurboBoost in Nehalem-based CPUs is that cores are throttled down to nearly-zero power consumption, and are essentially "turned off", when not in use. Let's use an example:

Let's say you have a quad-core processor where each core, at load, is rated for 2.8 GHz. Now let's say the full TDP of the processor is rated for 100 W. Each core, at load, will be using 25 W.

Now, let's say you're not at load, and only one core is needed. The design behind Nehalem (that's especially taken advantage of with Lynnfield and later derivatives) uses what's called the Power Gate Transistor, which allows a core to be essentially put into a turned-off state, with nearly-zero power consumption. Thus, when only using one core, that core can essentially have almost the entire 100 W of rated TDP, which allows the core to increase its frequency. Hence, TurboBoost. Clarkdale itself includes TurboBoost, which very much shows that cores can be "turned off" in laptops.
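Putting the same example in code (the 100 W, four-core figures are the hypothetical ones from above, not a real processor's specs):

```python
# The worked example above: a hypothetical 100 W quad-core part.
# Power-gated cores drop to roughly 0 W, so the remaining active cores
# can spend the leftover TDP headroom on higher clocks (Turbo Boost).

TDP_W = 100.0
CORES = 4
per_core_at_full_load_w = TDP_W / CORES   # 25 W per core with all four loaded

for active in range(1, CORES + 1):
    budget_per_active_core = TDP_W / active   # gated cores draw ~0 W
    print(f"{active} active core(s): up to {budget_per_active_core:.0f} W each "
          f"(vs. {per_core_at_full_load_w:.0f} W with all {CORES} loaded)")
```

Whether the active core actually clocks up to use that headroom depends on demand, as noted below.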

Idling the cores on a Xeon processor isn't going to get you a low power/thermal alternative.
Yes and no. Using the above example again, when three of the four cores are "turned off", that one remaining active core will have the full 100 W available for use. However, remember that it's rated for 25 W at 2.8 GHz, so if the current CPU utilization is nowhere near full load, the core has no reason to scale up and use more power. Thus, while technically it's not a consistently "low power/thermal" envelope, it will still commonly result in lower power consumption and thus lower thermal output.

With the "integrated" aspect of a IGP part of the issue is that some portion of the die must be dedicated to the "other stuff" sharing the die with.
Not necessarily. Generally, yes, as it is more cost efficient, but you could provide discrete architectural units on-package for both the CPU and GPU. It would simply result in a larger package size, which is something Intel, AMD, and others would prefer to avoid.

Sharing also means cost cutting. With a shared memory controller you don't have duplicate memory controllers or RAM (you use the same ones that the CPU uses).
It all depends on how they implement it. For low-cost solutions, I could see the memory controller being shared to some extent. For any type of future performance GPU/CPU solution, I'd imagine they'd probably have discrete controllers or, at the very least, enough redundancy in a single controller to allow it to achieve much of the same performance as two discrete solutions.

Remember too that, outside of bandwidth-intensive benchmarks that truly stress system bandwidth, the current i7s with an X58 chipset don't even see a performance advantage over Lynnfield's dual-channel memory controller, even though the X58 and QPI-based i7 has a third memory channel. Intel has a lot of smart people working for them, so I'm sure feeding bandwidth into both the CPU and GPU will not be a problem.

However, if you have a large number of execution pipelines in your monster GPU... will there still be bandwidth to share with the CPU? Remember, discrete solutions put much of the memory/pipeline pressure on the "back end", away from the CPU's memory bandwidth.
Both AMD and Intel have openly stated that same-package CPU/GPU solutions that rival discrete offerings will not be available until quite some time into the future. You think Intel is interested in developing their own discrete high-performance graphics solutions simply for giggles? :)

As it stands, even when high-performance GPU solutions become available in an integrated package, I don't think we'll have to be worrying about available bandwidth.

With the large growth in CPU cores has also come growth in memory controllers. That would only get worse if you piled a significant number of GPU cores onto the same relatively small set of memory controllers.
Memory controllers have actually shrunk - compare the space utilized in the current integrated solutions in a CPU die to the typical size of the memory controller from the old Northbridge-based controllers.

Also, if you compare the footprint of the memory controller for Gulftown to that of Bloomfield, it really hasn't changed that much. In reality, Nehalem-based solutions are far more dependent upon the latency of the cache and how well it can feed data into the cores than it is on the bandwidth provided by the memory controllers.
 
By volume, Intel was selling more graphics solutions than Nvidia was.
Similarly, there have been numerous System-on-a-Chip tracks at chip conferences for the last 10-12 years. The collapsing of the CPU, GPU, and most supporting chips has been predicted, and happening, for a very long time now. The notion that Intel woke up one morning, found the 9400 eating their shorts, and decided to purposely nuke Nvidia isn't backed by any factual observation of the industry over the last 10 years.

Most system vendors have been asking for solutions with lower chip counts for a very long time. This is already extremely pervasive in the smaller computer markets (embedded, handhelds, etc.). Desktops and laptops have been heading in the same direction at a slightly slower pace, but have nonetheless been traveling that way for a long time.

It is unlikely that Intel will lose to Nvidia. The CPU and memory controller will be the core "black hole" that the components collapse into (at least on the computational logic side; off-board communication has other logical "black hole" points, e.g., radios and their associated processing).

Nvidia tried to defer the collapse by trying to get things to coalesce around the GPU. That's doomed because applications and operating systems run on CPUs. People primarily buy computers to run applications (and perhaps the associated OS), not to push data through GPUs.

Nvidia will get into a market where they can provide a CPU. There will be an increasingly smaller business that sells discrete GPUs. The discrete GPU will eventually go the way of the discrete floating-point coprocessor in mainstream computing.

Where AMD and Via (the only others with viable x86 licenses) deliver solutions, there will be some competition and consumer benefit.

The DoJ has little leverage when the collapsing of the technology into fewer units is something that has been an industry practice for a very long time.
Intel doesn't "have to" license all their interfaces. The buses that collapse into the chip packages will in many cases stop being open and widely implemented (at least in the x86-specific case).

No way Apple is getting an x86 license. Frankly, the industry's slavish devotion to the x86 instruction set is what gives Intel most of its power.
There won't be a classic personal computer, Mac included, with anything but an x86 in it. That war is over. All the cleaner solutions lost.

Apple may eventually kill off Mac OS X, but whatever replaces it won't be a Macintosh.

You bring up some interesting points, and I can see your side of the argument. I still see Intel's move as being anti-competitive and vertical in nature.

First, it's not about whether Intel was selling more chipsets or not. With regard to Nvidia, it had devised a mobile chipset integrating a GPU superior to anything Intel could provide. I believe Intel was embarrassed by Apple's move to Nvidia. Furthermore, Apple trashed Intel by stating its customers were getting 5X the graphics performance of Intel chipsets. This cost Intel not only sales but also brand value. I believe Apple and HP together cost Nvidia its license. Nvidia certainly didn't know it had lost its license, as it was developing two new chipsets with integrated GPUs for Nehalem-based mobile CPUs. It was awfully late in the game when Intel claimed Nvidia had no license…

It's not about consolidating parts in consumers' best interests. Look at what really happened: the 45nm GMA die was added right to the CPU package. All it really did was move part of the Intel chipset to the CPU. Does that really define integration in the consumer's best interest? A computer still needs a CPU and chipset… both offered by Intel.

Intel decided to eliminate Nvidia's chipset license by adding its graphics die to the component makeup of the CPU, thereby changing the chipset and CPU makeup – a technicality. That is all it was, a technicality. Tell me Apple wouldn't buy Nvidia chipsets and GPUs just because Intel is now including the graphics die right on the CPU. I wouldn't believe it for a minute, because Intel's graphics are inferior. Apple spent a lot of money convincing us not only that Intel's graphics were inferior but that we could count on utilizing the extra graphics capabilities of the Nvidia GPU in the future via OS X's OpenCL integration.

In my opinion, it leads to a vertical solution for Intel, which is an anti-competitive move. What Intel did was rearrange the makeup of the CPU and chipset, then claim on that basis that Nvidia no longer had a license... and then Intel wouldn't let Nvidia license the new interface for future Intel products. The reason it refused was that Nvidia cost it not only some of its mobile chipset business but also a lot of bad publicity with Apple. Intel had finally found its way into Apple's computers, gaining not only sales but also the premium branding that comes from being in some of the most highly regarded computers in the world. When Apple moved to Nvidia's chipset/GPU model, it cost Intel more than just some mobile chipset sales.

I definitely understand what you're saying about consolidating the graphics, etc. But that's not the core argument here. When there's vertical integration with inferior products throughout the vertical path, it's not in the best interests of consumers. That is exactly what we have here. Intel is providing an inferior component within the CPU itself instead of the chipset. This is the key to a core vertical argument. The anti-competitive abuser is the one requiring the consumer to buy a "package" filled with inferior components… the inferior component here is the graphics die. Not only that, Intel is still requiring consumers to buy all the parts, just via a different method (both packages contain the same parts between them, but both packages must now be branded Intel). Intel's graphics die cannot compete outside of the CPU, so Intel moved it right onto the CPU. It's advantageous for Intel to do this, and it's advantageous for computers that don't need graphics capabilities. But it's not advantageous for most, and it's certainly not advantageous for all.

Therein lies the problem: the average consumer wants affordability but also needs both a CPU and a chipset to make his/her computer run… with Intel, both of those components must now be Intel because of the technicality of moving the GMA die onto the CPU. This is a vertical move forcing the elimination of a competitor to better Intel's own interests. It is anti-competitive and Intel knows it. Intel is still selling the same old parts in two different packages… just part of "package B" now ships in "package A."

I hope Apple can find a way to use the Intel IGP via automatic switching with Optimus. Otherwise, we were FORCED to buy a component we didn't need. See, there's the last piece of the anti-competitive puzzle. Intel forces the IGP to be purchased by all within the CPU now! Different computer users/consumers/buyers have different needs. With Intel's anti-competitive move, EVERY USER MUST BUY INTEL'S IGP, and many will end up with graphics that aren't even used or necessary.

Apple is the prime example with its current Nvidia GPU. The Nvidia 9400m GPU/chipset allowed Apple to buy the right graphics within the chipset to provide the performance required by Apple Mac users. This shows why the graphics being integrated within the chipset was far more advantageous for the consumer than that integrated within the CPU itself. Before Intel’s anti-competitive move, Apple was able to buy one Intel CPU and one Nvidia chipset that met the requirements for Mac users using Mac mini, iMac low-end, MBA, MB, 13” MBP, and 15” MBP low-end computers. Since Intel’s Arrandale IGP is inferior to what could be provided within a graphics chipset, Intel is forcing Apple to buy a CPU with integrated graphics, an Intel chipset, and a discrete solution to get the exact same minimum level of performance as was achievable previously via the integrated GPU/chipset and Intel CPU.

That is correct, and that shows anti-competitive nature more than any other piece of the puzzle. Every CPU buyer will have to buy Intel’s graphics whether they’re inferior to that buyer’s needs or not… so there’s MORE WASTE for the consumer NOT LESS!

Sorry, but this still sticks out to me as a vertical and anti-competitive move by Intel. I sure wish I could have written it in a lot fewer sentences… LOL.
 
I read online somewhere, and can kinda believe it, that the fastest integrated offering from Intel is still half as fast as the 9400M in terms of gaming performance and video render times, lol. Glad I got my MBP at Xmas, 'cos these GPU chips could ruin everything. Take it from me: I have 7 Macs, 4 of which use Intel graphics. My 3 Mac minis use the GMA 950 and my old white MacBook uses an X3100, which is faster than a GMA 950 but still ultra slow compared to a 9400M; even the new HD-series Intel ones are slow. Also, OpenCL in Snow Leopard currently doesn't support any Intel cards, and with 10.6.3 in testing I'm sure we'd have seen some new drivers by now. To be honest, this update doesn't look likely for quite a while, perhaps April/May/June time.
http://developer.apple.com/graphicsimaging/opengl/capabilities/

Maybe it's a backup plan, but Apple actually seems to have already put significant effort into improving the OpenGL drivers for Intel GMAs. Leopard drivers only supported up to OpenGL 1.2 for both the GMA 950 and the GMA X3100 while Intel advertised the GMA 950 as having hardware OpenGL 1.4 support and the GMA X3100 as having OpenGL 1.5 support. In Snow Leopard, the GMA 950 drivers now have OpenGL 1.4 support and the GMA X3100 has OpenGL 2.0 support on par with the ATI X1000 series. And according to the chart, the GMA X3100 actually supports the extensions necessary for OpenGL 2.1 support which is supposed to be Arrandale's IGP's new feature. If Intel graphics really was a dead-end solution as Apple seemed to imply when they marketed the nVidia 9400M switch, it doesn't seem likely that Apple would invest so much development effort in bringing OpenGL support up to par for these older Intel IGPs, unless they were setting the stage for Arrandale's IGP which is an improved derivative of the GMA X3100 and could share the same driver roots.
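For what it's worth, if you'd rather see what your own machine's driver reports than trust the chart, you can query the GL strings from a live context. A rough sketch using PyOpenGL with GLUT just to obtain a context (context creation details vary by platform, and the two extensions checked are ones that were folded into OpenGL 2.1; treat it as illustrative):

```python
# Query what the OpenGL driver actually exposes. PyOpenGL + GLUT are used
# only to obtain a context; illustrative sketch, platform details may vary.
from OpenGL.GLUT import glutInit, glutInitDisplayMode, glutCreateWindow, GLUT_RGBA
from OpenGL.GL import glGetString, GL_VENDOR, GL_RENDERER, GL_VERSION, GL_EXTENSIONS

glutInit()
glutInitDisplayMode(GLUT_RGBA)
glutCreateWindow(b"gl-caps")   # throwaway window, just to get a current context

print("Vendor:  ", glGetString(GL_VENDOR).decode())
print("Renderer:", glGetString(GL_RENDERER).decode())   # e.g. the GMA X3100
print("Version: ", glGetString(GL_VERSION).decode())    # e.g. "2.0 ..."

extensions = set(glGetString(GL_EXTENSIONS).decode().split())
# Two of the extensions that OpenGL 2.1 folded into core:
for ext in ("GL_ARB_pixel_buffer_object", "GL_EXT_texture_sRGB"):
    print(ext, "-> supported" if ext in extensions else "-> missing")
```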
 
Hmm, sub-1-inch thick, 7 hr, sub-$1,500: MBP 13"

http://www.apple.com/macbookpro/specs-13inch.html

Vaio Z: 1 inch thick, approx. 4 hr, over $1,800

http://www.engadget.com/2010/02/11/sony-vaio-z-series-vpcz114gx-s-review/
http://www.engadget.com/photos/sony-vaio-z-series-review/#2695115


Not sure Apple is going to go the direction where the battery life drops.


P.S.
From the Engadget review:
"While the integrated Intel GMA HD graphics were fine for basic everyday tasks, the 1GB NVIDIA GT330M was better suited for handling high-def video and 3D games."

Given there isn't going to be blu-ray HD video either... that just leaves 3D games.

I don't remember the quote you used, but the Sony Z weighs in at around 3 pounds; it's a full pound lighter than the 13" MBP. It actually competes in performance with the MBP while it matches the weight and physical characteristics of the MBA.

Let's instead use the Asus UL80JT as an example. Asus got 12 hours of battery life by auto-switching the graphics between an Nvidia GeForce 310M and Core i5 Arrandale graphics (presumably using Nvidia Optimus). Perhaps that was where my head was when I mentioned it. Either way, both solutions show what Apple could/should do rather than using Intel's IGP as the MBP's sole graphics solution... even in low-end 13" and 15" MBPs.
 
I don't remember the quote you used, but the Sony Z weighs in at around 3 pounds; it's a full pound lighter than the 13" MBP. It actually competes in performance with the MBP while it matches the weight and physical characteristics of the MBA.

Let's instead use the Asus UL80JT as an example. Asus got 12 hours of battery life by auto-switching the graphics between an Nvidia GeForce 310M and Core i5 Arrandale graphics (presumably using Nvidia Optimus). Perhaps that was where my head was when I mentioned it. Either way, both solutions show what Apple could/should do rather than using Intel's IGP as the MBP's sole graphics solution... even in low-end 13" and 15" MBPs.
In truth, I don't see a huge benefit in auto-switching graphics as provided by nVidia Optimus compared to non-automatic but still dynamic, no-logout-required graphics switching as provided by ATI or previous nVidia solutions. Optimus does not improve pure maximum battery life, since that would be achieved with the IGP only and the discrete GPU off, regardless of Optimus. And nVidia Optimus should actually be slightly detrimental to both battery life and performance when the discrete GPU is active: Optimus requires the IGP to be active all the time, since the discrete GPU must copy to the IGP's framebuffer and use the IGP's display pipeline, thereby using more power than with the IGP off and the discrete GPU on, as in ATI and previous-gen nVidia solutions. Performance is reduced because, with the IGP on all the time, the CPU is less likely to Turbo Boost to the highest clock speeds, since the IGP is taking up shared thermal room.

nVidia Optimus' benefit is in cases where, on battery, you constantly move between GPU-intensive and non-GPU-intensive work, and the software automatically switches between the IGP and the discrete GPU to maximize battery life. However, at least for my use, and I'm thinking for the average user too, this won't be a common case when mobile and on battery power. Arrandale's IGP can provide h.264 decoding acceleration, Flash acceleration (if Adobe ever expands support from Windows to Mac), and even basic GPGPU OpenCL acceleration (DirectCompute drivers are confirmed as coming for Windows), which are the primary arguments for needing an nVidia IGP. If I were to actually play an intensive game or run a heavy OpenCL app, these would likely be longer-duration tasks; I would likely need the performance of the discrete GPU anyway, so having the fastest IGP isn't that important, and switching to the discrete GPU manually, as long as it doesn't require a restart, isn't an onerous task, making ATI's or nVidia's previous-gen switching solution sufficient. As such, I'd prefer not to sacrifice ATI's better-performing discrete GPUs for nVidia GPUs just to get nVidia Optimus support. Neither am I too concerned about switching back to Intel IGPs as long as discrete GPUs are also included.
 
First of all, I have my doubts Apple will go beyond a G310M because the next step up is the GT325M/330M/335M, and those take up a lot more power than the 310M (and are more expensive as well).

Unfortunately, the 310M is just a 9400M rebadged w/ higher clocks and DX10.1 support, so that's not helping those of us who want performance. So Apple may be stuck between a rock and a hard place in terms of balancing performance with battery life.

Second, Optimus is NOT just a software solution. Yes, there is a software side, but the hardware side is that it doesn't use muxes to switch the GPUs, so it's not like the current solutions out there.

And third, some of you guys seriously need to look into how CPU and GPU development cycles work. No matter how highly you think of Apple, they aren't anywhere near close enough of a player to affect how Intel or even Nvidia act.

The architectures for the CPUs and GPUs are laid out years in advance. Apple may request/pay for different packages, but they aren't getting anything "custom" made beyond how it looks on the outside, basically. Intel's a huge player around the entire world, and Apple outside of the US is tiny, and has little to no server market, which Intel and AMD battle over as well.

Intel likes Apple because they certainly don't have many other customers who generate as much hype. Intel likes riding in that bow wave. Similarly, Intel would like for PC vendors to pick up certain Intel-only solutions that Apple is sometimes much more willing to latch onto. Then they can say "see, these guys are doing it, you should too." That's not because they are doing Apple's bidding.

Are you serious?

Apple doesn't even advertise the specific model of the CPU, so how does that generate hype for Intel?

And for that matter, Apple is always late to the party when it comes to hardware, so how does that generate any hype for Intel's core i3/i5/i7 mobile processors that have already been out for more than a month?

Apple's just an additional market for Intel, and Intel, being in a dominating position in the market right now, doesn't really have to worry about losing it.
 
That's the amusing thing though. I'm all for battery life, but the MacBook Pros should not compromise performance/functionality for extended battery life.
Name an occasion when Apple introduced a laptop which decreased battery time.

To me it seems clear that Apple applies "Pro" to models that would be primarily purchased by people who use them as a tool for whatever profession they make a living from. It isn't short for "Video/Media Pro" or "Ultimate Power User Pro"... just professional. Being a professional doesn't preclude being on the road or moving around for a significant part of your day, and not being a slave to the nearest wall socket.

Being a slave to the nearest wall socket would be more aptly abbreviated "DTR" (desktop replacement) rather than "Pro".


So it has a Blu-Ray drive in the device that is not capable of playing back Blu-Ray movies? Uh, ok...

Apparently, the context is lost on you. This whole thread is about Mac laptops, not Sony ones. If you think Apple is about to put a Blu-ray drive in the next-gen laptops, I have a bridge to sell you. I am completely uninterested in whether they should or shouldn't put Blu-ray in them (that argument has been beaten far, far, far, far beyond death on these forums), just in what they are likely to do given their past behavior and comments.

The other part of the context is folks complaining about how the Arrandale IGP won't suffice as a sole solution. Again, I really don't care what folks would do if they were in charge of Apple's design constraints, just what Apple is likely to do given past indicators of its design choices. The "need" for something faster than an Arrandale IGP could be driven by a "need" to render Blu-ray video content. If there is no content to render, then that "need" isn't a design constraint.

Just to make it crystal clear, what that quote denotes is that the reviewer is saying the Arrandale IGP is sufficient for most everyday situations, and that there is a much smaller class of exceptional cases where you "need" something faster.
 
Fine, it is not on die, but Intel still stole the idea.

AMD bought ATI around 2006. Around 1997 this processor was introduced:

http://en.wikipedia.org/wiki/MediaGX

The notion that the idea/concept of merging the major functions onto a single package appeared in the mid-2000s is flawed. While the scale may be bigger (merging a modern GPU versus a VGA core), the idea/concept is essentially the same.



If it's not GPU switching, then what is it?

Given that the application always goes to the same GPU unless you change the database, is that really "switching"? Sure, you can tweak the database or perhaps set the system to ignore it, but is that going to be the normal usage mode or expectation?



BTW, you'd be surprised how much technology is actually implemented in software rather than hardware.

I think you are underestimating the software involved. Some of the current MBPs would switch on the fly using the available hardware if they were in a "normal" Windows PC implementation.

Making it transparent to the user and artifact-free isn't so software-centric.

I would not be surprised at all if some of the key pieces of Optimus turn out to be hackery specific to Windows (and perhaps BIOS) APIs. On Mac OS X, all window display goes through the WindowServer.

http://developer.apple.com/mac/library/technotes/tn2005/tn2083.html#SECWINDOWSERVER

The apps talk to it through Mach messages. So it should be rather interesting how the driver (below the server) finds out which specific app is making the request, so that the abstract driver can play hocus-pocus with the workload.




For example, CrossFireX and SLI have been enabled on competitors' chipsets through driver mods.

I'm not particularly interested in hacks that people made work. I have no doubt that you can make them appear to work. What I would find much more credible are vendor-supported (OS and/or graphics) solutions. Unless it is a stable, defect-free (no occasional spurious artifacts), supported solution, I don't think it's likely Apple will include it in the standard distribution.
 
I'm not particularly interested in hacks that people made work. I have no doubt that you can make them appear to work. What I would find much more credible are vendor-supported (OS and/or graphics) solutions. Unless it is a stable, defect-free (no occasional spurious artifacts), supported solution, I don't think it's likely Apple will include it in the standard distribution.

You do know that you just shot down ASRock and MSI, don't you? In fact, any multi-GPU solution will work as long as the chipset supports peer-to-peer communication between the PCIe buses.

AMD bought ATI around 2006. Around 1997 this processor was introduced:

http://en.wikipedia.org/wiki/MediaGX

The notion that the idea/concept of merging the major functions onto a single package appeared in the mid-2000s is flawed. While the scale may be bigger (merging a modern GPU versus a VGA core), the idea/concept is essentially the same.

" the core was developed by National Semiconductor into the Geode line of processors, which was subsequently sold to Advanced Micro Devices."

They stole the idea from AMD.
 
Are you sure you have enough RAM, and that you're not using an old PPC version of Photoshop?
Consider that Photoshop uses the GPU very little.
Anyway... on my 13" I use many graphics programs like Pixelmator and it's damn fast.

I have 4GB of RAM and I'm using Photoshop CS4... I mean everything else runs smoothly, but I know that CS4's new "flick panning" thing is GPU accelerated, so that gets super choppy when not using the discrete GPU. It's very badly programmed, since I don't see why panning an image should be difficult. My iPod Touch does it without any problems :)
 
Are you serious?

Apple doesn't even advertise the specific model of the CPU, so how does that generate hype for Intel?

And for that matter, Apple is always late to the party when it comes to hardware,

Hmm, the Mac Pro with the Xeon 5500/5300 was on the market approximately a month before the other vendors' offerings. Similarly, how is this page:

http://www.apple.com/macpro/features/processor.html

not an advertisement for Intel? 'Intel' appears 7 times and there is a direct link to the Intel website.

No, Apple doesn't talk about specific CPU model numbers. Specific CPU model numbers don't build overall corporate brand awareness.

You can bet that when Apple introduces the i3/i5/i7 versions of the laptops, there will be at least a couple of paragraphs, if not a whole web page, devoted to how "insanely great" it is that Intel (not AMD or anybody else) merged the CPU and most of the formerly classic northbridge onto one package. It will be battery life / power savings / better Stream, office application, 2D graphics, etc. benchmarks / something.


Likewise, is Apple currently running the 9400M up and down almost its entire lineup beneficial to Intel? No. If Intel wipes that out... is there no upside for Intel in that?

EFI... developed by Intel and implemented across the lineup by Apple. Who else has done that? (Sure, Apple isn't unhappy that folks didn't follow along with that.)


Sure, Apple lags on adoption of the "Extreme" and high-thermal versions of the mainstream chip lines. Likewise, it doesn't peddle the VT-crippled ones nor the Atom stuff (although folks speculated they'd use Atom over and over again; go back and look at any Pine Trail announcement here on MacRumors). Nor does Apple put those tacky stickers on their products. However, Apple puts Intel's overall brand name out there over and over again in several major announcements. Much better than the days when Apple was running ads about how Intel's products were snails, or the possible alternative universe where that Mac Pro page extols the virtues of Opteron (effectively very similar architectural benefits).

Apple has a relatively smaller product lineup than other vendors their size. They also have a more spread-out product release cycle (which goes with having fewer, non-overlapping products). So the opportunities to synchronize an Apple hype cycle with an Intel one are fewer. That doesn't mean they don't exist.

The laptops all got refreshed relatively shortly before the mobile i3/i5/i7 product release came. They are a bit out of sync, and so Apple will be rolling out after instead of before. Apple isn't going to bend over backwards to promote Intel; just when it is convenient and synergistic.
 
http://developer.apple.com/graphicsimaging/opengl/capabilities/

Maybe it's a backup plan, but Apple actually seems to have already put significant effort into improving the OpenGL drivers for Intel GMAs. Leopard drivers only supported up to OpenGL 1.2 for both the GMA 950 and the GMA X3100 while Intel advertised the GMA 950 as having hardware OpenGL 1.4 support and the GMA X3100 as having OpenGL 1.5 support. In Snow Leopard, the GMA 950 drivers now have OpenGL 1.4 support and the GMA X3100 has OpenGL 2.0 support on par with the ATI X1000 series. And according to the chart, the GMA X3100 actually supports the extensions necessary for OpenGL 2.1 support which is supposed to be Arrandale's IGP's new feature. If Intel graphics really was a dead-end solution as Apple seemed to imply when they marketed the nVidia 9400M switch, it doesn't seem likely that Apple would invest so much development effort in bringing OpenGL support up to par for these older Intel IGPs, unless they were setting the stage for Arrandale's IGP which is an improved derivative of the GMA X3100 and could share the same driver roots.

This looks great, but it's strange, as I can't see anywhere on the Intel site that the GMA X3100 supports greater than OpenGL 1.5 under Windows, so I wonder if Apple has made a mistake in their table. The Intel site is a little confusing though, so I may not have found the latest info; plus, it's hard to separate the GMA from the chipset, as different chipsets have the same GMA... Anyone know what chipset is in an early 2008 MacBook (4,1)? I'm trying to run something that requires OpenGL 2.0 support and don't care if it's on OS X or a Boot Camp XP install; it doesn't work in either at the moment (though I am on 10.5, so maybe I should make the switch to 10.6 now, but I'm only keen if it will definitely allow OpenGL 2.0 compatibility...).

And back to the topic, my hunch would be they'll follow the iMac line for the MBA and MBP: Intel integrated and ATI discrete. It will be interesting to see though; I'm hoping for a Sony Vaio Z killer/equaller in either MBA or MBP 13" format but suspect I'm going to be disappointed!
 