Honestly, this has to be the single best and most comprehensive answer I have ever received on here. Normally the response is just a baseless assumption marked by overly inflated self-congratulation. Rumors are one thing, and they are great fun, but informed comments based on legitimate questions are a completely different situation. Thanks again!

Leopard does not, nor will it, require Shader 3.0. Most Apple users do not have a graphics card capable of it. It may well be relevant to 10.6 or 10.7, but it probably still will not be required, just as Shader 2.0 isn't absolutely required now.


Not at all. The chipset and CPU are quite distinct from each other; a single chipset can support several generations of vastly different CPUs. The venerable Intel 945, for example, supports Pentium 4, Pentium D, Core Duo, Core 2 Duo, and Core 2 Extreme CPUs (as well as Celeron derivatives). Thinking of them as a single unit places unnecessary and inappropriate limits on the technology.

In short, the chipset supports the operation of the system as a whole. It contains all the limiting logic of the motherboard--what memory types and speeds are supported, the socket/FSB/model of CPU supported--and it packages the onboard ethernet, sound, video, and wireless as applicable to the current platform. In most respects, the chipset is the logic board in the traditional Apple sense (i.e. it is "the" computer, minus the RAM and CPU, and minus the graphics card for systems without onboard graphics).

The CPU, on the other hand, is just the part that does the calculations. It is responsible for most of the performance of software (apart from graphics). The chipset plays an important, but less dramatic, role in speed and performance. The chipset has become the key limiting factor in the modern age--the onboard graphics can't be upgraded; you can't break the limits imposed on the type and quantity of RAM; you can only upgrade CPUs to the maximum supported by the chipset. Exceeding any of these limits requires a new computer.


I wouldn't say that Santa Rosa's major contribution is in the area of graphics. It is certainly the most visible and directly applicable improvement for end-users, but it is the next major step in the comprehensive overhaul of the x86 architecture as we know it. We aren't going to be seeing any massive speed changes like we experienced five years ago; the technology we have is mature and improvements will be incremental barring any major breakthroughs.

This is why we're having multiple cores pushed onto us--it's a way of extending Moore's Law while the semiconductor industry has essentially stalled. In its defense, home users rarely need even as much power as is available to them now. There's not really any new necessity to push home user markets forward.


Updates usually do happen after the "back to school" sales have depleted inventory. This is advantageous for everyone--students get good deals and push out extra inventory, allowing for faster shipment of updated models. Students simply don't need the absolute newest and most cutting-edge hardware to ship before school starts; in the traditional marketing sense, they're budget-oriented shoppers distracted by shiny objects. Any unusual student seeking the best and newest would time a purchase around the end of the first semester to capitalize on the typical release schedule; those students aren't first-time computer owners.
 

Sorry to pipe in again. But my rather naive question above about momentous advancements was related to the NYT article of a few weeks ago about precisely that new Intel breakthrough with the alloy that allows better insulation. The article specifically discussed Moore's law and said that just as everything seemed to point to a stall in chip advancement, here was another breakthrough proving the law. This would seem to contradict what you were just saying. The article said that Intel was already working to make chip prototypes compatible with Mac code and that the chips might be available by the end of the year. In fact, I was going to try to hold out to get one of the new superfast chips when they come out. Is this next wave not yet even on the radar for consumers?
 
Moore's Law


Yes, let's hope we get them by the end of the year. At 45nm,
these chips will accelerate speed exponentially.
 
The article specifically discussed Moore's law and said that just as everything seemed to point to a stall in chip advancement, here was another breakthrough proving the law. This would seem to contradict what you were just saying.
Not so much a contradiction as an attempt to show that they haven't been sitting on their hands for years. The transition to 65nm was just recently completed, but there were no significant speed gains. There certainly were other improvements (reduced TDP, power consumption, increased yields), but no breakthroughs. Circa 2004 CPU cores are still highly competitive three years later--the benchmark numbers are only inflated on new CPUs because of the introduction of multi-core technology.

The simple fact is that we've gone three or more years without any real clock speed advances, and this has caused manufacturers to look to other measures of performance to improve. The Pentium 4 is the perfect example. It was created because it had a lot of room for high clock speeds, but it is in fact a rather poor performer on the whole. The Core platform went back to the slower clocked, but higher performance Pentium III (Pentium M) design and looked to improve performance in other ways. It is in some ways a resurrection of the Pentium Pro philosophy--make a better processor all around instead of just a faster clocked one.
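To put a rough number on that point: throughput is essentially instructions-per-clock (IPC) times clock rate, so a lower-clocked core with better IPC can still come out ahead. The C sketch below uses made-up ballpark IPC figures purely for illustration; they are not measured values for any particular chip.

#include <stdio.h>

/* Rough illustration: sustained throughput is roughly instructions-per-clock
 * (IPC) times clock rate. The IPC figures below are hypothetical ballpark
 * values for the sake of the example, not benchmark data. */
int main(void)
{
    struct { const char *name; double ghz; double ipc; } cpus[] = {
        { "high-clock, long-pipeline core (NetBurst-style)", 3.4, 1.0 },
        { "lower-clock, short-pipeline core (Core-style)",   2.4, 2.0 },
    };

    for (int i = 0; i < 2; i++) {
        double gips = cpus[i].ghz * cpus[i].ipc; /* ~billions of instructions/sec */
        printf("%-50s ~%.1f GIPS\n", cpus[i].name, gips);
    }
    return 0;
}

Compiled with any C compiler, it prints a higher figure for the lower-clocked, higher-IPC core, which is the whole argument in miniature.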

Is this next wave not yet even on the radar for consumers?
No, it's not. There might be some of these new processors on the market by year's end, but not the entire line, and they won't be introduced at the uncharacteristically low prices of the Core 2 Duo. The C2D launch has spoiled newcomers to the market--they weren't paying attention back when whole new architectures entered at $1000+ for the early adopters (compared to the low-end C2D launch price of $190). Since we've gone so long without any major overhauls of the process, materials, or design of processors, the price has fallen quite a bit. Further, the number of these announced technologies that ultimately don't pan out is quite high.

Yes, let's hope we get them by the end of the year. At 45nm,
these chips will accelerate speed exponentially.
Not likely. At 65nm, transistors are close to the mechanical limits of speed. Reducing to 45nm isn't going to open up a huge new range for speed. It will lower temperatures and power requirements, but the small size introduces a new set of problems with signal leakage. Switching to a different material might allow for faster transistor operation, but it will be some time before that is a real possibility.
 
65 nm process


Yes, I do understand. However, what I was referring to was that having the transistors reduced to 45 nm, in combination with the new insulating material, would produce far less heat, thus enabling them to clock at much higher frequencies. Heat and signal leakage have both made up the Achilles Heal of progress on this front for some time.
 

Or the Achilles Heel of progress. ;)

From Intel's own pressroom:

Our 45nm technology will provide the foundation for delivering PCs with improved performance-per-watt that will enhance the user experience.

... and ...

Intel’s 45nm process technology will allow chips with more than five times less leakage power than those made today. This will improve battery life for mobile devices and increase opportunities for building smaller, more powerful platforms.

It's more about performance per watt and battery life than about cranking up the GHz.
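As a back-of-the-envelope illustration of that point: total chip power is roughly dynamic power plus leakage power, so cutting leakage by about five times helps performance-per-watt even if the clock and benchmark score stay put. All of the wattage and score numbers in this small C sketch are hypothetical, not Intel figures.

#include <stdio.h>

/* Back-of-the-envelope sketch: total power = dynamic power + leakage power.
 * All wattages and scores below are hypothetical, chosen only to show how a
 * ~5x cut in leakage improves performance-per-watt without raising the clock. */
int main(void)
{
    double score  = 100.0;                     /* same benchmark score for both */
    double dyn65  = 20.0, leak65 = 10.0;       /* pretend 65nm part */
    double dyn45  = 18.0, leak45 = leak65 / 5; /* pretend 45nm part, ~5x less leakage */

    printf("65nm: %5.1f W total, %5.2f points per watt\n",
           dyn65 + leak65, score / (dyn65 + leak65));
    printf("45nm: %5.1f W total, %5.2f points per watt\n",
           dyn45 + leak45, score / (dyn45 + leak45));
    return 0;
}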
 
1080p

Where do you get the impression that the Mac Mini won't do full 1080p?

My understanding is that it does play it. I have a MacBook, which has pretty much the same internals as a Mini, and it plays 1080p without a problem. However, if I'm running other things, even the latest Parallels beta (although it's not supposed to be doing anything, it still consumes 20% CPU), then the MacBook will stutter playing 1080p. That's the reason I reverted back to the non-beta Parallels release, and the reason why I'm waiting for a Core 2 Duo upgrade before buying a Mac Mini to use specifically as a home theater system (and waiting for Leopard).

I am looking for it to push my projector. From the little I have heard from others, it has some GPU issues due to the integrated Intel video.

I hope that I am wrong, but I can't have my 100" screen stuttering no matter what I am running in the background. I need my Democracy TV on my projector. Looks like I will be waiting for C2D before I purchase one. I am not much of a fan of Intel. DAMN YOU APPLE!:D
 
Ahhh, enough with the technical jibber-jabber :D

I just want newer, faster, shinier!

Seriously, I think there are a bunch of people out there who are holding out for Leopard before they upgrade. I really hope the SR chips show up soon; I would love to get a MacBook with SR & Leopard this summer.

Or a MBP

Or a new miniMac tower

Or all of the above!
 
Seriously, I think there are a bunch of people out there who are holding out for Leopard before they upgrade.
And I'm one of them. If Santa Rosa comes out before or only slightly after Leopard, I want a Santa Rosa MBP! :D
 
I am two of them... :D Santa Rosa + Leopard + MacBook Pro in black (hopefully). Come on Steve, don't disappoint me!

____________________________________________
PowerMac G5 2.0, 3GB RAM, iPod video 60GB in black
 
I believe his impression is correct. At this point in time there is not an Intel CPU that can fully handle 1080p content in all its many forms. The fact that a few titles have run successfully for you means nothing, largely because of the considerable variability in what the content streams can contain.

There is a recent thread about this over on Ars Technica. A poster there who goes by the name MrNSX is deeply involved in the issues. The really bad part is that there does not appear to be an Intel processor on the horizon that will be able to handle all 1080p content completely--hence the importance of being able to offload some processing to the GPU.

In any event, anyone buying a Mini for 1080p content is making a mistake in my opinion. Even with a C2D upgrade it is still stressed by this content. The reality is that buying any PC hardware at this stage to support 1080p is a tough call. Yes, you can have successes, but feature-rich content can quickly saturate the CPU.

Dave


 
Not so much a contradiction as an attempt to show that they haven't been sitting on their hands for years. The transition to 65nm was just recently completed, but there were no significant speed gains. There certainly were other improvements (reduced TDP, power consumption, increased yields), but no breakthroughs. Circa 2004 CPU cores are still highly competitive three years later--the benchmark numbers are only inflated on new CPUs because of the introduction of multi-core technology.
The above statements can only be seen as garbage. First, clock rates have increased due to the new technology. Intel hasn't had the need to introduce chips with significantly faster clock rates because they have vastly enhanced the core of their processors. In many cases instructions are doing twice the work per clock that they were doing before Core 2. The fact that these C2Ds easily overclock to 3GHz completely disqualifies your statement. In any event, manufacturers such as IBM have been increasing clock rates with very good results, albeit slow to deliver.

The impact of multi-core technology on benchmarks is variable at best. If the benchmark is single-threaded and the Core 2 Duo executes it considerably faster than old CPU technologies, then that is due to improvements in the core of the processor.
The simple fact is that we've gone three or more years without any real clock speed advances, and this has caused manufacturers to look to other measures of performance to improve. The Pentium 4 is the perfect example. It was created because it had a lot of room for high clock speeds, but it is in fact a rather poor performer on the whole.
The P4 was created because Intel thought that they were in a GHz race. While I agree that the processor was a poor performer, especially relative to the previous generation, it was designed to allow significant increases in clock rate, as much for marketing as anything else.

In any event I have to disagree with your statement about clock rates. Just about every C2D can be run significantly faster than the speeds at which it is marketed. It does so while doing much more per clock. This is tied to the improvements gained at 65nm. The one thing that is obvious is that Intel could market the Core 2 Duo based on clock rate if they wanted to. They don't need to, because they have an effective hammer to pound AMD with.
The Core platform went back to the slower clocked, but higher performance Pentium III (Pentium M) design and looked to improve performance in other ways. It is in some ways a resurrection of the Pentium Pro philosophy--make a better processor all around instead of just a faster clocked one.
This is more or less the case. What you seem to be missing is that the Core 2s have a huge amount of headroom as far as clock rate goes. This is related to 65nm.
No, it's not. There might be some of these new processors on the market by year's end, but not the entire line, and they won't be introduced at the uncharacteristically low prices of the Core 2 Duo. The C2D launch has spoiled newcomers to the market--they weren't paying attention back when whole new architectures entered at $1000+ for the early adopters (compared to the low-end C2D launch price of $190).
You are making unfounded assumptions here. No one knows how the new materials will impact processor construction and thus costs. Even then, that has little to do with the price at which the part is actually marketed. That price will likely be the result of a number of things, including competition.
Since we've gone so long without any major overhauls of the process, materials, or design of processors, the price has fallen quite a bit.
You don't think that the Core 2 was the result of a processor redesign or overhaul? Frankly, I see it as a very credible effort on Intel's part.
Further, the number of these announced technologies that ultimately don't pan out is quite high.
Isn't that the truth!!!

On the other hand, it looks like IBM/AMD are on a very similar development path. So let's hope that this is real.
Not likely. At 65nm, transistors are close to the mechanical limits of speed. Reducing to 45nm isn't going to open up a huge new range for speed.
I would disagree with this.
It will lower temperatures and power requirements, but the small size introduces a new set of problems with signal leakage. Switching to a different material might allow for faster transistor operation, but it will be some time before that is a real possibility.

I believe that that statement is in opposition to what was being described by the manufacturers. I suspect that 2009 will be the time frame for such technologies. In any event, I don't think we are a long way off from 4GHz processors. It might not be a clock rate race like before, but clock rate will continue to impact processor performance.

Dave
 
Looking at Intel's game compatibility list for the X3000, there are still quite a few issues with several games. For example, Half-Life 2 is not playable:



But of course, the drivers are still under development, so I'm sure this will be resolved in time...
Which brings up the question of "when do drivers leave development"? It seems to me that driver development pretty much stops once the chip is no longer a valid choice for a processor. Otherwise, GPU driver development seems to be a continuous endeavor.

Dave
Does anybody know a good comparison chart comparing the X3000 to high-end mobile GPUs from ATI and NVIDIA? And how does the X3000 perform in comparison with the current X1600?
 
Which brings up the question of "when do drivers leave development"? It seems to me that driver development pretty much stops once the chip is no longer a valid choice for a processor. Otherwise, GPU driver development seems to be a continuous endeavor.

Dave
Intel has updated the drivers recently for Vista/XP.

http://guides.macrumors.com/GMA_X3000

I've seen anywhere from ATi X550 to nVidia 7300 GS performance range. The best estimate I've heard is that the average is an nVidia 5600 Ultra.

That's a lot of card lingo. I've also posted this 2 times in this thread already...

http://deadmoo.com/articles/2006/09/28/intels-new-onboard-video-benchmarked
 
The above statements can only be seen as garbage. First, clock rates have increased due to the new technology. Intel hasn't had the need to introduce chips with significantly faster clock rates because they have vastly enhanced the core of their processors.
Hardly. They've gone back and improved their processors precisely because they hit practical walls in clock speed. If they could have just kept up a clock speed race, they wouldn't have changed. Intel switched gears because their processors were being outperformed by cooler, more efficient, better designed processors from AMD.
The fact that these C2Ds easily overclock to 3GHz completely disqualifies your statement.
3GHz has been attainable since 2004. The processors released today are slower in clock speed than the ones that were previously available, which is exactly what I referred to originally.
The impact of multi-core technology on benchmarks is variable at best. If the benchmark is single-threaded and the Core 2 Duo executes it considerably faster than old CPU technologies, then that is due to improvements in the core of the processor.
It is directly attributable to the abandonment of NetBurst in the majority of cases, not to some new technology that wasn't available in 2004.
In any event I have to disagree with your statement about clock rates. Just about every C2D can be run significantly faster than the speeds at which it is marketed. It does so while doing much more per clock.
So can most Pentium D CPUs (air cooled to over 4GHz, water cooled to nearly 5) as well as all Athlons. Comparing potential performance beyond spec is at best a spurious way to make a claim. Stick to official numbers.

This is tied to the improvements gained at 65nm. The one thing that is obvious is that Intel could market the Core 2 Duo based on clock rate if they wanted to.
Nonsense. The 65nm process doesn't make it any more or less easy to break 3GHz than the 90nm process. Size reduction lowers temperature, TDP, and power consumption. It does not increase clock speed room unless the constraining factor is heat. Heat has not been the primary constraining factor for quite some time now, so it's a moot point.
This is more or less the case. What you seem to be missing is that the Core 2s have a huge amount of headroom as far as clock rate goes. This is related to 65nm.
They do, but it has very little to do with the 65nm process and a great deal to do with the refinements made to the Pentium M core.
You are making unfounded assumptions here. No one knows how the new materials will impact processor construction and thus costs.
Granted, but you miss the point. The point is that the Core lineup was introduced with nothing new. The fundamental architecture was preexisting, the 65nm process had already been debuted in the Pentium D, the chipsets had already been developed, the socket was preexisting, and the materials and methods were proven. This is unlike the launch of previous processors, which involved all-new cores or previously unused materials/methods/die sizes. The net effect is that the launch price of the Core 2 Duo is substantially less than a typical launch. Look no further than AMD, which cannot match Intel's prices with their X2 lineup despite having launched the A64 at a much lower price than Intel's Pentium 4. Intel can certainly afford to play pricing games, but that's an artificial effect that has no bearing on the fact that the next generation of Intel processors will be a great deal more expensive to produce than the C2D have been.
You don't think that the Core 2 was the result of a processor redesign or overhaul? Frankly, I see it as a very credible effort on Intel's part.
It is indeed a credible effort, but it is not the result of anything new. SSE3, NX, EM64T, dual core, LGA775, 65nm were all preexisting technologies. All Intel did was expertly choose the best attributes to fuse together into the Core architecture. They took the Pentium D's dual core design and EM64T instructions (stolen from AMD) and integrated it with a Pentium M low-pipelined architecture and applied the whole thing to Intel's highly successful 65nm process. The end result is an efficient, powerful, sophisticated processor. But almost nothing in it is groundbreaking technology.
I believe that that statement is in opposition to what was being described by the manufacturers.
That would only be the case if the description referred to the status quo. As it stands, they're simply trumpeting the future because, aside from dual core, they've got nothing on the shelves to stir interest that they weren't already marketing three years ago. They're very successful at marketing their products, but it takes more than marketing when you're trying to talk technology.
 
In regard to the last few posts above, one of the reasons that clock speed has seemed to stabilize compared to past architectures is the increase in cache. For instance, the degree to which an E6600 can overclock compared to an E6300 is lower. Sure, with a good board and some LN2 you can do some small wonders, but the point still remains that, generally, the E6600 won't clock quite as well. As larger cache sizes and different materials like hafnium dioxide are introduced into the silicon, we may see even less overclockability, or at least a mitigation of the gains that a die shrink might entail...
 
Will I regret not waiting?

I've just ordered a mac mini - my ancient Dell finally died. If I waited until May I'd be stuck with just a laptop until then, which seems a long time... it's only going to get basic use anyway but someone, please, tell me I'm not making a mistake! :p
 
Not to worry! Any of the hypothetical future improvements discussed here are quite a ways off and would impact the Mac mini last. The current crop of products offers more than adequate performance for just about any task. Of course, with the Mac mini, you do hit some ceilings with gaming and 1080p in stock configurations (more RAM basically takes care of any 1080p problems, and you shouldn't expect a cutting-edge gaming system out of the mini).

I have a Pentium D system with the GMA950 that currently has just 512MB of total system RAM, and it feels just as fast on the whole as either of my two A64 3500+ computers with 1-2GB of RAM each (one has a GeForce 6600/256MB and the other an older Radeon 9000 Pro/64MB). Photoshop does bog things down, as does having too many tabs open in Firefox. With a gig of RAM, though, you'd be good to go.
 
So my bonus arriving at the end of March could be just in time to get me a black Leopardised 24" iMac :)

Come on the upgrades!!
 
C2D are not true 64bit chips

Yes, it is. To clarify what Rocketman said, Santa Rosa will allow the system to address more than 4GB of physical memory. Your C2D MBP does support 64-bit instructions and it does allow the addressing of more than 4GB of virtual memory, which are the essential ingredients of a 64-bit system.

C2D chips are EM64T chips; they take 32-bit instructions, 64-bit data. They are not true 64-bit processors. :(
 
Horse-hockey!


Intel® 64 architecture (formerly known as Intel® Extended Memory 64 Technology, or Intel® EM64T) (http://www.intel.com/technology/intel64/) certainly is true 64-bit - how do you figure that it's not?

In x64 mode, it operates on 8/16/32/64/128 bit data, using 64-bit virtual addressing.

Its 64-bit instructions operate on 64-bit integer/pointer registers. (A 64-bit instruction is one that specifies 64-bit operands - note that even a PPC970 uses 32-bit instructions. Intel® 64 instructions are from 1 to 15 bytes long, so maybe you'd claim that it's a 120-bit CPU? ;) )

Please explain your claim that Intel® 64 is not true 64-bit !
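For what it's worth, a trivial C program compiled for a 64-bit target shows the 64-bit pointers and 64-bit integer operations in action. This assumes an LP64 toolchain (for example, GCC targeting a 64-bit OS); on a 32-bit build the pointer size comes out as 4 bytes instead.

#include <stdint.h>
#include <stdio.h>

/* Assumes an LP64 toolchain (e.g. gcc on a 64-bit OS). Pointers are 8 bytes
 * wide, and 64-bit integer arithmetic runs as a single register-wide
 * operation. */
int main(void)
{
    uint64_t big = 0xDEADBEEFCAFEBABEULL;  /* a 64-bit operand */

    printf("sizeof(void *)   = %zu bytes\n", sizeof(void *));
    printf("sizeof(uint64_t) = %zu bytes\n", sizeof(uint64_t));

    big = big * 3 + 7;                     /* 64-bit integer arithmetic */
    printf("64-bit result    = %#llx\n", (unsigned long long)big);
    return 0;
}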
 
There was some talk on a Linux kernel dev tracker that said the EM64T chips only support up to 48-bit memory addressing or something, and a special patch had to be released to support Intel chips.

That was from back in the days when Opteron was still a top dog and the Itanic was still sailing so things might have changed.
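For reference, the 48-bit figure refers to canonical virtual addresses on x86-64 with four-level paging: only the low 48 bits are significant, and bits 48-63 must mirror bit 47. The small C sketch below just checks that rule against a few example addresses; the addresses themselves are arbitrary.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* x86-64 (four-level paging) virtual addresses are "canonical": only the low
 * 48 bits are used, and bits 48-63 must mirror bit 47. The addresses below
 * are just examples. */
static bool is_canonical(uint64_t va)
{
    uint64_t top17 = va >> 47;               /* bits 47..63 */
    return top17 == 0 || top17 == 0x1FFFF;   /* all zeros or all ones */
}

int main(void)
{
    uint64_t examples[] = {
        0x00007FFFFFFFFFFFULL,  /* top of the lower canonical half */
        0xFFFF800000000000ULL,  /* bottom of the upper canonical half */
        0x0000800000000000ULL,  /* non-canonical */
    };
    for (int i = 0; i < 3; i++)
        printf("%#018llx -> %s\n", (unsigned long long)examples[i],
               is_canonical(examples[i]) ? "canonical" : "not canonical");
    return 0;
}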

 