What about the OS?

I run my Merom MBP with Boot Camp and find a significant difference in battery life between Vista (Ultimate) and Leopard. When booted into Vista, the battery depletes much more quickly than under Leopard, and the machine runs a noticeable few degrees hotter.

I try not to run Vista unless I'm sitting at a desk with the machine plugged in; otherwise, it's not worth it.

For the life of me, I can't figure out why MS can't get a service pack out there to address Vista performance...
 
Use of SSE4 in software would have to wait until SSE4 hardware is widely available; otherwise, how could Apple even run a beta test?

This is kinda like the chicken-and-egg paradox: people don't want to make the hardware until the software is optimized for it, but they also don't want to write the software until the hardware is out. Each side is basically waiting for the other to start. That's probably part of the wait.

I think it's cool but everyone needs to set realistic expectations for something like SSE4.

1. It will take a while (a good long while in some cases) for apps to get recompiled with this support, and many will never get it, for assorted reasons. (A rough sketch of what that recompile step looks like follows after this list.)

2. You can't rebuild an entire OS around a new instruction set, so see #1 when considering impact on OS X. Some parts will get a boost, many won't.
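
To make point #1 concrete, here is a minimal sketch of what "recompiling with SSE4 support" tends to look like in practice. This is hypothetical C, assuming a GCC-style toolchain recent enough to know about -msse4.1; the function name and the choice of operation are made up for illustration, not taken from any particular app. The same source builds an SSE4.1 path when the compiler is given -msse4.1 and a plain C path otherwise:

/* Same source, two builds:
   gcc -O2 -c round4.c           -> portable fallback path
   gcc -O2 -msse4.1 -c round4.c  -> SSE4.1 path */
#include <math.h>
#ifdef __SSE4_1__            /* defined by the compiler when -msse4.1 is used */
#include <smmintrin.h>       /* SSE4.1 intrinsics */
#endif

void ceil4(const float in[4], float out[4])
{
#ifdef __SSE4_1__
    /* SSE4.1's ROUNDPS rounds four floats up in a single instruction */
    _mm_storeu_ps(out, _mm_ceil_ps(_mm_loadu_ps(in)));
#else
    /* plain C fallback for builds without SSE4 support */
    int i;
    for (i = 0; i < 4; i++)
        out[i] = ceilf(in[i]);
#endif
}

The catch is that a binary built with -msse4.1 throughout won't run on older CPUs at all, which is why shipping apps usually keep both paths and pick one at run time (see the dispatch sketch further down the thread).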

Very true. Writing (or rewriting) programs takes a LONG time. Though I do wonder how well optimized software (and programming languages) really are. If I were omniscient, I'd create a totally new mid-level programming language that could take full advantage of multiple cores and easily move between 32- and 64-bit processors (and 128-bit and larger, if/when they're available), and then write a totally new OS in that language.
 
This makes me feel much better about my iMac purchase of a few months ago. When the iMacs get these new Penryn processors, there will be very little difference in performance, from what I'm seeing (battery life is certainly not part of the equation). Also glad I got the 2.8GHz. It's going to hold its own for a while yet, when compared to upcoming iMacs.
 
In light of the foregoing, all else being equal, it would almost make sense to buy a clearance unit of the current, soon-to-be-old 2.6 (or even 2.4) and save some dough, given the only mild speed improvement.

That said, I am more curious than ever about what other improvements will accompany the change in CPU if there are indeed new MBPs coming next week. In particular, I'd welcome a change to LED backlighting for the 17", as is already in the current 15" models, and whatever other improvements they care to drop on us. They'll need to do something to justify paying full price for a new model, no?

I agree totally. I'm ready to buy a 15" 2.4 refurb if the new Penryns do not have something special to offer (user-replaceable drive, HD screen).
 
No such thing as "Santa Rosa" chips. Merom is 65nm; Penryn is 45nm.

I know what Penryn is, though Santa Rosa chips are already 45nm and use the high-k method that I'm guessing was carried over to Penryn as well.

Santa Rosa <-> Penryn :: Apples <-> Oranges

There is no such thing as a "Santa Rosa" chip. If you mean the chips that have been shipping with the Santa Rosa platform, i.e. "Merom", then no: those are 65nm only.
Santa Rosa is just the name for a certain generation of the 'Centrino' mobile platform. It ONLY specifies certain components that have to be in a laptop for it to receive the 'Centrino' branding: a particular Intel motherboard/chipset, a WiFi card, and any Intel Core 2 Duo chip. That covers both the current/previous 'Merom' chips at 65nm and the very recently released 'Penryn' chips at 45nm.

The next Centrino platform is called 'Montevina' and will be out in mid-2008.
 
something to share

I made this for myself but I thought I'd share to help people out..

Intel_P6_roadmap.png
 
I made this for myself but I thought I'd share to help people out..

Intel_P6_roadmap.png
This is overly simplified. For instance, the Santa Rosa Centrino platform has been (or soon will be) refreshed to support Penryn chips.

Laptop designers are not going to have to wait for Montevina in order to use a Penryn processor.
 
This is overly simplified. For instance, the Santa Rosa Centrino platform has been (or soon will be) refreshed to support Penryn chips.

Laptop designers are not going to have to wait for Montevina in order to use a Penryn processor.

Woohoo DDR3 here we come!!!
 
I made this for myself but I thought I'd share to help people out..

Intel_P6_roadmap.png

The P4 (NetBurst) is also based on the P6 architecture.

Later chips, of course, are not true follow-ons to P4, but some P4 enhancements (but not the super-deep pipelines) are in Pentium M and later chips.
 
I know that programmers can be lazy, looking for the easiest and quickest way to write code, and in that they are probably like any person of any other profession, trade, or activity.

I'm not a programmer (I don't even play one on TV), but I would think if one was going to optimize an app by doing direct calls, you'd want to write that as a module which you could then replace as needed in the future as new optimization sets came down the road.

That wouldn't mean killing off your MMX, SSE, SSE2, etc. optimizations every time a new one came out, since clearly there are other challenges involved. I would simply rewrite the module to further tweak the existing optimizations, add in support for the new optimization sets, and make sure it was all abstracted so the rest of the code could stay relatively "dumb": when the program hit an execution point meant to benefit from, say, SSE3, it would just sub-routine out to the "blah blah blah optimizer" portion, which would know how to tell which optimization sets would work, select the correct one, do the call, and return the result.

Would this be more work? Initially, sure. But it also means my software would probably stay ahead of the curve, and when it comes to being out there ahead of everyone, well... competition is the name of the game, baby.
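
For what it's worth, the "optimizer module" idea described above is a real and common pattern, usually called runtime CPU dispatch. Here is a minimal sketch in C, assuming a reasonably recent GCC on an Intel machine; the function names are made up for illustration, and the only hard fact relied on is that CPUID leaf 1, ECX bit 19 reports SSE4.1 support:

#include <stdio.h>
#include <cpuid.h>            /* GCC's __get_cpuid() helper */
#ifdef __SSE4_1__
#include <smmintrin.h>        /* SSE4.1 intrinsics; needs -msse4.1 */
#endif

static float dot4_scalar(const float *a, const float *b)
{
    /* works on every x86 CPU */
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2] + a[3]*b[3];
}

#ifdef __SSE4_1__
static float dot4_sse41(const float *a, const float *b)
{
    /* DPPS, SSE4.1's packed dot-product instruction */
    return _mm_cvtss_f32(_mm_dp_ps(_mm_loadu_ps(a), _mm_loadu_ps(b), 0xF1));
}
#endif

/* the rest of the program only ever calls through this pointer and
   never needs to know which implementation is behind it */
static float (*dot4)(const float *, const float *) = dot4_scalar;

static void choose_dot4(void)
{
#ifdef __SSE4_1__
    unsigned int eax, ebx, ecx, edx;
    /* CPUID leaf 1: ECX bit 19 is the SSE4.1 feature flag */
    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 19)))
        dot4 = dot4_sse41;
#endif
}

int main(void)
{
    const float a[4] = {1, 2, 3, 4}, b[4] = {5, 6, 7, 8};
    choose_dot4();
    printf("dot = %f\n", dot4(a, b));
    return 0;
}

In a real project the SSE4.1 routine would usually live in its own source file compiled with -msse4.1, so the rest of the program can still be built for the lowest common denominator.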
 
Looking at that roadmap, is there a new core chipset to replace Montevina when Nehalem comes out? I also don't see a new core chipset scheduled for Sandy Bridge. Intel's CPU roadmap and core roadmap seem oddly out of sync with each other.
 
Let me give you an example of how and why I think later 45nm processors will get better. The first 65nm CPU that Intel brought out for Apple laptops was the Core Duo, which is in the system I'm typing on. By today's standards it is a low-end processor, because the Core 2 Duo being sold now, which Penryn is replacing, has all of the following advantages over it. These are all advancements that were made within one manufacturing generation, and you should see similar things happen for both future Penryns and Nehalem.

1) Higher performance clock-for-clock (~10%)
2) Larger die size and more transistors (almost 2x)
3) Can be pushed to greater clock speeds (2.8GHz vs. 2.16GHz)
4) Higher-speed front-side bus
5) More "units" on the CPU that are able to perform work (decoders, arithmetic units, etc.)
6) Larger L2 cache
7) Better cache and memory subsystem
8) Better power management
 
This is overly simplified. For instance, the Santa Rosa Centrino platform has been (or soon will be) refreshed to support Penryn chips.

Laptop designers are not going to have to wait for Montevina in order to use a Penryn processor.

Yeah, I knew that, but to keep it simple I wanted to show the "generations".
 
I know that programmers can be lazy, looking for the easiest and quickest way to write code, and in that they are probably like any person of any other profession, trade, or activity.

I'm not a programmer (I don't even play one on TV), but I would think if one was going to optimize an app by doing direct calls, you'd want to write that as a module which you could then replace as needed in the future as new optimization sets came down the road.

That wouldn't mean killing off your MMX, SSE, SSE2, etc. optimizations every time a new one came out, since clearly there are other challenges involved. I would simply rewrite the module to further tweak the existing optimizations, add in support for the new optimization sets, and make sure it was all abstracted so the rest of the code could stay relatively "dumb": when the program hit an execution point meant to benefit from, say, SSE3, it would just sub-routine out to the "blah blah blah optimizer" portion, which would know how to tell which optimization sets would work, select the correct one, do the call, and return the result.

Would this be more work? Initially, sure. But it also means my software would probably stay ahead of the curve, and when it comes to being out there ahead of everyone, well... competition is the name of the game, baby.

I'm not exactly sure how it's all implemented. Granted, I've been programming for less than a year, and I certainly don't go down to the level of using new processor-specific instructions, but I would assume the major changes would take place in the compiler, and the programmer would have to adjust the program's algorithms to be sure to take advantage of the new instructions.
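
That's roughly right; there are two common levels. The compiler can auto-vectorize ordinary loops when told which instruction sets it may use, and beyond that the programmer can reach for intrinsics by hand (as in the dispatch sketch earlier in the thread). A tiny, hypothetical example of the first kind, assuming GCC:

/* nothing here is processor-specific, but a vectorizing compiler is
   free to emit SSE/SSE4 instructions for the loop when built with,
   e.g., gcc -O3 -ftree-vectorize -msse4.1 saxpy.c */
void saxpy(int n, float a, const float *x, float *y)
{
    int i;
    /* simple, dependency-free loop: a good auto-vectorization candidate */
    for (i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}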


Looking at that roadmap, is there a new core chipset to replace Montevina when Nehalem comes out? I also don't see a new core chipset scheduled for Sandy Bridge. Intel's CPU roadmap and core roadmap seem oddly out of sync with each other.

I'm sure there are new chipsets and platforms for each generation, but I don't have the information. I didn't really look deeply into it; I just gathered info from a couple of sources, including Wikipedia. I'll have to update it as well.
 
I know that programmers can be lazy, looking for the easiest and quickest way to write code, and in that they are probably like any person of any other profession, trade, or activity.
Actually, programmers tend to be more like artists - constantly looking for the best and most elegant way to do something. Their biggest problem is deciding when enough is enough, and that it's time to actually ship something.

It's management and business realities that cause "easiest and quickest" issues. There are never enough developers available to make code "perfect" before the market window for a successful release has closed. So programmers have to compromise their instincts in order to meet a shipping deadline.

(This is why open source projects tend to be higher quality, but never seem to have a complete feature set.)
I'm not a programmer (I don't even play one on TV), but I would think if one was going to optimize an app by doing direct calls, you'd want to write that as a module which you could then replace as needed in the future as new optimization sets came down the road.
You can do this. That's the general concept behind Apple's Accelerate framework. It's exactly that sort of module, but shipped as part of the OS, not as part of each app. Other high-level OS subsystems (like Core Image on Mac OS X, and DirectX on Windows) do similar things with video chipsets.

But this will only get you so far. If a new processor introduced new capabilities (and not just faster ways of doing what you were doing before), then you will still need to update your app in order to take advantage of those capabilities. And if you do, you will be forced to either drop compatibility with older processors, or you'll be forced to emulate the capabilities (at a performance penalty, of course) when running on those older processors.

This concept is actually nothing new. Back in 1987, I had an 8088-based MS-DOS PC. A floating-point unit was an optional piece of hardware (the 8087 chip). The standard Microsoft compilers allowed you to choose from different floating-point libraries. One implemented all the operations in software - slowest, but exactly the same on all hardware. One required an 8087 chip - fastest, but forces users to have the (rarely-installed) hardware. The third included both, using an 8087 if available or software otherwise - giving the best performance and compatibility, but makes the program larger.
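
A minimal sketch of the Accelerate approach described above, assuming Mac OS X 10.4 or later and building with gcc dot.c -framework Accelerate. The app calls one generic routine (vDSP_dotpr here) and the OS library is the part that knows which vector hardware the machine actually has:

#include <Accelerate/Accelerate.h>   /* Apple's vDSP / vecLib umbrella header */
#include <stdio.h>

int main(void)
{
    const float a[4] = {1, 2, 3, 4};
    const float b[4] = {5, 6, 7, 8};
    float result = 0.0f;

    /* dot product; the strides of 1 mean "use every element" */
    vDSP_dotpr(a, 1, b, 1, &result, 4);

    printf("dot = %f\n", result);    /* prints 70.000000 */
    return 0;
}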
Looking at that roadmap, is there a new core chipset to replace Montevina when Nehalem comes out? I also don't see a new core chipset scheduled for Sandy Bridge. Intel's CPU roadmap and core roadmap seem oddly out of sync with each other.
I guarantee Intel has chipsets in development to support all of their CPUs under development.

For your specific question, the Wikipedia page for Centrino says the 'Calpella' Centrino suite supporting Nehalem is expected to ship in 2009, but they don't yet have more information than just the name.
 
Whaaat, when was it added? When I bought mine (July 2007) it wasn't there! I feel.. stupid. :(
 
Since Macworld didn't bring a refresh of the MBP, when do you think it will happen?
 
Since Macworld didn't bring a refresh of the MBP, when do you think it will happen?

I'm also interested in the answer to this question. I'm ready to get a new MacBook Pro, but not sure if I should get one now or wait for a new release... Part of me wants to have the very latest and greatest hardware, including possible new chipset, outside case materials/look, and better standard hard drive (bigger size/7200rpm). With that said, another part of me says to simply bite the bullet and purchase one today, since there will always be updates on the horizon and I probably won't even notice a difference between the current chipset/hard drive offerings and what is coming out next...
 