I wonder if Apple is waiting on Haswell's integrated graphics capabilities so they can "retinize" their MBAs.

I don't think it's graphics, I think it's the battery space. The MBA doesn't have much room for the batteries needed to power a retina display.
 
Good to know. It's interesting, though, that Anandtech mentioned that Mountain Lion does improve the overall experience. So I wonder how much of it is software related...



I think you might be.

(I haven't experienced any lag)

But the retina MBP isn't struggling at 2880 x 1800 - it can do that fairly comfortably. It's when it's translating other resolutions to the 2880 x 1800 screen that it doubles the pixels and then scales down to get the best quality - so 1920 x 1200 is actually rendered at 3840 x 2400 before being scaled down to 2880 x 1800. That is apparently what taxes the hardware, but it's also what makes the scaled resolutions look almost native for the most part.
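Rough numbers for the curious - a back-of-the-envelope sketch (in Python) of the doubling-then-downscaling described above. The mode list is just the commonly offered scaled options, an assumption rather than an official spec:

```python
# Back-of-the-envelope sketch of the scaled-mode pipeline described above.
# The 2x render-then-downsample behavior is as described in the post; the
# mode list is just the commonly offered scaled options (an assumption).
PANEL = (2880, 1800)  # rMBP native panel resolution

def backing_store(looks_like):
    """Resolution actually rendered for a scaled mode (2x each dimension)."""
    w, h = looks_like
    return (2 * w, 2 * h)

for mode in [(1440, 900), (1680, 1050), (1920, 1200)]:
    bw, bh = backing_store(mode)
    ratio = (bw * bh) / (PANEL[0] * PANEL[1])
    print(f"{mode[0]}x{mode[1]} mode -> rendered at {bw}x{bh} "
          f"({ratio:.2f}x the panel's native pixel count)")
```

The 1920 x 1200 mode works out to roughly 1.78x the native pixel count, which lines up with it being the most demanding option.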
 
Good to know. Its interesting though that Anandtech mentioned that Mountain Lion does improve the overall experience. So I wonder how much of it is software related...

If you go by the evidence presented, it's a software issue. The hardware itself is quite powerful. Of course, people here are dead set on the idea that this is a hardware issue.

Not that I think all the software issues will be fixed by Mountain Lion - this is going to be an ongoing process for a while. But I think ML will solve quite a few of them.
 
I feel it might be better to wait for Haswell. Apple would have been better served putting in 2 GB of VRAM to drive the retina display.
 
After trying out a buddy's (who's a developer) rMBP with the ML GM, I'm now a convert. Some UI still stutters, but it's much smoother than Lion. Also, Haswell is mostly only going to update the CPU and iGPU; looking at the discrete laptop GPU roadmaps, we're still going to be running 28nm dGPUs. So you're only looking at a mild clock bump.
 
Hell no!

Intel is never going to release an architecture that is amazing; it's always going to be incremental.

Jump on whenever you need a device. Once you jump on, you can probably skip one or two generations of updates by Intel.


Core/Core2 was pretty amazing in comparison to the Pentium 4... I don't expect Haswell to be nearly as large a jump, but you never know.

With that said, CPUs have gotten to the point where almost nothing is actually limited by CPU power these days. So even if Haswell is a lot more powerful, I'm not sure there will be much real-world benefit. I'm not entirely sure the rMBP's issues are CPU-related... I suspect it's bad coding (single-threaded apps in an age of multi-core CPUs) and weak graphics.
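To illustrate the single-threaded point - a toy sketch, not anything measured on the rMBP: the same CPU-bound work run on one core and then spread across all of them. The function and job sizes are made up for the example.

```python
# Toy illustration of the single-threaded point above: the same CPU-bound
# work run serially on one core, then spread across all cores. The workload
# is made up for the example; it stands in for a poorly parallelized app.
import os
import time
from multiprocessing import Pool

def burn(n):
    """CPU-bound busywork."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [2_000_000] * 8

    t0 = time.perf_counter()
    serial = [burn(n) for n in jobs]      # one core working, the rest idle
    t1 = time.perf_counter()

    with Pool(os.cpu_count()) as pool:    # all cores working
        parallel = pool.map(burn, jobs)
    t2 = time.perf_counter()

    print(f"serial:   {t1 - t0:.2f}s")
    print(f"parallel: {t2 - t1:.2f}s across {os.cpu_count()} cores")
```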
 
Core/Core2 was pretty amazing in comparison to the Pentium 4... I don't expect Haswell to be nearly as large a jump, but you never know.

With that said, CPUs have gotten to the point where almost nothing is actually limited by CPU power these days. So even if Haswell is a lot more powerful, I'm not sure there will be much real-world benefit. I'm not entirely sure the rMBP's issues are CPU-related... I suspect it's bad coding (single-threaded apps in an age of multi-core CPUs) and weak graphics.

Pentium 4 to Core/Core2 was a chipset arch change, like Core/Core2 to i3/i5/i7.

I keep telling people that it's the chipset arch changes that are going to be huge, regardless of tick or tock. And the next chipset, Multi-Chip Module (MCM), is not possible until we finally get 14nm logic gates (Broadwell). Of course, Broadwell is a "tick," so Intel might wait for the "tock" (Skylake) before it actually implements the change.

So, I don't see Haswell being a big deal, except for better power usage and battery life. Maybe slightly better graphics (think Sandy to Ivy), but nothing huge. If you're really into the waiting game, wait for the MCMs. They're the game-changer (imo).
 
Pentium 4 to Core/Core2 was a chipset arch change, like Core/Core2 to i3/i5/i7.

I keep telling people that it's the chipset arch changes that are going to be huge, regardless of tick or tock. And the next chipset, Multi-Chip Module (MCM), is not possible until we finally get 14nm logic gates (Broadwell). Of course, Broadwell is a "tick," so Intel might wait for the "tock" (Skylake) before it actually implements the change.

So, I don't see Haswell being a big deal, except for better power usage and battery life. Maybe slightly better graphics (think Sandy to Ivy), but nothing huge. If you're really into the waiting game, wait for the MCMs. They're the game-changer (imo).

Now that the memory controllers and (some) PCIe lanes are onboard the CPU, the chipset doesn't mean much. And Core/Core2 processors were able to run on the same mobos (and therefore chipsets) as the Pentium 4/D CPUs.

Unless by chipset arch you mean microarchitecture... But Haswell *is* a new microarchitecture, on the same level as Core2 -> i5/i7.

Is it likely that Haswell will be earth-shattering? No. Much like how the initial Nehalem CPUs were not much faster than the Kentsfield CPUs, Haswell will likely only be 10-15% faster than equivalent Ivy Bridge SKUs. The main reason Conroe and Merom were so fast compared to their predecessors is because Intel needed to reclaim the crown from AMD (which had the enthusiast market cornered and was starting to encroach on the OEM market). Now that Intel has no real competition, they are able to take their time increasing performance. Very similar to how Apple operates, actually.

Haswell will probably bring power usage improvements. But honestly, 7 hours on a quad core CPU is more than enough for me. The GPU upgrades will probably be more significant.
 
It will be interesting to see, with SSDs available as an option across the Mac lineup, whether the 'average' person feels the itch to upgrade as often anymore. I remember that with most of my computers, once the mechanical hard disk became 50% full, performance would take a drastic hit.

My Intel Core 2 Duo Win 7 notebook with an SSD feels just as fast as my quad-core i5 with an SSD for 'light' photo editing, surfing the net, and listening to music. Of course, there is a big difference between the two machines when I edit 1080p video...
 
Now that the memory controllers and (some) PCIe lanes are onboard the CPU, the chipset doesn't mean much. And Core/Core2 processors were able to run on the same mobos (and therefore chipsets) as the Pentium 4/D CPUs.

Unless by chipset arch you mean microarchitecture... But Haswell *is* a new microarchitecture, on the same level as Core2 -> i5/i7.

No, I meant chipset architecture. Broadwell will allow vertical stacking of dies as opposed to only horizontal placement, even though it's supposedly "only" a die shrink. Chipset architectures do matter, and here's an article explaining how MCM (or system-on-a-chip, as Intel likes to say) will change things.

Until now, we've only been able to get the northbridge and southbridge into the same package. MCM focuses on taking the PCH logic AND the existing processor and making them into one small package.

http://www.fudzilla.com/home/item/26786-intel-migrates-to-desktop-multi-chip-module-mcm-with-14nm-broadwell
 
I feel it might be better to wait for Haswell. Apple would have been better served putting in 2 GB of VRAM to drive the retina display.

"As an aside for those thinking Apple should have put in 2GB of VRAM: VRAM does not make the graphics system faster unless you're feeding in more textures than fit in the onboard VRAM for a single frame. If you can show a benchmark saying that we're using up all 1GB of VRAM and paging more textures in that didn't fit while scrolling facebook.com, then it would help. Otherwise, it'd just make the cost higher and the performance equal to now."

https://forums.macrumors.com/threads/1396188/
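For rough scale on that quote - a back-of-the-envelope sketch assuming 4 bytes per pixel (RGBA8); the buffer counts are illustrative assumptions. Desktop framebuffers alone are nowhere near 1GB; it's textures that would fill it:

```python
# Back-of-the-envelope numbers for the quoted VRAM argument, assuming
# 4 bytes per pixel (RGBA8). The buffer counts are illustrative.
BYTES_PER_PIXEL = 4

def framebuffer_mb(w, h, buffers=1):
    """Approximate framebuffer footprint in MB."""
    return w * h * BYTES_PER_PIXEL * buffers / 2**20

print(f"native 2880x1800, single buffer: {framebuffer_mb(2880, 1800):6.1f} MB")
print(f"scaled 3840x2400, triple buffer: {framebuffer_mb(3840, 2400, 3):6.1f} MB")
print(f"available VRAM:                  {1024.0:6.1f} MB")
```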
 
No, I meant chipset architecture. Broadwell will allow vertical stacking of dies as opposed to only horizontal placement, even though it's supposedly "only" a die shrink. Chipset architectures do matter, and here's an article explaining how MCM (or system-on-a-chip, as Intel likes to say) will change things.

Until now, we've only been able to get the northbridge and southbridge into the same package. MCM focuses on taking the PCH logic AND the existing processor and making them into one small package.

http://www.fudzilla.com/home/item/26786-intel-migrates-to-desktop-multi-chip-module-mcm-with-14nm-broadwell

I've already read that. That'll allow for miniaturization and cheaper motherboards, which will probably allow for making a retina 13" (or even retina Airs)... But I don't see how it will dramatically affect processing power. Bandwidth is not a huge issue for the PCH like it was for memory back in the day. And even then, the performance benefits of the integrated memory controller were pretty small relative to the benefits of improving the CPU architecture. Any performance benefits seen with the integrated PCH design will be incidental.

Notice how even with Haswell, the two-core models are already getting the integrated PCH. Intel would have waited for the manufacturing process to catch up if it made a huge performance difference. They wouldn't want to put their flagship CPUs at a disadvantage in any form.

I doubt Apple is so inclined, but the miniaturization may allow them to create a user-serviceable rMBP in the existing form factor. But knowing them, they'll try to shave another micrometer off the thickness.
 
I've already read that. That'll allow for miniaturization and cheaper motherboards, which will probably allow for making a retina 13" (or even retina Airs)... But I don't see how it will dramatically affect processing power. Bandwidth is not a huge issue for the PCH like it was for memory back in the day. And even then, the performance benefits of the integrated memory controller were pretty small relative to the benefits of improving the CPU architecture. Any performance benefits seen with the integrated PCH design will be incidental.

Notice how even with Haswell, the two-core models are already getting the integrated PCH. Intel would have waited for the manufacturing process to catch up if it made a huge performance difference. They wouldn't want to put their flagship CPUs at a disadvantage in any form.

I doubt Apple is so inclined, but the miniaturization may allow them to create a user-serviceable rMBP in the existing form factor. But knowing them, they'll try to shave another micrometer off the thickness.

Sorry, then. I must have overestimated the power of more integrated CPUs.
 
Sorry, then. I must have overestimated the power of more integrated CPUs.

I'm sure this is sarcasm... But yes, you have. The performance-critical parts of the IOH (PCIe) are already integrated into the CPU. The rest is able to communicate effectively over a meager 2GB/s DMI link, and integrating that is no big deal for power. Heck, even integrating the PCIe lanes didn't make a tremendous impact, since the QPI had more than enough bandwidth.

So at least for now, integrating everything onto a single "chip" will not provide a tremendous power difference.
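For rough scale, here are the usual published link figures behind that argument - treat them as approximations, and the helper function is just for illustration:

```python
# Rough link-bandwidth numbers behind the argument above, using the usual
# published figures (approximations). Full-width QPI moves 2 bytes per
# transfer per direction, so GB/s per direction = GT/s * 2.
def qpi_gb_per_s(gt_per_s, bytes_per_transfer=2):
    """Per-direction QPI bandwidth in GB/s."""
    return gt_per_s * bytes_per_transfer

DMI_GB_PER_S = 2.0  # CPU <-> PCH link, as cited above

for gt in (4.8, 6.4):
    print(f"QPI @ {gt} GT/s: {qpi_gb_per_s(gt):4.1f} GB/s per direction")
print(f"DMI to PCH:      {DMI_GB_PER_S:4.1f} GB/s")
```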
 
I'm sure this is sarcasm... But yes, you have. The existing PCH can already communicate with the CPU via QPI. There is no bottleneck here. The standard CPUs have the link set to 4.8GT/s and the "extreme" variants have it at 6.4GT/s. When the CPUs are set to the same clock speed (so that the QPI link is the only difference), there is no measurable difference in performance. So at least for now, integrating everything onto a single "chip" will not provide a tremendous power difference.

It actually wasn't sarcasm. Most of the work I've done has been self-study, and I appreciate the information. You obviously know quite a bit more than I do on the subject.

Thanks ~
 
It actually wasn't sarcasm. Most of the work I've done has been self-study, and I appreciate the information. You obviously know quite a bit more than I do on the subject.

Thanks ~

Ah gotcha. I'm used to hostility on these forums lol. I actually updated my post to reflect the current status with Sandy/Ivy Bridge CPUs. My original post was based on the Nehalems, since that's what I've currently got in my system. In essence, the performance impact of such integration is next to nothing for now. Though it will allow more powerful CPUs in smaller form factors, which is important too, I suppose.
 
Ah gotcha. I'm used to hostility on these forums lol. I actually updated my post to reflect the current status with Sandy/Ivy Bridge CPUs. My original post was based on the Nehalems, since that's what I've currently got in my system. In essence, the performance impact of such integration is next to nothing for now. Though it will allow more powerful CPUs in smaller form factors, which is important too, I suppose.

Yeah, it seems that way. Since you seem to know a bit more about this than me, what do you think would actually improve processors at this point? I'm not sure simple microarch changes would be enough to do it, but I could be wrong.

Or are processors just powerful enough at this point that we're not really going to see useful large performance boosts in the near future? Are we doomed to micro-upgrade after micro-upgrade, with each one being marketed as the cure-all to processor speed?
 
Yeah, it seems that way. Since you seem to know a bit more about this than me, what do you think would actually improve processors at this point? I'm not sure simple microarch changes would be enough to do it, but I could be wrong.

Or are processors just powerful enough at this point that we're not really going to see useful large performance boosts in the near future? Are we doomed to micro-upgrade after micro-upgrade, with each one being marketed as the cure-all to processor speed?

I would say that the only major CPU upgrades will come from either increasing core counts (made possible through die shrinks) or serious game-changers, like new materials (graphene, then nanotubes) or radical geometry changes, like 3D transistor layouts. Silicon transistors are not going to get much smaller, or run at much higher clock speeds. It's just not possible to push them much further.

Also, many industries (mainly science and engineering) can always benefit from and use more processing power. It's quite easy to think of simulations to run (e.g. n-body gravity simulations) that make even the fastest supercomputer clusters cry.
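For a concrete sense of the n-body example: the naive algorithm evaluates a force for every pair of bodies, so the work per timestep grows quadratically. A quick sketch:

```python
# Why n-body simulations "make supercomputers cry": the naive algorithm
# evaluates a force for every pair of bodies, so work per timestep grows
# as n*(n-1)/2 - quadratically in n.
def pairwise_forces(n):
    """Number of pairwise force evaluations per timestep."""
    return n * (n - 1) // 2

for n in (1_000, 1_000_000, 1_000_000_000):
    print(f"{n:>13,} bodies -> {pairwise_forces(n):,} force evaluations/step")
```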
 
Yeah, it seems that way. Since you seem to know a bit more about this than me, what do you think would actually improve processors at this point? I'm not sure simple microarch changes would be enough to do it, but I could be wrong.

Or are processors just powerful enough at this point that we're not really going to see useful large performance boosts in the near future? Are we doomed to micro-upgrade after micro-upgrade, with each one being marketed as the cure-all to processor speed?

I'm not an engineer (though I do have a decent understanding of much of the physics behind these things), but I suspect that the x86 platform is more or less as efficient as it can get. So at this point I think the only real performance increases will come from shoving more cores on or dramatically increasing the clock rate (both of which will require much smaller transistors). The smaller the transistors get, the greater the likelihood of running into strange quantum effects and whatnot. One must also consider that at some point, additional cores will not be helpful for consumer applications - many apps just aren't great for parallel processing.
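Amdahl's law is the usual way to quantify that last point: if only a fraction p of a program can be parallelized, n cores can never speed it up beyond 1/(1-p). A quick sketch (the p values are just illustrative):

```python
# Amdahl's law: with a parallelizable fraction p of the work and n cores,
# speedup = 1 / ((1 - p) + p / n), capped at 1 / (1 - p) as n -> infinity.
# The p values below are illustrative, not measurements of any real app.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.50, 0.90, 0.99):
    row = ", ".join(f"{n} cores: {amdahl_speedup(p, n):5.2f}x"
                    for n in (2, 4, 8, 16))
    print(f"p = {p:.0%} -> {row}")
```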

Switching to a different architecture (like Itanium) could potentially allow for efficiency increases... but it would be very difficult to do, since it would require a rewrite of pretty much all software. Actually, Apple has done that a few times, so if anyone can do it, it'd be them. But I suspect Apple wants to retain x86 compatibility, since they did get an influx of Mac sales once Windows could run on their machines.

As the above poster said, very low level architectural changes (3d layout, diamond instead of silicon) can also lead to dramatic increases in speed/efficiency. But such improvements are a ways off yet. I think diamond is probably feasible, but as far as I'm aware, the current methods of developing synthetic diamonds reliably are patented to hell.

Quantum computing could also really change things, but that likely won't land in consumer hands for a while even if it does become feasible. Mostly because it will render all current forms of encryption useless.
 
Quantum computing could also really change things, but that likely won't land in consumer hands for a while even if it does become feasible. Mostly because it will render all current forms of encryption useless.

A quantum computer as a consumer product is still decades away. Today, even a lab quantum computer with just a few qubits takes up a whole room, and even then the computed result is sometimes 'wrong'. We don't even fully understand the concept yet.
 
A quantum computer as a consumer product is still decades away. Today, even a lab quantum computer with just a few qubits takes up a whole room, and even then the computed result is sometimes 'wrong'. We don't even fully understand the concept yet.
I realize this. Hence "even if it does become feasible"
 
I tend to agree. Even if Intel has the capability of doubling the performance, they won't. They'd much rather milk a bunch of smaller upgrades. A lot like Apple's strategy. The only time either company offers a drastic upgrade is when the competition starts to become a proper threat. So realistically, the next rMBP will not be a huge departure from the current one. At least not because of Haswell. The GPU may be another story, since Nvidia and AMD will probably start pushing for products with better high-res capabilities.
 
"As an aside for those thinking Apple should have put in 2GB of VRAM: VRAM does not make the graphics system faster unless you're feeding in more textures than fit in the onboard VRAM for a single frame. If you can show a benchmark saying that we're using up all 1GB of VRAM and paging more textures in that didn't fit while scrolling facebook.com, then it would help. Otherwise, it'd just make the cost higher and the performance equal to now."

https://forums.macrumors.com/threads/1396188/

More VRAM would help for 3D gaming at retina resolutions, though.
 