I work with some very large files, sometimes designing larger-than-life murals in Adobe. I use almost no animation, but that could change in the future. I have a higher-end 27" Mac purchased about a year ago, but I also do a lot of work from my laptop while on site or traveling, so it's a workhorse.

My current laptop is 5 years old, and I hate the thought of waiting an entire year to upgrade. Two months, even six, would be an option, but a year or more sounds like torture; my late 2009 just can't keep up with the software I use anymore.

Any thoughts my tech friends?

My thought is that the late 2013 Retina MBP with the 2.6 GHz Core i7, 16 GB of RAM, Nvidia 750M, and 512 GB of flash storage is a very potent machine that would probably address your current needs, and you might be able to get a good deal on one refurbished.

The unibody MacBook Pro was released in 2008.

I still prefer the 2008 unibody form factor. I hate these thinner-and-thinner machines of late that are not user-serviceable.

If there was ever a Mac product that would have an ARM chip, it would have been the new MacBook. But, no.

The Core M series is very competitive (performance-wise) with arm64, although the chips are pricey. The new MacBook is weird, though. Instead of giving you the ports you need on the side of the body, you get one port and have to buy a dongle so you can plug in power, a keyboard/mouse, and a display at the same time. I find that odd and awkward compared to the old 2007 white plastic MacBook.

--

On topic, I would assume that Apple would consider a Broadwell refresh of a few models as a minor refresh. A little faster, a little lower power.
 
Indeed, adding a Titan Black + 64 GB of DDR4 RAM to the mix really boosted my workstation rig.
Quite happy with the efficiency of the system!

Agreed. I just needed CPU power, so I used a 5960x with an inexpensive GPU and 16GB. I was able to fit it in a semi-portable (which I need, since it travels) low profile case, and it cools great and runs stably at 4GHz (with a BIOS preset - no tweaking necessary).


It scores nearly 30,000 on Geekbench - similar to a 12-core Mac Pro, I believe. Just as importantly, it doesn't slow down under load - even when running for hours. Power draw is reasonable (sometimes I have to run it off a battery, so I care about this). Definitely efficient, as you say.

We haven't had access to this much power at this price before. Now a video app that stuttered on the best 4-core i7 runs smoothly, with extra headroom for additional (needed) features.


This particular app is Windows-only, but as I've said before, Apple should bring this nice CPU to the Mac Pro line too. Although a bit pricey, the Mac Pro packs an excellent amount of power into an even more portable system (than mine).


There's no way a notebook Skylake processor will ever reach this performance level, so I have trouble seeing how the 6th-gen CPU will be groundbreaking for many users. It may make some apps run more smoothly or nicely, and complete some tasks quicker, but I doubt it will open up many new capabilities users didn't have before. But my "luggable" system does, for less than the price of a base 15" MacBook Pro.
 
I'm still using my mid-2010 macbook pro, it's running slow, but I am not planning to upgrade until there's a space grey macbook pro... :p
 
I was hoping they could make it considerably thinner and lighter, smaller overall, maybe with a new kind of keyboard that will "take some getting used to" (per just about every reviewer who reviews it), dump all the ports but maybe one (but make it a new port with hardly any third-party product support (yet)), add Force Touch, maybe a much weaker processor also in support of thinner and lighter, and maybe offer it in colors like the new :apple:Watch.

Hopefully, they could do all that while keeping the pricing about where it is so they can enjoy healthy profit margins while a good chunk of us call it "the future", "just wait 2 or 3 generations", and the old "the MBA started out much like this". Don't worry, Apple, we'll just attack anyone who finds any fault with any of the above, spinning anything you want to do (or leave out) better than your own paid marketing team can.

Roll out some accessories to make the one port usable but price those accessories for healthy profits too (and don't count their weight when touting "lighter" at launch).

Lastly, hopefully they could launch it in very, very short supply so that perhaps there's not even any in stores for days after it's supposed to be available.

Sounds ideal to me... even like "the future" ;)

Hehe +1 . Simply agreed.

My biggest concern is exactly this: whether Apple will have the decency to leave the "pro" machines (with or without quotes) out of this gadget/Facebook route they seem to be taking with everything else. I still hope they will.
 
If the next rMBP 15" comes out with a strong GPU, I will go for the 15". Otherwise, I don't see the point of coping with the extra bulk and price tag, and I will simply renew my rMBP 13".
 
It's sad, but the near stand-still in computational power has lowered both customer expectations and developers' innovation/creativity.

I don't understand what standstill you're talking about.

Based on leaked specs, in the 3DMark test, the i7-6700k is going to score about 15% higher than the i7-4790k that was the top end of the previous generation. The i7-3770k before that was another 20% slower.
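Compounding the per-generation gains quoted above gives a rough sense of the cumulative improvement. A small sketch, assuming "20% slower" means the i7-3770K scores 0.8× the i7-4790K, and "15% higher" means the i7-6700K scores 1.15× the i7-4790K:

```python
# Compound the per-generation 3DMark gains quoted above.
# Assumption: "15% higher" means 6700K = 1.15 x 4790K, and
# "20% slower" means 3770K = 0.80 x 4790K.
skylake_vs_haswell = 1.15
ivy_vs_haswell = 0.80

# Ratio of the newest to the oldest chip across both generations.
skylake_vs_ivy = skylake_vs_haswell / ivy_vs_haswell
print(f"6700K vs 3770K: {skylake_vs_ivy:.2f}x")  # about 1.44x over two generations
```

So even stacking two generations together, the leaked numbers imply roughly a 44% cumulative gain.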

Computational performance is still progressing at a decent rate out in the real world. It's just Apple's part selection for low power and higher profits that is causing Apple to stagnate in power while the rest of the world leaves them in the dust.

The performance is there, and the 95% of the market that doesn't use Macs is enjoying the benefit. It's amazing that my 2011 MBP is still almost on par with the latest laptops from Apple when you look at all the sub-$500 Windows laptops that will mop the floor with my $1,500 Mac from 4 years ago.
 
I don't understand what standstill you're talking about.

Based on leaked specs, in the 3DMark test, the i7-6700k is going to score about 15% higher than the i7-4790k that was the top end of the previous generation. The i7-3770k before that was another 20% slower.

Computational performance is still progressing at a decent rate out in the real world. It's just Apple's part selection for low power and higher profits that is causing Apple to stagnate in power while the rest of the world leaves them in the dust.

That's a standstill. Perhaps you've forgotten about Moore's Law, or weren't able to experience it.

In the past it would have been 100% faster. Worse yet, you're quoting me 3DMark, so apparently even Intel's integrated GPU improvements are now slowing to a crawl as well. I think this is the second generation in a row where they've done this (marginal GPU performance increases). In fact, even Intel's "Tick-Tock" model was born out of the need to explain how they can still be "advancing" without improving computing performance!


Most of the improvements recently (nearly a decade's worth, it seems) have been about reducing power draw. This is useful, in that it allows us to take existing applications mobile. But it doesn't really enable any new applications, which need more computing power. They've created 18-core CPUs, which offer some improvement at the high end, but virtually nothing changes at the desktop (or laptop) level.


It used to be that I'd upgrade to a new machine every couple of years. But now I buy 4-year-old laptops and still run them for a few years. It used to be that a 5-year-old machine was useless. Now even laptops (which have less computing power to begin with) are usable after 7 years.


Apple's not being "left in the dust," because "the rest of the world" isn't going anywhere! In many cases they're weaker on the GPU side. But they're not a gaming platform, and they don't really aim to be. Plus few "non-Pro" apps need that much GPU. So they need better GPUs in some of the Pro models - primarily the MacBook Pros, and especially the 15" we're looking at here.
 
There is some truth to that. After all, my 7-year-old 2008 2.8 GHz Core 2 Duo / 9600GT / 8 GB 1067 MHz / 7200 rpm HD can still handle most of what I throw at it without crashing (be it at slower speeds/settings, of course).

I know modern specs far exceed mine but given the same system demand would a new machine be that dramatically better?
 
I wish Apple would make the decision to go for Skylake. I've got a 15" MBP that I'm buying for my nephew who graduates tonight...I'm not spending $2500 for a top of the line 2 year old MBP, but he needs something before he goes to college. I know top of the line is going to cost the same whether it's a 2 year old Haswell or a new Broadwell/Skylake and the Skylake would see him thru 4 years of engineering studies.
 
i3 is entry level, i5 is mid-level, and i7 is high end. The differentiators are:

  • Number of cores (2 on i3, 4 on i5 and i7)
  • Hyper-threading on i5 and i7
  • Turbo Boost
  • Some features, like certain virtualization helpers, that are not available on i3

So, within a given generation, i3 < i5 < i7 (at least most of the time). But of course an i7 from three years ago will be slower than an i5 from the current generation...


A bit wrong there. The separations between i3, i5, and i7 are different between desktop and laptop CPUs.

Desktop:
i3 = dual core with hyper-threading
i5 = quad core, no hyper-threading
i7 = quad core with hyper-threading

Laptops are a bit vaguer. All Core i processors are hyper-threaded, and there are dual-core i3, i5, and i7 processors. There, i3/i5/i7 generally just indicates the power tier for that class of processor.
 
That's a standstill. Perhaps you've forgotten about Moore's Law, or weren't able to experience it.

In the past it would have been 100% faster.

A CPU revision a year and a half later would have been "100% faster"? Not only is that not what Moore's Law says, but… when? In the 90s?
 
That's a standstill. Perhaps you've forgotten about Moore's Law, or weren't able to experience it.

In the past it would have been 100% faster.

It was never quite 100% faster. Moore's law itself refers to transistor size rather than performance. Performance increases were just something that resulted from it.

I wish Apple would make the decision to go for Skylake. I've got a 15" MBP that I'm buying for my nephew who graduates tonight...I'm not spending $2500 for a top of the line 2 year old MBP, but he needs something before he goes to college. I know top of the line is going to cost the same whether it's a 2 year old Haswell or a new Broadwell/Skylake and the Skylake would see him thru 4 years of engineering studies.

Broadwell would most likely do the same. So far we haven't seen any quad-core Broadwell CPUs. They'll bring them out on one process or the other, and that will be the one that Apple implements.
 
That's a standstill. Perhaps you've forgotten about Moore's Law, or weren't able to experience it.

Moore's Law refers to transistor count doubling every 18 months. That is still going strong with the 14 nm process for Skylake and 10 nm for Cannonlake. What did you think Moore's Law was?

Apple's not being "left in the dust," because "the rest of the world" isn't going anywhere!

Skylake has a pretty impressive list of new features most of us will be enjoying in a few months. You can wait a couple of years for Apple to put a base model tablet chip in the next retina iMac.

In many cases they're weaker on the GPU side. But they're not a gaming platform, and they don't really aim to be.

Tell that to the people on the Oculus Rift thread who are torn between saying Oculus is DOA for not supporting the Mac and insisting how great their Macs really are for gaming. The entire Mac line is so weak on graphics that no Mac can even run the Rift properly. How about Timmy bragging about that little fact at WWDC? He keeps claiming iToys are reaching desktop-level performance, but Apple's really gimping its own desktops to close the gap.

I'll be enjoying my i7-6700K + GTX 970 this fall. Whisper quiet with a water cooler and 200 mm fans. With 16 GB of socketed RAM, 1 TB of SSD, and 10 TB of HDD. All for less than the price of a shiny new 27" iMac that's actually slower than the computer I'll be dumping on my in-laws.
 
It was never quite 100% faster. Moore's law itself refers to transistor size rather than performance. Performance increases were just something that resulted from it.

A CPU revision a year and a half later would have been "100% faster"? Not only is that not what Moore's Law says, but… when? In the 90s?

Moore's Law refers to transistor count doubling every 18 months. That is still going strong with their 14nm die size for Skylake and 10nm for cannonlake. What did you think Moore's Law was?

Obviously you weren't there to experience it, or you wouldn't be objecting so much. Moore's Law may have technically been about transistor count, but this idea commonly went along with it - especially since that's what we actually saw happening as developers and users.

Performance did go up about 100% every two years. In fact, clock speed alone went up 100% about every 2.5 years from ~1985-2005. If you know anything about processor design of the era, you know the number of clock cycles an x86 instruction took kept going down. Eventually they had to go to multiple cores, once most of the other optimizations had already been implemented.

Since Core 2 I think we've only seen about 100% performance increase over 10 years, not ~2!
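As a rough sanity check on that clock-speed claim: doubling every 2.5 years over the 1985-2005 window implies eight doublings, or 256× growth. The endpoint clock speeds below (~16 MHz for a 1985-era 386, ~3.8 GHz for a 2005 Pentium 4) are illustrative assumptions, not sourced figures:

```python
# Sanity-check "clock speed doubled every ~2.5 years, 1985-2005".
years = 2005 - 1985
doubling_period = 2.5

# Eight doublings over twenty years.
implied_growth = 2 ** (years / doubling_period)
print(f"Implied growth over {years} years: {implied_growth:.0f}x")  # 256x

# Roughly matches going from a ~16 MHz 386 to a ~3.8 GHz Pentium 4
# (both endpoint speeds are assumptions for illustration).
observed = 3800 / 16
print(f"Observed clock growth: {observed:.0f}x")  # ~238x
```

The implied and observed figures land in the same ballpark, which is the claim being made.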
 
I wish Apple would make the decision to go for Skylake. I've got a 15" MBP that I'm buying for my nephew who graduates tonight...I'm not spending $2500 for a top of the line 2 year old MBP, but he needs something before he goes to college. I know top of the line is going to cost the same whether it's a 2 year old Haswell or a new Broadwell/Skylake and the Skylake would see him thru 4 years of engineering studies.

Whatever they come out with next - be it Broadwell or Skylake - will see him through the next four years.

If he needs more of a performance boost in the interim, he'll be highly motivated to find a way to get it at that time. But I doubt that will happen unless he needs more of a boost than a laptop can provide anyway. Skylake is merely a (relatively small) incremental upgrade. Just because it may be slightly more of a boost than recent generations doesn't make it groundbreaking! (Perhaps in spirit you're confusing it with the iPhone processor, which is showing significant leaps in performance, and the power of which most developers haven't yet managed to harness).
 
Obviously you weren't there to experience it, or you wouldn't be objecting so much. Moore's Law may have technically been about transistor count, but this idea commonly went along with it - especially since that's what we actually saw happening as developers and users.

Performance did go up about 100% every two years. In fact, clock speed alone went up 100% about every 2.5 years from ~1985-2005. If you know anything about processor design of the era, you know the number of clock cycles an x86 instruction took kept going down. Eventually they had to go to multiple cores, once most of the other optimizations had already been implemented.

Since Core 2 I think we've only seen about 100% performance increase over 10 years, not ~2!

Well, considering I started with a 6800 (not 68000) based computer, I think I was there for a pretty big period of Moore's Law. The idea you're talking about was only pushed by a small minority of tech writers most of us wrote off as ignorant asses.

When you talk about clock speed going up 100% every 2.5 years, you just exude ignorance. I actually took a graduate course in CPU architecture that can best be summed up as: clock speed has nothing at all to do with CPU performance. I actually know a lot about CPU design, so when you say the cycles per instruction dropped, you're right, but that proves how little clock speed means. VLIW was also a huge boost to performance without increasing clock speed.
 
Whatever they come out with next - be it Broadwell or Skylake - will see him through the next four years.

If he needs more of a performance boost in the interim, he'll be highly motivated to find a way to get it at that time. But I doubt that will happen unless he needs more of a boost than a laptop can provide anyway. Skylake is merely a (relatively small) incremental upgrade. Just because it may be slightly more of a boost than recent generations doesn't make it groundbreaking! (Perhaps in spirit you're confusing it with the iPhone processor, which is showing significant leaps in performance, and the power of which most developers haven't yet managed to harness).
What does any of that have to do with not wanting to pay top dollar for 2-year-old hardware? Skylake is the tock improvement over Broadwell. That's why I would rather have Skylake, since Intel is so far behind on its quad-core chips, but either would be superior to the Haswell.
 
Well, considering I started with a 6800 (not 68000) based computer, I think I was there for a pretty big period of Moore's Law. The idea you're talking about was only pushed by a small minority of tech writers most of us wrote off as ignorant asses.

When you talk about clock speed going up 100% every 2.5 years, you just exude ignorance. I actually took a graduate course in CPU architecture that can best be summed up as: clock speed has nothing at all to do with CPU performance. I actually know a lot about CPU design, so when you say the cycles per instruction dropped, you're right, but that proves how little clock speed means. VLIW was also a huge boost to performance without increasing clock speed.

I actually taught graduate courses in computer architecture and assembly language programming.

Your "facts" are either wrong or misapplied. Clock speed has a lot to do with CPU performance; clock speed increases and architecture improvements combine to provide generational boosts in performance. However, both of these factors have virtually "hit the wall" in modern times.

Please check your facts and apply them correctly. The performance increases of the past are a matter of record and hard to deny. For those who weren't there it's a matter of ignorance. For those who were - I don't know what you call it.
 
Obviously you weren't there to experience it, or you wouldn't be objecting so much. [..]

I actually taught graduate courses in computer architecture and assembly language programming.

Well, aren't you hardcore.

At attacking people, anyway.

Since you seem unwilling to actually show any data (while asking people to "check their facts"), here's some: 12k MIPS in a 2005 AMD Athlon FX; 298k MIPS in a 2014 Intel Core i7 Extreme. Talk about some growth.
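For what it's worth, the implied growth rate of those two MIPS figures can be worked out directly (note they are aggregate numbers across all cores, so they conflate core count with per-core speed):

```python
import math

# Implied growth rate of the MIPS figures quoted above:
# 12k MIPS (2005 AMD Athlon FX) -> 298k MIPS (2014 Intel Core i7 Extreme).
start, end, years = 12_000, 298_000, 2014 - 2005

# Compound annual growth, and the doubling time it implies.
annual = (end / start) ** (1 / years)
doubling_time = math.log(2) / math.log(annual)
print(f"~{annual:.2f}x per year, doubling every ~{doubling_time:.1f} years")
```

Those figures work out to roughly 1.43× per year, i.e. doubling about every 1.9 years, which is the point being argued.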
 
What does any of that have to do with not wanting to pay top dollar for 2 year old hardware? Skylake is the tock improvement over the Broadwell. That's why I would rather have the Skylake since Intel is so far behind on this quad core chips, but either would be superior to the Broadwell.

First of all, congratulations to your nephew! I'd say he's got a great uncle who would buy him a gift like that!


I share your concern about not wanting to buy the current 15" model when an update is imminent. I tried one myself, then took it back. But it was only because I (just barely) decided I could get by with quad-core Mac Mini plus my older MacBook Pro 17" (rather than consolidating the two into a 15" rMBP plus a dock). But it was far better than the 13" Air (which I recently sold) or the new 13" rMBP I tried - not just for raw processing speed, but even for general ergonomics and fluid browsing. And this was the base model!

Since CPU performance isn't going anywhere, and integrated GPU performance improvements are practically crawling, I think battery life is the only tangible Skylake benefit most people will see. That is, unless you need 5 monitors, or your Thunderbolt 2 is overloaded. Even so, good luck on finding a 4-monitor Thunderbolt 3 docking station anytime soon! :p


I think Apple can make practically all other improvements regardless of which processor generation is inside. So let's just hope they release a new model!
 
Well, aren't you hardcore.

At attacking people, anyway.

Since you seem unwilling to actually show any data (while asking people to "check their facts"), here's some: 12k MIPS in a 2005 AMD Athlon FX; 298k MIPS in a 2014 Intel Core i7 Extreme. Talk about some growth.

In other words, I was right. You have minimal knowledge of the history. You also think the '90s were a long time ago.

Being younger isn't a fault, but spreading ignorance is.


If you care about the facts, here's an article to get you started. It doesn't give you performance numbers, but at least it gives you dates and clock speeds. It also fails to provide a good perspective on when faster variants of a processor were released, or the fact that some changed sockets during their lifetime. Keep in mind that in the earlier years of personal computing it could sometimes take up to a year (maybe even two) between a processor's official "release date" and when we could actually purchase (or assemble) a usable, working machine with that CPU:

http://www.maximumpc.com/article/features/cpu_retrospective_the_life_and_times_x86?page=0,0
 
In other words, I was right.

Nope.

Your claims again:

"In the past it would have been 100% faster."

And:

"It's sad, but the near stand-still in computational power has lowered both customer expectations and developers' innovation/creativity."

If you care about the facts, here's an article to get you started. It doesn't give you performance numbers, but at least it gives you dates and clock speeds.

It is therefore entirely irrelevant to your claims.
 
Nope.

Your claims again:

"In the past it would have been 100% faster."

And:

"It's sad, but the near stand-still in computational power has lowered both customer expectations and developers' innovation/creativity."



It is therefore entirely irrelevant to your claims.

Just as I was afraid ... you don't care. Maybe it's too much work.


I keep getting quoted on the "100% faster" as if it's wrong. Someone framed my comment as being "100% faster in 1.5 years." I was actually thinking 2.5 years (when Haswell began with the 4770K). But the fact is it doesn't matter. As if by getting me down to 1.5 years and somehow finding only an 80% increase I'm a liar, and we can ignore how pathetically slow today's progress is by comparison.

The fact is there's no comparison, which is why no one can refute it with facts.
 