You said I was blatantly wrong, and then agreed with me. Haswell is a chip that has been around for two years, hence "2 years old". The GPU chip has been around for three years, whether or not they've slightly optimized it via software, hence "3 years old".
It is what it is. I've actually purchased a new 15" because I can't wait for Skylake. But I'm under no illusions that Apple is using the "latest and greatest" or "highest-end" chips.
As with FCP7, Aperture and Shake, they've done the math and realized the leading edge performance-wise is not where they need to be. And judging by their market cap (and my purchase!), they're exactly right.
Oh, in that way. I misunderstood you. Well, what would you have them do instead? Use non-existent Broadwell chips? And how do you get the GPU to be three years old? As far as I'm aware, no Ivy Bridge part shipped with a 40 EU GPU (or more). Or no, wait, you mean the R9 there, right? Well, it's not only software optimisations that have gone into that chip. The GPU architecture itself is three years old, fair enough, but the hardware around the primary die has been optimised. As far as I know, there have actually been tweaks to the core GCN architecture branch as well, even though they aren't reflected in the GPU numbering. The biggest optimisations have been to heat generation and power consumption (two sides of the same coin), but there have also been improvements, especially on the memory front, that boost performance. This is evident just from looking at benchmarks (and software, to prove it isn't all software optimisation), but we sadly have little detail on what they've actually done (trade secrets).
FCPX has more than caught up with FCP7 at this point, and is absolutely wonderful. It's much faster than 7 on similar hardware and, going back to our earlier topic, has much better GPU acceleration.
Shake has been somewhat replaced by Motion.
Aperture died because Apple felt like it wouldn't compete with Lightroom anyway, and Photos could be the hobby replacement.
I want to believe.
This is wrong and just upside down. Apple's 750M is clocked at 925 MHz with Turbo disabled. The reference default clock is 967 MHz, and Turbo adds 15% on top of that, which means the default clock in action is around 1100 MHz. That is the complete opposite of overclocked; it's roughly 20% underclocked.
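To put rough numbers on that, here's a quick sketch; the 925/967 MHz and 15% figures are the ones claimed in this thread, not verified spec-sheet values:

```python
# Clock figures as claimed above -- thread claims, not verified specs.
apple_clock = 925                        # MHz, Apple's 750M, Turbo disabled
reference_base = 967                     # MHz, reference default clock
reference_turbo = reference_base * 1.15  # Turbo adds ~15% -> ~1112 MHz

underclock = 1 - apple_clock / reference_turbo
print(f"Reference clock with Turbo: ~{reference_turbo:.0f} MHz")
print(f"Apple's part runs ~{underclock:.0%} below that")
# ~17%, i.e. in the ballpark of the "20% underclocked" figure above
```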
There really is no greater focus on GPU performance. A 15W CPU + an 840M/940M performs much better than the 28W chip in the 13" MBP if you bother to compare notebooks that come in both options.
The latest AMD M370X is quite clearly not top of the line, just cheaper. Maxwell is the most efficient architecture at 28nm, and Apple apparently doesn't see the need for the best efficiency.
Apple once found integrated GPUs too weak, but even an Intel HD 4400 is faster than the 320M from once upon a time. These days Apple seems to consider GPUs good enough and doesn't focus on them any more than necessary.
Where did you get the data on the clocks? I'm not so certain anymore, but I'm fairly sure I read the Apple 750M had a base of 1 GHz, with Turbo able to take it to 1250 MHz.
A 15W CPU + a dGPU may result in more graphics performance, but the 940M has a TDP of 30W, and 30 + 15 = way more than 28W. The 28W chip also gives you a faster CPU when its power budget isn't being tapped on the GPU side as well. If we're fair though, the Iris 6100 is really, really good. The biggest problem it has is memory speed, but the rMBP uses fast system memory, which works to its advantage, making the difference smaller than you make it out to be.
Granted, the 370X isn't top of the line. But neither was the 750M before it, or even the highest-end BTO options before the retina. AMD's 370X is, however, a really good card. And unifying all Macs under AMD GPUs (except the Intel-only machines) isn't that bad an idea, since AMD will have a lead with Vulkan, as it's built around Mantle and their GPUs were built around that platform first. Furthermore, they're still better at OpenCL than nVidia, which is a big one for Apple. If we're talking FP64, nVidia doesn't stand a chance, actually.
Remember, when I say a focus on GPU, I don't mean on MacBook Airs. I mean on the machines that have GPU work as a focus, such as high-end rMBPs and Mac Pros. And the higher-end iMacs for that matter. And especially software.
Which is why they don't offer dGPUs in all the rMBPs -- you can already buy a 15" without one. But for the pros who actually need it -- like can't work without it -- they will absolutely continue to offer dGPUs in their rMBPs. Telling pros they can either work from a station or not work at all would kill their pro line of software and hardware in one fell swoop.
And this is exactly the point I was trying to make.
Just about every game out there, 3DMark Fire Strike, Cinebench R15 OpenGL; in compute it's also faster on most tasks.
An 850M beats an M370X in all tasks, and a 950M/960M is even faster.
AMD has reduced power consumption enough to beat Kepler now, but there are no significant architecture changes that put it in the same ballpark as what Nvidia did with Maxwell.
Not OpenCL compute. DirectCompute (not on OS X) and CUDA, fair enough, but not OpenCL. And not FP64... not at all.
Now, I won't make it sound like I completely disagree with you, as I would've preferred a Maxwell card, but it's not like the AMD GPU is bad. Far from it.
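For anyone who wants to verify the FP64 point on their own hardware, here's a minimal sketch, assuming PyOpenCL is installed; it just lists each OpenCL device and whether it advertises the cl_khr_fp64 extension that double-precision compute hinges on:

```python
# List every OpenCL device and whether it advertises double precision.
# Requires the pyopencl package (pip install pyopencl).
import pyopencl as cl

for platform in cl.get_platforms():
    for device in platform.get_devices():
        fp64 = "cl_khr_fp64" in device.extensions
        print(f"{platform.name} / {device.name}: "
              f"FP64 {'supported' if fp64 else 'not supported'}")
```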
It could also be that Apple thought AMD was a better fit. At this point, we are talking 35-40W GPUs. The much faster Nvidia GPUs that often get compared are 50-70-100-120W. Not comparable at all.
Gamers are obsessed with Nvidia, so I would like to see an Nvidia option and an AMD option. For a lot of non-game benchmarks, AMD has generally been competitive or better. (At least until the Titan X ($1000) came along on the desktop.)**
**(This is a total aside, but Nvidia needs to get over its dumb idea that people should pay extra for 64-bit FP. In the engineering/scientific world, 32-bit-only FP went out with tricorn hats. 32-bit is often still used when there is a specific performance (esp. memory) requirement, but typical numerical analysis defaults to 64-bit.)
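To illustrate that footnote with a toy example (nothing GPU-specific, just NumPy): naively accumulating 0.1 a million times drifts visibly in 32-bit floats but stays effectively exact in 64-bit, which is exactly why numerical work defaults to doubles:

```python
import numpy as np

# Accumulate 0.1 one million times in both precisions, one add at a
# time, so rounding error actually gets a chance to pile up.
acc32 = np.float32(0.0)
acc64 = np.float64(0.0)
step32, step64 = np.float32(0.1), np.float64(0.1)
for _ in range(1_000_000):
    acc32 += step32
    acc64 += step64

print(acc32)  # drifts to roughly 100958 -- about 1% off 100000
print(acc64)  # ~100000.000001 -- effectively exact
```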
Uhmmm. This is a sad thing, but the R9 370X has a max TDP of 75W. Like Intel's scenario design power figures, it won't really reach that, but it's definitely not a 35W unit.
AMD has a better foot in the door with the professional market, correct. Quadros aren't half bad though. If you're comparing FirePro with Quadro, in most cases nVidia still wins, although by way less. If you compare Radeon with GeForce on pro tasks, however, Radeon comes out favourably, which is why I somewhat agree with your first quoted paragraph.
Although I must correct you: the Titan X delivers way worse FP64 performance than the older Titan. The Titan Z would be your best bet if that's what you're going for; the Titan X is completely graphics-oriented.
How can Apple not upgrade the 15" to a Broadwell chip when the 13" has better specs (newer chip, faster RAM) than the 15" flagship rMBP? The upgrade to Skylake is probably going to be a complete redesign of the MBP, but an upgrade to a Broadwell chip would make it comparable to the 13" and could be accomplished within the current form.
The 15" is already way better than the 13". The die shrink didn't make that massive a difference. The GPUs are way better in the 15" (if not Radeon, the Iris Pro eDRAM makes a huge difference. More than makes up for
As others have stated, Skylake also won't require a design change. Just a new logic board.
OMG no. No more "thinner" crap please. I want MORE POWER (especially GPU), not THINNER. Apple, please keep your thin notebooks in a crapperrific thin lineup like that ghastly new 12-inch MacBook with one connector (including power)... god-awful POS that it is. That notebook might as well be a glorified iPad crossover, and given how useless the keyboard is, they might as well have included touch screen control. No 15"+ MacBook Pro should resemble that monstrosity, IMO.
And yet Ethernet is still faster and more reliable than ANY WiFi connection in existence, and a desktop simply doesn't need WiFi. It needs speed and reliability. I can't get the theoretical limits of WiFi even in the same room as the router. The car is over 100 years old. That doesn't make it obsolete. It's called UPDATES to technology. You might as well say the transistor is obsolete since it's OLD OLD OLD. Show me something better.
Adapters for a dead technology are fine. Show me the money. You'll still need whatever replacement connectors on the new machine. That MacBook has ONE connector for EVERYTHING including power, and that is not only stupid, it's ASININE. At the very least, they should have included more USB-C connectors (a bare minimum of two to even be functional in the slightest). WTF is the point of having the thinnest, lightest, most compact notebook around if the design is stymied by having to carry around a whole fracking BAG of adapters to do anything with it? It defeats the point of portable.
My 2008 MBP was portable. It had every connector a person could want on it including a removable battery! (2 USB, 1 FW800, 1 FW400, 1 Gigabit Ethernet, full-size DVI, Audio IN & OUT on separate plugs that also did digital and a full blown expansion port that let me add USB3 to it years later that later models couldn't utilize no matter what). THAT was a GREAT design. Everything that has come since has been a compromise in one or more areas.
iPhone-sized computers? Hell, a Mac Mini isn't much larger than an iPhone already (I think an Apple TV Gen 2/3 is already smaller overall). Google has glasses you wear. We already have the iPhone for that matter (it's a computer, believe it or not). The fact is an equivalent desktop from the same time frame will ALWAYS be FASTER, often MUCH FASTER. While Joe "I don't care about computers" may be happy with just a tiny POS pocket computer, no serious computer nerd/hobbyist/enthusiast would ever find having JUST one of those acceptable. The fastest thing out there isn't fast enough and probably never will be.
Have you seen the Apple TV? It's way bigger than an iPhone! (Although it doesn't have to be). Although I agree with the rest of your last paragraph.
On the peripheral discussion:
When did you ever plug anything into that 2008 MBP? Because I know it's very rare that I plug things into mine. Same with my iMac. The only thing I miss from older models is line-in audio.
I don't understand your Wi-Fi situation. I have 60 Mbps at home and I can get all of that over 802.11 wireless, and I have tried achieving 250 Mbps on 802.11n at another location. I couldn't make use of that bandwidth with the hardware I have right now anyway (on the sending side I could saturate it, but the receiver wouldn't be able to keep up).
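For what it's worth, the unit maths behind that receiver bottleneck (a quick sketch; link rates are quoted in megabits, while disk and receiver throughput is usually quoted in megabytes):

```python
# Convert the link rates mentioned above from megabits to megabytes.
for mbps in (60, 250):
    print(f"{mbps} Mb/s = {mbps / 8:.1f} MB/s")
# 60 Mb/s = 7.5 MB/s; 250 Mb/s = 31.2 MB/s. The latter is enough that a
# slow receiving machine (e.g. an older laptop disk) can bottleneck
# before the wireless link does.
```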
And my desktops definitely need wi-fi, because I can't drag wires through my house. (as in the people I live with won't have it).
Ports I personally would need:
Card reader, audio (preferably both ways), charging, one USB (Type-A), and a Thunderbolt port or two. Actually, two Thunders: one TB2 and one TB3. Ever thought about how TB is a confusing acronym? It could also be terabyte. Anyhow, I realise others may need other ports, but for my sake you may remove every other port. Or if you give me the adapter, take the USBs from me as well and let's do it all over TB. I only plug things in like twice a month at most.
I still haven't gotten why people hate the keyboard so much. I haven't tried it yet though. Although I agree the Pros shouldn't look like it. I'd say the silver one looks alright, but for crying out loud the gold is gross.
Can't imagine I'd like space grey either, although I haven't seen it.
Well, there's more than one Broadwell GPU, and there's more than one Haswell GPU. There are obviously things you can't do, but if you just wait for a minute, I'll fix it for you...

"So can anybody tell me, would I be able to upgrade my Haswell MBP to a Broadwell GPU?"
There. Your MacBook Pro now has the Broadwell GPU. Well, almost: it's still on a 22nm process, but the GPU is essentially the same class. 40 EUs on Haswell's GT3/GT3e versus 48 on Broadwell's (GT3 and GT3e, aka Iris and Iris Pro).
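And if anyone wants to check what GPU their own Mac actually reports, a quick sketch (this just wraps the system_profiler tool that ships with OS X, so it's Mac-only):

```python
# Print the graphics/display info macOS knows about, including the GPU
# model. Relies on the built-in system_profiler command.
import subprocess

result = subprocess.run(
    ["system_profiler", "SPDisplaysDataType"],
    capture_output=True, text=True,
)
print(result.stdout)
```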