But that would mean even more MBP SKUs and inventory complexity. Soon enough "DongleGate" and "CableGate" will be in the past as everyone - user and manufacturer alike - settles on USB-C, and the transition will be less painful than it was in the TB1 and TB2 eras since the common port works for everything, not just displays. And while TB3 docks are just as expensive as TB2 docks, USB-C docks are significantly cheaper (our MBPs all use a USB-C dock for peripherals and the primary display, and they run about $50 vs. $300 for a TB2/TB3 dock).
We've now had donglegate for 2 years, and it shows no signs of slowing down. How long do you think it's going to last?
The USB-C docks are about as expensive as the USB-A docks, and offer the same level of functionality, too.
The problem with TB3 is precisely that it tries to be everything, which means it has to be a super powerful AND plentiful port. That's very expensive to make. It's awesome, but expensive, and because it's so expensive it just won't get adopted: the vast majority of consumers either can't afford it or have to live with donglegate anyway.
It's also useful for many workstation configurations, which is why workstations like the iMac Pro and Mac Pro have it, as do PC workstations like the HP Z series and the Dell Precision line.
As a software engineer working with CAD tools and scientific computing, I absolutely could not care less about ECC memory. Please give me one non-server use-case where ECC is important. Maybe if your calculations take days I suppose? But in those circumstances I'd offload it to the cloud or a supercomputer, which does use ECC memory, and is a server.
In single-core benchmarks. Once you go to multi-core benchmarks that scale, the iMac Pro (or other Xeon-equipped PC) will crush them.
Yes, it does depend on the workload a little, but even then...
An i7-8750H in a Windows laptop scores 5037 SC and 21582 MC in Geekbench, while an iMac Pro scores 5009 SC and 30541 MC - clearly higher on the MC front. However, a 2018 Aero 15X with a 6-core CPU + GTX 1070 Max-Q will export 4K video ~15% faster than the baseline iMac Pro in Premiere Pro, according to Dave2D. The fact that such a cheap laptop can even approach the iMac Pro, a super expensive desktop, really says everything that needs to be said.
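For what it's worth, here's the rough arithmetic on those quoted numbers (a quick throwaway Python sketch using only the figures from the post above):

[CODE]
# Rough comparison of the Geekbench numbers quoted above:
# i7-8750H laptop vs. base iMac Pro.
laptop_sc, laptop_mc = 5037, 21582
imacpro_sc, imacpro_mc = 5009, 30541

mc_advantage = (imacpro_mc - laptop_mc) / laptop_mc * 100
sc_difference = (imacpro_sc - laptop_sc) / laptop_sc * 100

print(f"iMac Pro multi-core advantage: {mc_advantage:.0f}%")     # ~42%
print(f"iMac Pro single-core difference: {sc_difference:.1f}%")  # ~-0.6%, a wash
[/CODE]

So the multi-core gap on those numbers is real (around 40%), while single-core is a wash - which is exactly the point about a cheap 6-core laptop getting this close.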
Well first, nobody is using an iMac Pro as a server.
The ones who are using it for software development may or may not need the ECC memory, but they do need the extra cores. iOS developer Marco Arment went from an iMac 5K to an iMac Pro and he notes that the extra cores make a tangible difference in his workflow and efficiency.
I think we agree that everyone wants more cores. The Core i9 exists for that purpose. It's significantly cheaper and just as fast.
Xeon is for you if you want ECC memory or more than 16 cores. There is no other reason to get it.
Of course, Apple offers an iMac without a Xeon chip at a significantly lower entry price point so it's not like your only choice is Xeon.
It is if I want an iMac released this year with a decently modern chipset and graphics that don't suck.
If the iMac had been upgraded to the 8000-series CPUs and Vega GPUs, I'd be a lot more lenient on Apple - but they haven't done that.
Sure, who cares when their business Excel calculation has an error? Who cares when their scientific calculations have an error? Who cares when their taxes are off? Who cares about their data being corrupted?
Correctness should be king, and it's an abysmal state of affairs that it isn't. Just like it's abysmal that APFS checksums the metadata but not the actual data. It cares more about itself than it does about your data.
The chances of your Excel calculation going wrong because of a bit-flip are basically zero. Not only does the data need to be corrupted in the first place, which is very rare, but it also needs to be corrupted in such a way that your data changes while the system and code don't and everything remains stable. The chance of this is so low I can't even tell you what it is, but we're definitely going to need scientific notation. xD
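To put a very rough number on it, here's a back-of-envelope sketch. The per-bit error rate is the big unknown - published figures differ by several orders of magnitude between studies - so the rate below is purely an illustrative assumption, not a measurement:

[CODE]
# Back-of-envelope: chance a bit-flip lands in the memory actually holding
# your spreadsheet's numbers during a short recalculation.
# The rate below is an ASSUMPTION for illustration; published per-bit error
# rates vary by orders of magnitude between studies.
errors_per_bit_per_hour = 1e-12   # assumed rate, not a measurement
spreadsheet_bytes = 10 * 1024**2  # ~10 MB of live numeric data
duration_hours = 0.1              # a six-minute recalculation

bits = spreadsheet_bytes * 8
expected_flips = errors_per_bit_per_hour * bits * duration_hours
print(f"Expected bit-flips in the data: {expected_flips:.1e}")  # ~8.4e-06
[/CODE]

Even if you move the assumed rate a couple of orders of magnitude in either direction, you're still deep in scientific-notation territory for a single short job on a desktop.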
Longer-running jobs like scientific calculations and tax calculations, both of which I have worked with, are offloaded to a server and done there, and for that you do use ECC.
The problem with this absolutist view is that it just isn't practical. ECC memory and Xeon chips are very expensive, and in a workstation they save you from an extremely, extremely small amount of pain. Doing checksums on the actual files being written is certainly a good idea, but it NEEDS a co-processor; the main CPU should not be tasked with this, as it would radically slow down the computer. I'm not against the idea, though - it's actually a pretty good one, and certainly much more useful than ECC memory for the average computer user.
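On the file-checksum point, you can already get the basic benefit in user space without any co-processor: record hashes when files are written and re-verify them later. A minimal sketch (a hypothetical standalone script - not something APFS or Apple tooling does for you):

[CODE]
# Minimal user-space bit-rot check: record SHA-256 hashes, re-verify later.
# Hypothetical example script, not part of APFS or any Apple tooling.
import hashlib, json, sys
from pathlib import Path

def hash_file(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MB chunks
            h.update(chunk)
    return h.hexdigest()

def record(root: Path, manifest: Path) -> None:
    # Hash every file under root and save the results as a JSON manifest.
    hashes = {str(p): hash_file(p) for p in root.rglob("*") if p.is_file()}
    manifest.write_text(json.dumps(hashes, indent=2))

def verify(manifest: Path) -> None:
    # Re-hash each recorded file and report any that no longer match.
    for name, expected in json.loads(manifest.read_text()).items():
        if hash_file(Path(name)) != expected:
            print(f"CORRUPTED: {name}")

if __name__ == "__main__":
    cmd, root, manifest = sys.argv[1], Path(sys.argv[2]), Path(sys.argv[3])
    record(root, manifest) if cmd == "record" else verify(manifest)
[/CODE]

Run it once with "record" to build the manifest, then periodically with "verify" to flag files whose contents have silently changed. Since hashing only happens when you ask for it, the main CPU isn't doing this on every write, which sidesteps the slowdown concern.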