Agreed! As it stands, Apple's MBPs are now two CPU generations behind.

Maybe let that new hardware VP who showcased the iMac Pro have a turn on the MacBook Pros ... and maybe have him hang out with Mansfield, the former VP of hardware engineering.

Lmao which desktops?!

The iMac and iMac Pro are the same product category or lineup, just different SKUs highlighting performance scale.

The iMac Pro is new hardware, released or shipped in early 2018; you forgot about that.

The MacBook will be the August announcement for Back to School, along with educational SKUs for the iMac and current iPad. All will get minor updates OR sell as existing hardware at a slightly discounted price. Then September or October updates.

Not that I disagree, but the iMac Pro was announced at WWDC 2017 and went on sale as a late 2017 model in December, right?
 
ECC memory is basically useful if you're running server applications, and that's it. This is also virtually the only space in which Xeon is used in the PC space, and with good reason. It's really expensive and doesn't make sense.
Sure, who cares when their business excel calculation has an error? Who cares when their scientific calculations have an error? Who cares when their taxes are off? Who cares about their data being corrupted?

Correctness should be king, and it's an abysmal state of affairs that it isn't. Just like it's abysmal that APFS checksums the metadata but not the actual data. It cares about itself more than it does about your data.
 
Not likely. The only Windows users switching are those becoming programmers for iOS, watchOS, and tvOS.

Well Mac sales keep climbing and PC sales (as an aggregate) keep falling, so maybe it's all just people new to computers migrating from smartphones and tablets (most likely running iOS).


Trust this: if Apple's Xcode SDK were freely available and fully supported on Windows, you'd see a huge drop in Mac sales and services revenue in less than 12 months, and it would continue to dwindle.

Which of course is why Xcode will never be ported to Windows. ;)
 
Just introduce an MBP 13" with a new keyboard and I'll be the first one in line. Some extra ports next to the Thunderbolt 3 ports would be great but not a must. I've been waiting for over a year to upgrade my MacBook Air to a new Pro model; if this year's introduction has no better keyboard, I'll just have to go for it.
 
But that would mean even more MBP SKUs and inventory complexity. Soon enough "DongleGate" and "CableGate" will be in the past as everyone - user and manufacturer - settles on USB-C, and the transition will be less painful than it was in the TB1 and TB2 eras since the common port works for everything and not just displays. And while TB3 docks are just as expensive as the TB2 docks, USB-C docks are significantly less (our MBPs all have a USB-C dock for peripheral and primary display connection, and they are like $50 vs. the $300 for a TB2/TB3 dock).
We've now had DongleGate for two years, and it shows no signs of slowing down. How long do you think it's going to last?

The USB-C docks are about as expensive as the USB-A docks, and offer the same level of functionality, too.

The problem with TB3 is precisely that it tries to be everything, so that means it's got to be a super powerful AND numerous port. This is very expensive to make. It's awesome, but expensive, and because it's so expensive it just won't get adopted, because the vast majority of consumers just cannot afford it OR they need to go DongleGate.

It's also useful for many workstation configurations, which is why workstations like the iMac Pro and Mac Pro have them, as do PC workstations like the HP Z-series and the Dell Precision line.
As a software engineer working with CAD tools and scientific computing, I absolutely could not care less about ECC memory. Please give me one non-server use-case where ECC is important. Maybe if your calculations take days I suppose? But in those circumstances I'd offload it to the cloud or a supercomputer, which does use ECC memory, and is a server.

In single-core benchmarks. Once you go to multi-core benchmarks that scale, the iMac Pro (or other Xeon-equipped PC) will crush them.
Yes, it does depend on the workload a little, but even then...

An i7-8750H in a Windows laptop scores 5037 SC and 21582 MC in Geekbench, while an iMac Pro scores 5009 SC and 30541 MC - clearly higher on the MC front - however, a 2018 Aero 15X with a 6-core chip + GTX 1070 Max-Q will export 4K video ~15% faster than the baseline iMac Pro in Premiere Pro, according to Dave2D. Also, the fact that such a cheap laptop can even approach the iMac Pro, a super expensive desktop, really says everything that needs to be said.

Well first, nobody is using an iMac Pro as a server.

The ones who are using it for software development may or may not need the ECC memory, but they do need the extra cores. iOS developer Marco Arment went from an iMac 5K to an iMac Pro and he notes that the extra cores make a tangible difference in his workflow and efficiency.
I think we agree that everyone wants more cores. The Core i9 exists for that purpose. It's significantly cheaper and just as fast.

Xeon is for you if you want ECC memory or more than 16 cores. There is no other reason to get it.

Of course, Apple offers an iMac without a Xeon chip at a significantly lower entry price point so it's not like your only choice is Xeon.
It is if I want an iMac released this year with a decently modern chipset and graphics that don't suck.

If the iMac had been upgraded to the 8000-series CPUs and Vega GPUs, I'd be a lot more lenient on Apple - but they haven't done that.
Sure, who cares when their business excel calculation has an error? Who cares when their scientific calculations have an error? Who cares when their taxes are off? Who cares about their data being corrupted?

Correctness should be king, and it's an abysmal state of affairs that it isn't. Just like it's abysmal that APFS checksums the metadata but not the actual data. It cares about itself more than it does about your data.
The chances of your Excel calculation going wrong because of a bit flip are basically zero. Not only does the data need to be corrupted in the first place, which is very rare, but it then also needs to be corrupted in such a way that the data is changed while the system and code are not, and everything remains stable. The chance of this is so low I can't even tell you what it is, but we're definitely going to need scientific notation. xD

Longer-running jobs like scientific calculations and tax calculations, both of which I have worked with, are offloaded to a server and done there, and for that you do use ECC.

The problem with this absolutist view is that it just isn't practical. ECC memory and Xeon chips are very expensive, and they save you from an extremely, extremely small amount of pain in a workstation. Doing checksums on the actual files being written is certainly a good idea, but it NEEDS a co-processor. The main CPU should not be tasked with this, as it would radically slow down the computer. I'm not against the idea, though - it's actually a pretty good idea. Certainly much more useful than ECC memory for the average computer user.
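
For anyone curious what checksumming the actual data would look like in software, here's a minimal Python sketch of the concept (nothing like how APFS or ZFS actually implement it, and the function names are made up) - store a checksum with the data, verify on read:

[CODE]
import zlib

def write_with_checksum(path, data):
    # Store a CRC32 of the payload next to the data itself,
    # so corruption can be detected on the next read.
    crc = zlib.crc32(data)
    with open(path, "wb") as f:
        f.write(crc.to_bytes(4, "big"))
        f.write(data)

def read_with_checksum(path):
    with open(path, "rb") as f:
        stored = int.from_bytes(f.read(4), "big")
        data = f.read()
    if zlib.crc32(data) != stored:
        raise IOError(f"checksum mismatch: {path} is corrupt")
    return data
[/CODE]

Real filesystems do this per block in the storage layer, which is exactly why the co-processor point matters.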
 
We've now had DongleGate for two years, and it shows no signs of slowing down. How long do you think it's going to last?

As long as obstinate people remain obstinate? I replaced all my USB-A to Lightning / USB-A / USB-B / MicroUSB / MiniUSB cables with USB-C versions and now everything works with USB-C just as it did with USB-A. Cost me literally peanuts via Monoprice and Amazon Basics.

Ironically, the only dongle I have is the Apple Thunderbolt 2 to 3 adapter to plug my TB2 Drobo 5D in because nobody makes a TB2 to TB3 cable.


The problem with TB3 is precisely that it tries to be everything, so that means it's got to be a super powerful AND numerous port. This is very expensive to make. It's awesome, but expensive, and because it's so expensive it just won't get adopted, because the vast majority of consumers just cannot afford it OR they need to go DongleGate.

They need to go to DongleGate with USB-C, too. My company is a mix of Dells (all with USB-C in addition to USB-A) and MacBook Pro 13s and 15s. Some of the Dells (the 7300 series) only have one USB-A and two USB-C ports (and recharge via USB-C like the MBPs) and no wired Ethernet plus only micro-HDMI for video output so they need docks or dongles just like the MBPs.

And you don't have to use expensive Thunderbolt peripherals. You can use vastly cheaper USB-C peripherals in the same port. So while you are paying more for the port, you do not have to pay more for what you plug into the port unless you want the performance / flexibility boost TB offers over USB.


An i7-8750H in a Windows laptop scores 5037 SC and 21582 MC in Geekbench, while an iMac Pro scores 5009 SC and 30541 MC - clearly higher on the MC front - however, a 2018 Aero 15X with a 6-core chip + GTX 1070 Max-Q will export 4K video ~15% faster than the baseline iMac Pro in Premiere Pro, according to Dave2D. Also, the fact that such a cheap laptop can even approach the iMac Pro, a super expensive desktop, really says everything that needs to be said.

Premiere Pro leverages Nvidia CUDA, so the GTX 1070 is why it can export 4K video so quickly, not the i7-8750H. Pair an i7-8750H with an AMD Vega 56/64 and the performance should drop a fair bit, I would imagine.

Of course, the iMac Pro isn't really aimed at Premiere Pro for video editing as much as it is Final Cut Pro, which can leverage AMD's GPU architecture since it's optimized for it.
 
Good for you, honestly. Yet why are you still on a Mac forum, replying to a thread of people hoping for new Mac hardware? If you've moved on to another desktop platform, by all means use it, join a related forum, and enjoy. Why show off another platform and prance around as if you're better? Surely your ego doesn't need this low level of gratification or petting, right? Not trying to be rude, but it's not helpful to this thread. Battery life is relative to capacity, the applications in use, CPU thread use, and how the core OS manages power delivery on demand.

So if anyone says that something is better than a Mac (or even hopes the features of that other thing come to a Mac), they have to immediately leave the forum and never mention it to other users? :D Talk about a low level of gratification.
 
As a software engineer working with CAD tools and scientific computing, I absolutely could not care less about ECC memory. Please give me one non-server use-case where ECC is important. Maybe if your calculations take days I suppose?

Example: You're a PhD student writing your dissertation. You've spent hundreds of hours editing it in Word. During this time, the text has of course been sitting in RAM. Unbeknownst to you, a single bit error flips a "<" into a ">". This dramatically changes the meaning of your dissertation. Nobody catches the error - they just assume you're arguing the opposite case. Only years later is the erroneous assertion disproven.

ECC memory shouldn't cost a lot - just 12.5% more for a "9th" RAM chip. AMD includes ECC functionality on every Ryzen CPU. It is just Intel that makes it available only as a premium feature.
 
Example: You're a PhD student writing your dissertation. You've spent hundreds of hours editing it in Word. During this time, the text has of course been sitting in RAM. Unbeknownst to you, a single bit error flips a "<" into a ">". This dramatically changes the meaning of your dissertation. Nobody catches the error - they just assume you're arguing the opposite case. Only years later is the erroneous assertion disproven.

ECC memory shouldn't cost a lot - just 12.5% more for a "9th" RAM chip. AMD includes ECC functionality on every Ryzen CPU. It is just Intel that makes it available only as a premium feature.
Any self-respecting PhD student should be writing their thesis in LaTeX. :p

The chances of a random bit flip resulting in this happening are well beyond negligible. Add to this, there would be software-level checks to make sure this would never happen. You can do what hardware ECC does in software; it's just going to be imperceptibly slower for this kind of stuff.
ECC memory shouldn't cost a lot - just 12.5% more for a "9th" RAM chip. AMD includes ECC functionality on every Ryzen CPU. It is just Intel that makes it available only as a premium feature.
Also, I don't think you can do this with just a 9th RAM chip. If you find an error, which bit in the incorrect byte is wrong? I can see it working with two extra modules; that way you can checksum both horizontally and vertically across blocks of 8 bytes. (Assuming only one error per 8-byte block.)

Of course, I could just wiki how ECC works, but I'm too lazy.
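
For what it's worth, locating (not just detecting) the flipped bit is exactly what a Hamming code buys you. Here's a toy single-error-correcting sketch in Python over one byte - purely illustrative; real ECC DIMMs do the equivalent in hardware across 64-bit words:

[CODE]
def hamming_encode(byte):
    # Spread 8 data bits over a 12-bit codeword with check bits at
    # the power-of-two positions (1, 2, 4, 8).
    code = [0] * 13                                      # 1-indexed positions 1..12
    data_pos = [p for p in range(1, 13) if p & (p - 1)]  # non-powers of two
    for p, i in zip(data_pos, range(8)):
        code[p] = (byte >> i) & 1
    for p in (1, 2, 4, 8):
        # Check bit p makes the parity over all positions containing p even.
        code[p] = sum(code[i] for i in range(1, 13) if i & p) % 2
    return sum(code[p] << (p - 1) for p in range(1, 13))

def hamming_correct(word):
    # XOR together the positions of all set bits; a nonzero result is
    # the exact position of a single flipped bit.
    syndrome = 0
    for p in range(1, 13):
        if (word >> (p - 1)) & 1:
            syndrome ^= p
    if syndrome:
        word ^= 1 << (syndrome - 1)
    return word

cw = hamming_encode(0b10110010)
assert hamming_correct(cw ^ (1 << 5)) == cw   # flip one bit, then fix it
[/CODE]

So a single extra parity bit per byte only gets you detection; correction needs enough check bits to encode a position, which is why real DIMMs protect a whole 64-bit word at once instead.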
 
Premiere Pro leverages Nvidia CUDA, so the GTX 1070 is why it can export 4K video so quickly, not the i7-8750H.

Exactly.

And not only does Premiere Pro utilize CUDA... the April 2018 update to Premiere Pro added the capability of using the unused integrated Intel GPU that is built into the i7-8750H.

So you can use the processor's 6 cores and 12 threads... the Intel iGPU... and the Nvidia GTX 1070.

Windows video editing in Premiere just got a lot more interesting! :)

Hell... even on my 4th-gen Haswell desktop... this Adobe update drastically improved my Premiere Pro experience. Whereas simply scrubbing the timeline used to make my computer stutter and my fans spin up... now it's buttery smooth. And H.264 MP4 exports can be up to 30% faster depending on the file. And this is an "old" processor!

Imagine what a modern processor will do!

For once... an Adobe update made a real difference!
 
Then by definition that should be a MacBook. A MacBook Pro should be aimed at power users and therefore not be restrained (so severely) by dimensions and weight.

IMHO as an outsider, I don't think they have thickness/weight requirements other than "not significantly worse than the previous generation". They chose parts based on power requirements (including runtime of 10h for certain tasks), then made the smallest case those parts fit in.

That said, the rumors were that they were originally shooting for a 100Wh battery in the current 15 by using the MacBook battery tech, and for some reason pulled it late in the design process. I've been curious if this refresh also boosts the total battery size (even if that increases weight).

I also have personally speculated that they want to move more functionality into the ARM chip for "Power Nap", so that they can hibernate the machine to allow them to use the more power-hungry RAM tech Intel currently mandates for hitting 32 GB. However, even if that is a feature they are planning to roll out this refresh, I don't think that is why they are a little behind their annual refresh target. Instead, I think that is related to controlling messaging for the keyboard replacement program. If that's the case, these machines are probably ready to go any Tuesday now.
 
IMHO as an outsider, I don't think they have thickness/weight requirements other than "not significantly worse than the previous generation". They chose parts based on power requirements (including runtime of 10h for certain tasks), then made the smallest case those parts fit in.

I think you're right; however, it's not as simple as 'making the smallest case that fits'. If the case were bigger, there would be more room for better cooling, which would mean even the same internal hardware would perform better because of Turbo Boost. So even the choice to make the case 'as small as possible' is a choice to not have the computer run as fast as it could, or last as long as it could (heat degradation).

That said, the rumors were that they were originally shooting for a 100Wh battery in the current 15 by using the MacBook battery tech, and for some reason pulled it late in the design process. I've been curious if this refresh also boosts the total battery size (even if that increases weight).

Darn, I hope so. The battery sizes in the modern MBPs are pretty pitiful compared to what they used to be in Wh terms, even ignoring the weight benefits that have come from advancing technology.

I also have personally speculated that they want to move more functionality into the ARM chip for "Power Nap", so that they can hibernate the machine to allow them to use the more power-hungry RAM tech Intel currently mandates for hitting 32 GB. However, even if that is a feature they are planning to roll out this refresh, I don't think that is why they are a little behind their annual refresh target. Instead, I think that is related to controlling messaging for the keyboard replacement program. If that's the case, these machines are probably ready to go any Tuesday now.

Although I'm a fan of using hibernate to avoid the high standby power draw of DDR4 RAM, I'm not sure Apple would be willing to go this way because of the time taken to wake from sleep. Maybe if Apple can put in an even faster SSD somehow (although it's already very fast); loading a 32 GB RAM image back into RAM would take something like 10 seconds. Of course, that should only happen if the RAM is filled to the brim with open documents and the MBP has been left sleeping off battery power for days, so not a common occurrence.

I hope they do this though.
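
Back-of-envelope on that wake time (the SSD throughput here is my assumption, just for scale):

[CODE]
ram_gb = 32
ssd_read_gb_per_s = 3.0   # assumed sustained read speed of a fast NVMe SSD
print(f"~{ram_gb / ssd_read_gb_per_s:.0f} s to reload RAM")   # ~11 s
[/CODE]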
 
The chances of a random bit flip resulting in this happening are well beyond negligible.

Way back in my Mac 128K/512K/Plus days, I believe I saw two random memory flips. In the first, the screen should have been all black, yet one white pixel appeared. In the second, the program had zeroed out a block of memory and yet there was a bit set in the block. A random memory error, or a software bug that poked a random memory location?

Also, I don't think you can do this with just a 9th RAM chip. If you find an error, which bit in the incorrect byte is wrong? I can see it working with two extra modules; that way you can checksum both horizontally and vertically across blocks of 8 bytes. (Assuming only one error per 8-byte block.)

To fix single-bit errors and detect double-bit errors in a 16-bit word requires 6 extra bits. For a 32-bit word, the same requires 7 extra bits. For a 64-bit word, 8 bits are needed. This is why one can readily buy 72-bit-wide DIMMs.

(I'm wondering when ECC correction will be built into DRAM chips: The extra bits required to ECC a 1024 bit row are negligible. The only real issue is speed - the required XOR tree is very deep.)
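
Those counts fall straight out of the Hamming bound: you need the smallest r with 2^r >= m + r + 1 for single-error correction, plus one overall parity bit for double-error detection. A quick check of the arithmetic in Python:

[CODE]
def secded_check_bits(m):
    # Smallest r with 2**r >= m + r + 1 corrects single-bit errors;
    # the final +1 is the overall parity bit that detects double-bit errors.
    r = 1
    while 2 ** r < m + r + 1:
        r += 1
    return r + 1

for m in (16, 32, 64):
    print(m, secded_check_bits(m))   # 16 -> 6, 32 -> 7, 64 -> 8
[/CODE]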
 
As long as obstinate people remain obstinate? I replaced all my USB-A to Lightning / USB-A / USB-B / MicroUSB / MiniUSB cables with USB-C versions and now everything works with USB-C just as it did with USB-A. Cost me literally peanuts via Monoprice and Amazon Basics.

Ironically, the only dongle I have is the Apple Thunderbolt 2 to 3 adapter to plug my TB2 Drobo 5D in because nobody makes a TB2 to TB3 cable.
Or you could just not do that and use the cables that come bundled for free with the phone or accessory?

But you see, you're evading the point. You paid an $800 premium for those ports, and then you paid for all the cables as well. Those who don't want to pay such a colossal premium don't get to use TB3 or even USB-C. And the thing is - most people don't.

In fairness, you can just use regular USB 3.1 in the USB-C form factor, but those ports are really, really rare on desktops and laptops.

They need to go to DongleGate with USB-C, too. My company is a mix of Dells (all with USB-C in addition to USB-A) and MacBook Pro 13s and 15s. Some of the Dells (the 7300 series) only have one USB-A and two USB-C ports (and recharge via USB-C like the MBPs) and no wired Ethernet plus only micro-HDMI for video output so they need docks or dongles just like the MBPs.

And you don't have to use expensive Thunderbolt peripherals. You can use vastly cheaper USB-C peripherals in the same port. So while you are paying more for the port, you do not have to pay more for what you plug into the port unless you want the performance / flexibility boost TB offers over USB.
Correct, you can plug cheap stuff into your expensive port - but if that's what you're going to do, why get the expensive port?

My point is TB3 needs to be about the same price as USB 3.1 before it receives any adoption whatsoever. Right now it's used for things that nothing else can do, such as eGPUs, and that's it.

Intel also needs to allow AMD and ARM and others to use it. Before that happens it'll never be a standard.

And you may very well sell a lot of DongleGate Dells - I wouldn't know about those; I'm not interested.

Premiere Pro leverages Nvidia CUDA, so the GTX 1070 is why it can export 4K video so quickly, not the i7-8750H. Pair an i7-8750H with an AMD Vega 56/64 and the performance should drop a fair bit, I would imagine.

Of course, the iMac Pro isn't really aimed at Premiere Pro for video editing as much as it is Final Cut Pro, which can leverage AMD's GPU architecture since it's optimized for it.
Well, it'll drop by ~25% at most. If you genuinely believe that a desktop costing over twice as much is worth it for a 20% improvement, you're an idiot. I know that's very harsh, but it's the reality. There are much better options for you.

If you do it because you prefer Final Cut Pro and MacOS, however, you're certainly not an idiot. I can respect that, 100%. But think about how much better Final Cut Pro and MacOS could be if Apple actually made decent computers again. (Speaking strictly about the hardware side here)

I suppose there's a reason the Hackintosh community has grown to such an enormous size.

Example: You're a PhD student writing your dissertation. You've spent hundreds of hours editing it in Word. During this time, the text has of course been sitting in RAM. Unbeknownst to you, a single bit error flips a "<" into a ">". This dramatically changes the meaning of your dissertation. Nobody catches the error - they just assume you're arguing the opposite case. Only years later is the erroneous assertion disproven.

ECC memory shouldn't cost a lot - just 12.5% more for a "9th" RAM chip. AMD includes ECC functionality on every Ryzen CPU. It is just Intel that makes it available only as a premium feature.
This example is utterly ridiculous. The chance of a bit flip happening at all is extremely small. The chance of it hitting your dissertation is dozens of orders of magnitude smaller. The chance of you not catching it during proofreading, and then also having the logic flipped in your head when you have to defend it, instead of just being able to say "wait, wtf? How did that get in there?", is zero. Your example, or anything like it, has LITERALLY NEVER EVER HAPPENED. It's about as likely as you spontaneously teleporting outside your window and falling down because of quantum mechanics - it can happen, but the possibility is so remote it'd be stupid to consider it.

And by the way, ECC can fail too. The chance is just lower.
 
My point is TB3 needs to be about the same price as USB 3.1 before it receives any adoption whatsoever. Right now it's used for things that nothing else can do, such as eGPUs, and that's it.

Intel also needs to allow AMD and ARM and others to use it. Before that happens it'll never be a standard.

Well, Intel has made the license open, so if AMD and ARM want to adopt it, they now can.
 
Way back in my Mac 128K/512K/Plus days, I believe I saw two random memory flips. In the first, the screen should have been all black, yet one white pixel appeared. In the second, the program had zeroed out a block of memory and yet there was a bit set in the block. A random memory error, or a software bug that poked a random memory location?

I think most bit flips are caused by cosmic rays (apart from actual hardware faults, and I'm sure there are other ways too). Some quick googling suggested that bit flip errors are on the order of 3 per year per memory stick (when used 24/7, all year round). So in your old computer with 1 million bits of RAM, each bit is physically large and easy for a speeding helium nucleus to hit. In a modern computer, where we have RAM sticks with 128 billion bits, the chances of a cosmic ray hitting a bit that actually matters are orders of magnitude lower. There's also a lot more spare RAM for software-based checksums.
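
Back-of-envelope with those rough figures (every number here is an assumption; this is only about scale):

[CODE]
flips_per_stick_per_year = 3      # the googled figure above
stick_bits = 128e9                # one 16 GB stick
doc_bits = 8e6                    # a ~1 MB document sitting in RAM
expected = flips_per_stick_per_year * doc_bits / stick_bits
print(f"~{expected:.1e} flips per year landing in the document")   # ~1.9e-04
[/CODE]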

To fix single-bit errors and detect double-bit errors in a 16-bit word requires 6 extra bits. For a 32-bit word, the same requires 7 extra bits. For a 64-bit word, 8 bits are needed. This is why one can readily buy 72-bit-wide DIMMs.
Can you explain how this works? I'm not convinced these numbers are right.
 
So in your old computer with 1 million bits of RAM, each bit is physically large and easy for a speeding helium nucleus to hit. In a modern computer, where we have RAM sticks with 128 billion bits, the chances of a cosmic ray hitting a bit that actually matters are orders of magnitude lower.

At the same time, those 128 billion bits are each much smaller and easier to flip.

With recent macOS releases using every spare bit of RAM to cache files, the chance that there's something important in any particular RAM location has gone way up.

Can you explain how this works? I'm not convinced these numbers are right.

Time to go down to the dungeon, er, archives... (creak)(cough)(cough)(grrr)(ouch!)(git!)(poke) 1981 Texas Instruments TTL Data Book (slam)(whew). The 74LS630 is an Error Detection And Correction chip for 16 bit memory. It uses 6 check bits. Each is generated by computing the parity of a carefully selected subset of 8 data bits. Each of the 16 data bits is used in three check bits.

To verify memory read back data, newly computed check bits are compared with the retrieved bits. If three don't match, then one data bit has flipped (the inverted bit affects three check bits). If one doesn't match, then that one check bit was flipped. Other mismatches mean a more complex error.

For today's 64 bit memory, you'd take the parity of 8 subsets of 32 bits, with each data bit appearing in 4 parity trees.

To be convinced that 8 bits is enough to protect 64: how many bits wide is a number selecting one of the 72 bits? The answer is 7. Then add an extra bit so that two-bit errors can be detected.
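
Here's a toy Python version of that scheme. The coverage sets are made up (I don't know the 74LS630's actual ones), but the mechanism is the same: each data bit sits in exactly three of six parity subsets, so a flipped data bit disturbs exactly three check bits, and the pattern of disturbed check bits identifies it:

[CODE]
from itertools import combinations

# Give each of the 16 data bits a distinct 3-of-6 subset of check bits
# (C(6,3) = 20 >= 16, so every data bit gets a unique pattern).
COVER = list(combinations(range(6), 3))[:16]

def make_checks(data_bits):
    checks = [0] * 6
    for i, bit in enumerate(data_bits):
        for c in COVER[i]:
            checks[c] ^= bit
    return checks

def locate_error(data_bits, stored_checks):
    # Recompute the checks and compare with the stored ones: the set of
    # mismatched check bits is the syndrome. Three mismatches point at a
    # data bit; one mismatch means a check bit itself flipped.
    recomputed = make_checks(data_bits)
    syndrome = tuple(c for c in range(6) if recomputed[c] != stored_checks[c])
    if len(syndrome) == 3:
        return COVER.index(syndrome)   # index of the flipped data bit
    return None                        # clean, or the error is in a check bit

data = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
checks = make_checks(data)
data[9] ^= 1                           # flip data bit 9 "in memory"
assert locate_error(data, checks) == 9
[/CODE]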
 
But I guess that's not all the filed numbers, is it? And BTW, how many filed numbers are still left? I mean, do the new releases today count as 4 models, as 3 models, or as 2 models? If the new MBPs are 2 models, then we have a real surprise here, as everybody thought that the 3 last numbers were MBPs, but in this case it would be the opposite: the 2 first numbers were MBPs, and we would have 3 new Macs awaiting their release...
 
But I guess that's not all the filed numbers, is it? And BTW, how many filed numbers are still left? I mean, do the new releases today count as 4 models, as 3 models, or as 2 models? If the new MBPs are 2 models, then we have a real surprise here, as everybody thought that the 3 last numbers were MBPs, but in this case it would be the opposite: the 2 first numbers were MBPs, and we would have 3 new Macs awaiting their release...

Well, we do have Kuo's remarks about a refreshed Mac mini and a new MacBook Air / larger MacBook on the way.

Which I guess are also going to drop via a mid-week PR like the new MBPs did this morning, because why have an event for them and not include the new MBPs?
 