I took a glance at the thread and the echoes you mention. Do you mean the people who have absolutely no clue what a workstation is, what a Mac Pro is, or what the HP Z-series is? Like this guy in the thread who posted this comment:

"X299 motherboard with Thunderbolt $500
i7-7820X (8 core, 3.6GHz, 4.3GHz boost, faster than 8 core in Mac Pro) $600
32GB RAM $250
256GB NVME SSD $80
Radeon VII $680
PSU $200
Case $100
CPU cooler $150
Additional cooling $100
Keyboard $100
Mouse $100

TOTAL = $2,860

Another example:

Z390 motherboard with Thunderbolt $300
i7-9700K (8-core, 3.6GHz, 4.9GHz boost, faster than 8 core in Mac Pro) $400
32GB RAM $200
256GB NVME SSD $80
Radeon VII $680
PSU $100
Case $80
CPU cooler $100
Additional cooling $60
Keyboard $50
Mouse $50

TOTAL = $2,100"
 
ECC is great for scientific users... but... this product isn’t priced for them.

You'd think there would be a creative field - the kind they always market these machines to - where this stuff would really matter and people would refuse to work without it.
 
ECC is great for scientific users... but... this product isn’t priced for them.

You'd think there would be a creative field - the kind they always market these machines to - where this stuff would really matter and people would refuse to work without it.

I read a short article about that somewhere. It said that ECC is incredibly important and effective for protecting long video renders from getting really botched by an error creeping in towards the end of a long, laborious job. Apparently it can cause great work to be ruined if it’s not present. I’ll look to see if I can find that article. I can’t attest to this myself as I don’t have any workflows to compare. So apparently it’s a good insurance policy for some creative workflows.
 
I read a short article about that somewhere. It said that ECC is incredibly important and effective for protecting long video renders from getting really botched by an error creeping in towards the end of a long, laborious job. Apparently it can cause great work to be ruined. I’ll look to see if I can find that article. I can’t attest to this myself as I don’t have any workflows to compare. So apparently it’s a good insurance policy for some workflows.

Ah, that’s interesting. I wonder what error it would be? I can’t imagine a bit error could screw things up that badly. Most of the media creation pros I know don’t bother with ECC, but if this is true, I’d love to share it with them.

One thing I can see is high data transfer scenarios, where ECC could protect you; however, it’s more a case of hard errors versus soft errors, with the former being prevalent, I’d think. An erroneous bit isn’t likely to be noticed, but it is likely to be corrected anyway.
 
Ah, that’s interesting. I wonder what error it would be? I can’t imagine a bit error could screw things up that badly. Most of the media creation pros I know don’t bother with ECC, but if this is true, I’d love to share it with them.
For audio/video, it's unlikely that a single bit error in the audio/video streams will be noticeable. There's a lot of meta-data in the container, however, which can be a disaster if it is corrupted. There's also a lot of volatile state in the encoders and players that can cause problems - even if the flipped bit is never written to persistent store.
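To make that concrete, here's a toy sketch in Python (my own illustration, not real MP4 code): in a simplified MP4-style container, each box's size field chains to the next box, so one flipped bit in that metadata can make everything after it unreadable, whereas the same bit flipped in the payload is just one wrong sample.

```python
import struct

# Toy illustration only: a simplified MP4-style layout where each "box" is a
# 4-byte big-endian size, a 4-byte type, then its payload.
def parse_boxes(data):
    offset, boxes = 0, []
    while offset + 8 <= len(data):
        size, box_type = struct.unpack_from(">I4s", data, offset)
        if size < 8 or offset + size > len(data):
            raise ValueError(f"corrupt box at offset {offset}: size={size}")
        boxes.append((box_type.decode(), size))
        offset += size
    return boxes

good = struct.pack(">I4s", 8, b"ftyp") + struct.pack(">I4s", 16, b"mdat") + b"\x00" * 8
print(parse_boxes(good))            # [('ftyp', 8), ('mdat', 16)]

bad = bytearray(good)
bad[0] ^= 0x80                      # flip a single bit in the first size field
try:
    parse_boxes(bytes(bad))
except ValueError as err:
    print("after one flipped bit:", err)   # the whole container is unreadable
```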

I have ECC on my home PC for a different reason - I never want to wonder if an app or system crash is due to a memory error. I want it to either work, or crash with a blue-screen that says "uncorrectable memory error". No bizarre events, just good or dead.
 
Ah, that’s interesting. I wonder what error it would be? I can’t imagine a bit error could screw things up that badly.

I really don’t know; I just skimmed through it while looking for something else. I’ll see if I can find it later. Like anything else on the internet, it may be incorrect, but it sounded credible.
 
Also don't a lot of scientific users want NVIDIA for CUDA?

Depends on what they’re doing. Some newer applications can run on GPUs, but a lot of things can still only run on one CPU thread (looking at you, Unified Model). ML and data science can benefit from GPUs, or at least newer tools can. ETL can’t.
 
It said that ECC is incredibly important and effective for protecting long video renders from getting really botched by an error creeping in towards the end of a long, laborious job. Apparently it can cause great work to be ruined if it’s not present. I’ll look to see if I can find that article. I can’t attest to this myself as I don’t have any workflows to compare. So apparently it’s a good insurance policy for some creative workflows.
It's not about "errors" creeping in as much as just losing the time if the computer crashes due to a memory error. This goes for any workflow where computations take place over an extended period of time (or where 24x7 uptime is required). The longer the run, the more valuable it becomes. But unless you have workflows regularly extending over 24 continuous hours or more, it's not particularly useful. (It should also be noted that location can play a part: high altitudes are more susceptible to memory errors due to increased background radiation.)
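To put rough numbers on the "longer job, bigger exposure" point, here's a back-of-envelope sketch in Python. The error rate is a made-up, purely illustrative figure (real DRAM soft-error rates vary enormously between studies, hardware, and altitude); it's the scaling with job length that matters, not the absolute numbers.

```python
import math

errors_per_gb_hour = 1e-4     # ASSUMED illustrative rate, not a measured figure
ram_gb = 32                   # amount of RAM in use

for job_hours in (1, 8, 24, 72):
    expected = errors_per_gb_hour * ram_gb * job_hours
    p_any = 1 - math.exp(-expected)      # Poisson: P(at least one error)
    print(f"{job_hours:3d} h job: ~{p_any:.2%} chance of at least one memory error")
```

The longer the run, the more likely a single error (or the crash it triggers) costs you the whole job, which is exactly the insurance argument.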

The idea that a bit will flip and change an Excel spreadsheet or the rendered output, etc. is, practically speaking, nonsense.
 
Lots of people want CUDA... They're on Windows or Linux now.

I’m not exactly in the know, but my impression is that Metal is supposed to be Apple’s alternative to CUDA? It’s really a matter of developers adopting it. Judging from Apple’s PR blurbs, it sounds like most of them are on board.

If AMD GPUs are equally powerful, Metal is just as efficient, and developers adopt it, what’s the difference?
 
I’m not exactly in the know, but my impression is that Metal is supposed to be Apple’s alternative to CUDA? It’s really a matter of developers adopting it. Judging from Apple’s PR blurbs, it sounds like most of them are on board.

If AMD GPUs are equally powerful, Metal is just as efficient, and developers adopt it, what’s the difference?
There's just not much incentive for developers to adopt it (for that type of software) when the user base is almost non-existent on the Mac. It's a chicken or egg kind of problem.
 
I’m not exactly in the know, but my impression is that Metal is supposed to be Apple’s alternative to CUDA? It’s really a matter of developers adopting it. Judging from Apple’s PR blurbs, it sounds like most of them are on board.

If AMD GPUs are equally powerful, Metal is just as efficient, and developers adopt it, what’s the difference?
The application libraries use CUDA, not Metal. There's a huge ecosystem built around CUDA libraries - and none of that runs on today's Apples.
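As a small illustration (my example, not something from the thread): a typical GPU compute library such as PyTorch exposes its acceleration through a CUDA device, so on a Mac with a Radeon card the code below just falls back to the CPU. Metal could in principle fill the same role, but only if each library grows a Metal backend, which is the adoption problem being described.

```python
import torch

# Illustrative only: much of the GPU compute ecosystem (PyTorch shown here)
# is built around CUDA, which requires an NVIDIA GPU and driver stack.
device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(4096, 4096, device=device)
y = x @ x                        # runs on the NVIDIA GPU if CUDA is present, else CPU
print(f"ran on: {y.device}")     # on a Radeon-equipped Mac this prints 'cpu'
```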
 
It's not about "errors" creeping in as much as just losing the time if the computer crashes due to a memory error. This goes for any workflow where computations take place over an extended period of time (or where 24x7 uptime is required). The longer the run, the more valuable it becomes. But unless you have workflows regularly extending over 24 continuous hours or more, it's not particularly useful. (It should also be noted that location can play a part: high altitudes are more susceptible to memory errors due to increased background radiation.)

The idea that a bit will flip and change an Excel spreadsheet or the rendered output, etc. is, practically speaking, nonsense.

The chance of a hard error happening which would cause data loss is minuscule these days. Theoretically, ECC would prevent it, but it doesn’t prevent most of the newer cases.

Soft errors are much more common (ask Google’s server hardware design teams).

That said, I am guessing most pros don’t process a whole file in one go, but do it in sections?
 
The chance of a hard error happening which would cause data loss is minuscule these days. Theoretically, ECC would prevent it, but it doesn’t prevent most of the newer cases.

Soft errors are much more common (ask Google’s server hardware design teams).

That said, I am guessing most pros don’t process a whole file in one go, but do it in sections?
My server error logs would disagree with you. (I run about 100 servers with around 60 TiB of RAM.) I read a Sun service advisory that said to ignore ECC corrected errors if they occur less than once per hour (24 per day). I have one 72-core server with 1.5 TiB of RAM that's getting a few thousand per day. I've scheduled a maintenance window to shut it down, disable ECC in the BIOS, and run memory tests to find the bad DIMM(s).
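For anyone curious how those corrected-error counts get gathered, here's a minimal sketch for Linux boxes. It assumes the kernel's EDAC driver is loaded, which exposes per-memory-controller counters in sysfs; other OSes and BMC tooling report the same information differently.

```python
from pathlib import Path

EDAC_ROOT = Path("/sys/devices/system/edac/mc")   # Linux EDAC sysfs layout

def read_ecc_counters():
    """Return (corrected, uncorrected) ECC error counts per memory controller."""
    counters = {}
    for mc in sorted(EDAC_ROOT.glob("mc[0-9]*")):
        ce = int((mc / "ce_count").read_text())   # corrected errors
        ue = int((mc / "ue_count").read_text())   # uncorrected errors
        counters[mc.name] = (ce, ue)
    return counters

for mc, (ce, ue) in read_ecc_counters().items():
    print(f"{mc}: {ce} corrected, {ue} uncorrected")
```

Watching how fast those counters climb per day is essentially the heuristic in that Sun advisory: a trickle is normal, thousands per day points at a bad DIMM.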

And ECC errors can be soft errors ("soft" meaning that sometimes accessing a particular section of RAM generates an error, where "hard" means that you get an error every time you access that section).

And what does "a file" have to do with RAM?
 
My server error logs would disagree with you. (I run about 100 servers with around 60 TiB of RAM.) I read a Sun service advisory that said to ignore ECC corrected errors if they occur less than once per hour (24 per day). I have one 72-core server with 1.5 TiB of RAM that's getting a few thousand per day. I've scheduled a maintenance window to shut it down, disable ECC in the BIOS, and run memory tests to find the bad DIMM(s).

And ECC errors can be soft errors ("soft" meaning that sometimes accessing a particular section of RAM generates an error, where "hard" means that you get an error every time you access that section).

And what does "a file" have to do with RAM?

I haven’t had any ECC issues before, so I can’t say much on it.

By file, I meant they’re not processing a whole .ts file in one go, but rather in chunks before stitching it back up?
 
And ECC errors can be soft errors ("soft" meaning that sometimes accessing a particular section of RAM generates an error, where "hard" means that you get an error every time you access that section).
Note that ECC errors are logged, but not visible to applications. A "soft" ECC error means that some references to the section add an "error corrected" entry to the log. "Hard" ECC errors mean that every reference adds an "error corrected" entry to the log.
 