
benoitc (original poster)
What can we expect from each card? Are there any tests that would show the differences for machine learning development?
 
You will want to look at sustained tests within the unit itself. Burst tests will probably be impressive, but I would hold off buying until people have owned these machines long enough to report on how much they throttle under heavy load.
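Not from the thread, but here is a minimal sketch of the kind of sustained test being suggested: run a fixed ML-style workload for several minutes and compare early vs. late throughput; a large gap suggests thermal throttling. The matrix size and duration are arbitrary assumptions, and only NumPy is required.

```python
# Minimal sustained-vs-burst probe (assumes NumPy; sizes/duration
# are arbitrary). If late iterations run much slower than early
# ones, the machine is throttling under sustained load.
import time
import numpy as np

a = np.random.rand(2048, 2048).astype(np.float32)
b = np.random.rand(2048, 2048).astype(np.float32)

times = []
deadline = time.time() + 10 * 60          # run for ~10 minutes
while time.time() < deadline:
    t0 = time.time()
    _ = a @ b                              # compute-heavy step
    times.append(time.time() - t0)

n = max(1, len(times) // 10)
burst = sum(times[:n]) / n                 # early "burst" speed
sustained = sum(times[-n:]) / n            # late sustained speed
print(f"burst {burst:.4f}s/iter, sustained {sustained:.4f}s/iter, "
      f"slowdown {sustained / burst:.2f}x")
```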
 
Right, I kind of agree about usage under hard conditions, though they say they have made a lot of progress in heat management. Now I'm also wondering what the extra RAM buys. Maybe it's better in the long term to go with the 64, not sure yet :)
 
One thing to keep in mind is that the 64 might run substantially hotter, eating into the available thermal headroom for the CPU.

I remember Anandtech's notes about the top-end Vega running very hot, whereas the 56 part did much better. Apparently there is a threshold where the chip's power draw just takes off. Both are clocked lower in the iMac Pro, so they may both operate under this nasty threshold, but we don't know yet.
Here is the Anandtech review I was talking about:

https://www.anandtech.com/show/11717/the-amd-radeon-rx-vega-64-and-56-review/20
 
Is it just me, or is the $600 premium for upgrading from Vega 56 to Vega 64 over the top? Apple is making a lot of money on those upgrade options...
 
Hard to say until consumers test them in hand. Most Mac desktop GPUs are downclocked from the regular consumer versions.
Is it just me, or is the $600 premium for upgrading from Vega 56 to Vega 64 over the top? Apple is making a lot of money on those upgrade options...
You're absolutely right. I'm thinking about reordering with the Vega 56 and grabbing an external GPU enclosure with a GTX 1080 Ti, or waiting until the 2080 comes in early 2018. I'm also not liking what I'm hearing: these GPUs are downclocked versions of the consumer ones, and the power draw for the Vega 56 and 64 is 210W and 295W TDP, respectively.
 
Hi! I'm about to order it! :) But I'm still torn between the 56 and the 64 for FCPX. I looked at some benchmarks here; I don't know if they're reliable, but the €720 option seems expensive for only ~20% more, no? What do you think? Thanks for your help.
http://gpu.userbenchmark.com/Compare/AMD-Vega-Frontier-Edition-vs-AMD-RX-Vega-56/3929vs3938
What may be interesting in that upgrade is the RAM available. More RAM means more rendering or compute data can be kept on the card, next to the cores, with fast bandwidth.

On the other hand, 8GB is enough for most tasks today and probably for the next two years. Also, if you need more, you will still have the option to plug in an eGPU just for rendering or compute...
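To put rough numbers on that (my arithmetic, not benoitc's): a quick estimate of how many uncompressed UHD working frames fit in 8GB versus 16GB of VRAM, assuming 16-bit-per-channel RGBA frames as a stand-in for a compositor's working format.

```python
# Back-of-the-envelope VRAM budget. The 8 bytes/pixel figure
# (RGBA at 16 bits per channel) is an assumption; real apps vary.
width, height = 3840, 2160                 # UHD frame
frame_bytes = width * height * 8           # ~66 MB per frame
for vram_gb in (8, 16):
    frames = vram_gb * 1024**3 // frame_bytes
    print(f"{vram_gb}GB VRAM ~ {frames} UHD frames in flight")
```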
 
Thank you benoitc, so I'm going to order the 56! I will stay at 32GB RAM because it can be upgraded later at an Apple Center for €50, so the best thing is to wait for a price decrease on the RAM. Also, I'm going to order the 2TB because it's necessary to have enough internal space to work with 4:2:2 10-bit video or RAW; at 3GB/s it's better value than an external rack at 3GB/s.
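For context on those bandwidth numbers (again my arithmetic): an uncompressed 4:2:2 10-bit UHD stream needs well under 1GB/s, so a 3GB/s SSD can feed several layers at once.

```python
# Data rate of uncompressed 4:2:2 10-bit UHD video. 4:2:2 averages
# two 10-bit samples per pixel (luma plus shared chroma).
width, height, fps = 3840, 2160, 25
frame_bytes = width * height * 2 * 10 / 8  # ~19.8 MiB per frame
rate_gib = frame_bytes * fps / 1024**3
print(f"{frame_bytes / 1024**2:.1f} MiB/frame -> {rate_gib:.2f} GiB/s at {fps} fps")
# ~0.48 GiB/s per stream, so a 3GB/s internal SSD has headroom for
# multiple layers; compressed RAW formats need even less.
```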
 
What can we expect from each card? Are there any tests that would show the differences for machine learning development?
RX Vega 56 owner here (not on a Mac, but on a Windows desktop). There is very little difference between the 56 and the 64. The RX Vega uArch is heavily memory-bound by the HBM, and any serious performance gains come from it, not the core clock, as evidenced by mining.

If these are anything like the desktop variety, it should be easy to either overclock or hard-flash one into the other, and the upgrade is definitely *not* worth $600. The only way it would be is if you're getting 16GB of HBM vs 8GB, but I doubt that.
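One way to sanity-check the memory-bound claim yourself (my sketch, not the poster's method): time a compute-heavy kernel against a bandwidth-heavy one and see which number your real workload tracks. This assumes PyTorch with a CUDA device, which rules out a 2017 Mac, so treat it purely as an illustration of the technique.

```python
# Probe whether a GPU workload is compute-bound or bandwidth-bound
# (assumes PyTorch with a CUDA device; illustrative only).
import time
import torch

dev = torch.device("cuda")
a = torch.rand(4096, 4096, device=dev)
b = torch.rand(4096, 4096, device=dev)
buf = torch.rand(64 * 1024 * 1024, device=dev)   # 256 MB of floats

def bench(fn, iters=50):
    torch.cuda.synchronize()
    t0 = time.time()
    for _ in range(iters):
        fn()
    torch.cuda.synchronize()
    return (time.time() - t0) / iters

t_mm = bench(lambda: a @ b)            # compute-heavy: ~137 GFLOP each
t_cp = bench(lambda: buf.clone())      # bandwidth-heavy: pure copy
print(f"matmul ~{2 * 4096**3 / t_mm / 1e12:.1f} TFLOP/s")
print(f"copy   ~{2 * buf.numel() * 4 / t_cp / 1e9:.0f} GB/s")
# If raising the memory clock moves the copy number (and your app)
# more than the core clock moves the matmul number, you're memory-bound.
```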

One thing to keep in mind is that the 64 might run substantially hotter, eating into the available thermal headroom for the CPU.

I remember Anandtech's notes about the top-end Vega running very hot, whereas the 56 part did much better. Apparently there is a threshold where the chip's power draw just takes off. Both are clocked lower in the iMac Pro, so they may both operate under this nasty threshold, but we don't know yet.
Here is the Anandtech review I was talking about:

https://www.anandtech.com/show/11717/the-amd-radeon-rx-vega-64-and-56-review/20

Vega is basically a factory-overclocked board; once you get these into the 1100-1200MHz core range they use very little power, well below an RX 580, and start to approach RX 560 territory. My guess is that the clock speeds in the iMac Pro will be closer to 1200MHz, not the 1500MHz+ of the FE/XT variants. The number of compute units isn't the issue, but rather the clock speeds they run at, and the core clock is fairly meaningless for these cards anyway; you'll see far more gains from pushing the HBM than from the core speed.

Most of us undervolted the GPU and used a Vega 64 XT BIOS to push the HBM voltage up. My RX Vega 56 did about 1650MHz core at +50% power and 960MHz HBM under load, with 100% fan speed, at about 80°C; dropping down to -20% power gave around 1345MHz core and 950MHz HBM at 30% fan speed, loaded at 55°C.

Once the HBM approached 60°C (core and memory sit on the same package slug) the HBM throttled massively, so it was more important to keep the chip cool than fed with power, if that makes sense.

Ideal clocks for most Vegas were a -20% power target at whatever core clock the card chose, HBM speeds north of 1100MHz, and a fan-control target below 60°C; this is how you hit the holy grail of 40MH/s+ mining Ethereum, and it was the best overall balance for the chip. Good luck everyone!

Hard to say until consumers test them in hand. Most Mac desktop GPUs are downclocked from the regular consumer versions.

You're absolutely right. I'm thinking about reordering with the Vega 56 and grabbing an external GPU enclosure with a GTX 1080 Ti, or waiting until the 2080 comes in early 2018. I'm also not liking what I'm hearing: these GPUs are downclocked versions of the consumer ones, and the power draw for the Vega 56 and 64 is 210W and 295W TDP, respectively.

Over-reaction; everyone downclocked their Vegas, go look at Overclockers or Overclock.net, etc. A "downclocked" Vega 56 with the config I mentioned was on par with or close to the 1070 Ti on many occasions. It all comes down to the HBM maintaining its most aggressive timings at the highest speed possible while staying below 60°C. Under load my RX Vega 56 would routinely operate in the 115-130W range and ran consistently cooler than my RX 580.
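Combining the numbers in this and the earlier tuning post (the division is mine): the undervolted config works out to roughly a third of a megahash per watt.

```python
# Efficiency of the tuned card, from the figures quoted above:
# 40 MH/s+ Ethereum at roughly 115-130W under load.
hashrate = 40.0                        # MH/s, from the tuning post
for watts in (115, 130):
    print(f"{watts}W -> {hashrate / watts:.2f} MH/s per watt")
# ~0.31-0.35 MH/s/W: nearly full throughput at a fraction of the
# stock 210W board power, which is the whole point of undervolting.
```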
 
RX Vega 56 owner here (not on a Mac, but on a Windows desktop). There is very little difference between the 56 and the 64. The RX Vega uArch is heavily memory-bound by the HBM, and any serious performance gains come from it, not the core clock, as evidenced by mining.

If these are anything like the desktop variety, it should be easy to either overclock or hard-flash one into the other, and the upgrade is definitely *not* worth $600. The only way it would be is if you're getting 16GB of HBM vs 8GB, but I doubt that.

Thanks! Indeed, the Vega 64 has 16GB on the iMac Pro (versus 8GB for the 56). Do you think it will offer a nice bump in FCPX when using plugins such as LUT and Flare?
 
If these are anything like the desktop variety, it should be easy to either overclock or hard-flash one into the other, and the upgrade is definitely *not* worth $600. The only way it would be is if you're getting 16GB of HBM vs 8GB, but I doubt that.
I'm not sure whether you meant that you doubt 16GB would make any difference over 8GB, or that you doubt the 64 in the iMac Pro will actually come with 16GB. If it was the latter: according to the tech specs for the iMac Pro, the 56 comes with 8GB and the 64 with 16GB of HBM2 memory.
Thanks! Indeed, the Vega 64 has 16GB on the iMac Pro (versus 8GB for the 56).
Drat - you beat me to it!
 
My 2 cents: 8GB is enough for video editing; 16GB of VRAM is more for 3D apps.
Could someone confirm my thinking? :)
 
I'm not sure whether you meant that you doubt 16GB would make any difference over 8GB, or that you doubt the 64 in the iMac Pro will actually come with 16GB. If it was the latter: according to the tech specs for the iMac Pro, the 56 comes with 8GB and the 64 with 16GB of HBM2 memory.
Drat - you beat me to it!
Well, color me surprised. I guess it is going to depend on how the latencies are configured. If you want a safe bet, the 56 will be fine. If, and I do mean IF, the latencies are equal between the 56 and 64 HBM stacks, then the 64 will be much better in the long run; however, given how hot it runs, I am betting the two will be more or less equal and the only difference will be the extra video memory for CAD applications.
 
Well, color me surprised. I guess it is going to depend on how the latencies are configured. If you want a safe bet, the 56 will be fine. If, and I do mean IF, the latencies are equal between the 56 and 64 HBM stacks, then the 64 will be much better in the long run; however, given how hot it runs, I am betting the two will be more or less equal and the only difference will be the extra video memory for CAD applications.

I am wondering if HBCC will allow system RAM to compensate?

I am sure the 64 is the better performer in ideal conditions, and I am sure HBCC RAM performs worse than onboard VRAM, but I wonder: if Vega performance really does drop significantly as a function of temperature, would it be theoretically possible for the 56's 8GB VRAM + 12GB of system RAM via HBCC, at a clock speed that allows maximum operation under 60°C, to actually outperform the 64's 16GB VRAM + any amount of system RAM, due to throttling under sustained load?

The theory rests on better sustained performance at a lower temperature: if the 64 has a 75W higher TDP, perhaps similar performance can be had by investing in more ECC RAM that both the GPU and the rest of the system benefit from, rather than by putting the hottest GPU in a thermally limited enclosure?
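One caveat to that theory, with rough public figures of mine rather than the poster's: any HBCC page that spills into system RAM has to cross the PCIe link, which is an order of magnitude narrower than HBM2, so the borrowed capacity only helps for data the GPU touches rarely.

```python
# Rough bandwidth gap between on-package HBM2 and HBCC pages spilled
# to system RAM (approximate desktop figures; iMac Pro parts may differ).
hbm2_gbps = 410        # Vega 56 HBM2, ~GB/s
pcie3_x16_gbps = 16    # PCIe 3.0 x16 theoretical peak, ~GB/s
print(f"HBM2 is ~{hbm2_gbps / pcie3_x16_gbps:.0f}x faster than the "
      f"PCIe path that HBCC-spilled pages must cross")
# HBCC extends capacity, not bandwidth: fine for cold assets,
# painful for anything the shaders read every frame.
```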

Alternatively, I'd like to see fan-control software that can monitor the CPU and GPU temperatures and drive their individual fans on more aggressive curves than Apple provides for silent operation.
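Not a fan controller, but macOS does expose readings such software could build on. Here is a guess at a polling sketch using Apple's `powermetrics` tool (root required; the sampler names and output format vary across macOS releases, so the line matching is an assumption to adjust).

```python
# Sketch: poll macOS's built-in `powermetrics` for thermal/fan lines.
# Must run as root; output format differs across macOS versions, so
# tune the substring matching to whatever your machine prints.
import subprocess

def sample_thermals():
    out = subprocess.run(
        ["powermetrics", "--samplers", "smc", "-n", "1"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line for line in out.splitlines()
            if "temperature" in line.lower() or "fan" in line.lower()]

if __name__ == "__main__":
    for line in sample_thermals():
        print(line.strip())
```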
 
I cannot speak to the actual results the Vega 64 will bring with its increased power and RAM, but I can say this: you cannot replace the graphics card in the iMac Pro, and the graphics card is usually the showstopper as time marches on. For me, it's a no-brainer: I'm already spending $10,000-ish on a powerful system... I'm getting the upgraded graphics card too!
 
I cannot speak to the actual results the Vega 64 will bring with its increased power and RAM, but I can say this: you cannot replace the graphics card in the iMac Pro, and the graphics card is usually the showstopper as time marches on. For me, it's a no-brainer: I'm already spending $10,000-ish on a powerful system... I'm getting the upgraded graphics card too!

This is entirely true in a graphics-limited application. For MacBooks and the Mac mini, I agree; not being able to run 4K due to the cheaper graphics option is a huge pain! :(

For the iMac Pro, though, we are considering the possibility that with HBCC (the ability to leverage ECC system RAM as virtual video RAM and increase overall video memory), the Vega 56 is actually as effective as the 64 when operating below the throttle threshold.

In this scenario, the Vega 64 might provide less performance over the course of a full workload IF the 64 throttles and the 56 doesn't.

Given that an eGPU can drive external screens and be upgraded over time, the value of a faster onboard video card drops if you plan to run multiple screens or to future-proof for gaming.

In any other machine I would 100% agree and spring for the best of the best, but the thermal behavior of the 2017 5K iMac demonstrates that temperatures affect average performance more, while base and turbo clocks affect peak performance more.

In an iMac Pro I would be more concerned with average/sustained performance than with peak performance. This is why I'm curious whether the HBCC feature actually lets the 56 scale VRAM effectively, or whether it's just a marketing gimmick with no useful real-world results.

P.S. I am also curious whether anyone has actually seen a 10-core iMac Pro clock at 4.5 GHz... The 10-core is advertised with the fastest boost speed, but no one seems to see above 4.2 GHz in any scenario. If that's the case, the 14-core may actually be the highest-performance option in both single- and multi-core loads (for the same reason: if the other cores stay cooler thanks to lower base clocks and inactive threads, the single boosted core may throttle less).
 
Is it just me, or is the $600 premium for upgrading from Vega 56 to Vega 64 over the top? Apple is making a lot of money on those upgrade options...
You are so right. AMD's suggested prices for the retail versions are $400 for the Vega 56 and $500 for the Vega 64.
 
What can we expect from each card? Are there any tests that would show the differences for machine learning development?

I can only tell you that with my Vega 64, my GPU scores around 4-5k more in Geekbench and Cinebench than an equivalent Vega 56 machine.

In real-world tests like FCPX, DaVinci Resolve, and Premiere Pro, I don't think I can see much difference from the tests I've seen on the Vega 56 version of the iMac Pro.

I will do some tests on Monday if that helps, but I’m not sure they would be of use to you.

You are so right. AMD's suggested prices for the retail versions are $400 for the Vega 56 and $500 for the Vega 64.

The Vega 64 in the iMac Pro is the 16GB Vega Frontier Edition, which is a $1,000 GPU, so the $600 premium over the 56 roughly tracks AMD's own pricing.
 
This is entirely true in a graphics-limited application. For MacBooks and the Mac mini, I agree; not being able to run 4K due to the cheaper graphics option is a huge pain! :(

For the iMac Pro, though, we are considering the possibility that with HBCC (the ability to leverage ECC system RAM as virtual video RAM and increase overall video memory), the Vega 56 is actually as effective as the 64 when operating below the throttle threshold.

In this scenario, the Vega 64 might provide less performance over the course of a full workload IF the 64 throttles and the 56 doesn't.

Given that an eGPU can drive external screens and be upgraded over time, the value of a faster onboard video card drops if you plan to run multiple screens or to future-proof for gaming.

In any other machine I would 100% agree and spring for the best of the best, but the thermal behavior of the 2017 5K iMac demonstrates that temperatures affect average performance more, while base and turbo clocks affect peak performance more.

In an iMac Pro I would be more concerned with average/sustained performance than with peak performance. This is why I'm curious whether the HBCC feature actually lets the 56 scale VRAM effectively, or whether it's just a marketing gimmick with no useful real-world results.

P.S. I am also curious whether anyone has actually seen a 10-core iMac Pro clock at 4.5 GHz... The 10-core is advertised with the fastest boost speed, but no one seems to see above 4.2 GHz in any scenario. If that's the case, the 14-core may actually be the highest-performance option in both single- and multi-core loads (for the same reason: if the other cores stay cooler thanks to lower base clocks and inactive threads, the single boosted core may throttle less).

Thank you for the great explanation!
 
...snip...

P.S. I am also curious whether anyone has actually seen a 10-core iMac Pro clock at 4.5 GHz... The 10-core is advertised with the fastest boost speed, but no one seems to see above 4.2 GHz in any scenario. If that's the case, the 14-core may actually be the highest-performance option in both single- and multi-core loads (for the same reason: if the other cores stay cooler thanks to lower base clocks and inactive threads, the single boosted core may throttle less).

The only way you will get 4.5 GHz on a single core in the 10-core iMac Pro will be to run it with 9 of its cores disabled. Of course, this is not a practical solution, and I suspect the max frequency on any core will be the 4.2 GHz people are reporting.

I have the late-2013 MP6,1 6-core with a rated processor frequency of 3.5 GHz. It has a max Turbo Boost of 3.9 GHz, and the only way I've seen 3.9 GHz is with all but one core disabled. When I run a production workload using all cores and all hyper-threads, the typical core frequency is a solid, steady 3.6 GHz.

For the 10-core iMac Pro (which I've ordered) I expect to see a steady 4.2 GHz core frequency for my workload when using all cores. The 4.5 GHz figure is, for all intents and purposes, a myth IMO.
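A quick worked comparison using the MP6,1 figures above (the aggregation is mine): for parallel work, the all-core sustained clock dwarfs the single-core boost, which is why chasing the advertised peak matters so little.

```python
# Aggregate throughput from the MP6,1 numbers above: one core at
# max Turbo vs. all six cores at the sustained all-core clock.
cores, sustained_ghz, boost_ghz = 6, 3.6, 3.9
print(f"one boosted core:  {boost_ghz:.1f} GHz-equivalents")
print(f"all cores loaded: {cores * sustained_ghz:.1f} GHz-equivalents")
# 21.6 vs 3.9: sustained all-core frequency dominates parallel
# workloads, so a rarely-reached 4.5 GHz peak is mostly marketing.
```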
 
The only way you will get 4.5 GHz on a single core in the 10-core iMac Pro will be to run it with 9 of its cores disabled. Of course, this is not a practical solution, and I suspect the max frequency on any core will be the 4.2 GHz people are reporting.

I have the late-2013 MP6,1 6-core with a rated processor frequency of 3.5 GHz. It has a max Turbo Boost of 3.9 GHz, and the only way I've seen 3.9 GHz is with all but one core disabled. When I run a production workload using all cores and all hyper-threads, the typical core frequency is a solid, steady 3.6 GHz.

For the 10-core iMac Pro (which I've ordered) I expect to see a steady 4.2 GHz core frequency for my workload when using all cores. The 4.5 GHz figure is, for all intents and purposes, a myth IMO.
This is exactly what I feared... Does that mean the actual cap is lower than (or hopefully at least equal to?) 4.2 GHz for the 14- and 18-core as well? :(

No. You need to look at the Vega 56 and the Vega 64 Frontier Edition to get comparable pricing.

Valid point. I agree that the Pro 64 is the better part, but for gaming the RX Vega is a more application-focused product; that's why most people get confused comparing against consumer-card benchmarks and pricing, since most of it (and the associated hype) is driven by the gaming community. This is a good thing, because gaming demand has improved workstation productivity as well :)

That said, I do agree that Apple makes more money from the upgrade tax than from base-model margins. In fact, I'd almost expect Apple loses money on the base model (in bulk) just to advertise the lowest possible entry price, expecting that the vast majority of users will select at least one upgrade and recoup the losses for every user who doesn't. The more upgrades you select, the larger Apple's margin; I suspect this is where they make the real returns.

Unfortunately, that model encourages (and is perpetuated by) the lack of user-upgradeable parts. The RAM door was sacrificed for cooling purposes, OK, but let's not forget that the lack of a RAM door lets Apple charge 2x or 3x the actual RAM cost for an upgrade.

Is it just me, or is the $600 premium for upgrading from Vega 56 to Vega 64 over the top? Apple is making a lot of money on those upgrade options...


Either that, or they just forgot to subtract the cost of the original RAM and GPU they remove when you upgrade ;)
 