This should be the base config GPU, along with the 1TB SSD. Change my mind.

A base config GPU that drives the base price up another $600 is not going to help the vast majority who will buy the lower-end model. For example:

i. Building "headless" containers/VMs on the box, what does the W5700X buy you over the 580X? Nothing. (From the main logic board and add-in card perspective they are the same system, so those 'rack zone' needs are entirely legitimate.)

ii. Need 7 PCIe slots for several audio and/or video capture cards and supporting storage add-ons? Running out of slots is helpful how? The 580X takes up half the MPX bay width that the W5700X does. If there is a replacement for the 580X it would need to be a half-width one (i.e., a lower power envelope), otherwise it doesn't meet the same criteria.


iii. Folks have already done some experiments putting off-the-shelf cards into slots 3 and 5 (slot 3 'stealing' power from MPX Bay 1's 8-pin connectors; you can mostly get away with that with minor hiccups because the 580X draws less from the shared power provisioning). [The "TB ports are evil" crowd (like the "T2 is evil" crowd) have a path around the more Mac Pro-specific solutions Apple has with the 580X (and 512GB) options.]


Eventually, should Apple do a derivative build with an RX 5600 XT (maybe 5500 v2 / 6500) class GPU at a much lower price point? Yes. Is it going to happen soon? Probably not (AMD is likely going to start RDNA2 at the high end and work its way down the lineup, and not finish until late 2021). But the base GPU is always going to be a bit low-end when rack-only and audio-only users are part of the mix here. The 580X is overkill for a number of those workloads; something even higher-end is just that much more 'wasteful'. The Polaris drivers are also relatively stable at this point since they have been worked on for a while.


Should the base price be around $4,999, with the W5700X/1TB a $1k upgrade that raises it to $5,999? That probably won't happen, at least as long as the iMac and iMac Pro stay in their standard configuration price ranges. First, there is a "low volume" tax on the Mac Pro. Some of that is Apple recovering overall system complexity costs over an even smaller user base. (The larger the number of MPX GPU models, the more cost there is; 3-4 in flight at any one time is probably the max. Off-the-shelf cards will fill in more workload diversity at other price points.)

Second, there should be a price drop once competition heats up on the next iteration of CPU and GPU component options. (Intel may try to low-bid for the entry GPU in the 2021-22 timeframe. AMD may have a more complete CPU+chipset offering for the next Mac Pro design bake-off.) That may bring some base price relief, but it's doubtful it would be a full $1k worth; more like $5,699 - $5,799.
 
Fortnite 😂

Flight simulation on a Mac ?
I had to buy a PC for that...
X-Plane 11 is the only Mac-compatible flight sim. In fact it’s developed on Macs. You can also run Prepar3D or the upcoming Flight Sim 2020 in Boot Camp.
 
1. I believe Final Cut kinda depends on AVX-512. AMD passed on adding that instruction set to Epyc/Threadripper/Ryzen, because only a handful of products use it and they didn't want to waste die space on it. That would not prevent Apple from contracting with AMD for a semi-custom chip; a semi-custom Ryzen is what is going into the next-gen consoles. OTOH, would Apple be willing to pay for a semi-custom chip? They aren't selling very many 7,1s, so it probably wouldn't be cost effective.

2. Probably still have an ongoing contract. That is why the Ryzen 1700AF exists - AMD has an ongoing contract with GlobalFoundries for 12nm parts.

3. No one ever got fired for buying IBM, Intel.

4. Apple would be completely dependent on AMD for both CPUs and GPUs.

I didn't realize Final Cut used AVX-512. In fact that's the first mainstream program I've come across that uses it.

I'm sure next-gen Epyc/TR will feature AVX-512. I don't think there's an issue with being reliant on AMD for both GPU/CPU; with someone like Lisa Su in charge, AMD won't start resting on their laurels like another large chip manufacturer did.
Plus, with their laptops likely going full in-house ARM, using all AMD for their small number of desktop sales really isn't an issue.
 
Also, the people posting about AMD being a cheap brand obviously have not seen the prices for the workstation and server cards. AMD still currently has the fastest single card, as far as compute goes.

CUDA isn't really anything that special. Every single app that uses CUDA could be coded to use Metal; it is just a question of how stubborn the developer is. CUDA was good because of cross-platform use, but that bridge has crumbled, so if you must use a Mac, you might as well learn how to code using the Metal APIs if you want things to be fast.

Or the developer has done the analysis and realizes that it isn't worth their time to port to the Metal API. The 7,1 is the size of the audience. It isn't a large number of computers.
I didn't realize Final Cut used AVX-512. In fact that's the first mainstream program I've come across that uses it.

I'm sure next-gen Epyc/TR will feature AVX-512. I don't think there's an issue with being reliant on AMD for both GPU/CPU; with someone like Lisa Su in charge, AMD won't start resting on their laurels like another large chip manufacturer did.
Plus, with their laptops likely going full in-house ARM, using all AMD for their small number of desktop sales really isn't an issue.

Which is why I said "I believe" - that, and Quick Sync, are literally the only reasons to use an Intel CPU since 2017.

Intel had a slight performance advantage in single-threaded applications (although be real - a boost clock bump from 5.1GHz to 5.2GHz isn't worth the silicon). That has been blown away by double-digit IPC increases with each generation of Zen, and all for less money.

AMD has stated that they passed on AVX-512 because it isn't used by more than a literal handful of programs. They can brute-force it with 256-bit AVX2 (which is used by a lot of programs) combined with their core counts and IPC increases.
 
Is there another 16GB 5700-series card that might be cheaper and work with the 7,1?

Of the workstation cards, the W5700 is ~$800 US.

Consumer 5700 and 5700XT cards will work too. There may be quirks with specific models, so check the forums.

These are all 8GB cards though, IIRC.
 
These are the speeds I'm getting. I have the dual Vega II Pro Duo setup (so essentially 4 gfx cards). The system is a beast.
I use Resolve for professional Color Grading and haven't been able to make these gfx cards sweat (yet).

[attachment 906222: Resolve benchmark screenshot]

Thanks for sharing. I thought pros might not be happy with Radeon, as it's underpowered compared to Nvidia.
 
I didn't realize Final Cut used AVX-512. In fact that's the first mainstream program I've come across that uses it.

I'm sure next-gen Epyc/TR will feature AVX-512.

Others are substantively not so sure:

"... Lastly, we do actually have a small Zen 4 leak, and it’s mostly just confirmation of logical speculation and assumptions. Zen 4 will feature more cores, a new socket, 1 MB L2, AVX 512 support, and improved IPC on the 5nm node. Again, nothing groundbreaking, but the enlargement of the L2 cache and addition of AVX 512 support is interesting. ..."

AMD is trying to beat Intel on the speed (and number) of updates, which means they have punted on some things; AVX-512 is likely one of them. Zen 2 somewhat kludged around the NUMA problem by making all the cores equally slow getting to memory and off-chip I/O. Zen 3 appears to be unwinding some of those latencies along with more effective optimizations for 7nm (which will probably mostly be thrown at some clock speed bumps in addition to the IPC bumps; average IPC is up due in part to getting rid of some latency bubbles).

Reliable, predictable incremental progress will get them more high end customers.


This gets back to the underlying motivation Intel had when it leaned more deeply on "Tick Tock" updates: primarily, it is about controlling complexity. Don't try to change everything in one big chunk of an update. When AMD gets to 5nm they'll pick up a substantive transistor budget increase. That will more easily allow them to throw some of that budget at AVX-512 in a way that could be more than competitive. But AVX-512 adds substantive memory pressure as it cranks up throughput, so the ripple effect is not just in the instruction decode and the functional units.


I don't think there's an issue with being reliant on AMD for both GPU/CPU; with someone like Lisa Su in charge, AMD won't start resting on their laurels like another large chip manufacturer did.

There is when Apple knows that AMD is second in line at TSMC's bleeding-edge fabs. If Apple and a few other high-revenue players reserve huge chunks of the capacity, AMD won't be able to meet large volume shifts (or will dribble out their offerings over longer timelines, e.g., Threadripper well after Epyc and Ryzen). AMD has more money in hand these days so as not to get totally squeezed out, but they aren't going to take over the bulk of the x86 market. So AMD is picking out specific subsets of the market to pursue with their pragmatically limited production capacity. Where that overlaps with Apple's Mac goals there is a good fit; where it doesn't, not so much.


Plus, with their laptops likely going full in-house ARM, using all AMD for their small number of desktop sales really isn't an issue.

If Apple is going into a multi-year phase with both ARM and x86 macOS, splitting strictly along laptop/desktop lines doesn't make much sense. The top end of the MBP lineup probably has just as much inertia weighting it toward x86 as the 27" iMac does. And the lowest end of the desktop line (the "edu", non-Retina iMac and perhaps a return of a far more affordable Mac mini-like variant) is just as likely a candidate for ARM as the low-end laptops.
 
Or the developer has done the analysis and realizes that it isn't worth their time to port to the Metal API. The 7,1 is the size of the audience. It isn't a large number of computers.

The number of computational devices that use Metal includes the iOS ones. That is a large number of computers.
The notion that Metal is not a vastly deployed 3D graphics rendering system is a bit myopic. Yes, if you take the intersection of narrow, high-end desktop apps with Metal the number gets very small, but that's because the former is a small pond, not the latter.

The notion that you can be a serious Mac app developer and ignore Metal is nonsense. Complete nonsense. Apple has deprecated OpenGL. Future apps will have to deal with Metal directly or through a graphics abstraction library that does.

The quirky issue is that Apple has wrapped subsets of both OpenGL and OpenCL functionality into Metal. To get more generalized computational work done on the GPU, Metal also comes into play. That part is more crufty, as Apple gives Metal more of a "shader computation" skew than the more generalized GPGPU libraries have.
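To make that concrete, here is a minimal sketch of what the "shader computation" path looks like: a compute kernel written in the Metal Shading Language, compiled and dispatched from Swift. The kernel name, sizes and values are made up for illustration; the calls are the standard Metal compute API.

[CODE]
import Metal

// Kernel source in the Metal Shading Language (MSL). "scale_kernel" is a
// made-up example that just multiplies every element of a buffer by a constant.
let source = """
#include <metal_stdlib>
using namespace metal;
kernel void scale_kernel(device float *data     [[buffer(0)]],
                         constant float &factor [[buffer(1)]],
                         uint id                [[thread_position_in_grid]]) {
    data[id] *= factor;
}
"""

guard let device = MTLCreateSystemDefaultDevice(),
      let queue = device.makeCommandQueue() else { fatalError("No Metal device") }

// Compile the MSL at runtime, look up the kernel, build a compute pipeline.
let library = try! device.makeLibrary(source: source, options: nil)
let pipeline = try! device.makeComputePipelineState(
    function: library.makeFunction(name: "scale_kernel")!)

// Data shared between CPU and GPU.
var input = [Float](repeating: 2.0, count: 1024)
var factor: Float = 3.0
let buffer = device.makeBuffer(bytes: &input,
                               length: input.count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!

// Encode the dispatch: bind resources, pick a grid shape, commit, wait.
let commandBuffer = queue.makeCommandBuffer()!
let encoder = commandBuffer.makeComputeCommandEncoder()!
encoder.setComputePipelineState(pipeline)
encoder.setBuffer(buffer, offset: 0, index: 0)
encoder.setBytes(&factor, length: MemoryLayout<Float>.stride, index: 1)
encoder.dispatchThreadgroups(MTLSize(width: input.count / 64, height: 1, depth: 1),
                             threadsPerThreadgroup: MTLSize(width: 64, height: 1, depth: 1))
encoder.endEncoding()
commandBuffer.commit()
commandBuffer.waitUntilCompleted()

let out = buffer.contents().bindMemory(to: Float.self, capacity: input.count)
print(out[0]) // 6.0
[/CODE]

The host-side plumbing is arguably not far from what CUDA host code does; the porting pain is usually in everything layered above this level, not in the kernel-launch boilerplate itself.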

No sane developer should be writing a Mac app that only works on the Mac Pro 2019 (7,1). That's way past goofy. The app should run on other Macs. It may not be able to handle jobs as large on the other systems, but if someone has to have a Mac Pro just to attempt to learn how to use the application, then that app pragmatically has major adoption problems. Almost nobody deliberately does this on the Windows or Linux side either.

.....
AMD has stated that they passed on AVX-512 because it isn't used by more than a literal handful of programs. They can brute-force it with 256-bit AVX2 (which is used by a lot of programs) combined with their core counts and IPC increases.

Which, in the context of macOS, is pragmatically a substantive problem. You can't just grossly throw cores at the problem and brute-force through with triple-digit core counts. It won't work. The OS doesn't scale to those numbers, and the median core count Apple targets for its kernel scheduler (remember, the basic kernel is shared across all these xxOS mild forks) isn't going that high any time soon.

Linux (highly skewed to server rooms) and Windows (much larger server footprint) have the scale to justify forked OS schedulers for triple-digit numbers of effective cores (i.e., core count after accounting for SMT/Hyper-Threading). This is a clear place where AMD's playbook is at odds with being the "best match" for Apple's overall roadmaps.
 
....
CUDA isn't really anything that special. Every single app that uses CUDA could be coded to use Metal; it is just a question of how stubborn the developer is. CUDA was good because of cross-platform use, but that bridge has crumbled, so if you must use a Mac, you might as well learn how to code using the Metal APIs if you want things to be fast.

The bridge hasn't crumbled. This is far more like huge continental drift: at one point South America and Africa were pragmatically connected; Apple and Nvidia have simply drifted far apart.

Apple's "print money" business is the iPhone and other devices running something closer to a iOS deviate than a macOS one. Nvidia's "print money" business is server and AI-ML workstations. They both has a large cash reserve to engage in a dust up without any major substantive loss for either company.

Nvidia's moves to push Metal to second-class status and to monetize Apple's iOS devices are at odds with being seen as a responsible macOS kernel driver partner. Likewise, macOS doesn't have a large footprint in training and server deployments, so Apple isn't too pressed about CUDA-dependent apps being 100% critical to the macOS ecosystem (and in the iOS space there never were any at all). Neither party can posture like they are highly critical to the profitability of the other.

Apple implementing its own GPUs and tensor-style cores pulls workloads away from Nvidia GPUs. Both are increasingly independent, "continental" ecosystems with their own dynamics.

It really is far more about the money/investment to port than "would-a, could-a, should-a" or time. That's true on multiple fronts: Apple not paying to port more open-source libraries to be layered on top of Metal (vs. CUDA), and also not tooling a migration path from OpenCL to Metal to make that process more affordable/easy.

And there are port costs to get apps onto macOS/iOS that are far more established on other OS implementations.
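That said, a thin slice of that layered stack does already ship with the OS: Metal Performance Shaders has BLAS-style routines, so the kind of call a cuBLAS user makes has a rough Metal-side equivalent. A minimal sketch follows; the matrix sizes and values are made up for illustration, and the exact Swift spellings of these MPS calls can differ a bit across SDK versions, so treat it as a sketch of the API's shape rather than gospel.

[CODE]
import Metal
import MetalPerformanceShaders

// Multiply two small float32 matrices with MPSMatrixMultiplication, roughly
// the role an sgemm call from cuBLAS plays on the CUDA side. Sizes are arbitrary.
let device = MTLCreateSystemDefaultDevice()!
let queue = device.makeCommandQueue()!

let n = 4
let rowBytes = n * MemoryLayout<Float>.stride
var a = [Float](repeating: 1.0, count: n * n)   // 4x4 of 1.0
var b = [Float](repeating: 2.0, count: n * n)   // 4x4 of 2.0

let desc = MPSMatrixDescriptor(rows: n, columns: n, rowBytes: rowBytes, dataType: .float32)
let bufA = device.makeBuffer(bytes: &a, length: n * rowBytes, options: .storageModeShared)!
let bufB = device.makeBuffer(bytes: &b, length: n * rowBytes, options: .storageModeShared)!
let bufC = device.makeBuffer(length: n * rowBytes, options: .storageModeShared)!

// C = alpha * A * B + beta * C
let gemm = MPSMatrixMultiplication(device: device,
                                   transposeLeft: false, transposeRight: false,
                                   resultRows: n, resultColumns: n, interiorColumns: n,
                                   alpha: 1.0, beta: 0.0)

let commandBuffer = queue.makeCommandBuffer()!
gemm.encode(commandBuffer: commandBuffer,
            leftMatrix: MPSMatrix(buffer: bufA, descriptor: desc),
            rightMatrix: MPSMatrix(buffer: bufB, descriptor: desc),
            resultMatrix: MPSMatrix(buffer: bufC, descriptor: desc))
commandBuffer.commit()
commandBuffer.waitUntilCompleted()

let c = bufC.contents().bindMemory(to: Float.self, capacity: n * n)
print(c[0]) // 8.0 for these inputs
[/CODE]

The gap being argued about in this thread is everything above this level: the larger open-source ecosystems that target CUDA and have no maintained Metal backend.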
 
It seems a lot of NVIDIA owners are not happy with the performance of their newest GPUs, even the 2080 RTX, NVIDIA's flagship GPU.

This video talks about how NVIDIA performance has not improved since the GTX 1080Ti which came out years ago!

GTX 1080Ti: The Anomaly That's Killing RTX!

There are a lot of other videos from gamers who feel ripped off by the NVIDIA 2080 RTX.

Was RTX a big scam? – Performance & image quality analysis

NVIDIA RTX 2080 Super Review: We Get It, NVIDIA, You Can Make a 1080 Ti
 
The GPU options on this otherwise incredible machine are embarrassing. If Apple would support Nvidia, you could put a $300 2060 in this machine that would make it respectable. Certainly better than the 580X. I think the 1660 outperforms the 580X.

But having to pay +$1800 just to get something contemporary is kind of ridiculous. The DOA graphics are what's preventing me from buying the iMac Pro.
You can already use Nvidia GPUs in the Mac Pro, just not in macOS. Many use Nvidia GPUs for their Windows partition.
It seems a lot of NVIDIA owners are not happy with the performance of their newest GPUs, even the 2080 RTX, NVIDIA's flagship GPU.

This video talks about how NVIDIA performance has not improved since the GTX 1080Ti which came out years ago!

GTX 1080Ti: The Anomaly That's Killing RTX!

There are a lot of other videos from gamers who feel ripped off by the NVIDIA 2080 RTX.

Was RTX a big scam? – Performance & image quality analysis

NVIDIA RTX 2080 Super Review: We Get It, NVIDIA, You Can Make a 1080 Ti
That's misleading sensationalism, primarily from those who want to play old games, rendered inefficiently with unrealistic graphics, faster, in a way that's being phased out in favor of far more efficient and realistic results.

RTX is optimized for deep learning, ray tracing, and next-gen gaming APIs, being a DX12 Ultimate GPU. These GPUs were absolutely invaluable and necessary for professionals, including gaming professionals who needed to build games for next-gen GPUs on an actual production-ready GPU that can do ray tracing and support DX12U at a higher level than the upcoming next-gen consoles.

Privileged game developers took on adding DX12U features to their current games, or to the games they released in the past 2 years. Nvidia's bets were correct, with their high-end and prosumer RTX cards today not exceeded by the next-gen consoles.

DLSS, after Nvidia tweaked it so it no longer depends on the diligence of game developers (or the red tape of their publishers), fulfilled its mission. Ray tracing is the difference between sleeping comfortably at home at night with family or staying overnight in the office at 2am, as stated by 3D professionals doing gaming and movie work.

Having ray tracing in next-gen games was non-negotiable. AMD waited until it was cheap enough for them and until they could catch up in a sensible manner to Nvidia's core competence in deep learning.

All major 3D tools have adopted RTX as a no-brainer. The RTX cards are far better for deep learning than a 1080 Ti, and the 2080 Ti has more compute power than a $5,000 Titan V.
 
You can already use Nvidia GPUs in the Mac Pro, just not in macOS. Many use Nvidia GPUs for their Windows partition.
I’ve been thinking about this a lot. I’ve just run into a software issue where I have to use a GPU with CUDA cores.

Is there a practical way to keep an Nvidia GPU in your Mac Pro without powering it up while in MacOS?
 
Better.

The AMD ProRender engine is GPU neutral. Team Red, Team Green, it doesn't care.
You could already use that as a plug-in with Blender, or is it being upgraded significantly? For core features and things like ray tracing in *real time*, Vulkan makes the most sense, with Nvidia/AMD/Microsoft still needing to converge on that. AMD and Nvidia have both contributed to it, with Nvidia's contributions suiting real-time ray tracing in games better, while AMD's Vulkan ray tracing contributions are more optimized for pro use rather than for the fast-paced real-time interactivity of games.
 
I’ve been thinking about this a lot. I’ve just run into a software issue where I have to use a GPU with CUDA cores.

Is there a practical way to keep an Nvidia GPU in your Mac Pro without powering it up while in MacOS?

There are YouTube vids on the matter; Linus gave it a try with success:

I would have to ask colleagues that do this; I have a completely separate PC (2080 Tis using NVLink, which I occasionally swap Quadro cards into) for my Windows use, even though I have a 2019 Mac Pro (192GB RAM, 4TB, and a 580X soon to be upgraded to a 5700X); I use my Pro Display XDR on both.
 
It seems a lot of NVIDIA owners are not happy with the performance of their newest GPUs, even the 2080 RTX, NVIDIA's flagship GPU.

This video talks about how NVIDIA performance has not improved since the GTX 1080Ti which came out years ago!

GTX 1080Ti: The Anomaly That's Killing RTX!

There are a lot of other videos from gamers who feel ripped off by the NVIDIA 2080 RTX.

Was RTX a big scam? – Performance & image quality analysis

NVIDIA RTX 2080 Super Review: We Get It, NVIDIA, You Can Make a 1080 Ti

They are talking about game performance - a multi-billion dollar market that Apple has no presence in. There isn't a reason for most people to move to the 2000 series - ray tracing kills performance on any variant of the 2060 and on the 2070s. The price of the 2080s is insane and there still isn't much in the way of games that can use it. The gaming community rejected them to the point that Nvidia came out with the 1600 series, which is the 2000 series minus the RT and Tensor cores used for ray tracing.

The exact same thing can be said about the RX 480/580 - same performance as the 5500 XTs and 5600 XTs (albeit at a higher power cost), and they are still getting driver updates and optimizations, unlike the Nvidia 1000 series of cards.

It is why I am still on Polaris (RX 560 4GB on Mac, RX 570 8GB on my Windows boxen). To move up means replacing both the video card and the monitor - and while I am sure I will move to 1440p at some point, it won't be tomorrow, and the day after isn't looking good either.
You could already use that as a plug-in with Blender, or is it being upgraded significantly? For core features and things like ray tracing in *real time*, Vulkan makes the most sense, with Nvidia/AMD/Microsoft still needing to converge on that. AMD and Nvidia have both contributed to it, with Nvidia's contributions suiting real-time ray tracing in games better, while AMD's Vulkan ray tracing contributions are more optimized for pro use rather than for the fast-paced real-time interactivity of games.

Yes, you can - I have the 2.0 version for my copy of Blender. Vulkan is a better API from what I have seen, but getting folks to move to it is tough sledding, due to legacy code.

I really wish Bondware would work with AMD to get the Cycles plug-in working with Poser 12. The render engine in it is based on Cycles.
 
X-Plane 11 is the only Mac-compatible flight sim. In fact it’s developed on Macs. You can also run Prepar3D or the upcoming Flight Sim 2020 in Boot Camp.
I’m using X-Plane and DCS, and I’m waiting for the new Microsoft Flight Simulator... considering the poor GPU choices on the Mac, I had to buy a high-end PC instead...
 
Tim Cook leaving Apple isn’t going to get NVIDIA GPUs in Macs. NVIDIA burned that bridge and it’s not ever going to be rebuilt. I wouldn’t be surprised if there is something in Steve’s Will about never doing business with NVIDIA ever again.
You might be right. Many people bring up the old problem with the bad GPUs in the past, but that happened many years ago and I don’t think that’s the issue. Qualcomm and Apple also had a falling out when Qualcomm took Apple to court for millions of dollars; both parties reached an agreement, or so it seems, shook hands, and started doing business again. I can only think of 3 possibilities with Apple: 1st it’s personal, 2nd it’s about money, 3rd Tim is clueless. No one is asking Apple to include Nvidia GPUs in their Macs, but why stop their users from buying an Nvidia card and using it as an eGPU, or using it in the new Mac Pro? That’s why they put those new PCIe slots in the new Mac Pro. I don’t think Nvidia is so incompetent that they don’t know how to make Metal web drivers; Apple simply refuses to sign them because they just want AMD so they can make more money. Greedy Tim.
 
You might be right. Many people bring up the old problem with the bad GPUs in the past, but that happened many years ago and I don’t think that’s the issue. Qualcomm and Apple also had a falling out when Qualcomm took Apple to court for millions of dollars; both parties reached an agreement, or so it seems, shook hands, and started doing business again. I can only think of 3 possibilities with Apple: 1st it’s personal, 2nd it’s about money, 3rd Tim is clueless. No one is asking Apple to include Nvidia GPUs in their Macs, but why stop their users from buying an Nvidia card and using it as an eGPU, or using it in the new Mac Pro? That’s why they put those new PCIe slots in the new Mac Pro. I don’t think Nvidia is so incompetent that they don’t know how to make Metal web drivers; Apple simply refuses to sign them because they just want AMD so they can make more money. Greedy Tim.

nVidia GPUs do support Metal in High Sierra -- this has already been said multiple times. nVidia has ceased development of its Web Drivers, so at present nVidia is a dead end on a Mac running OS X; but there is no reason Mac Pro 7,1 owners can't run Windows 10 in a Boot Camp partition and use any nVidia GPU they want.

The PCIe slots on the new Mac Pro can use regular GPUs just fine (in fact they can probably take any PCIe card you want); you just need to buy the Belkin cable kit so you can get power from the motherboard if the card requires it. The "extra" PCIe-looking slots are for MPX cards, where the extra power required is delivered via the extra slot along with the routing of DisplayPort signals to the onboard Thunderbolt 3 ports.
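If anyone wants to check what macOS itself sees, a quick sketch: enumerating the Metal devices from Swift lists every GPU the OS currently has a Metal-capable driver loaded for (output obviously varies by machine).

[CODE]
import Metal

// List every GPU that macOS exposes as a Metal device. A card that only has
// a basic display driver but no Metal support simply won't show up here.
for device in MTLCopyAllDevices() {
    print(device.name,
          "low power:", device.isLowPower,    // true for integrated GPUs
          "removable:", device.isRemovable)   // true for eGPUs
}
[/CODE]

That is essentially the same information the High Sierra screenshot below is getting at.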

** Edit, the below pic is from my Hackintosh; info on it in my sig.

[attachment: Metal_HighSierra.png]
 
You might be right. Many people bring up the old problem with the bad GPUs in the past, but that happened many years ago and I don’t think that’s the issue. Qualcomm and Apple also had a falling out when Qualcomm took Apple to court for millions of dollars; both parties reached an agreement, or so it seems, shook hands, and started doing business again. I can only think of 3 possibilities with Apple: 1st it’s personal, 2nd it’s about money, 3rd Tim is clueless. No one is asking Apple to include Nvidia GPUs in their Macs, but why stop their users from buying an Nvidia card and using it as an eGPU, or using it in the new Mac Pro? That’s why they put those new PCIe slots in the new Mac Pro. I don’t think Nvidia is so incompetent that they don’t know how to make Metal web drivers; Apple simply refuses to sign them because they just want AMD so they can make more money. Greedy Tim.

Apple not using NVIDIA isn’t greed, it’s simply self defense against a company that has worked at odds with Apple’s goals to control its own stack from top to bottom. Like it or not, Apple is simply tired of depending on other companies that don’t have Apple’s interests at heart, but their own. Yes, that’s a weird way for Apple to think, but you can see this over and over in Apple’s history.

NVIDIA has an over-inflated opinion of their importance in computing, to the point where they believe their GPUs to be more important than the CPU, be it AMD or Intel. That kind of hubris is off-putting, especially to a company like Apple that has a similar opinion of itself.

Pro users ask Apple all the time about being able to use NVIDIA GPUs in the Mac, but Apple has decided to go its own way with regards to Metal and machine learning, which doesn’t jive with NVIDIA’s goals. NVIDIA’s GPP, their “Embrace, Extend, Extinguish” approach, and their CUDA (AI and ML) ambitions just don’t line up with Apple’s at all from a business strategy standpoint. NVIDIA is more of a roadblock and a competitor in this area, whereas AMD is a much better partner. Guess which one Apple wants.

There is a chasm between the two companies and neither side has any interest in building a bridge, for a lot of different reasons.
 
nVidia GPUs do support Metal in High Sierra -- this has already been said multiple times. nVidia has ceased development of its Web Drivers, so at present nVidia is a dead end on a Mac running OS X; but there is no reason Mac Pro 7,1 owners can't run Windows 10 in a Boot Camp partition and use any nVidia GPU they want.

The PCIe slots on the new Mac Pro can use regular GPUs just fine (in fact they can probably take any PCIe card you want); you just need to buy the Belkin cable kit so you can get power from the motherboard if the card requires it. The "extra" PCIe-looking slots are for MPX cards, where the extra power required is delivered via the extra slot along with the routing of DisplayPort signals to the onboard Thunderbolt 3 ports.

** Edit, the below pic is from my Hackintosh; info on it in my sig.

[attachment 907078: Metal_HighSierra.png]
Thanks for the info, it’s good to know that the new Mac Pro can use an Nvidia card, at least for Windows.

There is a difference between Metal 1 and Metal 2: Metal 1 is up to High Sierra, while Mojave and Catalina use Metal 2. That’s why the web drivers built against Metal 1 do not work on Metal 2, even if you change the build version string in the web drivers.

Also, the web drivers only support up to Pascal in High Sierra; Nvidia never released any web driver for Mojave or Catalina, and Turing never had any web driver support, not even for High Sierra.

I also have a Hackintosh, currently waiting on the Intel 10980XE or the next best thing.

I have an Aorus 2080 Ti Extreme edition - yes, totally useless in macOS and freaking awesome in Windows.

👍
 