Reading about them is the closest I'll ever get to one. Unless...

(Do they have display units out? Will they? Will they be chained to the floor? Asking for a, um, 'friend':cool:)
I'm surprised Apple didn't charge an additional $499 to download the PDFs.



I think that was the intent behind the high-end 2018 Mac Mini. It was focused on the pro (or prosumer) market. It's an okay machine, but they really should have had an option for a better internal GPU. I'm finding that an eGPU just doesn't cut it.

Mac Mini Pro? Will it be Atom processors in the old trash can housings? Hmm... :oops:😂
 
Nothing wrong with that. I've seen one at the local Apple Store in Palo Alto and it's very nice. You do need a Mac that can adequately drive it, though. There's a guy on the forum who now has and uses two, and has been a good resource for firsthand information.
I'm planning on getting the new 13" MacBook Pro (when it gets released with 32GB). Hopefully that can drive it.
 
Pragmatically, it really isn't about slots 2 and 4 being disabled. You physically can't get to them in a practical way when a full-size MPX module is installed.
The PCIe switch has 96 lanes. 32 are connected to the CPU, 64 are connected to the slots. That leaves none for the 4 Thunderbolt 3 controllers that can be installed in the MPX slots. Therefore, they probably share lanes with slot 2 and slot 4. Slot 4 has 16 lanes, so 8 lanes are wasted if you use a full height MPX module there (unless you have a super low profile PCIe x8 riser cable that can fit under the MPX module - assuming that the Thunderbolt controllers use the upper 8 lanes).
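
A quick sanity check of that arithmetic, as a small Python sketch. The only inputs are the lane counts quoted above; the conclusion in the comments is the same inference, not a measured fact:

[CODE]
# Back-of-envelope check of the switch's lane budget.
# Figures come straight from the post above; everything else is inference.
switch_lanes   = 96
uplink_to_cpu  = 32   # two x16 links from the switch up to the CPU
lanes_to_slots = 64   # total lanes wired out to the physical slots

spare = switch_lanes - uplink_to_cpu - lanes_to_slots
print(f"Lanes left over for the 4 MPX Thunderbolt controllers: {spare}")
# -> 0, which is why those controllers would have to share the lanes
#    already allocated to slots 2 and 4.
[/CODE]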

$6000 and you have to settle for shuffling lanes through a PCIe switch. Sad.
The setup is like having two PCIe 3.0 x16 DMI connections to the CPU, compared to a normal PC that has only a single PCIe 3.0 x4 DMI connection. The overview doesn't show the Mac Pro's actual DMI connection to the PCH, or the devices connected to the PCH (SSDs, Ethernet, SATA, USB, audio, etc.).
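
For scale, here is a rough throughput comparison under standard PCIe 3.0 assumptions (8 GT/s per lane with 128b/130b encoding, and DMI 3.0 treated as roughly PCIe 3.0 x4). A sketch, not a benchmark:

[CODE]
# Rough per-direction throughput under standard PCIe 3.0 signaling:
# 8 GT/s per lane with 128b/130b encoding ~= 0.985 GB/s per lane.
gbps_per_lane = 8 * (128 / 130) / 8          # ~0.985 GB/s

mac_pro_uplink = 2 * 16 * gbps_per_lane      # two x16 links to the CPU
typical_dmi    = 4 * gbps_per_lane           # DMI 3.0 ~ PCIe 3.0 x4

print(f"Mac Pro switch uplink: ~{mac_pro_uplink:.1f} GB/s")   # ~31.5 GB/s
print(f"Typical PC DMI link:   ~{typical_dmi:.1f} GB/s")      # ~3.9 GB/s
[/CODE]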
 
Why? It's standard and allows MORE lanes and slots on the whole system. What would you be doing that would ever max out EVERY PCIe slot in one go? Even 2x Dual Vega IIs going at full pelt wouldn't max that out.

Also, blame Intel and AMD for not going as fast as your untenable expectations. Maybe you want infinite bandwidth?

A dual CPU system would have more than enough lanes to handle it. And I'm thinking longer term here. Sure, a Vega II won't fully utilize that... but what about in a couple of years, when we're all trying to shoehorn Vega 8s into them because Apple hasn't updated the machine in years and years again?

Lastly, it's not about maximum throughput. It's about the additional latency that's inherent any time you add a hardware layer between two endpoints. For some folks that's a very legitimate problem.
 
A dual CPU system would have more than enough lanes to handle it. And I'm thinking longer term here. Sure, a Vega II won't fully utilize that... but what about in a couple of years, when we're all trying to shoehorn Vega 8s into them because Apple hasn't updated the machine in years and years again?
....

Lots of folks "shoehorned" PCIe 3.0 GPU cards into PCIe 2.0 Mac Pro 2009-2012 systems just fine. They still worked better than the older cards that were there previously.

Growth in VRAM capacity can, for many workloads, offset the bandwidth to the card. (It takes longer to load up memory before starting the calculation, but if the job is largely computing on what is in VRAM, then that now-local traffic makes up the difference for the newer cards with dramatically better internal bandwidth.)
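
A toy model of that trade-off, with invented numbers purely for illustration. The point is the ratio, not the absolute figures:

[CODE]
# Toy model: when the working set fits in VRAM, a slower host link
# mostly just delays the one-time upload. All numbers are made up.
def job_time(dataset_gb, link_gb_per_s, compute_s):
    upload = dataset_gb / link_gb_per_s   # one-time transfer into VRAM
    return upload + compute_s

dataset_gb = 24.0    # working set that fits entirely on the card
compute_s  = 600.0   # ten minutes of on-card computation

for name, gb_per_s in [("PCIe 3.0 x16", 15.75), ("PCIe 2.0 x16", 8.0)]:
    total = job_time(dataset_gb, gb_per_s, compute_s)
    print(f"{name}: {total:.1f} s total, {dataset_gb / gb_per_s:.1f} s on upload")
# Halving the link speed adds ~1.5 s to a ~600 s job: noise compared
# to the gains from the newer card's internal bandwidth.
[/CODE]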

Slots 1 and 3 have direct connections to the CPU. There is no switch for those. If you try to do 4 GPUs, then switches come into play, but with just two of those 2-4-years-distant future GPUs, you could hook them directly to the CPU.

With a dual CPU system you'd run out of internal space and power for the PCIe card options. The Mac Pro is pragmatically at wall-plug limits. Dual-socket CPUs are also in a higher price tier (which only increases overall system costs even more). The Mac Pro seriously does not need to be priced even higher. (You could throw away single-thread clock speed to save a buck, but for a primarily single-user workstation that isn't a good move over a wide set of workloads.)
The PCIe switch has 96 lanes. 32 are connected to the CPU, 64 are connected to the slots. That leaves none for the 4 Thunderbolt 3 controllers that can be installed in the MPX slots. Therefore, they probably share lanes with slot 2 and slot 4. Slot 4 has 16 lanes, so 8 lanes are wasted if you use a full height MPX module there (unless you have a super low profile PCIe x8 riser cable that can fit under the MPX module - assuming that the Thunderbolt controllers use the upper 8 lanes).

Slot 4 has a 16-lane physical connection. It does not necessarily have 16 lanes of bandwidth. If you can't physically get to the slot, in no way are 8 lanes of bandwidth going "down the drain". If you have populated slot 1 with a full height MPX module, then 8 lanes of the bandwidth pool that slot 4 was sharing are going there. That is already gone before you've even populated MPX bay 2.

What slot 4 pragmatically allows is hooking a physically x16 card to a slot that probably provides less than x16 worth of bandwidth. How much less depends upon what is in the other slots, but nominally it is less than x16. If you populate most of the slots in the Mac Pro (or just put a heavy x16 consumer into slot 5), the bandwidth of slot 4 simply goes down in the vast majority of configurations.
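
One way to picture that shared pool is as oversubscription of the switch's 32-lane uplink. The sketch below assumes a simple worst-case fair split when everyone is busy; the real switch arbitration is more dynamic, so treat this as illustration only:

[CODE]
# Each switch slot can burst to its full electrical width, but all of
# them funnel through the same 32-lane uplink to the CPU.
# Assumes a naive fair split when oversubscribed; real arbitration differs.
UPLINK_LANES = 32

def uplink_share(my_lanes, other_active_lanes):
    demand = my_lanes + other_active_lanes
    if demand <= UPLINK_LANES:
        return float(my_lanes)
    return UPLINK_LANES * my_lanes / demand

print(uplink_share(16, 0))             # slot 4 alone: a full 16.0
print(round(uplink_share(16, 24), 1))  # busy x16 in slot 5 plus 8 lanes
                                       # of Thunderbolt: ~12.8
[/CODE]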


The notion of "snaking" cables under MPX modules is misguided. The design here doesn't particularly support that at all. I suppose someone will try to make it work in some happens-to-work hackery demonstration. It will happen to work, but it's not a particularly good "best practice". If the MPX module covers it, then don't use it.
 
Slot 4 has a 16-lane physical connection. It does not necessarily have 16 lanes of bandwidth.
All the slots (except slot 8) are physically x16. Slots 1, 3, 4, and 5 are electrically x16.

If you can't physically get to the slot, in no way are 8 lanes of bandwidth going "down the drain". If you have populated slot 1 with a full height MPX module, then 8 lanes of the bandwidth pool that slot 4 was sharing are going there. That is already gone before you've even populated MPX bay 2.

What slot 4 pragmatically allows is hooking a physically x16 card to a slot that probably provides less than x16 worth of bandwidth. How much less depends upon what is in the other slots, but nominally it is less than x16. If you populate most of the slots in the Mac Pro (or just put a heavy x16 consumer into slot 5), the bandwidth of slot 4 simply goes down in the vast majority of configurations.
You are saying that slot 4 loses lanes when using a full height MPX module in slot 1. That is not logical.

Compare the pictures on page 11 of the Mac Pro Technical Overview and the Apple support document at https://support.apple.com/en-us/HT210104 to see how the lanes of slot 2 and slot 4 change when a full height MPX module is installed in each slot. The Apple support document shows a half height MPX module in slot 1 (slot 2 gets 8 lanes and slot 4 gets 16 lanes). The Mac Pro Technical Overview shows two full height MPX modules, in slot 1 (slot 2 gets zero lanes) and slot 3 (slot 4 gets 8 lanes).
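
To summarize the two documented configurations side by side (values read off those two diagrams; nothing here beyond what they show):

[CODE]
# Lane assignments for slots 2 and 4 in the two documented setups.
configs = {
    "half-height MPX in slot 1 (HT210104)":             {"slot 2": 8, "slot 4": 16},
    "full-height MPX in slots 1 and 3 (Tech Overview)": {"slot 2": 0, "slot 4": 8},
}
for setup, lanes in configs.items():
    print(f"{setup}: {lanes}")
[/CODE]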

1) A full height MPX module (Vega II or Vega II Duo) in slot 1 uses all 8 lanes of slot 2 for the two Titan Ridge Thunderbolt controllers of the MPX module. A half height MPX module (580 Pro) doesn't have Titan Ridge Thunderbolt controllers and therefore allows 8 lanes for slot 2.

2) A full height MPX module in slot 3 uses 8 lanes of slot 4 for the two Titan Ridge Thunderbolt controllers of the MPX module. But slot 4 has 16 lanes, so that leaves 8 lanes for slot 4. If the module were half height, then slot 4 would be usable as an x8. A full height MPX module might still allow installing a riser cable if there is enough clearance. The cables at https://www.adt.link/product/R83.html or https://www.adt.link/x16.html are probably not low profile enough. https://www.adt.link/product/R12SL-FL.html is lower profile but might also not be low profile enough (and it's only x1). Here's a post that discusses a similar problem (and has many pictures):

The notion of "snaking" cables under MPX modules is misguided. The design here doesn't particularly support that at all. I suppose someone will try to make it work in some happens-to-work hackery demonstration. It will happen to work, but it's not a particularly good "best practice". If the MPX module covers it, then don't use it.
I'm not saying one should do that. I'm just saying that it is possible. If a person needed those slots, then they should choose GPUs that are not full height MPX modules. If additional slots are needed, then a PCIe expansion box can be connected.

Maybe someone will create a method to convert a full height MPX module into a half height MPX module using a different cooling method.
 