
Jupeman

macrumors regular
Original poster
Jan 13, 2008
I have a base config MP 7,1 so it comes with just the two Thunderbolt 3 ports on the back and the two on top. I find the two on top to be less useful for long term connections, such as additional arrays or monitors. I'd rather have those connections simply coming out the back.

I will admit that coming from a 6,1, it has been a while since my old 3,1 MP days that I worried about PCI upgrade cards, so I need some help understanding what I can install and cannot. I see options for PCIe Thunderbolt 3 cards out there but many seem to have limitations on what board they require, whether they work with the Mac, etc.

Can anyone point me in the right direction?
 
There are a few vendors working on right-angle adapters for the top ports so they can basically route to the back of the case instead of straight up. It's still unclear whether they'll meet the 40Gbps standard with those.
 
Hmmm, adding a WX5700 when it comes out? 🤓 I honestly was expecting Apple would rather sell the I/O card separately...
 
There are a few vendors working on right-angle adapters for the top ports so they can basically route to the back of the case instead of straight up. It's still unclear whether they'll meet the 40Gbps standard with those.

I figure I'm using the top ports for basic things, like iPad/iPhone charging or other "come and go" peripherals. Having something that angles the cables doesn't change my basic complaint and need for more ports. I appreciate your response, but I'm looking for an internal card-based addition. Like an additional Apple Network card that already came with the machine (but given that they restrict that card to a specific port, I doubt we will see them sold separately?).
 
It appears that so far your only option is adding more GPU.

Other than that... let the daisy chaining begin!
 
It appears that so far your only option is adding more GPU.

Other than that... let the daisy chaining begin!

I am not opposed to adding a new GPU or replacing the 580X in the machine now, at least at some point. But your response is what I'm worried about: all these great slots but "not much" to put in them. New machine, of course, so perhaps product makers will be rolling stuff out.
 
Your basic things are USB/USB-3/USB-C, they're not TB3. If you need TB3 bandwidth, you might need to use them for more than basic things.
 
I am not opposed to adding a new GPU or replacing the 580X in the machine now, at least at some point. But your response is what I'm worried about: all these great slots but "not much" to put in them. New machine, of course, so perhaps product makers will be rolling stuff out.
Yeah, got you... what do you plan to attach through those TB3 ports?
 
Not making light of your situation, but I find this amusing. You have all the PCIe slots in the world and want more Thunderbolt expansion options. 6,1 owners had all the Thunderbolt in the world (6), but wanted PCIe expansion.
I guess years with the trashcan require some adjustment now. I understand him though; I never missed any PCIe on the 6,1, except for better GPUs.
 
Would be nice to see vendors find a way for TB1/TB2 storage devices to properly connect via direct USB-C (not TB3) to solve some of these setup issues. Many people have external RAIDs that are not going anywhere when they replace the machine itself. No need to tie up TB3 40Gbps bandwidth for those if they're not already TB3 devices.

MANY manufacturers abandoned the PCIe expansion-card-style RAID setups (like HDPro/HDPro2/HDOne) in favor of Promise/Pegasus-style solutions via TB3/USB3.1/USB3.2. Not sure going back to those PCIe versions even makes sense: less compatibility across the board with iMac/iMacPro/MacMini/MBP, so fewer sales, less revenue, and more "R&D costs" being passed along for little to no real long-term benefit for a niche product.
 
Would be nice to see vendors find a way for TB1/TB2 storage devices to properly connect via direct USB-C (not TB3) to solve some of these setup issues. Many people have external RAIDs that are not going anywhere when they replace the machine itself. No need to tie up TB3 40Gbps bandwidth for those if they're not already TB3 devices.

MANY manufacturers abandoned the PCIe expansion-card-style RAID setups (like HDPro/HDPro2/HDOne) in favor of Promise/Pegasus-style solutions via TB3/USB3.1/USB3.2. Not sure going back to those PCIe versions even makes sense: less compatibility across the board with iMac/iMacPro/MacMini/MBP, so fewer sales, less revenue, and more "R&D costs" being passed along for little to no real long-term benefit for a niche product.
I for myself am super happy that I had almost nothing internal on the trashcan, and can just hook up my Promise Pegasus to the nMP and everything will be as it was.
Sure, it sucks a bit to give up a TB3 port for these older TB1/TB2 devices, but it should mean that I can really daisy-chain 2 or 3 of these TB2 RAIDs in a row and still get maximum bandwidth, right?
 
True 40Gbps passthrough requires all devices to be 40Gbps, and even then there are drop-offs along the way with many variables at play. Most of the time they are not noticed since you're not saturating the 40Gbps bandwidth, even with things like "high speed" RAIDs. You'll notice more with NVMe RAID via TB enclosure as that's actually getting closer to the theoretical maximum throughput.

Then there's the question of whether it matters. Especially past ~3500MB/s with NVMe in RAID, for most workflows the answer is no. If it does matter, you know it, and you're going PCIe anyway.

When testing GPU via eGPU in TB3 you can also start to see where this trails off depending on location in the chain. Closest to the host gets the best scores 90%+ of the time in my testing.
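As a rough sketch of that saturation point (Python; the usable-payload figure is my assumption, since TB3's 40Gb/s line rate never translates into 40Gb/s of PCI-e data):

```python
# Sketch, not a spec: compare a device's real-world throughput against the
# usable PCI-e payload of a TB3 link. TB3 signals at 40Gb/s, but PCI-e data
# is capped near x4 PCIe 3.0; 2800MB/s here is an assumed round figure.

TB3_USABLE_MB_S = 2800  # assumed usable PCI-e payload over TB3

def saturates_tb3(device_mb_s: float) -> bool:
    """True if the device alone can fill the usable TB3 PCI-e bandwidth."""
    return device_mb_s >= TB3_USABLE_MB_S

print(saturates_tb3(1000))   # TB2-era Pegasus RAID -> False, nowhere close
print(saturates_tb3(3500))   # NVMe RAID -> True, it hits the ceiling
```

Which matches the point above: spinning-disk RAIDs never notice the ceiling; NVMe RAID does.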
 
If you just need more USB-C ports and don't need Thunderbolt speeds, then put a Sonnet 4-port USB-C card in a PCIe slot.

If you buy the XDR monitor or use the LG 5K monitor, they both also come with extra USB-C ports, though the XDR's ports, when it's run at full 6K resolution, are basic USB 2.0 USB-C ports for charging or slow syncing.

Sonnet 4-port USB-C PCIe card

or less ports but more wattage into the USB-C ports

Sonnet 2-port USB-C PCIe card 15w charging
 
True 40Gbps passthrough requires all devices to be 40Gbps, and even then there are drop-offs along the way with many variables at play. Most of the time they are not noticed since you're not saturating the 40Gbps bandwidth, even with things like "high speed" RAIDs. You'll notice more with NVMe RAID via TB enclosure as that's actually getting closer to the theoretical maximum throughput.

Then there's the question of whether it matters. Especially past ~3500MB/s with NVMe in RAID, for most workflows the answer is no. If it does matter, you know it, and you're going PCIe anyway.

When testing GPU via eGPU in TB3 you can also start to see where this trails off depending on location in the chain. Closest to the host gets the best scores 90%+ of the time in my testing.

Well, I have two Promise Pegasus RAIDs, one TB2, one TB1, and a LaCie Little Big Disk 2 (TB2). And an HDMI projector that is currently daisy-chained on the cMP.
It would be wonderful if I could just daisy-chain all four of them together behind a single TB3 port instead of buying multiple adapters. But I'm not sure about the bandwidth.
The LaCie and Pegasus2 can both reach about 1000MB/s, the other Pegasus1 just about 600.
 
Well, I have two Promise Pegasus RAIDs, one TB2, one TB1, and a LaCie Little Big Disk 2 (TB2). And an HDMI projector that is currently daisy-chained on the cMP.
It would be wonderful if I could just daisy-chain all four of them together behind a single TB3 port instead of buying multiple adapters. But I'm not sure about the bandwidth.

Need multiple adapters why? Something that is TBv1 is pretty hobbled, so coupling it to 1-2 TBv2 devices (especially ones with the initial controller in them) won't cost much if they are HDDs.

If it's an SSD, you'd want it out of a TBv2 chain anyway.

The LaCie and Pegasus2 can both reach about 1000MB/s, the other Pegasus1 just about 600.

A PCI-e v2 x2 link is 10Gb/s (you have two 1.0GB/s arrays and a 0.6). Even switched and scaled to MB/s, you would only need two such links (and it would still be the case that moving the SSDs over to TBv3 enclosures would leave less bandwidth on the floor). The older the TB controller, the more hamstrung it is in PCI-e bandwidth allocation. The first two generations assigned to HDDs are a far better match for their partially baked implementations of Thunderbolt.
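As a back-of-envelope check on that link math (a sketch; the throughput figures are the ones quoted in this thread, and the 1250MB/s payload figure is just the straight 10Gb/s conversion, ignoring protocol overhead):

```python
# Rough link arithmetic for the arrays discussed in this thread.
# A PCI-e v2 x2 link is 10Gb/s, i.e. ~1250MB/s of payload (overhead ignored).

LINK_MB_S = 10_000 // 8          # 10Gb/s -> 1250MB/s

arrays_mb_s = [1000, 1000, 600]  # Pegasus2, Little Big Disk 2, Pegasus1 (quoted peaks)

total = sum(arrays_mb_s)
print(total)                     # 2600MB/s combined peak
print(total / LINK_MB_S)         # ~2.1, i.e. roughly two links' worth
```

So the three arrays together only amount to about two such links, and that's assuming all of them hit their peak at the same instant.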
 
Need multiple adapters why? Something that is TBv1 is pretty hobbled, so coupling it to 1-2 TBv2 devices (especially ones with the initial controller in them) won't cost much if they are HDDs.

If it's an SSD, you'd want it out of a TBv2 chain anyway.



A PCI-e v2 x2 link is 10Gb/s (you have two 1.0GB/s arrays and a 0.6). Even switched and scaled to MB/s, you would only need two such links (and it would still be the case that moving the SSDs over to TBv3 enclosures would leave less bandwidth on the floor). The older the TB controller, the more hamstrung it is in PCI-e bandwidth allocation. The first two generations assigned to HDDs are a far better match for their partially baked implementations of Thunderbolt.
If I go Mac Pro -> TB2 SSD -> TB2 HDD RAID -> TB1 HDD RAID,
the SSDs would not be able to run at 1GB/s anymore? Like, all devices would be slowed down by the TB1 RAID?

I would really like to move the two NVMe cards from the LaCie Little Big Disk SSD RAID internally, but I couldn't find any information on how to open this thing without damaging it.
 
Not making light of your situation, but I find this amusing. You have all the PCIe slots in the world and want more Thunderbolt expansion options. 6,1 owners had all the Thunderbolt in the world (6), but wanted PCIe expansion.
I guess years with the trashcan require some adjustment now. I understand him though; I never missed any PCIe on the 6,1, except for better GPUs.

This is basically it: I'm in transition to the new machine, as many of us will be. I lived in the 6,1 world for 6 years and now am happy to have flexibility (particularly for new GPUs in future months/years). But with those 6 years came a TB2-based disk array and a fibre-channel-based array that I used Promise's Thunderbolt converter for. I now have those daisy-chained together into one of the back TB3 ports. My 5K monitor is in the other. An immediate "problem" is now hooking up my old Thunderbolt Apple display as a 2nd monitor, but do I want to run a converter off the top of the nMP? Not really. The proximity of my arrays is such that I can't practically daisy-chain the monitor in line with those arrays, either. This led me to "I need more TB3 ports," and I'm not sure how to do it without an expensive GPU update.

I do appreciate the option of dropping the Sonnet USB-C PCIe card in there, but I think I need TB3... If the four native ports on the nMP were just on the back card, I'd be good. For now I will just use the top ports, but there needs to be a better solution. The top ports are just kind of awkward, imo.
 
This is basically it: I'm in transition to the new machine, as many of us will be. I lived in the 6,1 world for 6 years and now am happy to have flexibility (particularly for new GPUs in future months/years). But with those 6 years came a TB2-based disk array and a fibre-channel-based array that I used Promise's Thunderbolt converter for. I now have those daisy-chained together into one of the back TB3 ports. My 5K monitor is in the other. An immediate "problem" is now hooking up my old Thunderbolt Apple display as a 2nd monitor, but do I want to run a converter off the top of the nMP? Not really. The proximity of my arrays is such that I can't practically daisy-chain the monitor in line with those arrays, either. This led me to "I need more TB3 ports," and I'm not sure how to do it without an expensive GPU update.

I do appreciate the option of dropping the Sonnet USB-C PCIe card in there, but I think I need TB3... If the four native ports on the nMP were just on the back card, I'd be good. For now I will just use the top ports, but there needs to be a better solution. The top ports are just kind of awkward, imo.
What kind of 5K display do you have attached to a single TB2 port on your trashcan??
 
I have a base config MP 7,1...

...so it comes with just the two Thunderbolt 3 ports on the back and the two on top.

What? Did you receive your Mac Pro 7,1 already? Photos, or it did not happen! :)

You cannot order the new Mac Pro without a graphics card. All MPX modules, including the Radeon Pro 580X come with four Thunderbolt 3 ports. Why do you not use those?
 
If I go Mac Pro -> TB2 SSD -> TB2 HDD RAID -> TB1 HDD RAID,
the SSDs would not be able to run at 1GB/s anymore? Like, all devices would be slowed down by the TB1 RAID?

Well,

Mac Pro -> TB3-to-TB2 adapter -> TB2 SSD -> TB2 HDD -> TB1 HDD RAID.

You'd drop down to TBv2 after the adapter. :) Then it's TBv2 between the two TBv2 devices. The last TBv2 device's controller should buffer against the TBv1 leaking back upstream. TBv1 just physically works differently, so all of that has to be "lifted" and redistributed.

So yeah, that is right. Slowest and oldest stuff at the end of the line (oldest being the "tie breaker" for pushing further back).

Old and slow video, though, you'd probably want on a shorter, different chain. The video flow controls have changed over the evolution, and if the controllers in the chain all decide to go "lowest common denominator" there, then they'll probably waste substantive bandwidth. The video data will hog more bandwidth (even if maybe not using it), so pragmatically that would be a drop in PCI-e flow, because PCI-e gets what's left (video has lower latency tolerances, so it gets priority).
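The ordering rule here (slowest furthest from the host, oldest generation as the tie-breaker) can be sketched as a simple sort; the device list and speeds below are illustrative, not measured:

```python
# Sketch of the chain-ordering rule: fastest devices closest to the host,
# with older Thunderbolt generations breaking ties toward the end of the chain.

devices = [
    {"name": "TB1 HDD RAID", "tb_gen": 1, "mb_s": 600},
    {"name": "TB2 SSD",      "tb_gen": 2, "mb_s": 1000},
    {"name": "TB2 HDD RAID", "tb_gen": 2, "mb_s": 800},
]

# Sort key: higher throughput first; for equal throughput, newer gen first.
chain = sorted(devices, key=lambda d: (-d["mb_s"], -d["tb_gen"]))
print(" -> ".join(d["name"] for d in chain))
# TB2 SSD -> TB2 HDD RAID -> TB1 HDD RAID
```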


I would really like to move the two NVMe cards from the LaCie Little Big Disk SSD RAID internally, but I couldn't find any information on how to open this thing without damaging it.

Ah, those are a bit of a pain.
I appreciate your response, but I'm looking for an internal card-based addition. Like an additional Apple Network card that already came with the machine (but given that they restrict that card to a specific port, I doubt we will see them sold separately?).

The Mac Pro I/O card? It will be sold, but it only fits and is supported in slot 8 on the Mac Pro. No place else (not even other slots in the Mac Pro 2019).

But isn't it a 'hack' card that happens to work? Probably not. The same kludge that folks are using on the 5,1 could probably be used on the MP 2019, but it would still probably be just a kludge, since it's not hooking up the GPIO connector.

Outside of the 580X, I don't think there is big demand here. All the other GPU cards get you another 4 TBv3 ports. Eight is more than the six anyone who has been sitting on the MP 2013 is used to. If you 'toss' the top two... you're right back to six, as before.

When Apple replaces the 580X with a lower-cost GPU card, I suspect it will probably come with two TB ports (depends on whether AMD adds some ports to the base model so there are 8 to work with, or Apple will just drop one of the HDMI ports: two TBv3/USB4 ports and one HDMI 'shared' with those two).
 
What? Did you receive your Mac Pro 7,1 already? Photos, or it did not happen! :)

You cannot order the new Mac Pro without a graphics card. All MPX modules, including the Radeon Pro 580X come with four Thunderbolt 3 ports. Why do you not use those?
Wrong, the 580X has zero TB ports, only two HDMI ports.
Well,

Mac Pro -> TB3-to-TB2 adapter -> TB2 SSD -> TB2 HDD -> TB1 HDD RAID.

You'd drop down to TBv2 after the adapter. :) Then it's TBv2 between the two TBv2 devices. The last TBv2 device's controller should buffer against the TBv1 leaking back upstream. TBv1 just physically works differently, so all of that has to be "lifted" and redistributed.

So yeah, that is right. Slowest and oldest stuff at the end of the line (oldest being the "tie breaker" for pushing further back).

Old and slow video, though, you'd probably want on a shorter, different chain. The video flow controls have changed over the evolution, and if the controllers in the chain all decide to go "lowest common denominator" there, then they'll probably waste substantive bandwidth. The video data will hog more bandwidth (even if maybe not using it), so pragmatically that would be a drop in PCI-e flow, because PCI-e gets what's left (video has lower latency tolerances, so it gets priority).




Ah, those are a bit of a pain.


The Mac Pro I/O card? It will be sold, but it only fits and is supported in slot 8 on the Mac Pro. No place else (not even other slots in the Mac Pro 2019).

But isn't it a 'hack' card that happens to work? Probably not. The same kludge that folks are using on the 5,1 could probably be used on the MP 2019, but it would still probably be just a kludge, since it's not hooking up the GPIO connector.

Outside of the 580X, I don't think there is big demand here. All the other GPU cards get you another 4 TBv3 ports. Eight is more than the six anyone who has been sitting on the MP 2013 is used to. If you 'toss' the top two... you're right back to six, as before.

When Apple replaces the 580X with a lower-cost GPU card, I suspect it will probably come with two TB ports (depends on whether AMD adds some ports to the base model so there are 8 to work with, or Apple will just drop one of the HDMI ports: two TBv3/USB4 ports and one HDMI 'shared' with those two).
Thank you again, you have your information together, sir!
By video, you mean if I hook up that projector at the end of the chain, it might have implications for the data connection of the RAIDs?
 
.... All MPX modules, including the Radeon Pro 580X come with four Thunderbolt 3 ports. Why do you not use those?

Not the 580X itself. "Use the Radeon Pro 580X MPX Module with your Mac Pro" (it's on the Mac Pro tech specs page too).

The GPU MPX modules that Apple sells separately in a kit come with four TB ports. The 580X doesn't come in a kit. It is basically there for folks who want to buy a Mac Pro but don't particularly care about the GPU (or at least not the Apple one; pretty good chance they will buy another).

For most audio work the 580X is likely overkill, and those folks likely don't have super-color-gamut monitors. Two HDMI connectors work with any relatively modern mainstream/general-usage monitor. (There are folks still clinging to VGA and DVI, but...)

Similarly, for the rack version that isn't assigned to heavyweight GPU computational work... the 580X will probably do just fine (basically no monitor hooked up).

The W5700X will eventually be a "cheaper than Vega II" option, but that too is probably going to be relatively steep just for TBv3 ports (that isn't going to be a good value proposition).
Wrong, the 580X has zero TB ports, only two HDMI ports.
....

It "feeds" the four TB ports on the Mac Pro DisplayPort video streams. It just doesn't add more TB ports to the pile (it is coupled indirectly). I think some folks get sidetracked when Apple's docs mention the "four ports of the Mac Pro" in the 580X docs and driving 2 XDRs.
 
Would be nice to see vendors find a way for TB1/TB2 storage devices to properly connect via direct USB-C (not TB3) to solve some of these setup issues. Many people have external RAIDs that are not going anywhere when they replace the machine itself. No need to tie up TB3 40Gbps bandwidth for those if they're not already TB3 devices.

This will change over time as USB4 rolls out (with Thunderbolt as a major primary option; folks can still tap-dance around including it in USB4 in some contexts). Part of this too has to do with new products still clinging to older Thunderbolt controllers. For example, OWC just dropped a new enclosure, the Thunderbay 4 Mini. https://blog.macsales.com/56826-introducing-the-thunderbay-4-mini/
Jump to the specs: "Thunderbolt: Intel JHL6540". Newest? Not really.

https://ark.intel.com/content/www/us/en/ark/products/series/87742/thunderbolt-3-controllers.html

That's a circa-2016 "Alpine Ridge" one instead of a circa-2018 "Titan Ridge" (the latter added USB-C host capability: https://www.anandtech.com/show/12228/intel-titan-ridge-thunderbolt-3 ). The first half of 2019 would be the time a TB controller bump would typically come along. I would expect more "integrated with USB" than anything 'faster'. At some point closer to 2021 we may get some 3rd-party controllers that are specialized for SATA storage; PCI-e storage and USB-C doesn't really make much sense for Thunderbolt.

I suspect some peripheral vendors have stalled, hoping that more 3rd parties will show up with controllers. They could then either play Intel against a new vendor for better prices or get something that makes their implementation more cost effective (or some combination of those two). USB4 slowly getting to finalization put that play on the "very slow boat" path.





MANY manufacturers abandoned the PCIe expansion-card-style RAID setups (like HDPro/HDPro2/HDOne) in favor of Promise/Pegasus-style solutions via TB3/USB3.1/USB3.2. Not sure going back to those PCIe versions even makes sense.

I think it's more that several vendors kept making "real" RAID cards and just dropped doing macOS drivers. VMware, Linux, Unix, Windows Server... yes. macOS, no. There were other folks who piggybacked off that core group with somewhat rebadged stuff, and they faded more than the core did.

The quirky thing for Apple at the moment is that they are about to make folks who have drivers do a lot of work adapting to the kernel-interface overhaul they are rolling out. Getting folks to come back when it is close to starting over from scratch, for a very small user pool, is probably going to be a challenge.

It makes sense for the more expensive add-in cards, which were typically coupled to other Macs in the lineup via a Thunderbolt PCI-e enclosure (there are more than several vendors of those).

The Sonnet Echo Express III-D has a PCI-e card compatibility chart in the tech specs (a PDF file linked at that page). That is a long list for "almost no products in that space".
 
True 40Gbps passthrough requires all devices to be 40Gbps, and even then there are drop-offs along the way with many variables at play. Most of the time they are not noticed since you're not saturating the 40Gbps bandwidth, even with things like "high speed" RAIDs. You'll notice more with NVMe RAID via TB enclosure as that's actually getting closer to the theoretical maximum throughput.

The PCI-e "on/off" ramps to the Thunderbolt network are a bit under x4 PCI-e v3 (~30Gb/s < 32Gb/s). So there are only corner-case ways for PCI-e-only traffic to saturate the Thunderbolt backhaul: you'd have to have multiple pairs of devices pointing at two different exit/end points. As long as everyone points at the host (or consumes writes from the host), then the bisection bandwidth is limited to the host's "on/off" ramp. The number of devices that do non-host DMA data transfers isn't very large, though.
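A minimal sketch of that saturation argument, assuming the round numbers above (~30Gb/s per PCI-e ramp, 40Gb/s backhaul):

```python
# Sketch: why host-bound PCI-e traffic alone can't saturate the TB backhaul.
# Each PCI-e on/off ramp onto the Thunderbolt network is assumed ~30Gb/s,
# while the backhaul link itself carries 40Gb/s.

RAMP_GB_S = 30      # approx. bandwidth of one PCI-e on/off ramp
BACKHAUL_GB_S = 40  # TB3 link rate

# All devices talking to the host: capped by the host's single ramp.
host_bound = min(RAMP_GB_S, BACKHAUL_GB_S)
print(host_bound)   # 30 -> the 40Gb/s link is never full

# Corner case: two device pairs doing peer-to-peer DMA through the same
# segment can, in principle, stack two ramps onto one backhaul.
peer_to_peer = min(2 * RAMP_GB_S, BACKHAUL_GB_S)
print(peer_to_peer) # 40 -> saturated
```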

If you throw in any DisplayPort traffic, the PCI-e data gets de-prioritized.

Anyone running an XDR who then puts something interesting on its USB-C ports and pushes data out can get close to saturating 40Gb/s, but that is only with the newest TB controllers ('Titan Ridge' class). The ones before won't.

The evolution of TB controllers has incrementally enabled more PCI-e data throughput. If you get all new devices with all-latest-gen controllers, you can do better than previous generations. But as pointed out... the vast majority of peripheral vendors aren't moving up right now.


When testing GPU via eGPU in TB3 you can also start to see where this trails off depending on location in the chain. Closest to the host gets the best scores 90%+ of the time in my testing.

Thunderbolt isn't latency free. The more hops you take, the more "overhead" you're going to incur. This could possibly get worse with USB4, because they seem to want to enable "hubs" instead of the far simpler switch-routed straight daisy chain. (For the same cost, I think that is highly unlikely to be faster or lower latency. USB's topology isn't conducive to highest speed at lower cost.)

If you put outbound display traffic on the same daisy chain and try to write, the slowdown would hit the closest one too, though. The flow control could be noisier downstream until the video gets off the chain.
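A toy model of that per-hop overhead; the latency and transfer-time numbers are invented for illustration, not measurements:

```python
# Toy model: each Thunderbolt hop adds a bit of latency, so a device's
# effective small-transfer throughput degrades with its position in the chain.

HOP_LATENCY_US = 2.0        # assumed extra round-trip latency per hop
TRANSFER_US_AT_HOST = 50.0  # assumed time for one small transfer at hop 0

def relative_throughput(hops: int) -> float:
    """Fraction of hop-0 throughput left after `hops` extra hops."""
    return TRANSFER_US_AT_HOST / (TRANSFER_US_AT_HOST + hops * HOP_LATENCY_US)

for hops in range(4):
    print(hops, round(relative_throughput(hops), 3))
```

The point isn't the exact numbers, just the shape: closest to the host wins, and the penalty compounds the further down the chain you sit.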
 