PCIE switch is a PEX 8724
That's 24 lanes.
- 8 for upstream
- 8 for two nvme
- 4 for ethernet
- 2 for usb
That leaves two unused? Am I missing something, or is that a waste of 2 lanes? Maybe they use two separate USB controllers so that they can each do 10 Gbps simultaneously?
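A quick tally of the allocation as listed above:

```python
# Lane budget for the PEX 8724 as described above (24 lanes total, allocation assumed).
lanes = {"upstream": 8, "two NVMe (x4 each)": 8, "ethernet": 4, "usb": 2}
used = sum(lanes.values())
print(f"{used} of 24 lanes accounted for, {24 - used} left over")  # 22 of 24, 2 left over
```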

The FAQ says the ethernet only uses 2 lanes so there could be 4 unused lanes?

I consider all such documentation suspect until someone dumps a pcitree or ioreg or whatever that shows link width and link rate for all devices on the PCIe card.
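Something like this would produce such a dump; a rough sketch, assuming IOPCIFamily publishes an "IOPCIExpressLinkStatus" property for each PCIe device (decoded per the PCIe Link Status register layout):

```python
# Rough sketch: print negotiated PCIe link width and speed for every IOPCIDevice.
# Assumes ioreg prints "IOPCIExpressLinkStatus" as a decimal integer.
import re
import subprocess

SPEEDS = {1: "2.5 GT/s (gen 1)", 2: "5 GT/s (gen 2)", 3: "8 GT/s (gen 3)", 4: "16 GT/s (gen 4)"}

out = subprocess.run(["ioreg", "-c", "IOPCIDevice", "-l"],
                     capture_output=True, text=True).stdout
for match in re.finditer(r'"IOPCIExpressLinkStatus"\s*=\s*(\d+)', out):
    status = int(match.group(1))
    width = (status >> 4) & 0x3F           # negotiated link width, bits 9:4
    speed = SPEEDS.get(status & 0xF, "?")  # current link speed, bits 3:0
    print(f"x{width} @ {speed}")
```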
 
Maybe they use two separate USB controllers so that they can each do 10 Gbps simultaneously?
Technical specs imply only one controller that delivers simultaneous 10 Gbps per port.

Picture seems to confirm this:
Screen Shot 2023-11-13 at 16.38.32.jpg
 
If I am not mistaken, the 10GbE Ethernet port needs OpenCore to work. I think this is not an issue now, as it has been integrated into OpenCore, but it's good to keep in mind.

The other issue is that you can't boot Windows/Linux with this card, only macOS.
I would really like to see feedback from someone using Dante; that would open a lot of options for recording and monitoring. Mac only, I see.
 
Technical specs imply only one controller that delivers simultaneous 10 Gbps per port.

Picture seems to confirm this:
That picture is from the video? A single USB controller means 10 Gbps per port but < 15.75 Gbps for both ports simultaneously (a gen 3 x2 uplink: 8 GT/s × 2 lanes × 128b/130b ≈ 15.75 Gbps).
 
SK Hynix PC401 (NVMe): 256GB, 512GB, and 1TB; ~2,700 MB/s read, ~1,450 MB/s write; 512 B (emulated), 4 KB physical; Yes; Unknown

I used a 1TB PC401 in a 2015 MacBook Pro 15 for way too long. The temps were consistently over 60°C. Now that SSPOLARIS prices have dropped a bit, I upgraded to a 1TB SSPOLARIS, and the temps are 30-40°C, which also makes the MBP feel cooler to the touch and like it's lasting longer on battery than before.
 
I used a 1TB PC401 in a 2015 MacBook Pro 15 for way too long. The temps were consistently over 60°C. ...
Nice, but this thread is about desktop Mac Pros...
 
Hi all,
Firstly, a big thanks to everyone contributing to this great forum.
I've been looking into getting a PCIe switch card for my 4,1>5,1, preferably a quad card.
My biggest confusion is about the physical connectors and their importance.
For instance, the Sabrent 4-disk card has a physical x4 connector that can be placed in a x4, x8, or x16 slot and is said in this forum to be able to reach 3000 MB/sec.

The Ceacent ANM24PE16 has a physical x16 connector and can only be run in a x16 slot.
Having a look at the pics, it's clear that there are 2x16 pins connected.
Ceacent on eBay

Another card (unbranded) has a physical x16 connector and is supposed to run in a x16 slot. This card has 2x8 connections, which is half of the length of the physical connector.
Link to eBay

Are there just different ways of "wiring" the cards, or do they differ in terms of disks sharing the same lanes, which would mean slower performance? Does it impact speed per individual disk, etc.?

Lastly does it matter at all in these older beasts or would it be important to consider these specs to get the most out of the read/write performance of the disks?
I have a music studio mainly working in ProTools.

Many thanks in advance for any useful input.
 
I've been looking into getting a PCIe switch card for my 4,1>5,1, preferably a quad card.
[...]
Are there just different ways of "wiring" the cards, or do they differ in terms of disks sharing the same lanes, which would mean slower performance? Does it impact speed per individual disk, etc.?
Really simple to look at it like this: four sticks x4 = 16 lanes, full speed for four sticks of NVMe.

You certainly don't want four sticks in an x4 or x8 slot.
 
Really simple to look at it like this: four sticks x4 = 16 lanes, full speed for four sticks of NVMe, and you only have one x16 slot left after the video card.

Other slots won't yield the full speed.

Now, the OWC Accelsior 4M2 will get close to 6,000 MB/s.

OWC Accelsior 4M2​

  • Supercharges Mac and PC: ideal for Mac Pro 2019, Mac Pro 2012 or 2010, and PC towers
  • Work faster: over 6,000MB/s real-world speed in RAID 0
 
My biggest confusion is about the physical connectors and their importance.
For instance, the Sabrent 4-disk card has a physical x4 connector that can be placed in a x4, x8, or x16 slot and is said in this forum to be able to reach 3000 MB/sec.
Two different things: electrical connection and physical connection.
The Mac Pro has four x16 physical slots but only two of them are electrically x16 slots. The other two slots are electrically x4 slots.

x4, x8, x16 are link widths indicating the number of lanes. A x16 slot can connect PCIe cards with x16, x8, x4, x2, or x1 lanes (electrically and physically).

x4 can achieve 3000 MB/s only if the slot is PCIe 3.0, or gen 3, or 8 GT/s. Those are different ways to indicate the link rate per lane.
The MacPro4,1 and MacPro5,1 are limited to the PCIe gen 2 link rate (5 GT/s).
The MacPro1,1 and MacPro2,1 are limited to gen 1 (2.5 GT/s).
The MacPro3,1 has two x16 gen 2 slots and two x4 gen 1 slots. The x4 slots can connect PCIe cards with x4 or x1 lanes. A PCIe card with x2 lanes will connect as x1.

You can connect PCIe cards that are physically narrower than the slot.
You can connect PCIe cards that are physically wider than the slot by using a riser.

The Ceacent ANM24PE16 has a physical x16 connector and can only be run in a x16 slot.
It can work in a physical x16 slot. The slot could be electrically x1, x4, x8, or x16.
It can work in a physically smaller slot using a riser adapter or cable.

Having a look at the pics, it's clear that there are 2x16 pins connected.
Are you counting metal pins on the card? Don't do that. All PCIe cards of the same physical connection size usually have the same number of pins. A PCIe lane consists of 4 pins, two on each side of the PCB. A PCIe lane is a bidirectional link using differential signalling (transmit and receive, + and -).

Maybe you are talking about the number of surface-mount components (resistors?) next to the pins. I think that is an indication of the number of physical lanes that are electrically connected. You can usually look at a picture of a PCIe card to see if all the physical lanes are electrically connected.

Another card (unbranded) has a physical x16 connector and is supposed to run in a x16 slot. This card has 2x8 connections, which is half of the length of the physical connector.
Yup that is definitely an x16 physical card with x8 electrical lanes. It uses the ASMedia ASM2824 which supports PCIe gen 3 x8 upstream connection.
https://www.asmedia.com.tw/product/249yq0aSx7zRFGJ9/7c5YQ79xz8urEGr1

Are there just different ways of "wiring" the cards, or do they differ in terms of disks sharing the same lanes, which would mean slower performance? Does it impact speed per individual disk, etc.?
The ASMedia ASM2824 is a PCIe switch. It supports gen 3 and 24 lanes total. The upstream is limited to 8 lanes which leaves 16 lanes for downstream devices. In this case, there are four downstream M.2 slots. An M.2 slot can have up to 4 PCIe lanes so 16 downstream lanes is exactly enough for that.

A PCIe gen 3 switch can convert the fast and narrow gen 3 x4 of the NVMe device connected to the M.2 slot to the slow and wide gen 2 x8 of the Mac Pro without much loss in performance.
upstream: 5 GT/s per lane x 8 lanes x 8b/10b = 32 Gbps.
one downstream device: 8 GT/s per lane x 4 lanes x 128b/130b = 31.5 Gbps.
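Here's the same arithmetic as a small Python sketch (numbers taken from the lines above; the only assumption is the standard encoding overhead, 8b/10b for gen 1/2 and 128b/130b for gen 3 and later):

```python
# Per-direction PCIe link bandwidth after encoding overhead.
def link_gbps(gt_per_lane: float, lanes: int) -> float:
    # gen 1/2 use 8b/10b encoding; gen 3 and later use 128b/130b
    encoding = 8 / 10 if gt_per_lane <= 5 else 128 / 130
    return gt_per_lane * lanes * encoding

print(link_gbps(5, 8))    # upstream, gen 2 x8         -> 32.0 Gbps
print(link_gbps(8, 4))    # one NVMe, gen 3 x4         -> ~31.5 Gbps
print(link_gbps(8, 16))   # four NVMe downstream total -> ~126 Gbps
```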

In this setup, you have 126 Gbps on the downstream side of the bridge and 32 Gbps on the upstream side of the bridge. It's like a USB 3.x hub. All the downstream USB 3.x devices can perform at max speed but if more than one tries to transmit or receive at the same time, then they will get limited by the bandwidth of the upstream connection.

Lastly does it matter at all in these older beasts or would it be important to consider these specs to get the most out of the read/write performance of the disks?
To get the most out of your gen 2 x16 slot, you would use a gen 4 x16 PCIe card.
upstream: 5 GT/s per lane x 16 lanes x 8b/10b = 64 Gbps.
one downstream device: 16 GT/s per lane x 4 lanes x 128b/130b = 63 Gbps.

A gen 5 x16 PCIe card could be slightly faster than a gen 4 PCIe card but would not be worth the expense since you will be hitting the upstream limit.

64 Gbps is 8000 MB/s, but in general you can only get ≈75% to 85% of that as actual data, i.e. ≈6000 MB/s to ≈6800 MB/s. The rest is overhead and inefficiencies.
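Under the same ≈75% to 85% efficiency assumption, a quick back-of-the-envelope check:

```python
# Gen 2 x16 upstream: 5 GT/s * 16 lanes * 8b/10b encoding = 64 Gbps = 8000 MB/s raw.
raw_mb_s = 5 * 16 * (8 / 10) * 1000 / 8
for efficiency in (0.75, 0.85):
    print(f"{efficiency:.0%}: ~{raw_mb_s * efficiency:.0f} MB/s")  # ~6000 and ~6800 MB/s
```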

If you are only connecting gen 3 NVMe devices, then a gen 3 x16 PCIe card is sufficient. A gen 3 x8 PCIe card is alright if you're not doing RAID 0 or won't be transmitting to or from multiple NVMe devices at the same time.

Note that PCIe has separate lanes for transmit and receive, so you should be able to transmit 6000 MB/s while receiving 6000 MB/s at the same time.
 
Many thanks to joevt for this extremely detailed info. I was aware of a lot of this already but it's a very good breakdown that I'm sure many others will also appreciate.

I was not counting pins but thinking about the actual connected components on different cards, not the pins themselves.
On some "x16" cards, for instance the Sonnet M.2 4x4, you can clearly see that they have (2x16) mounted components all the way along the pins, while others only have half of that, which made me wonder about the difference in performance.
 
Just wanted to post an update about an NVMe M.2 SSD that the list says does not work properly.

Let me preface this by saying that I am running these drives in an IO Crest SI-PEX40157, which uses an ASM2824 PCIe 3.0 switch, and that is most likely why I am not having any issues.

It seems similar to the Samsung 980 series, which won't work without a PCIe 3.0 switch card.

Anyways, I have been using the WD Black SN770 1TB for well over a month now and have had 0 kernel panics or any other issues with the drives. Speeds are great and the maximum to be expected with the setup.

Hope this helps anyone considering using these SSDs. I got them simply because Walmart carried them and I could pick them up locally right away for $55 apiece, and I didn't want to wait.
View attachment 2309021, View attachment 2309025
This is good news. I see that you are running Monterey. Which OpenCore are you using? Martin Lo? OC Legacy Patcher? Thank you!
 
Hullo.
Has anybody used a Corsair MP600 or MP700 to run OSX in an Intel 7,1 yet?
 
I've been looking into getting a PCIe switch card for my 4,1>5,1, preferably a quad card.
Your pick, the Ceacent ANM24PE16, is the best-value quad M.2 card currently available, at ~$150. With 2-3 drives in RAID 0, the PEX 8748 chip will easily reach the upper limit of ~6 GB/s speeds on either PCIe 2.0 x16 slot. My Sunweit ST536 (built on the PEX 8747) clocks in at ~2.5 GB/s R/W with a single P41; the newer chipset may improve on that to run marginally faster. Both options approximate the performance of the Highpoint SSD710X series, which have the best compatibility with the 5,1, allowing you to boot Windows, for example. These will provide the same functionality as the Highpoints at a third of the cost.

To get the most out of your gen 2 x16 slot, you would use a gen 4 x16 PCIe card.
The premium for a Gen 4 PCIe card is of little value unless you're future-proofing for a later mobo. Even then, OP and others would be better off spending the difference saved to splurge on the size/speed/cache spec options from the Gen 4 class of NVMe SSDs, to maximize performance and longevity, thus simplifying data migration and backward compatibility in any future upgrades.
 
Your pick, the Ceacent ANM24PE16, is the best value Quad M.2 card currently available, at ~$150.
Sadly, at least for me, the link is dead.

Is it this one?

Lieferumfang.jpg

I got one of these some two years ago off AliExpress for a little over 160 euros plus taxes and am very happy with it!

As you mention...

storage_speeds.png


...about 6000 MB/s sequential r/w with a RAID 0 of two 970 EVO Plus.
 
Is it this one?
That's the Sunweit ST536, which I've likewise had great results with. I also got mine around the same time as you, before it even had a model number, just a generic description. Seems like they're putting more effort into marketing after getting some solid reviews in the last year. It's still available here on AliExpress (or search "PLX8747" in DWYT store) for anyone interested.

I've seen prices ranging from $170-$220. Wholesale is $150, though some resellers are marking it up for $350-400+ on eBay.

Sadly, at least for me, the link is dead.
Searching for Ceacent ANM24PE16 on eBay returns 10+ listings similar to OP's original link, which have similar pricing and delivery times as the AliEx page I shared. I've not tested it myself but the chip is the "successor" to the Broadcom 8747.

Both cards should offer nearly identical performance, though given the choice today I'd probably still prefer the Sunweit ST536 for the aesthetic factor and build quality (minimalist shroud, black PCB, no logo) enough to pay the ~$30 premium.
 
...about 6000 MB/s sequential r/w with a RAID 0 of two 970 EVO Plus.
Screen Shot 2023-11-17 at 11.36.15 AM.png

Yup, and here's a quick speed run with just a single SSD (P41 2TB). Actually closer to 2800 MB/s R/W than what I quoted before. So only 2 drives in RAID 0 are needed to fully saturate the theoretical limit of the x16 link, leaving the other two slots on the card available to dual- or triple-boot multiple OSes if you'd like. Like you, I've been quite pleased with the results. Great value.
 
Yup, and here's a quick speed run with just a single SSD (P41 2TB). Actually closer to 2800 MB/s R/W than what I quoted before.
Yes, similar to what I get with a single 980 PRO. My write speeds are about 100 MB/s lower, which may well have to do with my card having just an 8747.

So only 2 drives in RAID 0 are needed to fully saturate the theoretical limit of the x16 link.
Yes, using more than two makes no sense, as only two are able to saturate the available bandwidth completely. I am very curious what speeds such a card would reach with single-blade or RAID configurations on the PCIe 3.0 slots of a 7,1.

Also, aside from not being able to boot in the first place, RAID would not be the best choice for a boot drive, given the massive drop in r/w speeds for random write transactions, in particular with small chunks of data, which is the most common situation on a system drive.
 
Can someone check something for me please, on a Mac that has never run Windows?
Have a look at your drives in the NVMExpress section. Is there a FAT32 EFI partition?

Screenshot 2023-12-21 at 19.40.48.jpg
 
Yes! EFI seems to be formatted as MS-DOS FAT32 by default. On all my disks containing an EFI partition, NVMe as well as SATA, the partition is FAT32.
That's good then, thanks. I thought there was something wrong with mine.
My M2 MacBook has no such partition, though.
Seems the EFI partition is only necessary when Windows is installed.
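For anyone who wants to check from the command line, a minimal sketch (assumes macOS with diskutil available):

```python
# Print any EFI partitions reported by diskutil (macOS).
import subprocess

out = subprocess.run(["diskutil", "list"], capture_output=True, text=True).stdout
for line in out.splitlines():
    if "EFI" in line:
        print(line.strip())  # e.g. "1: EFI EFI  209.7 MB  disk0s1"
```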
 