A while ago I purchased a NETSTOR Turbobox (Model NA211A) on eBay that included a Target card but no Host card. Luckily, I found an official Netstor host card on the aftermarket, and my PCIe slot-expansion dreams came true.
After researching the Turbobox's switch capabilities/specifications (PCIe 2.0 x8), I reasoned that by inserting two passive PCIe 3.0 x4 HBAs - each carrying one M.2 NVMe SSD (matching 512GB Samsung SM951 NVMe-only drives) - into the available PCIe 2.0 x8 slots on the Turbobox target board, I would have assembled the equivalent of a Highpoint SSD7101 or a Syba PEX40129 HBA.
To be clear, the Highpoint uses a PLX switch, the Syba uses an ASMedia switch, and the Turbobox uses an IDT switch (just like the cMP 5,1).
The issue is that when I stripe my matching SM951s with SoftRAID to get the full PCIe 2.0 x8 transfer rate (nominally ~3000 MB/s), I still see only the typical ~1500 MB/s (PCIe 2.0 x4) that each of these disks delivers individually.
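For anyone checking my expectations, here's a quick sketch of the PCIe 2.0 arithmetic behind those numbers (the ~75% real-world factor is my own rule-of-thumb assumption for packet overhead, nothing Turbobox-specific):

```python
# PCIe 2.0 signals at 5 GT/s per lane with 8b/10b line encoding,
# so each lane carries 5e9 * (8/10) bits/s = 500 MB/s of payload.
GT_PER_S = 5.0      # PCIe 2.0 signaling rate (gigatransfers/s)
ENCODING = 8 / 10   # 8b/10b encoding efficiency

per_lane_MBps = GT_PER_S * 1e9 * ENCODING / 8 / 1e6  # 500 MB/s per lane

for lanes in (4, 8):
    raw = lanes * per_lane_MBps
    # assume ~75% of raw payload rate as a realistic ceiling
    print(f"x{lanes}: {raw:.0f} MB/s raw, ~{raw * 0.75:.0f} MB/s realistic")
# x4: 2000 MB/s raw, ~1500 MB/s realistic
# x8: 4000 MB/s raw, ~3000 MB/s realistic
```

So ~1500 MB/s is exactly what a single x4 link should give, and ~3000 MB/s is what the striped pair should reach if the upstream link really negotiates x8.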
Yesterday I ran the 'pcitree' bash script that @joevt provided to forum members (results below), and I need some help interpreting the output to find the bottleneck that limits the link between the cMP and the Turbobox to PCIe 2.0 x4.
ANY feedback - particularly on how to read the link-width info in the second and third columns - is appreciated.
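In case it helps others reading along: those per-device speed/width values presumably come from each device's PCIe Link Status register. Here's a minimal decoding sketch (my own helper for illustration, not part of joevt's script), following the bit layout in the PCIe spec:

```python
def decode_link_status(lnksta: int) -> str:
    """Decode a 16-bit PCIe Link Status register value.

    Bits 3:0 = current link speed (1 = 2.5 GT/s, 2 = 5 GT/s, 3 = 8 GT/s)
    Bits 9:4 = negotiated link width (number of lanes)
    """
    speeds = {1: "2.5 GT/s (Gen1)", 2: "5 GT/s (Gen2)", 3: "8 GT/s (Gen3)"}
    speed = speeds.get(lnksta & 0xF, "unknown")
    width = (lnksta >> 4) & 0x3F
    return f"{speed}, x{width}"

# Example: 0x0042 decodes to "5 GT/s (Gen2), x4" --
# i.e. the kind of link I appear to be stuck with.
print(decode_link_status(0x0042))
```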