
Rico · macrumors member · Original poster · Jul 22, 2002
Greetings,

I have a 16-channel RR840A currently in a MP 3,1 in Slot 2 under High Sierra, and its link is being negotiated down to 2.5 GT/s:

RocketRAID 840A SATA Controller:

Name: RocketRAID 840A SATA Controller
Type: RAID Controller
Driver Installed: Yes
MSI: Yes
Bus: PCI
Slot: Slot-2
Vendor ID: 0x1103
Device ID: 0x0840
Subsystem Vendor ID: 0x1103
Subsystem ID: 0x0000
Revision ID: 0x00a1
Link Width: x8
Link Speed: 2.5 GT/s

HighPoint "Support" says the card should negotiate 5.0 GT/s in a PCIe 2.0 x8 or x16 slot (up to 8 GT/s on PCIe 3.0), good for up to 6000 MB/s in RAID 0/5; they suspect something is wrong with my firmware/Boot ROM, but they only tested on a 5,1, not a 3,1.

I'm maxed out at around ~1350 MB/s because the link won't negotiate above 2.5 GT/s. (With my current SSD config I didn't expect to clear much over 2900 MB/s, but I shouldn't be stuck at less than half that.) I'd like to grab a bit of a speed boost while I wait for purchase approval on a new 7,1.
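For context, ~1350 MB/s is roughly what a Gen1 (2.5 GT/s) x8 link tops out at in practice. Here's my own back-of-envelope arithmetic (not from HighPoint's specs), which is why I suspect the link speed and not the drives:

```shell
# PCIe 1.0/2.0 use 8b/10b encoding, so each lane carries roughly
# (GT/s * 1000 / 10) MB/s of payload before protocol overhead.
awk 'BEGIN {
  lanes = 8
  gen1 = 2.5 * 1000 / 10 * lanes   # 2.5 GT/s x8
  gen2 = 5.0 * 1000 / 10 * lanes   # 5.0 GT/s x8
  printf "Gen1 x8: %d MB/s theoretical\n", gen1
  printf "Gen2 x8: %d MB/s theoretical\n", gen2
}'
```

Real-world numbers land well below theoretical once TLP/framing overhead is paid, so ~1350 MB/s observed against a 2000 MB/s Gen1 x8 ceiling fits; at Gen2 the ceiling doubles to 4000 MB/s.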

I've read threads about other HighPoint cards (especially the SSD7101A) where some folks were able to boost their link width/speed on the 3,1 and 4,1/5,1 using a PCIe utility. Helpful folks on those threads include @dosdude1, @handheldgames, @joevt, @tsialex, @h9826790 and others.

Might anyone be able to guide me through any available procedure to increase my link speed to 5.0 GT/s?

Any links to existing guides/tutorials would be greatly appreciated. (I opted to start a new thread since this card isn't mentioned anywhere I can find on these forums).

Thanks (from a decade-plus-long lurker and fan of many helpful folks here)

Rico

EDIT: After bashing through the threads, I've got pciutils installed, and I finally got the expected results:

Code:
ricopro:pciutils-3.6.2 rico$ sudo nvram boot-args="debug=0x144"
ricopro:pciutils-3.6.2 rico$ sudo ./update-pciids

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  255k  100  255k    0     0   215k      0  0:00:01  0:00:01 --:--:--  216k
Done.

But I get stuck here trying to find the right port number:
Code:
ricopro:pciutils-3.6.2 rico$ sudo ./lspci
pcilib: Cannot open AppleACPIPlatformExpert (add boot arg debug=0x144 & run as root)
lspci: Cannot find any working access method.
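For anyone hitting the same wall: as I understand it from those threads, pciutils' lspci on macOS needs the debug=0x144 boot-arg to actually be active (setting it via nvram only takes effect after a reboot), DirectHW.kext loaded, and root. A quick checklist sketch (stock macOS command names; adjust if your setup differs):

```shell
# Check whether the prerequisites for pciutils' lspci are actually in place.
# NB: nvram boot-args changes only take effect after a reboot.
nvram boot-args 2>/dev/null | grep -q 'debug=0x144' \
  && echo "boot-arg active" || echo "boot-arg missing (set it and reboot)"
kextstat 2>/dev/null | grep -qi directhw \
  && echo "DirectHW.kext loaded" || echo "DirectHW.kext not loaded"
[ "$(id -u)" -eq 0 ] && echo "running as root" || echo "re-run with sudo"
```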

What am I missing here?

EDIT 2: Okay, so more thread bashing has me getting results with Darwin Dump:

OUTPUT:
Code:
01:00.0 RAID bus controller [0104]: HighPoint Technologies, Inc. Device [1103:0840] (rev a1)
    Subsystem: HighPoint Technologies, Inc. Device [1103:0000]
    Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
    Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
    Latency: 0, Cache Line Size: 256 bytes
    Interrupt: pin A routed to IRQ 18
    Region 0: Memory at 3f92000000 (64-bit, prefetchable)
    Region 4: Memory at 3f92100000 (64-bit, prefetchable)
    Expansion ROM at fffe0000 [disabled]
    Capabilities: [80] Power Management version 3
        Flags: PMEClk- DSI- D1+ D2- AuxCurrent=0mA PME(D0+,D1+,D2-,D3hot+,D3cold-)
        Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
    Capabilities: [90] MSI: Enable+ Count=1/32 Maskable+ 64bit+
        Address: 00000000fee00000  Data: 4073
        Masking: 00000000  Pending: 00000000
    Capabilities: [b0] MSI-X: Enable- Count=18 Masked-
        Vector table: BAR=0 offset=00038000
        PBA: BAR=0 offset=00039000
    Capabilities: [c0] Express (v2) Endpoint, MSI 00
        DevCap:    MaxPayload 512 bytes, PhantFunc 0, Latency L0s <128ns, L1 <2us
            ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset-
        DevCtl:    Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
            RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+
            MaxPayload 128 bytes, MaxReadReq 512 bytes
        DevSta:    CorrErr+ UncorrErr- FatalErr- UnsuppReq+ AuxPwr- TransPend-
        LnkCap:    Port #0, Speed 8GT/s, Width x8, ASPM L0s L1, Latency L0 <128ns, L1 <2us
            ClockPM- Surprise- LLActRep- BwNot-
        LnkCtl:    ASPM Disabled; RCB 64 bytes Disabled- Retrain- CommClk-
            ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
        LnkSta:    Speed 2.5GT/s, Width x8, TrErr- Train- SlotClk- DLActive- BWMgmt- ABWMgmt-
        DevCap2: Completion Timeout: Range B, TimeoutDis+
        DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-
        LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-, Selectable De-emphasis: -6dB
             Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
             Compliance De-emphasis: -6dB
        LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete-, EqualizationPhase1-
             EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-

So is it now as simple as running this?:

Code:
sudo setpci -s 01:00.0 CAP_EXP+30.w=2:F
sudo setpci -s 01:00.0 CAP_EXP+10.w=20:20
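If I'm reading the PCIe capability layout right (sanity-checking my own understanding here, so corrections welcome), those two writes hit the Express capability's Link Control 2 and Link Control registers:

```shell
# CAP_EXP+0x30 = Link Control 2: low nibble is Target Link Speed
#   (1 = 2.5 GT/s, 2 = 5.0 GT/s, 3 = 8.0 GT/s)
# CAP_EXP+0x10 = Link Control: bit 5 (0x20) is the Retrain Link bit
# setpci's VALUE:MASK syntax only touches the masked bits, so
# "+30.w=2:F" sets the target-speed field to 5.0 GT/s, and
# "+10.w=20:20" pulses the retrain bit without disturbing anything else.
printf 'Target Link Speed written: %d -> 5.0 GT/s\n' $((0x2 & 0xF))
printf 'Retrain Link bit: 0x%02x\n' $((0x20))
```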

EDIT 3: Okay, so after installing DirectHW.kext, and running the above commands, I appear to have achieved 5.0 GT/s:

OUTPUT:
Code:
ricopro:sbin rico$ sudo /usr/local/sbin/lspci -nnvv -s 01:00
01:00.0 RAID bus controller [0104]: HighPoint Technologies, Inc. Device [1103:0840] (rev a1)
    Subsystem: HighPoint Technologies, Inc. Device [1103:0000]
    Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
    Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
    Latency: 0, Cache Line Size: 256 bytes
    Interrupt: pin A routed to IRQ 18
    Region 0: Memory at 3f92000000 (64-bit, prefetchable)
    Region 4: Memory at 3f92100000 (64-bit, prefetchable)
    Expansion ROM at fffe0000 [disabled]
    Capabilities: [80] Power Management version 3
        Flags: PMEClk- DSI- D1+ D2- AuxCurrent=0mA PME(D0+,D1+,D2-,D3hot+,D3cold-)
        Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
    Capabilities: [90] MSI: Enable+ Count=1/32 Maskable+ 64bit+
        Address: 00000000fee00000  Data: 4073
        Masking: 00000000  Pending: 00000000
    Capabilities: [b0] MSI-X: Enable- Count=18 Masked-
        Vector table: BAR=0 offset=00038000
        PBA: BAR=0 offset=00039000
    Capabilities: [c0] Express (v2) Endpoint, MSI 00
        DevCap:    MaxPayload 512 bytes, PhantFunc 0, Latency L0s <128ns, L1 <2us
            ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset- SlotPowerLimit 0.000W
        DevCtl:    CorrErr- NonFatalErr- FatalErr- UnsupReq-
            RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+
            MaxPayload 128 bytes, MaxReadReq 512 bytes
        DevSta:    CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr- TransPend-
        LnkCap:    Port #0, Speed 8GT/s, Width x8, ASPM L0s L1, Exit Latency L0s <128ns, L1 <2us
            ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
        LnkCtl:    ASPM Disabled; RCB 64 bytes Disabled- CommClk-
            ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
        LnkSta:    Speed 5GT/s (downgraded), Width x8 (ok)
            TrErr- Train- SlotClk- DLActive- BWMgmt- ABWMgmt-
        DevCap2: Completion Timeout: Range B, TimeoutDis+, LTR-, OBFF Not Supported
        DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled
             AtomicOpsCtl: ReqEn-
        LnkCtl2: Target Link Speed: 5GT/s, EnterCompliance- SpeedDis-
             Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
             Compliance De-emphasis: -6dB
        LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete-, EqualizationPhase1-
             EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-
    Capabilities: [100 v2] Advanced Error Reporting
        UESta:    DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
        UEMsk:    DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
        UESvrt:    DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
        CESta:    RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
        CEMsk:    RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
        AERCap:    First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
            MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
        HeaderLog: 00000000 00000000 00000000 00000000
    Capabilities: [300 v1] Secondary PCI Express <?>

… but when I check link status, I end up with:

Code:
# Initial PCIe 2.0 x8
# Final PCIe 2.0 x8
 
Whew. Long night of banging in Terminal.

So, I got lost and confused for a while, but if I target port 00:1 instead of 01:00.1, I can get from PCIe 1.0 x8 to PCIe 2.0 x8 (though System Profiler still reports 2.5 GT/s). I see a minor increase from about 1350 MB/s to 1570 MB/s, so it is an improvement, but not as fast as I'd think this card can run, even in a 3,1.
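For anyone following along, here's the quick way I've been watching the negotiated speed before/after each retrain. Shown below against a saved line of lspci output; on the live box you'd pipe in `sudo lspci -vv -s 01:00.0` instead of the sample variable:

```shell
# Pull the negotiated speed/width out of an lspci LnkSta line.
sample='LnkSta:	Speed 5GT/s (downgraded), Width x8 (ok)'
echo "$sample" | sed -n 's/.*Speed \([^,]*\), Width \([^ ]*\).*/speed=\1 width=\2/p'
```

The "(downgraded)" flag just means the negotiated speed is below the 8 GT/s the card's LnkCap advertises, so seeing it at 5 GT/s is expected in a Gen2 slot.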

Am I missing any steps? I skipped all the firmware injection steps for the NVMe cards; they didn't seem applicable as this is a SAS/SATA SSD/HDD controller.

Any help to boost this thing up a few more notches would be greatly appreciated.
 