I think any PCIe slot should be able to run as x1, so x2,x2 should be able to work as x2,x1, x1,x2, or x1,x1. But there is one "catch" to this with TB5. If you want 3 downstream Thunderbolt ports, the ONLY allowed configurations/bifurcations of the PCIe lanes are:
x2,x2 (I think this can also run as x2,x1 or x1,x1, but not split into three)
x4
If you want to bifurcate those PCIe lanes into x1,x1,x1,x1, then you can only have 1 upstream and 1 downstream TB port. You also can't bifurcate to something like x2,x1,x1.
I guess Thunderbolt 5 peripheral controllers have a maximum of 5 PCIe downstream bridges:
2 PCIe + 3 Thunderbolt
4 PCIe + 1 Thunderbolt
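My guess at the constraint, written as a small lookup table. The 5-bridge ceiling and these splits are my reading of the Barlow Ridge situation, not a spec quote:

```python
# Hypothetical model: a TB5 peripheral controller has at most 5 PCIe
# downstream bridges, split between local PCIe ports and downstream TB ports.
MAX_BRIDGES = 5  # assumed ceiling

# Lane bifurcation -> downstream Thunderbolt ports it leaves room for
CONFIGS = {
    ("x4",): 3,                   # 1 PCIe bridge  + 3 TB ports
    ("x2", "x2"): 3,              # 2 PCIe bridges + 3 TB ports
    ("x1", "x1", "x1", "x1"): 1,  # 4 PCIe bridges + 1 TB port
}

def tb_ports_for(split):
    """Downstream TB ports for a given lane split, or None if unsupported."""
    return CONFIGS.get(tuple(split))

print(tb_ports_for(["x2", "x2"]))        # 3
print(tb_ports_for(["x2", "x1", "x1"]))  # None: not a supported split
```

If the table is right, every supported combination keeps bridges at or below the assumed ceiling of 5.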
Does adding PCIe ports require adding extra pins to the Thunderbolt controller? The non-data signals are CLKREQ#, REFCLK±, and PRSNT2#. Do those signals come from the Thunderbolt controller? Are they different for each PCIe port?
Does the Thunderbolt programming interface have a downstream PCIe bridge limit? I suppose the USB4 v2 spec might say something about that.
More PCIe bridge options might require more transistors or logic or code, but I don't think that's a big problem.
PCIe gen 4 should allow almost 1.969 GB/s per lane. However, if the controller is PCIe gen 3, then it would be limited to 0.985 GB/s per lane.

In a dock, people seem to want the following ports: 10G Ethernet, CFexpress Type B, CFexpress Type A, and internal NVMe SSDs. These all require PCIe. 10G uses 1-2 lanes of PCIe 4, CFX B uses 2 lanes of PCIe 4, and CFX A requires 1 lane of PCIe. So to continue this thought: if you wanted a 10G Ethernet port AND a CFexpress reader, there are no lanes left for PCIe-to-USB controllers or for NVMe.

In fact, you can only have two of the following to choose from: 10G Ethernet, CFX B, CFX A, NVMe SSD, PCIe-to-USB chipsets. So for TB5, you will likely see all USB ports sharing the same bandwidth that TB4 allowed for, which is one of the reasons to go with a TB5 hub + your current TB4/3 dock.
I think 10G needs only gen 4 x1 or gen 3 x2.
CFX 4.0 type B needs gen 4 x2.
CFX 4.0 type A needs gen 4 x1.
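A quick sketch of the arithmetic. The 128b/130b encoding factor is standard for gen 3/4; the slot reasoning assumes the x2,x2 bifurcation described above, and the device list is illustrative:

```python
# Per-lane PCIe throughput after 128b/130b encoding (gen 3 and gen 4 both use it).
def lane_gb_s(gen):
    gt_s = {3: 8.0, 4: 16.0}[gen]     # transfer rate in GT/s
    return gt_s * (128 / 130) / 8     # GB/s after encoding, 8 bits/byte

print(f"gen3 x1 ~ {lane_gb_s(3):.3f} GB/s")  # ~0.985
print(f"gen4 x1 ~ {lane_gb_s(4):.3f} GB/s")  # ~1.969

# With the x2,x2 bifurcation (required for 3 downstream TB ports), a dock has
# exactly two x2 slots, and each PCIe device occupies one slot -- so pick two:
wanted = ["10G Ethernet", "CFexpress Type B", "NVMe SSD"]
slots = 2
print(f"{len(wanted)} devices wanted, {slots} slots: leave out {len(wanted) - slots}")
```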
Of course, you could add a PCIe switch to add more devices, but that increases cost and space.
Since Thunderbolt is a tunneling protocol and is known to tunnel separate DisplayPort streams and a bundle of PCIe lanes, I'm unsure about some details:

1. Is there actually a separate USB stream tunneled from the host as well, separate from the PCIe stream, to be unwrapped at daisy-chaining downstream ports when a USB device is plugged in there? And if so, would a Thunderbolt dock with multiple downstream ports then have to contain its own USB hub to split that connection?

USB tunnelling is new with USB4/Thunderbolt 4. Thunderbolt 4 docks would use a USB hub to add more USB ports/devices (devices such as an Ethernet adapter, audio, or an SD card reader).
For example, the CalDigit Thunderbolt 4 Element Hub includes an extra USB hub to convert the tunnelled USB into 4 downstream USB Type-A ports. The OWC Thunderbolt 4 Hub doesn't have an extra USB hub.
In both cases the tunnelled USB goes through a USB hub that is built into the Thunderbolt controller to provide USB for the single USB port of the Thunderbolt controller and the three downstream Thunderbolt ports.
When a USB4 or Thunderbolt 4 hub or dock is connected to a Thunderbolt 3 host, then there is no tunnelled USB. Instead, the built-in PCIe USB XHCI host controller of the Thunderbolt controller is used.
The built-in USB XHCI host controller of the Thunderbolt controller might be superior to tunnelled USB in some cases.
https://forums.macrumors.com/thread...ally-10gb-s-also-definitely-not-usb4.2269777/
2. a) I'm not entirely certain about how PCIe switches work in detail, so if a dock contains multiple PCIe peripherals but those are used only occasionally, is the remaining PCIe bandwidth then still available to downstream devices via a PCIe switch in the dock? Is bandwidth dynamically allocated as needed (probably with priority for upstream users) or statically at startup/plugin time?

If PCIe bandwidth is not being used by some devices, then it can be used by other devices.
2. b) If bandwidth needs are low enough, can't a PCIe lane bundle be split up via a switch into multiple slower PCIe lanes and so effectively share the resource? Is that usually only avoided due to cost, or is that actually not possible?

It's avoided due to cost and space. You can connect dozens of PCIe devices using 1 or more switches. It's basically a network or tree. Each PCIe switch is like a network hub or USB hub.
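A toy model of the dynamic case, assuming simple proportional arbitration. Real switches arbitrate per-transaction and the exact policy is implementation-specific, so treat this only as an illustration of idle bandwidth being reusable:

```python
# Toy model: devices behind a PCIe switch share one upstream link.
# Idle devices consume nothing, so their bandwidth is free for others.
def share(upstream_gb_s, demands):
    """Scale demands down proportionally if they exceed the upstream link."""
    total = sum(demands.values())
    if total <= upstream_gb_s:
        return dict(demands)               # no contention
    scale = upstream_gb_s / total
    return {dev: d * scale for dev, d in demands.items()}

# ~7.9 GB/s upstream (gen4 x4). Only the SSD busy: it gets all it asks for.
print(share(7.9, {"NVMe SSD": 7.0, "10G Ethernet": 0.0}))
# Both busy at once: each is scaled back to fit the link.
print(share(7.9, {"NVMe SSD": 7.0, "10G Ethernet": 1.25}))
```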
2. c) Is there no x1/x1/x1/x1 PCIe split available in TB5?

For Thunderbolt 5, x1,x1,x1,x1 is possible only if the number of downstream Thunderbolt ports is reduced from 3 to 1. This is a limit of the Intel Barlow Ridge chips. Perhaps someone else can make a USB4v2 chip that doesn't have this limit.
3. Does TB5 now tunnel more than 2 DisplayPort streams, and can available chips for docks extract those at the same time to separate ports? Do the M4 Pro and Max actually support that by providing more than just 2 DisplayPort streams per port?

I don't think earlier versions of Thunderbolt had a display tunnelling limit.
The Thunderbolt 1,2,3,4 controllers had a maximum of 2 DisplayPort In Adapters. You can see Thunderbolt Adapters listed in ioreg command output or in IORegistryExplorer.app.
Thunderbolt 1,2 had a maximum of 1 DisplayPort Out Adapter. You can chain Thunderbolt docks together to extract all the DisplayPort signals.
Thunderbolt 3 had a maximum of 2 DisplayPort Out Adapters.
Thunderbolt 5 has a maximum of 3 DisplayPort In Adapters or 3 DisplayPort Out Adapters.
I think Apple Silicon is limited to 2 DisplayPort In Adapters? That would mean the 3rd DisplayPort Out Adapter of a Thunderbolt 5 dock is not usable. Someone please check ioreg to be sure.
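As a sketch of that check: count the adapter entries in `ioreg -l` output. I'm assuming the IORegistry class names are IOThunderboltDPInAdapter / IOThunderboltDPOutAdapter; verify the exact names on a real machine, and note the sample text below is made up:

```python
# Count Thunderbolt DisplayPort adapters in `ioreg -l` output.
# Assumed class names (verify on a real Mac): IOThunderboltDPInAdapter,
# IOThunderboltDPOutAdapter.
def count_dp_adapters(ioreg_text):
    lines = ioreg_text.splitlines()
    return {
        "DP In": sum("IOThunderboltDPInAdapter" in line for line in lines),
        "DP Out": sum("IOThunderboltDPOutAdapter" in line for line in lines),
    }

# Made-up sample standing in for `ioreg -l` output on a 2-adapter host:
sample = """\
+-o IOThunderboltDPInAdapter@3  <class IOThunderboltDPInAdapter, registered>
+-o IOThunderboltDPInAdapter@4  <class IOThunderboltDPInAdapter, registered>
"""
print(count_dp_adapters(sample))  # {'DP In': 2, 'DP Out': 0}
```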
The BlackMagic eGPU and Sonnet eGPU Breakaway Puck RX 5500 XT/5700 have Thunderbolt peripheral controllers with DisplayPort In Adapters connected to their GPU. If the host Mac has its GPU connected to the DisplayPort In Adapters of its Thunderbolt host controller, and has one of these eGPUs connected, does that mean there could be four DisplayPort signals coming from the Thunderbolt port of the eGPU? They would have to be limited to 1440p60 for all of them to be extracted from the eGPU's Thunderbolt port unless DSC is used.
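Rough arithmetic behind that 1440p60 claim. The 24 bpp figure, the ~20% blanking allowance, and the ~34.5 Gbit/s usable-DP-over-TB3 budget are my assumptions, so take the margins loosely:

```python
# Uncompressed DisplayPort stream bandwidth, with a rough blanking allowance.
def stream_gbit_s(width, height, hz, bpp=24, blanking=1.2):
    return width * height * hz * bpp * blanking / 1e9

TB3_DP_BUDGET = 34.5  # Gbit/s; assumed usable DP bandwidth over a TB3 link

for name, (w, h) in {"1440p60": (2560, 1440), "4K60": (3840, 2160)}.items():
    total = 4 * stream_gbit_s(w, h, 60)
    fits = "fits" if total <= TB3_DP_BUDGET else "does not fit"
    print(f"4 x {name}: {total:.1f} Gbit/s -> {fits} in {TB3_DP_BUDGET} Gbit/s")
```

Under these assumptions, four 1440p60 streams come in around 25 Gbit/s, while four 4K60 streams would need roughly 57 Gbit/s, which is why DSC would be required for anything beyond 1440p60.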
4. Are there more stream types tunneled through Thunderbolt (5) than just PCIe, DisplayPort, and possibly USB?

I don't think so. Just PCIe, DisplayPort, and USB.
Well, there is the stuff transmitted between two Thunderbolt hosts: Thunderbolt networking (between macOS, Windows, Linux), and Thunderbolt Target Disk Mode (between macOS and Mac EFI). I think this stuff includes a signal to enable Thunderbolt Target Display Mode.
Thunderbolt Target Display Mode uses tunnelled DisplayPort from a Thunderbolt Mac to a Thunderbolt 1 iMac. Enabling Thunderbolt Target Display Mode switches the iMac's display connection from the iMac's GPU to the iMac's Thunderbolt host controller's DisplayPort Out Adapter (most host controllers don't have a DisplayPort Out Adapter).