I think it would be better if it made more direct use of the current set of ports, with just one used to connect to the dock, and the rest directly accessible off the mini.
That would not work; they are already using all the available bandwidth on the ports that are in the Mini, aside from the USB-A ports.
Would it actually make a noticeable difference though? How many cards are actually saturating the bandwidth? I seem to recall previous articles indicating that many graphics cards were not even significantly bottlenecked with TB1. Even when on paper they should have been.
Well, you could put 3 PCI slots worth of data on a single TB3 bus, but you wouldn't want to.
An obvious typo, surely...? Why would anyone want PCI slots on a TB3 controller in the first place? This is PCIe to TB3.
Depends on the usage. For gaming, the higher the framerate, the more of a penalty you pay, and with >60Hz monitors becoming more common, that's a higher load on the bus. Driving higher resolution displays brings the framerate back down and helps hide the overhead involved. But it's important to point out that throughput isn't the only measure here, but latency as well. If my workflow is more serial, then the latency of moving data across the slower bus will slow me down. And if I'm sharing the bus, then that will add latency when both devices want to move bulk data around at the same time.
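A rough back-of-envelope sketch of that load, with assumed numbers: uncompressed 24-bit frames copied back over the link (the internal-display case), and roughly 22 Gb/s of the 40 Gb/s TB3 link actually usable for PCIe data.

```python
# Rough back-of-envelope: how much of a TB3 link's usable PCIe bandwidth is
# eaten just shipping finished frames back to the host (internal-display case).
# All figures are approximations, not measurements.

TB3_USABLE_PCIE_GBPS = 22  # ~22 Gb/s of the 40 Gb/s link is typically available to PCIe data

def frame_traffic_gbps(width, height, fps, bytes_per_pixel=3):
    """Uncompressed frame readback traffic in gigabits per second."""
    return width * height * bytes_per_pixel * fps * 8 / 1e9

for label, (w, h, fps) in {
    "1080p @ 144 Hz": (1920, 1080, 144),
    "1440p @ 100 Hz": (2560, 1440, 100),
    "4K    @  60 Hz": (3840, 2160, 60),
}.items():
    gbps = frame_traffic_gbps(w, h, fps)
    share = gbps / TB3_USABLE_PCIE_GBPS
    print(f"{label}: {gbps:5.1f} Gb/s (~{share:.0%} of usable TB3 PCIe bandwidth)")
```

Driving an external display straight off the eGPU avoids most of that readback traffic, which is part of why the penalty varies so much between setups.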
The big problem, though, is that the third device is a 4-slot NVMe array. Using PCIe SSDs, that thing could bring an x16 slot close to saturation. Many PCIe SSDs already max out the x4 PCIe link on the M.2 connector, and they're among the few devices that actually benefit from an x4 PCIe 4.0 link on SSDs that support it.
Whatever bus that thing is on is going to be effectively saturated. Any GPUs on that bus will get scraps. If the two GPUs share a bus in this design, then they will be stomping on each other and you pay a latency penalty. Still probably one smaller than the benefit of having two GPUs on tap though.
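A quick sanity check on that, with illustrative drive numbers only (assuming four typical PCIe 3.0 x4 NVMe SSDs at roughly 3.5 GB/s each):

```python
# Sanity check on the "four-slot NVMe array saturates the bus" point.
# Drive throughput figures are illustrative, not from the product in question.

TB3_USABLE_PCIE_GBYTES = 22 / 8    # ~2.75 GB/s of PCIe payload per TB3 link
PCIE3_X16_GBYTES = 16 * 0.985      # ~15.8 GB/s usable on a PCIe 3.0 x16 slot

drives = 4
per_drive_gbytes = 3.5                    # sequential read of a fast PCIe 3.0 x4 SSD
array_gbytes = drives * per_drive_gbytes  # ~14 GB/s aggregate

print(f"Array aggregate:   {array_gbytes:.1f} GB/s")
print(f"PCIe 3.0 x16 slot: {PCIE3_X16_GBYTES:.1f} GB/s -> {array_gbytes / PCIE3_X16_GBYTES:.0%} used")
print(f"Single TB3 link:   {TB3_USABLE_PCIE_GBYTES:.2f} GB/s -> oversubscribed {array_gbytes / TB3_USABLE_PCIE_GBYTES:.1f}x")
```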
I did not expect this much controversy over suggesting a simple mass-storage drive bay. I would think that the combination of having boot + apps on an SSD and mass storage on a hard drive would be an extremely common use case, but everyone is looking at me like I'm a two-headed alien.
I guess this isn't your use case, but is this use case so hard to imagine?
Actually, comparing 1080p vs. 4K over TB3, there is a much smaller framerate drop at 4K than at 1080p relative to their desktop counterparts.
Here is a great article that covers it all:
eGPU Performance Loss - PCI Express vs. Thunderbolt (egpu.io)
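One way to picture why the relative drop shrinks at 4K: a roughly fixed per-frame cost for the TB3 hop is a bigger slice of a short 1080p frame than of a long 4K frame. A toy model with made-up numbers (the 1.5 ms per-frame overhead is purely an assumption for illustration):

```python
# Toy model: a fixed per-frame cost (command submission, sync, frame copy over
# the narrower link) hurts high-framerate 1080p more than 60 Hz 4K in relative
# terms. Numbers are invented to show the shape of the effect, not measured.

LINK_OVERHEAD_MS = 1.5  # assumed extra per-frame cost of the TB3 hop

def egpu_fps(native_fps, overhead_ms=LINK_OVERHEAD_MS):
    native_frame_ms = 1000 / native_fps
    return 1000 / (native_frame_ms + overhead_ms)

for label, native in [("1080p (desktop ~144 fps)", 144), ("4K (desktop ~60 fps)", 60)]:
    fps = egpu_fps(native)
    print(f"{label}: {native} -> {fps:.0f} fps ({1 - fps / native:.0%} drop)")
```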
Looks decent. I fear that, par for the course with this category of product, fan quality gets skimped on, and the one inside the PSU and/or case ends up unnecessarily noisy at low utilisation.
A bigger version, a mega dock:
https://www.notebookcheck.net/Anima...g-triple-graphics-cards-for-Mac.453973.0.html