Would you buy this for your Mac mini 8,1?


I think it would be better if it made more direct use of the current set of ports, with just one used to connect to the dock, and the rest directly accessible off the mini.
 
That would not work; they are already using all the available bandwidth on the mini's ports, aside from the USB-A ports.
 
Well, you could put three PCIe slots' worth of data on a single TB3 bus, but you wouldn't want to.
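A rough sketch of why, using assumed typical figures (the ~22 Gb/s of usable PCIe bandwidth per TB3 link and the x4 slot width are generic numbers, not specs from this dock):

```python
# Back-of-the-envelope comparison: demand from three PCIe 3.0 x4 slots vs. what
# one Thunderbolt 3 link can actually carry for PCIe data (assumed typical figures).
TB3_PCIE_DATA_GBPS = 22   # usable PCIe bandwidth over one TB3 link (~22 of the 40 Gb/s signal)
PCIE3_X4_GBPS = 32        # raw bandwidth of a single PCIe 3.0 x4 slot
SLOTS = 3

demand = SLOTS * PCIE3_X4_GBPS
print(f"Demand from {SLOTS} x4 slots: ~{demand} Gb/s")
print(f"Usable over one TB3 link:  ~{TB3_PCIE_DATA_GBPS} Gb/s")
print(f"Oversubscription:          ~{demand / TB3_PCIE_DATA_GBPS:.1f}x")
```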
 
Would it actually make a noticeable difference, though? How many cards are actually saturating the bandwidth? I seem to recall previous articles indicating that many graphics cards were not significantly bottlenecked over TB1, even when on paper they should have been.
 
Depends on the usage. For gaming, the higher the framerate, the more of a penalty you pay, and with >60Hz monitors becoming more common, that's a higher load on the bus. Driving higher resolution displays brings the framerate back down and helps hide the overhead involved. But it's important to point out that throughput isn't the only measure here; latency matters as well. If my workflow is more serial, then the latency of moving data across the slower bus will slow me down. And if I'm sharing the bus, that will add latency whenever both devices want to move bulk data around at the same time.
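As a hedged illustration of that bus-sharing point (the per-frame data size and the SSD streaming rate below are made-up numbers, not measurements):

```python
# Illustrative only: how sharing one TB3 link stretches a GPU transfer.
LINK_GBS = 2.8         # assumed usable TB3 bandwidth, GB/s
FRAME_DATA_GB = 0.01   # assumed data moved to/from the eGPU per frame (10 MB)
SSD_STREAM_GBS = 2.0   # assumed concurrent bulk transfer on the same link

alone_ms = FRAME_DATA_GB / LINK_GBS * 1000
shared_ms = FRAME_DATA_GB / (LINK_GBS - SSD_STREAM_GBS) * 1000
print(f"GPU transfer with the link to itself: ~{alone_ms:.1f} ms per frame")
print(f"GPU transfer while the SSD streams:   ~{shared_ms:.1f} ms per frame")
```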

The big problem, though, is that the third device is a 4-slot NVMe array. Using PCIe SSDs, that thing could bring an x16 slot close to saturation. Many PCIe SSDs are already maxing out the x4 PCIe link on the M.2 connector, and they are one of the few devices that benefit from an x4 PCIe 4.0 connection where the SSD supports it.

Whatever bus that thing is on is going to be effectively saturated. Any GPUs on that bus will get scraps. If the two GPUs share a bus in this design, then they will be stomping on each other and you pay a latency penalty. That penalty is still probably smaller than the benefit of having two GPUs on tap, though.
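A back-of-the-envelope check on that saturation claim, using typical drive figures rather than anything measured on this enclosure:

```python
# Rough saturation check with typical figures (not measurements from this dock).
SSD_GBS = 3.5                    # a fast PCIe 3.0 x4 NVMe SSD, GB/s
N_SSDS = 4
PCIE3_LANE_GBS = 0.985           # usable bandwidth per PCIe 3.0 lane, GB/s
TB3_GBS = 2.8                    # usable bandwidth of one TB3 link, GB/s

array_demand = N_SSDS * SSD_GBS  # all four SSDs streaming at once
x16_capacity = 16 * PCIE3_LANE_GBS

print(f"Four-SSD array demand: ~{array_demand:.1f} GB/s")
print(f"PCIe 3.0 x16 slot:     ~{x16_capacity:.1f} GB/s")
print(f"One TB3 link:          ~{TB3_GBS:.1f} GB/s")
```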
 

Actually, comparing 1080p vs. 4K over TB3, there is a much smaller framerate drop at 4K than at 1080p versus the desktop counterparts.

here is a great article that covers it all.

 
I did not expect this much controversy suggesting a simple mass storage drive bay. I would think that the combination of having boot+apps on an SSD, and having mass storage on a hard drive would be an extremely common use case, but everyone is looking at me like I'm a two-headed alien.

I guess this isn't your use case, but is this use case so hard to imagine?

One or two video cards are going to be noisy enough; add hard drives to the mix and the noise would be unbearable, IMO. Not to mention the fans it would take to keep things cool.
 
Why would anyone want PCI slots on a TB3 controller in the first place? This is for PCIe to TB3.

This is the sort of nit you pick when someone calls a cheeseburger a hamburger. Or, in tech terms, picking nits because I called something USB instead of USB 3.1 Gen 2.

You kinda just repeated what I said here and added a link. You even quoted my comment about higher resolution. It’s still a good article (and one I read months ago), but still.

The penalty is related to the latency spent per frame moving data to/from the GPU. As you push the game to spend more time rasterizing/processing, and less time on I/O (by doing something like increasing resolution), you see a smaller overhead. It will depend on the game, but generally will track with the frame rate of the game (fewer opportunities to pay the penalty, which is essentially fixed per frame).

It tracks with resolution as well because that happens to manipulate how long it takes to rasterize a frame, changing how many times you pay the latency penalty per second. But it’s not the only thing that can affect the total penalty paid. Which is why the penalty varies between games. Both resolution and frame rates are simplifications of what’s going on.
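A toy model of that argument (the fixed 1.5 ms per-frame penalty is an assumption purely for illustration, not a measured TB3 figure):

```python
# Toy model: a roughly fixed per-frame transfer penalty hurts more as the
# target framerate rises (i.e. as each frame gets cheaper to render).
PENALTY_MS = 1.5   # assumed fixed per-frame cost of moving data over the TB3 link

for fps in (240, 144, 60, 30):
    render_ms = 1000 / fps                          # frame time without the link penalty
    overhead = PENALTY_MS / (render_ms + PENALTY_MS)
    print(f"{fps:>3} fps class: render {render_ms:5.1f} ms/frame, "
          f"~{overhead:.0%} of each frame lost to the link")
```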

But my original point is that things like this are time sensitive, so even if you aren’t capping the bandwidth available, sharing the bus with an eGPU can still slow things down by adding latency to the eGPU I/O.
 