My reply was more in question of this statement of yours:
"It's like saying you can take 2 different train tracks that go to different stations and make it so that all trains going through those tracks end up at the same station"
But your subsequent explanation explained what you meant by that.
It would be much easier than what you propose in this scenario. The easiest way to fix the latency issue is to buffer the data, which GPUs already do via their VRAM. In this application, PCIe is just transferring data from the CPU/RAM to the VRAM, so the trains do end up at the same station: the VRAM. It's the same concept as multi-GPU rendering, or link aggregation of PCIe network devices, where latency would also be a factor.
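The idea can be sketched in plain Python: a bounded buffer between a slow "link" and a fast consumer hides per-chunk transfer latency, the same way VRAM staging hides PCIe latency. This is only an illustrative sketch, not GPU code; the function names and timings are made up for the example.

```python
import threading
import queue
import time

def producer(chunks, buf):
    # Stand-in for the PCIe link: each chunk arrives with some
    # per-transfer latency before landing in the buffer.
    for chunk in chunks:
        time.sleep(0.001)  # simulated transfer latency
        buf.put(chunk)
    buf.put(None)  # sentinel: no more data

def consumer(buf, out):
    # Stand-in for the GPU: it reads from the buffer (the "VRAM"),
    # never directly from the link, so link latency is hidden as
    # long as the producer keeps the buffer ahead of consumption.
    while True:
        chunk = buf.get()
        if chunk is None:
            break
        out.append(chunk * 2)  # stand-in for the actual rendering work

def run(chunks, depth=4):
    buf = queue.Queue(maxsize=depth)  # bounded staging buffer
    out = []
    t = threading.Thread(target=producer, args=(chunks, buf))
    t.start()
    consumer(buf, out)
    t.join()
    return out
```

The buffer depth plays the role VRAM capacity does in the real case: deeper buffering tolerates more latency variation on the link, at the cost of memory.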
"It's like saying you can take 2 different train tracks that go to different stations and make it so that all trains going through those tracks end up at the same station"
But your subsequent explanation explained what you meant by that.
It would be much easier than what you propose in this scenario. The easiest way to fix latency issue is to buffer the data which GPUs already does via the VRAM. PCI-E is in this application is just transferring data from CPU/RAM to the VRAM. So they do end up in the same station, the VRAM. It is identical to the concept of multi-gpu rendering as well as link aggregation of pci-e network devices. as latency would also be a factor in that case as well.