
deconstruct60

macrumors G5
Mar 10, 2009
12,219
3,821
Does a chain of 2-3 small MST displays work like a single DisplayPort source?

The end of the chain? A daisy chain of DisplayPort v1.2 displays works just like most other daisy-chained networks work.


Also, someone more familiar with how PCIe works:
- Thunderbolt advertises 20Gbit. That is 2.5GByte/sec. 4x PCIe 2.0 is ~2GByte/sec. Does this imply that Thunderbolt cannot hit its maximum speed with PCIe data alone (i.e., it requires some DisplayPort data)?

The top bandwidth is independent of the amount of data on the link.

If you're trying to measure a full 20Gb/s, then yes. But do 10 bits or 1,000 bits travel any faster? No.
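
As a back-of-envelope check of those numbers, here is a small Python sketch using nominal per-lane rates (protocol overhead ignored, so real-world throughput lands a bit lower):

    def pcie_gbyte_per_s(lanes, gen):
        # transfer rate in GT/s and encoding efficiency per PCIe generation
        rates = {1: (2.5, 8 / 10), 2: (5.0, 8 / 10), 3: (8.0, 128 / 130)}
        gt, eff = rates[gen]
        return lanes * gt * eff / 8  # GByte/s

    tb_channel = 10 / 8        # one 10 Gbit/s Thunderbolt channel -> 1.25 GB/s
    tb_total = 2 * tb_channel  # the advertised "20 Gbit" -> 2.5 GB/s

    print(f"PCIe 2.0 x4   : {pcie_gbyte_per_s(4, 2):.2f} GB/s")  # ~2.0
    print(f"TB, 1 channel : {tb_channel:.2f} GB/s")
    print(f"TB, 2 channels: {tb_total:.2f} GB/s")

On those nominal figures PCIe traffic alone tops out around 2 GB/s, so the advertised 20Gbit is only within reach if DisplayPort traffic fills the rest - which is what the question above suggests.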




- If a Thunderbolt controller is a PCIe 2.0 4x device, can you put one of them on two PCIe 3.0 lanes? Or do you have to put it on 4 lanes that then get clocked down to 2.0 speeds?

The latter one. A 2.0 device puts the PCIe link into backward-compatibility mode, so that data traverses at 2.0 speeds.




- How much overhead is there with PCIe switching? Could a pair of Thunderbolt controllers (4x PCIe 2.0 each) share four PCIe 3.0 lanes without interfering with each other too much?

Relatively small. The current Mac Pro's two x4 slots are switched, and hardly anyone with cards that only need x1-x3 worth of bandwidth is having a fit.

It depends on whether the switch is smart and has a buffer to receive and merge the traffic into a faster upstream link. Switches typically don't; they tend to be bandwidth-dilution mechanisms, not aggregators.

- Is there any benefit to giving a single FirePro W9000 16x PCIe 3.0 lanes over 8x PCIe 3.0 lanes (~equivalent bandwidth to 16x 2.0)?

For normal video/graphics duties in typical apps, no. For data-intensive OpenCL work, yes.
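
For the x16-versus-x8 question, the same nominal per-lane figures give a rough equivalence check (again an approximation, not measured numbers):

    x8_gen3 = 8 * 8.0 * (128 / 130) / 8    # ~7.9 GB/s
    x16_gen2 = 16 * 5.0 * (8 / 10) / 8     # 8.0 GB/s
    x16_gen3 = 16 * 8.0 * (128 / 130) / 8  # ~15.8 GB/s
    print(x8_gen3, x16_gen2, x16_gen3)

So x8 PCIe 3.0 roughly matches x16 PCIe 2.0, and x16 PCIe 3.0 doubles it - which is where the data-heavy OpenCL case could benefit.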


- How much PCIe bandwidth does a typical PCIe SSD use? It seems that if Apple is claiming 1.25GB/sec from the SSD, it would need at least 2x PCIe 3.0 lanes. Does that sound right?

This is all part of a new standard.

http://www.anandtech.com/show/6294/breaking-the-sata-barrier-sata-express-and-sff8639-connectors

The speed depends on whether it is using PCIe 3.0 or 2.0. Given Apple likely isn't buying into SFF-8639 connectors and controllers, it is more likely SATA Express (x2 at v3.0). That way they have some headroom if they come up with a faster flash controller (x2 caps out at ~2GB/s and they are just at 1.25GB/s now, so they won't have to change the design much over the next 2-4 years), plus the link is likely sharing/splitting time with the GPU (since the SSD is attached to the GPU's daughtercard).
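
A rough lane-count check for the claimed 1.25GB/s, using the same nominal per-lane rates (illustrative only - not Apple's actual design):

    import math

    per_lane = {"PCIe 2.0": 5.0 * 8 / 10 / 8, "PCIe 3.0": 8.0 * 128 / 130 / 8}  # GB/s per lane
    target = 1.25
    for gen, rate in per_lane.items():
        print(f"{gen}: x2 link = {2 * rate:.2f} GB/s, "
              f"minimum lanes for {target} GB/s = {math.ceil(target / rate)}")

That lines up with the reading above: an x2 v3.0 link covers 1.25GB/s with headroom up to roughly 2GB/s, while x2 at v2.0 would not be enough.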
 

GermanyChris

macrumors 601
Jul 3, 2011
4,185
5
Here
While we're having fun picking at the new MP, the YouTube sphere seems to be up in arms, even the ones who think there will be a DP model (looking at you, Elric).
 

chris.k

macrumors member
May 22, 2013
91
1
YSSY
For the sake of clarity, Tesselator, can you post the predicted socket type and chipset for this system?

I believe it should be socket LGA 2011 (same as Sandy Bridge-EP), or will it be the bigger LGA type?

Likewise - a C600-series motherboard, or a newer chipset?

.... Or am I completely out to lunch.

/me ducks
 

Umbongo

macrumors 601
Sep 14, 2006
4,934
55
England
For the sake of clarity, Tesselator, can you post the predicted socket type and chipset for this system?

I believe it should be socket LGA 2011 (same as Sandy Bridge-EP), or will it be the bigger LGA type?

Likewise - a C600-series motherboard, or a newer chipset?

.... Or am I completely out to lunch.

/me ducks

LGA 2011 and C600 series chipset.
 

Tesselator

macrumors 601
Original poster
Jan 9, 2008
4,601
6
Japan
For the sake of clarity, Tesselator, can you post the predicted socket type and chipset for this system?

I believe it should be socket LGA 2011 (same as Sandy Bridge-EP), or will it be the bigger LGA type?

Likewise - a C600-series motherboard, or a newer chipset?

.... Or am I completely out to lunch.

/me ducks

Do we know any of that? I realize some folks have guessed at that info. Some have even based other guesses on those guesses - like price (before the proc is even released). If there's any actual info, point it out and list it, and I'll add it in.
 

VirtualRain

macrumors 603
Aug 1, 2008
6,304
118
Vancouver, BC
Do we know any of that? I realize some folks have guessed at that info. Some have even based other guesses on those guesses - like price (before the proc is even released). If there's any actual info, point it out and list it, and I'll add it in.

I don't think there's any doubt about the socket being 2011, but the chipset could be the C200 or C600 series... don't know yet.
 

MacVidCards

Suspended
Nov 17, 2008
6,096
1,056
Hollywood, CA
CUDA / OpenCL via TB isn't going to work out very well

Turns out that four PCIe 2.0 lanes (i.e., TB) aren't going to work out well for attaching a GPU.

Obviously right now it is completely 100% impossible in OSX, but even if it does become possible, you aren't going to be able to put more than 1 or 2 GPUs in a housing without saturating a TB controller and using up 2 ports.

Anyone with a fast CUDA GPU and fast Mac Pro can see this for themselves.

1. Find the After Effects CUDA benchmark thread on this very forum.
2. Run it with a GPU in x16 lane, take screen shot of time.
3. Run it with same GPU in x4 lane, take another screenshot.
4. Compare times

What I have found is that a GTX Titan goes from 240 seconds in a x16 lane to 270 seconds in a x4 lane. So it is being throttled by the loss of bandwidth, i.e. it is already "maxed out" trying to hold a single Titan.

A GTX570 doesn't have as much CUDA power to begin with. It goes from 403 seconds to 420. That is 1 (ONE) GTX570 so it seems to ALMOST fit in the available bandwidth. Which means you will never be able to put MULTIPLE GPUs in an enclosure and not have them get throttled down by the paucity of bandwidth. Nor will you be able to daisy chain anything that requires data bandwidth on to that controller. (therefore, you lose 2 ports for data connections for each GPU)

One Titan loses 12.5% at x4, already out of bandwidth. Imagine 2 or three in an enclosure. Not going to be worth it. And this is with cards available today.
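
For reference, the percentage slowdowns implied by the times quoted above (just the arithmetic - the timings themselves come from the screenshots):

    runs = {"GTX Titan": (240, 270), "GTX 570": (403, 420)}
    for card, (x16, x4) in runs.items():
        slowdown = (x4 - x16) / x16 * 100
        print(f"{card}: {slowdown:.1f}% slower at x4")  # Titan ~12.5%, 570 ~4.2%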

In short, TB isn't going to offer the bandwidth needed for external GPUs unless each and every GPU gets its own enclosure and you only connect 3 total (to keep the controllers separate). This also means no external storage can be used at the same time if you want to use 3 external GPUs.

These screen shots can be duplicated by anyone with these cards on a 4,1 or later MP.

Next I will do the same tests on a 3,1. Since the x4 lane slots are in fact PCIE 1.0 on a 3,1 we will be cutting the bandwidth in half again, mimicking the effect of having 2 GPUs sharing a x4.
 

Attachments: four screenshots of the benchmark timings (not reproduced here).

MacVidCards

Suspended
Nov 17, 2008
6,096
1,056
Hollywood, CA
How to choke a GPU: put it in a TB enclosure

So for this round of tests, I used the same hard drive and installs but moved them into the 3,1.

And here we see even more startling results. With a 3,1 Mac Pro, the upper two PCIe slots are actually PCIe 1.0 spec. So even though they are x4 slots, they are in effect running at the x2 speed of the PCIe 2.0 spec used by TB. This approximates the effect of having two GPUs in an enclosure, at least in terms of further cutting the limited bandwidth in half.
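
A small sketch of that equivalence, using nominal per-lane rates (approximate, overhead ignored): an x4 PCIe 1.0 slot offers about the same bandwidth as an x2 PCIe 2.0 link, i.e. half of the x4 PCIe 2.0 behind a TB port.

    def gbyte_per_s(lanes, gt_per_s):
        return lanes * gt_per_s * 0.8 / 8   # 8b/10b encoding for PCIe 1.x/2.0

    print(gbyte_per_s(4, 2.5))  # x4 PCIe 1.0               -> 1.0 GB/s
    print(gbyte_per_s(2, 5.0))  # x2 PCIe 2.0               -> 1.0 GB/s
    print(gbyte_per_s(4, 5.0))  # x4 PCIe 2.0 (TB back end) -> 2.0 GB/s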

While some folks have claimed that GPGPU doesn't need much bandwidth, these tests show otherwise. And again, very easily duplicated by anyone with these machines and cards.

Sadly, even the GTX570 gets badly throttled once you reduce x4 to x2.

It goes from 417 seconds to 454 seconds. The idea of putting 4 or 5 of these in an enclosure (which doesn't exist yet) is hereby shown to be completely untenable - up there with "unicorn ranching". The point of diminishing returns is upon you with one Titan or two 570s. Don't plan on connecting any RAIDs or SSDs to the two TB ports that share that controller.

For the Titan, the render goes from 245 seconds to 285 seconds. The bandwidth is already choked out, any more Titans would not buy you much more rendering speed.

And that mythical "daisychain up to 36 devices" stuff? They'd better all be keyboards, mice, and monitors.
 

Attachments: four screenshots of the benchmark timings (not reproduced here).

netkas

macrumors 65816
Oct 2, 2007
1,198
394
Mhmhm, good tests. Nice idea, emulating TB2 with the PCIe x4 slot of a 2009+ Mac Pro, thanks.

So the PCIe x4 slot of a 2008 Mac Pro is an emulation of TB1.

But there is always a but :-D

The PCIe x4 slot of a 2008+ Mac Pro goes through the south bridge and shares bandwidth with the SATA drives and other peripherals (the south bridge is connected to the north bridge with a PCIe x4 link). That might decrease performance too.

But we don't know where the TB controllers sit in the new Mac Pro: on the south bridge (the platform hub now) or on the CPU's PCIe lanes.
 
Last edited:

Tesselator

macrumors 601
Original poster
Jan 9, 2008
4,601
6
Japan
Nice post, thank you.

Obviously right now it is completely 100% impossible in OSX,

Why is it impossible? Some laptop user has already done this and posted his results too.

Anyone with a fast CUDA GPU and fast Mac Pro can see this for themselves.

1. Find the After Effects CUDA benchmark thread on this very forum.
2. Run it with a GPU in x16 lane, take screen shot of time.
3. Run it with same GPU in x4 lane, take another screenshot.
4. Compare times

What I have found is that a GTX Titan goes from 240 seconds in a x16 lane to 270 seconds in a x4 lane. So it is being throttled by the loss of bandwidth, i.e. it is already "maxed out" trying to hold a single Titan.

A GTX570 doesn't have as much CUDA power to begin with. It goes from 403 seconds to 420. That is 1 (ONE) GTX570 so it seems to ALMOST fit in the available bandwidth. Which means you will never be able to put MULTIPLE GPUs in an enclosure and not have them get throttled down by the paucity of bandwidth. Nor will you be able to daisy chain anything that requires data bandwidth on to that controller. (therefore, you lose 2 ports for data connections for each GPU)

One Titan loses 12.5% at x4, already out of bandwidth. Imagine 2 or three in an enclosure. Not going to be worth it. And this is with cards available today.

When you reduce the number of lanes from 16 to 4 you're also reducing the rate at which data and commands can be sent and received, so of course there will be some reduction. Tom's Hardware showed about a 3% decrease on WinTel rendering CG frames (in Blender, I think), and you show about 12% in OS X rendering video frames in AE, so now we have a range to go by - which is good. Video is going to be the worst case, because frames are typically processed in a short period of time, raising the data-to-processing-time ratio considerably. CUDA and OpenCL can work on one frame at a time, and the work carried out takes far longer than sending and receiving the commands and data - hundreds, thousands, or even millions of times longer, depending on the compute intensity, the kind of renderer (scan-line, bucket, etc.), and the work-allocation method. The most efficient resource-allocation method for GPUs over TB/TB2 would probably be whole-frame tasks, where each GPU acts like an independent compute node, sharing perhaps only system memory if/when needed.

During the card's compute time, traffic to/from the card is extremely low, since the card is using its local cores to act on data in its local memory. This leaves the bus open for commands and data to be sent to a second, third, and fourth card on the same connection - and I assume the contention probably won't become significant in most cases until 4 or 5 GPUs are operating on the same bus (connector). TB1 can handle a single card with only a 3% reduction in render times, so it would be logical to assume two on a TB2 connection - and, taking into account the time-slicing I'm describing, more likely somewhere around 4 or 5 cards before contention/saturation presents itself as anything close to devastating.

There are three controllers (six connections) so that's between 12 and 15 GPUs.

So let's assume a scenario where frames require 100 minutes to render with no GPGPU at all, and a project size of 120 frames (200 total hours). With an internal PCIe v2 x16 GPGPU that's cut to 50 min. - a 50 min. saving per frame. Now let's assume a radical 20% reduction per GPU with multiple GPUs daisy-chained on a single TB2 port (and not the 3% to 5% that Tom's Hardware seems to imply over TB1), so each GPU spends about 60 min. on a single frame - or reduces the total render time by about 40%. Someone check my math, but I think that looks like:

CPU Only - No GPU (Total Render Time): 200 hours
1 PCIe 16x v2 GPU (Total Render Time): 100 hours
1 GPU on TB2 Port1 (Total render time): 120 hours
2 GPU on TB2 Port1 (Total render time): 72 hours
3 GPU on TB2 Port1 (Total render time): 43 hours
4 GPU on TB2 Port1 (Total render time): 26 hours
5 GPU on TB2 Port1 (Total render time): 15 hours

And let's say you have 12 GPUs working (4 GPUs on each of the three controllers), so now we can divide that 26 hours by three to get a total render time of 8.66 hours - or, with overhead and everything, let's just say about 9 hours.

Now let's say each of those GPUs costs us a ridiculous $250 for a used GTX 780, plus $100 per TB2-to-PCIe adapter/enclosure six to twelve months from now (currently I see used GTX 780s for $350), so that's about $4,200 all told. YAY, we just sped up our renders by about ten times for only the cost of one workstation. That's a 10:1 price-performance increase on our render farm, which is now composed of an nMP and 12 little boxes instead of 8 or 10 full-sized workstations. And keep in mind those figures assume each GPU is only able to contribute 80% of its potential. For rendering 100 min. frames I think it's more like Tom's Hardware reported and there's only about a 3% reduction - due to latency, and maybe a tad more due to traffic jams. :)
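
A quick sanity check of the arithmetic, under the simplest reading of the whole-frame model: each GPU independently renders complete frames at the assumed 60 min/frame and the 120 frames are split evenly. (Illustrative only; this ideal-parallel model gives different totals than the table above, which compounds a roughly 40% cut per added GPU.)

    frames = 120
    min_per_frame = 60   # 50 min on an internal x16 GPU + the assumed 20% TB2 penalty

    for gpus in (1, 2, 3, 4, 5, 12):
        hours = frames * min_per_frame / gpus / 60
        print(f"{gpus:2d} GPU(s): {hours:.1f} hours total")

    # Quoted hardware cost for the 12-GPU case: $250 per used card + $100 per enclosure
    print(f"Cost: ${12 * 250 + 12 * 100}")   # $4,200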

Will it actually work, though? I dunno. It should, especially if someone like Squidnet Software takes it up. But I don't see why it wouldn't work right out of the box as-is, either. For every GPU you place on a TB port, the ID and resource are communicated to the rendering engine exactly like having two (or more) PC GPUs in a Mac Pro now. That works... so should this. For most video rendering I assume the rule of diminishing returns will come into play a lot sooner than for CG frame renders in apps like Blender, C4D, LW3D, Maya, XSI, and so on. Compositors like eyeon's Fusion, some parts of FCP, Nuke, and so on will likely fare much better than video editors, though.
 
Last edited:

MacVidCards

Suspended
Nov 17, 2008
6,096
1,056
Hollywood, CA
Why is it impossible? Some laptop user has already done this and posted his results too.

Link please?

I think you will find that you are in error.

It was on a Macbook Air, but had to be done in Windows.

And I am not sure if you realized that these tests were 100% CUDA.

This was not standard rendering but a CUDA test. Note that the Monster 12 Core 5680 machine had nearly the same x16 times as the relatively ancient 3,1 running 8 cores with RAM at half the speed of 12 core. So pretty good indication that CUDA on GPU was the only speed determinant.

If you look through the AE render test thread, the speeds are amazingly consistent for same cards. Which also makes the GTX780 & Titan seem like the revelation they are, shaving 30-40% off previous "best" cards.

So if CUDA took a 12% hit using x4 on a single card, I don't see where you get 3%.

Not theories, conjecture or opinions, these are actual reproducible tests.


Now let's say each of those GPUs cost us a ridiculous $250 for used GTX 780's and $100 per TB2 --> PCIe adapter/enclosure six to twelve months from now (currently I see used GTX780's for $350)

And please, another link for these $350 GTX 780s. That's at least $150 less than the lowest of the low in "completed items" on eBay.

And do you REALLY believe that a PCIE to TB2 enclosure is going to be $100 anytime in the next 6-12 months? That's quite a wonderful thing.
 
Last edited:

Tesselator

macrumors 601
Original poster
Jan 9, 2008
4,601
6
Japan
Link please?

I think you will find that you are in error.

It was on a Macbook Air, but had to be done in Windows.

Oh, Windows? Ugh! I thought he was playing a Mac game over the thing?
https://www.macrumors.com/2013/07/3...graphics-card-with-complex-thunderbolt-setup/
http://forum.techinferno.com/diy-e-...o-expresscard-pe4l-internal-lcd-[us$250].html

yup, sure enough:
"Oh and we're using Windows because games only exist for it, and I can't get the setup to work on OSX (haven't tried too much though)."




And I am not sure if you realized that these tests were 100% CUDA.

This was not standard rendering but a CUDA test. Note that the Monster 12 Core 5680 machine had nearly the same x16 times as the relatively ancient 3,1 running 8 cores with RAM at half the speed of 12 core. So pretty good indication that CUDA on GPU was the only speed determinant.

If you look through the AE render test thread, the speeds are amazingly consistent for same cards. Which also makes the GTX780 & Titan seem like the revelation they are, shaving 30-40% off previous "best" cards.

So if CUDA took a 12% hit using x4 on a single card, I don't see where you get 3%.

Not theories, conjecture or opinions, these are actual reproducible tests.

Right. I'm using data from actual reproducible tests as well. Here's the link to the Tom's Hardware page I keep referring to:

http://www.tomshardware.com/reviews/pci-express-graphics-thunderbolt,3263-6.html

And the graphs for those not inclined to click look like:
[Charts from the article: CLBenchmark, LuxMark, and Sandra GPU results, Thunderbolt vs. x16]


LuxMark would be a better test IMO than the AE benchmark you're running. Just a thought - both are useful, though! Notice the difference between those "renderers" and something like a super-heavy game, where we all know bandwidth is much more critical to the ultimate frame rates:

[Chart from the article: World of Warcraft frame rates, Thunderbolt vs. x16]


This implies that the bandwidth used by renderers is significantly lower, yet the horsepower used (GPU number-crunching) is probably just as intense or more so. And this is a Thunderbolt version 1 test, too, so I would assume TB2 would be more robust and show even less difference all around. Also notice in this last game test that 55 FPS and 41 FPS are quite playable. So if for some odd reason you wanted to use a TB2 GPU for a display, you totally could - in theory - when/if Apple allows GPUs over TB2. Dang, another "if"... I don't like that.
 

MacVidCards

Suspended
Nov 17, 2008
6,096
1,056
Hollywood, CA
I am not sure why tests done a year ago on Windows with much lower bandwidth GPUs have any bearing here.

Why not compare Apples to Apples?

Anyone reading this with a powerful GPU like a GTX 570/580/680 can check this for themselves: x4 slows CUDA with just one card. Expecting to run 10 of them and get anything resembling scalable results is dreaming.

Please read through my test results again; I think it's pretty obvious that multiple GTX 780s on a single TB controller aren't going to come out well at all. A single GTX Titan is bumping its head on the bandwidth ceiling; multiple ones aren't going to work at all, at least not on a single controller.

It would be nice to ignore these numbers and imagine a rosy future where you could plug 10 or 15 GPUs into a TB port, but wishing it worked that way isn't going to add a single byte/second to the actual bandwidth available. It isn't there.

And I'm still waiting for that link to the $350 GTX780s you said were available. ;)
 

Tesselator

macrumors 601
Original poster
Jan 9, 2008
4,601
6
Japan
And please, another link for these $350 GTX 780s. That's at least $150 less than the lowest of the low in "completed items" on eBay.

You're right. It was a GTX 680 card with 780 in the title, so it came up in a "GTX 780" search. :eek: It seems GTX 780s are around $500 or so... Blah!

So that kills the idea of using 780s, then. I guess it's back to thinking about GTX 570 through GTX 680 cards, which I think still offer a pretty good price-performance speed-up with multiple cards. Guess we gotta pay for the cutting-edge stuff. :D

And do you REALLY believe that a PCIE to TB2 enclosure is going to be $100 anytime in the next 6-12 months? That's quite a wonderful thing.
No idea really, and we're in the wrong thread for guessing, but... what are they now, $350 MSRP, right? So probably some HK maker will do a CB knock-off for $100 to $150 pretty soon... no? I hope so anyway. :cool: $350 for that is just silly IMO.
 

Tesselator

macrumors 601
Original poster
Jan 9, 2008
4,601
6
Japan
I am not sure why tests done a year ago on Windows with much lower bandwidth GPUs have any bearing here.

Why not compare Apples to Apples?

Anyone reading this with a powerful GPU like a GTX 570/580/680 can check this for themselves: x4 slows CUDA with just one card. Expecting to run 10 of them and get anything resembling scalable results is dreaming.

Please read through my test results again; I think it's pretty obvious that multiple GTX 780s on a single TB controller aren't going to come out well at all. A single GTX Titan is bumping its head on the bandwidth ceiling; multiple ones aren't going to work at all, at least not on a single controller.

It would be nice to ignore these numbers and imagine a rosy future where you could plug 10 or 15 GPUs into a TB port, but wishing it worked that way isn't going to add a single byte/second to the actual bandwidth available. It isn't there.

Well, if you're right that kills a lot of the attractiveness of the nMP for me - and probably a lot of other CG folks who would like an alternative to the 5 to 20 big-box rendering farms I see so often. If all TB2 is going to be is a storage, camera, and video capture interconnect then that's going to suck big-time! And Tom's does say in their review that "the magnitude of the impact depends on the GPU's performance" so there's some chance that impact may be significant enough to trash this whole theory. I don't think we'll know for sure until it's actually attempted though.

Could you try your last test rig using LuxMark by any chance?
 
Last edited:

MacVidCards

Suspended
Nov 17, 2008
6,096
1,056
Hollywood, CA
Well, if you're right that kills a lot of the attractiveness of the nMP for me - and probably a lot of other CG folks who would like an alternative to the 5 to 20 big-box rendering farms I see so often. If all TB2 is going to be is a storage, camera, and video capture interconnect then that's going to suck big-time! And Tom's does say in their review that "the magnitude of the impact depends on the GPU's performance" so there's some chance that impact may be significant enough to trash this whole theory. I don't think we'll know for sure until it's actually attempted though.

Could you try your last test rig using LuxMark by any chance?

I'm getting ready to head up North tomorrow. Will be passing through Cupertino.

Not sure I'll have the time before I go.

But consider this, people will not be buying nMP & TB expander to run the Tom's Hardware Windows test suite. Or even to run Luxmark in OSX.

They will be buying it to do things like....run Adobe After Effects with GPGPU acceleration. So that's a little more relevant than a synthetic benchmark.

And who knows, currently a TB to PCIE expander for OSX is like the Flying Car, total vapourware. If they never happen, then this testing isn't meaningful.

But at least we know that x4 chokes CUDA out in After Effects for more than 1 serious GPU. That's a start.
 

Tesselator

macrumors 601
Original poster
Jan 9, 2008
4,601
6
Japan
I'm getting ready to head up North tomorrow. Will be passing through Cupertino.

Not sure I'll have the time before I go.

But consider this, people will not be buying nMP & TB expander to run the Tom's Hardware Windows test suite. Or even to run Luxmark in OSX.

They will be buying it to do things like....run Adobe After Effects with GPGPU acceleration. So that's a little more relevant than a synthetic benchmark.

And who knows, currently a TB to PCIE expander for OSX is like the Flying Car, total vapourware. If they never happen, then this testing isn't meaningful.

But at least we know that x4 chokes CUDA out in After Effects for more than 1 serious GPU. That's a start.


Hehe, tru-dat. But I was thinking more along the lines of Blender, C4D, LW3D, Maya, XSI, Nuke3D, FCP Motion, or other compositors. LuxMark may be pretty close to those without having to install any of them - even though it simulates a real-time preview renderer and not a final-frame renderer. Video, like I say, is probably only going to give us the worst-case results.

Have fun in Cupertino man!
 

subsonix

macrumors 68040
Feb 2, 2008
3,551
79

netkas

macrumors 65816
Oct 2, 2007
1,198
394

None of them mentions video card support.

Probably because OSX drivers don't support it... yet. Even though TB has been around for years already.

I once got a Radeon card working via ExpressCard (hi, ViDock), and it was a mess: I needed to replug the card once OSX booted into single-user mode, then continue booting. That was on 10.7, and now 10.9 is around and things haven't changed... yet.
It probably isn't going to happen in the near future.
 

Radiating

macrumors 65816
Dec 29, 2011
1,018
7
None of them mentions video card support.

Probably because OSX drivers don't support it... yet. Even though TB has been around for years already.

I once got a Radeon card working via ExpressCard (hi, ViDock), and it was a mess: I needed to replug the card once OSX booted into single-user mode, then continue booting. That was on 10.7, and now 10.9 is around and things haven't changed... yet.
It probably isn't going to happen in the near future.


PCI Express natively supports video cards. The problem is that PCI Express has two modes: hot-plug and static. Thunderbolt is hot-plug PCI Express. Windows supports video cards over this standard; OS X does not. It would take relatively little effort to update OS X to work with video cards over Thunderbolt, but I suspect that Apple has pulled too many developers over to iOS 7.

This is something really easy and obvious to do, and the longer Apple delays, the longer they embarrass themselves.
 

slughead

macrumors 68040
Apr 28, 2004
3,107
237
PCI Express natively supports video cards. The problem is that PCI Express has two modes: hot-plug and static. Thunderbolt is hot-plug PCI Express. Windows supports video cards over this standard; OS X does not. It would take relatively little effort to update OS X to work with video cards over Thunderbolt, but I suspect that Apple has pulled too many developers over to iOS 7.

This is something really easy and obvious to do, and the longer Apple delays, the longer they embarrass themselves.

Considering the extremely limited market share for such a product, I doubt Apple would pour many resources into building support for it.
 