A highly dubious assertion when the new Redwood Ridge controllers are supposed to drive costs down slightly. One of the biggest Thunderbolt complaints is device cost, not whether the devices have some utility. More precisely, it's the match between price and the value provided.

Thunderbolt 2.0's doubling of speed is extremely likely not going to be accompanied by cost decreases. I'm not sure why most folks would wait longer for devices that are at least as expensive, if not more so, than the ones available sooner.

I would say the biggest complaint is "cost for functionality," not cost overall. Why buy an expensive Thunderbolt device that basically just functions as a USB 3 port to hang a hard disk drive off (plus an audio port, Ethernet port, and video output port that are not going to be used by 90% of purchasers)?

Thunderbolt's best usage will come when they start letting us add devices that make the best use of the port's capabilities, not just devices that duplicate USB 3 devices that are much cheaper and perform the same.

We need Thunderbolt eGPUs, Thunderbolt Video Capture Cards, and Thunderbolt PCI Express chassis for other cards normally dedicated to desktop PCs. This is especially needed as Apple seems to be ignoring the Mac Pro in favor of the iMacs and MacBooks. I'm sure Thunderbolt will come of age, but much like FireWire it will be too late to make it a mainstream port. It will be a port that has a use for a select few hardcore users or professionals in a small market where a Thunderbolt device does a perfect job; everyone else will end up with a port that adds cost to the system and that is never used. (I had never owned or used a FireWire product until I bought a Drobo, and even now I use USB to connect it to a network share device because FireWire isn't supported; the new Drobo I'm looking at buying will also be connected via USB, because there are no Thunderbolt network share devices.)
 
I hope it supports DisplayPort 1.2 and PCIe 3.0 for future-proofing, and then gets updated again when PCIe 4.0 comes out in 2014/15 (according to Wikipedia).

While Thunderbolt doesn't have as many products as USB, it's good that someone is trying to get the ball rolling. Hopefully, more and more companies will make products for TB.

What depresses me is how many people here say, "Well, I don't need it, so why does anyone else? Apple should get rid of it!" Why must the minority of people who actually use TB suffer just because you don't need something? You and a majority of the world probably don't need a surgeon's scalpel, so does that mean scalpels don't need to be made anymore?

You can make that argument for optical drives.
 
The consequence of that is that you think Macs should be built to a lower standard: dumb it down to the lowest common denominator and only support USB, even though the common crutch and internet wisdom from armchair critics is that the pros are ignored. One of the most common arguments I see from that group is the need for more ports and expandability. TB solves that, and does it in a progressive way, not relying on technology from the past. Take ExpressCard as an example: it's not possible to chain devices, it's not hot-pluggable, it's not dual-purpose, and the connector is larger and more fragile. Yet it's been used to expand portable systems with things like PCIe expansion chassis for TDM cards.

The thing I found annoying about ExpressCard was that when it went to the 17" model only, the number of supported cards on the Mac side really dropped off. I'm not sure how much is out there today in the way of hardware with stable Mac drivers. Thunderbolt is on almost all newer Macs (excluding Mac Pros), so at least companies making peripherals have a larger potential audience. It's a decent connector, but hot-plugging is tied to drivers. It's required by Intel for Thunderbolt certification, but it wasn't common for PCI hardware in the past.
 
Out of curiosity, do you have one of the original MacBook Air SuperDrives, or is it the newer Apple USB SuperDrive? Has Apple made any progress, or are they still insisting on being stubborn about it?

I have the newer USB SuperDrive. Got it delivered to my office and tried to test it on my PC; it didn't work. Thought it was damaged, went over to another PC, not working, then to an MBP, not working, then another MBP, not working either.

Googled it and couldn't believe it. How stupid is it to sell a USB drive that doesn't work with all computers, not even with all of their own brand? I mean, I've seen many stupid things done by Apple, but that's the worst.

What's next? Power supplies that only work with power sold by Apple?
 
I would say the biggest complaint is "cost for functionality," not cost overall. Why buy an expensive Thunderbolt device that basically just functions as a USB 3 port to hang a hard disk drive off

You are conflating multiple discussions that mention cost into one category. They aren't one category. There are three major groups.

A. Thunderbolt is no USB 3.0 (or USB) killer. ("My USB 3.0 drive connects with a XX cable. I don't need a XXX TB box to hook it up. USB 3.0 is more cost effective.")


B. I like the peripheral, but it costs too much. (Typically, "I'd buy one if it cost XX less.")


C. Thunderbolt doesn't solve my single-protocol connection more effectively (an eSATA drive is cheaper, an FW connection is better, etc.). Also in this group are folks with myopic objectives, like looking at it only as a fast direct-attached storage cable for a single drive. Those folks would be happy with a $20-30 dongle, but then the complaint is going to shift to something else. Frankly, most of those folks aren't happy about Thunderbolt at all. Cost is just the convenient metric to throw out. They'll find another one.



Only B is really primarily about cost. A is a goofy argument on both sides; neither one is a "killer" of the other. They overlap somewhat in some areas but are largely targeted at two different problems. USB 3.0 is a highly backward-compatible USB 2.0 solution that uses a different set of wires to go much faster. Thunderbolt primarily hits its groove when multiplexing multiple protocols over a single cable. If it's not doing that, it is likely not the most cost-effective solution in many cases.
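For a rough sense of the raw numbers behind that distinction, here is a small back-of-the-envelope sketch in Python. The 5 Gbit/s USB 3.0 signaling rate, its 8b/10b line coding, and the 10 Gbit/s-per-channel Thunderbolt figure are the commonly published values, not anything specific to this thread:

# Back-of-the-envelope comparison of the raw link rates being argued about.
# Assumptions: published signaling rate and 8b/10b line coding for USB 3.0;
# the 10 Gbit/s Thunderbolt channel figure is what is presented to the
# upper layers. Real-world throughput is lower once protocol overhead is added.

def usable_gbps(signaling_gbps: float, coding_efficiency: float) -> float:
    """Usable bit rate after line coding."""
    return signaling_gbps * coding_efficiency

usb3_gbps = usable_gbps(5.0, 8 / 10)   # USB 3.0 SuperSpeed: ~4 Gbit/s usable
tb_channel_gbps = 10.0                 # one Thunderbolt channel

print(f"USB 3.0 usable rate:      {usb3_gbps:.1f} Gbit/s")
print(f"Thunderbolt channel rate: {tb_channel_gbps:.1f} Gbit/s")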


(plus an audio port, Ethernet port, and video output port that are not going to be used by 90% of purchasers)?

Likely not accurate. More than likely, folks who buy Thunderbolt displays use the speakers (i.e., audio). And those who own a port-hobbled Mac (e.g., an MBA) likely have at least one of a Thunderbolt docking station's sockets filled with something on a permanent basis.


Thunderbolt's best usage will come when they start letting us add devices that make the best use of the port's capabilities, not just devices that duplicate USB 3 devices that are much cheaper and perform the same.

Again, the core issue in those kinds of complaints is folks trying to do one-to-one mappings with USB 3.0, casting Thunderbolt as a USB 3.0 killer. It isn't. In fact, in most cases Thunderbolt doesn't make any ports disappear; they just physically move to another external device.


We need Thunderbolt eGPUs,

Thunderbolt is a bit narrow for that purpose. Long term, Moore's Law is going to make integrated GPUs powerful. Swimming upstream against Moore's Law is doomed; too many billions are invested in keeping that stream a raging current.

Besides, this is largely a software issue, not a hardware one.


Thunderbolt Video Capture Cards,

This is a prime example of ports simply moving rather than going away. For example:

http://www.blackmagicdesign.com/products/intensity/

A PCI-e card and a Thunderbolt device with the same exact set of ports, priced exactly the same. There are going to be PCI-e cards wrapped in a box and connected via TB.

All that primarily does is make those cards available to the rest of the Mac lineup that is not the Mac Pro. That is not so much about the Mac Pro as about making those boxes (e.g., the iMac and Mini) more competitive with alternative systems in their price ranges.


and Thunderbolt PCI Express chassis for other cards normally dedicated to desktop PCs.

Sorry, but this is heavily couched in the "PCI-e cards solve all problems" philosophy, which really isn't true. Laptops rule the classic PC market, and PCI-e cards are not a dominating feature of the landscape. Again, this is far more a software issue than a Thunderbolt one. There are multiple chassis on the market.

ExpressCard is as much a funky USB socket as it is a now comically bandwidth-starved PCI-e connection.

If Thunderbolt doesn't work as an industry-standard docking station connector, then it is going to have a ton of problems. Period. This other stuff makes for nice "add-on" market segments, but with no coverage as a docking station there is no deep market penetration. Multiplexing protocols is what Thunderbolt is good at (move the various protocol controllers out to the external box and run the PCI-e connection back; likewise run the GPU signal out).


I'm sure Thunderbolt will come of age, but much like FireWire it will be too late to make it a mainstream port.

FW400 is still a mainstream port. It is around. It isn't universal, but it is still pretty widely out there. Not sure why this gets labeled a "failure." Again, there are dubious notions that FireWire failed to be a USB killer or vice versa. Those are deeply misguided.


that is never used. (I had never owned or used a FireWire product until I bought a Drobo, and even now I use USB to connect it to a network share device because FireWire isn't supported; the new Drobo I'm looking at buying will also be connected via USB, because there are no Thunderbolt network share devices.)

That isn't the mainstream Mac usage pattern for FireWire. I only have FireWire-capable external drives; about five of them (not counting USB flash thumb drives).
 
I have the newer USB SuperDrive. Got it delivered to my office and tried to test it on my PC; it didn't work. Thought it was damaged, went over to another PC, not working, then to an MBP, not working, then another MBP, not working either.

Googled it and couldn't believe it. How stupid is it to sell a USB drive that doesn't work with all computers, not even with all of their own brand? I mean, I've seen many stupid things done by Apple, but that's the worst.

What's next? Power supplies that only work with power sold by Apple?

I don't get this. I have one and it works on:
Air 2012
Mac Pro 2008 + Boot Camp Win7
13" MBP retina
13" MBP 2010

----------

This new generation of Thunderbolt has me SO EXCITED!

I'm TOTALLY going to rush out to my local electronics chain and drop $600-3k on an external hard drive!

YIPPEE!

[Rolls eyes so far back into my head I cannot see]

Well, when you are duplicating / backing up a day's shoot of 5K Red Epic footage, it pays for itself in, well... less than a day.

Funny how things that some people see as a pointless waste of time are someone else's bread and butter.

TOTALLY... YIPEEE!
 
I don't get this. I have one and it works on:
Air 2012
Mac Pro 2008 + Boot Camp Win7
13" MBP retina
13" MBP 2010

----------



from the Apple Store:

Compatible with the following computers:
MacBook Pro with Retina display
MacBook Air
iMac (late 2012)
Mac mini (late 2009) and later

It definitely didn't work on the MacBook Pros that I tested it on (all non-Retina), or on any PC. Surprising for an external USB drive, though...
 
Actually, Thunderbolt is 2 channels per port, so a 2-port controller has 4 channels, and the quoted speeds are what's available to the upper layers. Cactus Ridge (and presumably Redwood Ridge) controllers have a PCIe 2.0 x4 connection on the back end, which due to 8b/10b encoding is really only 16 Gbit/s total despite the controller having the ability to pump 40 Gbit/s on the front side.
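For anyone checking those figures, the arithmetic works out as follows. This is just the standard PCIe 2.0 per-lane math (5 GT/s with 8b/10b coding), not new information about the controllers:

# Reconciling the back-end and front-side numbers quoted above.
# PCIe 2.0: 5 GT/s per lane, 8b/10b coding -> 4 Gbit/s usable per lane.

lanes = 4
usable_per_lane_gbps = 5.0 * 8 / 10          # 4.0 Gbit/s
backend_gbps = lanes * usable_per_lane_gbps  # 16.0 Gbit/s to the host

channels = 2 * 2                             # 2 ports x 2 channels on a 2-port controller
frontside_gbps = channels * 10.0             # 40.0 Gbit/s of cable-side capacity

print(f"PCIe 2.0 x4 back end:    {backend_gbps:.0f} Gbit/s")
print(f"Front side (4 channels): {frontside_gbps:.0f} Gbit/s")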

Yes, Thunderbolt is 2 channels per port. However, those two channels are a dedicated PCIe channel and a dedicated DisplayPort channel; it's not possible to combine the two into a single PCIe link. So while Cactus Ridge uses a PCIe 2.0 x4 back end, PCIe traffic is limited to a single channel of 10 Gbps, which equates to x2 2.0 + 12.5% in real terms.

You need x4 2.0 to start seeing near-desktop levels of performance when attaching an eGPU. That will only come in 2014 with the release of the Falcon Ridge TB controllers (20 Gbps per channel).
 
Yes, Thunderbolt is 2 channels per port. However, those two channels are a dedicated PCIe channel and a dedicated DisplayPort channel; it's not possible to combine the two into a single PCIe link. So while Cactus Ridge uses a PCIe 2.0 x4 back end, PCIe traffic is limited to a single channel of 10 Gbps, which equates to x2 2.0 + 12.5% in real terms.

You need x4 2.0 to start seeing near-desktop levels of performance when attaching an eGPU. That will only come in 2014 with the release of the Falcon Ridge TB controllers (20 Gbps per channel).

This is clearly not the case since you can drive 2 2560x1440 panels while transferring 430 MB/s of PCIe data at the same time to devices daisy chained off of a single port as demonstrated here. Intel has been very clear that each direction in both channels can carry DisplayPort and/or PCIe traffic. Thunderbolt also provides a full 10 Gbit/s to the upper layers unlike the nominal data rates for PCIe which include encoding overhead. This makes a single Thunderbolt channel equivalent to 2.5 lanes of PCIe 2.0. This equates to > 1000 MB/s throughput in real terms as demonstrated here.
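Working through the arithmetic behind the "2.5 lanes" and ">1000 MB/s" figures (assuming, as above, 4 Gbit/s of usable bandwidth per PCIe 2.0 lane after 8b/10b coding):

# One Thunderbolt channel expressed in PCIe 2.0 lane equivalents.
tb_channel_gbps = 10.0                      # presented to the upper layers
pcie2_lane_gbps = 5.0 * 8 / 10              # 4.0 Gbit/s usable per lane

equivalent_lanes = tb_channel_gbps / pcie2_lane_gbps    # 2.5 lanes
throughput_mb_s = tb_channel_gbps * 1000 / 8            # 1250 MB/s

print(equivalent_lanes, throughput_mb_s)    # 2.5 1250.0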

That being said, it would appear that on Mac systems one channel per port is reserved for DisplayPort, which means that exceeding 10 Gbit/s of PCIe throughput requires using both ports on a 2-port host. While this will be alleviated with Falcon Ridge (or possibly sooner), if the PCIe x4 back end remains the same, the maximum throughput will still be 16 Gbit/s less protocol overhead, or about 13 Gbit/s.

Every test I have seen showing the effect of PCIe scaling on GPU performance points to PCIe 2.0 x2.5 levels of bandwidth having minimal impact in all but corner cases. Unless you're dead set on gaming at 120 fps, it should be hardly noticeable. The thing holding back external Thunderbolt GPUs is the lack of PCIe bandwidth optimized drivers with hot-plug support. Since virtually all Thunderbolt ports are found on Mac systems, that means getting Apple to work with AMD and NVIDIA to develop such drivers, which I highly doubt is a priority for Apple at the moment, especially because it would take a *lot* of additional engineering to properly accommodate external GPUs in a seamless fashion. Thunderbolt connected GPUs don't exist because they're challenging to implement, not because 10 Gbit/s isn't fast enough.
 
Every test I have seen showing the effect of PCIe scaling on GPU performance points to PCIe 2.0 x2.5 levels of bandwidth having minimal impact in all but corner cases. Unless you're dead set on gaming at 120 fps, it should be hardly noticeable. The thing holding back external Thunderbolt GPUs is the lack of PCIe bandwidth optimized drivers with hot-plug support. Since virtually all Thunderbolt ports are found on Mac systems, that means getting Apple to work with AMD and NVIDIA to develop such drivers, which I highly doubt is a priority for Apple at the moment, especially because it would take a *lot* of additional engineering to properly accommodate external GPUs in a seamless fashion. Thunderbolt connected GPUs don't exist because they're challenging to implement, not because 10 Gbit/s isn't fast enough.
Hot plugging? Maybe if you trip over the cable. Otherwise it appears as a PCI Express bus to the card and operating system. Getting drivers for the video card might be an issue under OS X.

Not to mention the price of the external x16 PCI Express box and power supply.
 
This is clearly not the case since you can drive 2 2560x1440 panels while transferring 430 MB/s of PCIe data at the same time to devices daisy chained off of a single port as demonstrated here. Intel has been very clear that each direction in both channels can carry DisplayPort and/or PCIe traffic. Thunderbolt also provides a full 10 Gbit/s to the upper layers unlike the nominal data rates for PCIe which include encoding overhead. This makes a single Thunderbolt channel equivalent to 2.5 lanes of PCIe 2.0. This equates to > 1000 MB/s throughput in real terms as demonstrated here.

That being said, it would appear that on Mac systems one channel per port is reserved for DisplayPort, which means that exceeding 10 Gbit/s of PCIe throughput requires using both ports on a 2-port host. While this will be alleviated with Falcon Ridge (or possibly sooner), if the PCIe x4 back end remains the same, the maximum throughput will still be 16 Gbit/s less protocol overhead, or about 13 Gbit/s.

Every test I have seen showing the effect of PCIe scaling on GPU performance points to PCIe 2.0 x2.5 levels of bandwidth having minimal impact in all but corner cases. Unless you're dead set on gaming at 120 fps, it should be hardly noticeable. The thing holding back external Thunderbolt GPUs is the lack of PCIe bandwidth optimized drivers with hot-plug support. Since virtually all Thunderbolt ports are found on Mac systems, that means getting Apple to work with AMD and NVIDIA to develop such drivers, which I highly doubt is a priority for Apple at the moment, especially because it would take a *lot* of additional engineering to properly accommodate external GPUs in a seamless fashion. Thunderbolt connected GPUs don't exist because they're challenging to implement, not because 10 Gbit/s isn't fast enough.

I'll cut to the chase so that users reading this get the most accurate info:

1. I already have a Thunderbolt eGPU implementation (see sig). So yes, Thunderbolt GPUs do exist. Agreed, it was a minor challenge to do so.

2. The DIY eGPU folks (including myself) have tested and confirmed the 10 Gbps channel limit for PCIe traffic. Yes, there are two 10 Gbps channels per port, one for PCIe and one for DisplayPort. Unfortunately, they cannot be aggregated into a single 20 Gbps link in Cactus Ridge.

3. Games certainly see the effects of restricted bandwidth. TechPowerUp's scaling tests (linked below) summarize the performance relative to an x16 2.0 link as:

x2 2.0 : AMD=86% NVIDIA=73%
x4 2.0 : AMD=94% NVIDIA=86%

REF: http://www.techpowerup.com/reviews/Intel/Ivy_Bridge_PCI-Express_Scaling/23.html

However, this is for an LCD attached to the desktop video card. Most users want an eGPU to drive the internal LCD, and that requires allocating more bandwidth.

An 'eGPU-centric' manufacturer could make use of the DP channel on a TB port by adding a mux to allow a desktop video card to drive the internal LCD via a loopback connector. It would be like the old-style switchable discrete graphics. Apple certainly hasn't implemented anything like that on their TB controllers, though.

Instead, users wanting to use an eGPU with the internal LCD currently need to sacrifice some PCIe bandwidth for the return traffic. The system also needs an iGPU (a driver requirement) along with NVIDIA Optimus or Lucid Logix Virtu to do that [Windows].

4. Since Falcon Ridge can carry 20 Gbps of traffic, I'm guesstimating that Intel will upspec the notebook<->TB link to x8 2.0 or x4 3.0 so as to be able to saturate the link. x4 2.0 only carries 16 Gbps of traffic (after overhead).
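A quick sketch of why the existing x4 2.0 back end can't feed a 20 Gbps Falcon Ridge channel, and what the two guessed-at alternatives would provide. The PCIe 3.0 figures assume the published 8 GT/s rate with 128b/130b coding; the x8 2.0 / x4 3.0 options are the poster's guesses, not anything Intel has announced:

# Usable PCIe bandwidth for the back-end options mentioned above.
def pcie_usable_gbps(lanes: int, gt_per_s: float, coding_efficiency: float) -> float:
    return lanes * gt_per_s * coding_efficiency

x4_gen2 = pcie_usable_gbps(4, 5.0, 8 / 10)       # 16.0 Gbit/s
x8_gen2 = pcie_usable_gbps(8, 5.0, 8 / 10)       # 32.0 Gbit/s
x4_gen3 = pcie_usable_gbps(4, 8.0, 128 / 130)    # ~31.5 Gbit/s

falcon_ridge_channel_gbps = 20.0
for name, rate in [("x4 2.0", x4_gen2), ("x8 2.0", x8_gen2), ("x4 3.0", x4_gen3)]:
    can_saturate = rate >= falcon_ridge_channel_gbps
    print(f"{name}: {rate:.1f} Gbit/s, can feed a 20 Gbit/s channel: {can_saturate}")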
 
I'll cut to the chase so that users reading this get the most accurate info:

1. I already have a Thunderbolt eGPU implementation (see sig). So yes, Thunderbolt GPUs do exist. Agreed, it was a minor challenge to do so.

2. The DIY eGPU folks (including myself) have tested and confirmed the 10 Gbps channel limit for PCIe traffic. Yes, there are two 10 Gbps channels per port, one for PCIe and one for DisplayPort. Unfortunately, they cannot be aggregated into a single 20 Gbps link in Cactus Ridge.

3. Games certainly see the effects of restricted bandwidth. TechPowerUp's scaling tests (linked below) summarize the performance relative to an x16 2.0 link as:

x2 2.0 : AMD=86% NVIDIA=73%
x4 2.0 : AMD=94% NVIDIA=86%

REF: http://www.techpowerup.com/reviews/Intel/Ivy_Bridge_PCI-Express_Scaling/23.html

However, this is for an LCD attached to the desktop video card. Most users want an eGPU to drive the internal LCD, and that requires allocating more bandwidth.

An 'eGPU-centric' manufacturer could make use of the DP channel on a TB port by adding a mux to allow a desktop video card to drive the internal LCD via a loopback connector. It would be like the old-style switchable discrete graphics. Apple certainly hasn't implemented anything like that on their TB controllers, though.

Instead, users wanting to use an eGPU with the internal LCD currently need to sacrifice some PCIe bandwidth for the return traffic. The system also needs an iGPU (a driver requirement) along with NVIDIA Optimus or Lucid Logix Virtu to do that [Windows].

4. Since Falcon Ridge can carry 20 Gbps of traffic, I'm guesstimating that Intel will upspec the notebook<->TB link to x8 2.0 or x4 3.0 so as to be able to saturate the link. x4 2.0 only carries 16 Gbps of traffic (after overhead).

I knew I'd be taken to task for making the hyperbolic statement that "Thunderbolt connected GPUs don't exist," but as far as it goes, the only OEM to offer a plug and play Thunderbolt GPU solution was Sony, and that was more Light Peak than Thunderbolt.

While one channel may be reserved on Mac systems for DisplayPort, the other blatantly interleaves PCI packets with DisplayPort packets when two displays are connected. Saying that one channel is dedicated to DP and the other to PCIe is patently false. AFAIK, the 10 Gbit/s per port PCIe limitation was imposed by Apple, not necessarily by the Light/Eagle/Cactus Ridge controllers. In other words, that proviso could be lifted with Redwood Ridge or an EFI/OS update from Apple. Even still, it's not like a single device could exploit more than one channel at a time for PCIe, so the 10 Gbit/s per device PCIe limit will indeed stand until Falcon Ridge.

The PCIe performance scaling article you linked to actually proves my point pretty well. The majority of PCIe throttling generally happens at frame rates > 60 fps, which means it makes zero difference in real life. Try these tests with Vsync enabled and see what happens. Yes, you may see a 10% hit when you're going crazy hard with an external GPU solution, but it beats most internal solutions by so much that you'd be insane to care. Also, those tests were not performed with any sort of driver optimizations such as PCIe compression, which would level the playing field even further.

Your point about driving a built-in panel with an eGPU is precisely the reason why I said the complexity of the problem will not go away without Apple's intervention. Mac OS X uses neither Lucid's Virtu nor NVIDIA Optimus. Furthermore, only a handful of Mac models even have a connection leading from the Thunderbolt controller's DisplayPort output to the built-in panel. This is not a solvable problem for any OEM wishing to bring a Thunderbolt GPU to market, unless Apple decides to go there, which is hugely unlikely. If you spend half a grand on an eGPU setup, it would be nice to be able to use it while running the OS that shipped with your PC.

Edit: I forgot to mention that your Thunderbolt GPU setup uses a 13-inch MacBook Pro, which only has Intel HD 4000, running Windows. Quite a few Macs already have discrete graphics. How do you get three GPUs, potentially all from different vendors, playing nice together? Has this been tested?

The Intel Falcon Ridge demo linked to in this article clearly shows them peaking at 1259.87 MB/s of PCIe throughput, or right where we'd expect them to if there was only a PCIe 2.0 x4 back end. This may be simply their way of continuing to enforce the bandwidth reservation for DP traffic. It's really too early to tell.
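For readers converting units, that peak figure corresponds to roughly one 10 Gbit/s channel's worth of PCIe traffic; a minimal check of the arithmetic (decimal megabytes assumed):

# Convert the quoted 1259.87 MB/s peak back into a bit rate.
peak_mb_s = 1259.87
peak_gbps = peak_mb_s * 8 / 1000     # ~10.08 Gbit/s

print(f"{peak_gbps:.2f} Gbit/s")     # essentially a single 10 Gbit/s channel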

Hot plugging? Maybe if you trip over the cable. Otherwise it appears as a PCI Express bus to the card and operating system. Getting drivers for the video card might be an issue under OS X.

Not to mention the price of the external x16 PCI Express box and power supply.

Why would anyone want to shut down their machine in order to do a little gaming? I want to just sit down at my desk with my laptop and plug in my Thunderbolt GPU connected to a big-ass screen and go to town. So yeah, I'd expect hot-plugging to be the norm, not the exception.

At this point, the external chassis without the GPU shouldn't cost more than $220, even for a 2-port model. The whole thing would make more sense if AMD's and NVIDIA's partners would produce an integrated device that just puts the GPU and Thunderbolt controller on the same card in a chassis with a built-in power supply optimized for the TDP of the GPU. Since any GPU that is worth doing this with is at least $150, it's not like people would balk too much at these systems starting at $329 and going up to $1399 for a GTX TITAN.
 
Edit: I forgot to mention that your Thunderbolt GPU setup uses a 13-inch MacBook Pro, which only has Intel HD 4000, running Windows. Quite a few Macs already have discrete graphics. How do you get three GPUs, potentially all from different vendors, playing nice together? Has this been tested?

The Intel Falcon Ridge demo linked to in this article clearly shows them peaking at 1259.87 MB/s of PCIe throughput, or right where we'd expect them to if there was only a PCIe 2.0 x4 back end. This may be simply their way of continuing to enforce the bandwidth reservation for DP traffic. It's really too early to tell.
You can do it post-Windows 7. I believe it comes as part of the updates included in WDDM 1.1. I had an acquaintance who referred to running dual-vendor cards as "SLI FIRE."

Why would anyone want to shut down their machine in order to do a little gaming? I want to just sit down at my desk with my laptop and plug in my Thunderbolt GPU connected to a big-ass screen and go to town. So yeah, I'd expect hot-plugging to be the norm, not the exception.

At this point, the external chassis without the GPU shouldn't cost more than $220, even for a 2-port model. The whole thing would make more sense if AMD's and NVIDIA's partners would produce an integrated device that just puts the GPU and Thunderbolt controller on the same card in a chassis with a built-in power supply optimized for the TDP of the GPU. Since any GPU that is worth doing this with is at least $150, it's not like people would balk too much at these systems starting at $329 and going up to $1399 for a GTX TITAN.
I can imagine rebooting into Windows on a Mac. I have not read any documentation about what Sony does for their notebooks when using the external GPU. Most notebooks before Optimus and AMD's latest Catalyst support required a reboot, or at the very least logging out of your session, to switch over to the discrete graphics option on a mobile system.

I suspect Thunderbolt already has a protocol for the sudden disconnection of hardware to prevent damage to the devices, controllers, and cables. Now, what happens in software is another story.

Currently, Optimus stores a copy of the discrete GPU's frame buffer inside the IGP's buffer as well. I recall a demonstration video where they removed the NVIDIA GPU from the Mini-PCIe slot in the demonstration hardware and the notebook display continued without a hitch.

I would hope AidenShaw could shed some light on this.
 
You can do it post-Windows 7. I believe it comes as part of the updates included in WDDM 1.1. I had an acquaintance who referred to running dual-vendor cards as "SLI FIRE."

I can imagine rebooting into Windows on a Mac. I have not read any documentation about what Sony does for their notebooks when using the external GPU. Most notebooks before Optimus and AMD's latest Catalyst support required a reboot, or at the very least logging out of your session, to switch over to the discrete graphics option on a mobile system.

I suspect Thunderbolt already has a protocol for the sudden disconnection of hardware to prevent damage to the devices, controllers, and cables. Now, what happens in software is another story.

Currently, Optimus stores a copy of the discrete GPU's frame buffer inside the IGP's buffer as well. I recall a demonstration video where they removed the NVIDIA GPU from the Mini-PCIe slot in the demonstration hardware and the notebook display continued without a hitch.

I would hope AidenShaw could shed some light on this.

I never meant to imply this was not technically possible. There are half a dozen Thunderbolt PCIe expansion chassis on the market that you can throw a GPU in, and it will probably work just fine under Windows with a copy of Virtu installed after a few reboots. nando4 happens to run one. The problem is the commercial viability of attempting to bring a complete Thunderbolt GPU solution to market.

I believe it is actually part of the Thunderbolt licensing requirements that if you ship a device, it needs to have hot-plug capable drivers or you're not getting certified for that platform.

Furthermore, the market for Thunderbolt accessories comprises nearly 36 million Mac users and a few tens of thousands of PC users. Apple's proprietary GPU switching technology baked into OS X doesn't support external Thunderbolt GPUs yet, so the whole concept is basically a non-starter except for the DIY enthusiast. Even if you choose to run Windows on your Mac, give up hot-plugging, and kludge some rather expensive solution together, in order to drive the built-in panel you're still copying the frame buffer back over an already PCIe-constrained Thunderbolt link. This is where hardware support that would allow driving the internal panel using the Thunderbolt controller's DisplayPort transport capabilities would greatly improve the experience.
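To put a rough number on that frame buffer copy-back cost, here is a sketch under purely illustrative assumptions (a hypothetical 1920x1080 internal panel, 32-bit color, 60 Hz, no compression); the point is only that the return traffic eats a meaningful share of a ~10 Gbit/s PCIe channel:

# Approximate bandwidth consumed by copying rendered frames back to the host.
width, height = 1920, 1080     # hypothetical internal panel, for illustration only
bytes_per_pixel = 4            # 32-bit color
refresh_hz = 60

copyback_gbps = width * height * bytes_per_pixel * refresh_hz * 8 / 1e9
print(f"~{copyback_gbps:.1f} Gbit/s of return traffic")   # ~4.0 Gbit/s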
 