It is far more economical to simply buy a computer with the right GPU!

Bandwidth is only part of the problem. Cost is something that can't be ignored.

Beyond that, the long-term play is tightly coupled GPUs and CPUs. Because of that tight coupling, in a couple of years you will suffer serious performance degradation from the long trip out to that external device.

So if you are honestly looking out five years into the future, that external box will have to be awfully impressive to make up for what is on board every PC. Look at AMD's road maps for GCN and the eventual advent of GPUs as equals on the memory bus. It isn't that such boxes won't work, it is just that demand won't be there, especially for a lower-end box.

Nice. With this, it'll make a lot more sense to get an external GPU, as you won't have to worry about bandwidth being a bottleneck 5 years down the road. The external GPU enclosures I've seen were last going for ~$800; at that price you might as well get a new laptop rather than buy an $800 accessory that wouldn't last you 5 years because of bandwidth concerns.


----------

All this noise about external GPUs is BS; the direction the industry is going is to tightly couple the GPU with the CPU. By tightly couple I mean giving the GPU direct access to main memory with its own memory management unit. Such an advancement will greatly reduce the latency and data transfer issues commonly seen in today's systems.

Interesting idea - it makes me think that in the future, graphics "cards" will be boards that you put into your monitor and not the device....
I really have my doubts about that sort of future. It is a performance regression even today, and the delta will just get worse in the future. In a nutshell, it is a very expensive way to get very little.
 
This isn't where the industry is going!

That's just the CPU <-> Chipset connection, not the CPU <-> GPU/TB/Sound/SATA connection.

It's been well documented and tested that GPUs can run reasonably on 4x and even 2x connections.
That is garbage, anybody can come up with benchmarks to prove a point. The real benchmark is how such systems will work out in the wild. The reality is a GPU that can work over a 2X connection is too slow to be of much use.
There is a noticeable performance loss with higher-end GPUs, but low- to mid-range GPUs, which will still be much faster than 98% of laptop offerings, can run on it without much loss.
For people living in the past, maybe. But you must realize that any future external product will have to compete with the processor of the day. In that respect we are talking Intel's Ivy Bridge and AMD's Llano and Trinity. Such hardware will move GPU performance up a notch such that low-end and even some midrange GPUs will not be able to compete. You then have to look at an upper-end external GPU to really get anything of value, but then you lose much of your performance over the slow link.

Frankly, I'd be surprised to even find low-end discrete GPUs on the market in three years' time. 22nm gives manufacturers a lot of room to play in. Go another node down and you will have some fairly impressive GPUs right on the die.
Bandwidth availability is far ahead of the actual requirements of the GPU.
Baloney. The whole point of the eventual movement to fused GPUs and CPUs is to avoid all the bandwidth and data transfer issues that are common in today's systems. Ask yourself this, how can PCI Express compete with a memory management unit update?
The big danger if anything is the latency that a TB connection would add, not the bandwidth.
I see that as uninformed. Think about this, why did both AMD and Intel decide to integrate the GPU on die instead of a whole bunch of other support electronics?

----------

I'm not sure why people can't grasp this.

Let's see some consumer-grade Thunderbolt products first...


----------

...until you get more devices that use it, who really cares?

It sure sounds to me like they're having issues with the current technology, since it's taking so long. I've been waiting for a decent enclosure and/or switch at a reasonable price for nearly a year now.
It isn't the technology but rather the unreasonable expectations they are having problems with. Apple's TB-based monitor is out there, as are a number of RAID arrays; this is what the technology was intended for.
It all sounds great, external video cards, fast speeds, etc... Where's the hardware?! At least Apple could make something by now since third parties are way behind...

:mad:
You can be as mad as you want to be but some of this TB stuff people are wishing for will never come to be. People need to come to grips with the idea that TB is not and never will be a USB replacement. They also need to realize that TB devices need a justification that makes economic sense.
 
That is garbage, anybody can come up with benchmarks to prove a point. The real benchmark is how such systems will work out in the wild. The reality is a GPU that can work over a 2X connection is too slow to be of much use.
[Benchmark charts (image011.png, image005.png, image008.png): GTX 480 performance at reduced PCIe bandwidth]


Sure, there's a loss associated; but it's not as if it utterly kills the device.

Let's not forget we're using a more bandwidth-demanding GTX480 here - or that the best of the best current mobile GPUs (like the one used by the current iMac) are worse than the GTX460. The average mobile GPU is more akin to the GTX430 or 420. Even with significant losses to the card's true potential, it's a marked improvement over what a laptop commonly offers.

CPU-wise, laptops are nowhere near as far behind as they are GPU-wise.
I see that as uninformed. Think about this, why did both AMD and Intel decide to integrate the GPU on die instead of a whole bunch of other support electronics?
Because it's cheaper, and it means that the on-die GPU can be designed to utilize the processor's cache along with much better access to the system's RAM. External GPU solutions have to go through the processor and then to the actual memory to be able to pull data. When the GPU is on the CPU itself, that first, very latency-inducing step is removed. It also simplifies cooling management, because it's now centralized in a single point.

That's not to say there aren't other advantages to it; the latency and bandwidth potential between IGPUs and the CPU is beyond anything an external GPU can ever hope to achieve. That's just simple science.
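To put very rough numbers on that, here's a back-of-envelope sketch; the latencies and bandwidths below are purely illustrative assumptions, not measured figures for any real chipset or Thunderbolt controller, but they show why the extra hops hurt small transfers the most.

Code:
# Back-of-envelope only: the latencies and bandwidths below are illustrative
# assumptions, not measured values for any specific system.

def transfer_time_us(payload_bytes, bandwidth_gb_s, round_trip_latency_us):
    """One transfer = fixed round-trip latency + payload / link bandwidth."""
    return round_trip_latency_us + payload_bytes / (bandwidth_gb_s * 1e9) * 1e6

payload = 4 * 1024  # a small 4 KB command buffer / texture update

on_die   = transfer_time_us(payload, 20.0, 0.1)  # integrated GPU via the memory controller
pcie_x16 = transfer_time_us(payload, 8.0, 1.0)   # discrete GPU in a PCIe slot
external = transfer_time_us(payload, 1.0, 5.0)   # GPU behind an external TB-style controller

print(f"on-die: {on_die:.2f} us  PCIe x16: {pcie_x16:.2f} us  external: {external:.2f} us")

The exact figures don't matter; the point is that for streams of small transfers the fixed per-hop cost dominates, and every controller an external box sits behind adds another hop.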

Baloney. The whole point of the eventual movement to fused GPUs and CPUs is to avoid all the bandwidth and data transfer issues that are common in today's systems. Ask yourself this, how can PCI Express compete with a memory management unit update?
Again, that's about latency and not bandwidth. Currently, it takes a few clock cycles for a signal to leave the CPU and reach the GPU. This latency is why socket 2011 has its RAM arranged like it does (since a few people have found it odd). It's a design that keeps the delay between the RAM and CPU to an absolute minimum. It is the delay, if anything, that would kill a GPU running on Thunderbolt. The CPU would simply not be able to feed instructions fast enough to the GPU.

If bandwidth were the concern, then we'd expect a significant improvement running a GPU on PCI-E 3.0 lanes rather than PCI-E 2.0 lanes. The HD 7xxx series has failed to demonstrate any difference at all between those modes.
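For reference, the raw numbers behind that comparison (the per-lane rates and encodings are the published PCIe figures; the conclusion is the argument above, not a new measurement):

Code:
# Published PCIe per-lane effective rates:
#   gen2: 5 GT/s with 8b/10b encoding    -> ~0.5   GB/s per lane
#   gen3: 8 GT/s with 128b/130b encoding -> ~0.985 GB/s per lane
gen2_lane_gb_s = 0.5
gen3_lane_gb_s = 0.985

lanes = 16
print(f"gen2 x{lanes}: ~{gen2_lane_gb_s * lanes:.1f} GB/s")   # ~8 GB/s
print(f"gen3 x{lanes}: ~{gen3_lane_gb_s * lanes:.1f} GB/s")   # ~15.8 GB/s

# Gen3 nearly doubles the bandwidth on offer; a card that benchmarks the same
# on both clearly wasn't saturating the gen2 link in the first place.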
 
I think one of the main points of PCIe 3.0 is to reduce latency by reducing the packet overhead and allowing larger (maximum) packets. I have not seen any tech specs on that issue, however. That would be important for external graphics cards, SSD RAIDs, or even grids.

Larger packets don't reduce latency. They improve throughput, but not latency.
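Right. A quick sketch of why (the ~24 bytes of per-packet header/framing overhead below is an illustrative assumption in the right ballpark for PCIe, not an exact spec value):

Code:
# Illustrative: assume ~24 bytes of header/framing overhead per packet.
OVERHEAD_BYTES = 24

def link_efficiency(max_payload_bytes):
    """Fraction of the wire carrying actual payload for back-to-back packets."""
    return max_payload_bytes / (max_payload_bytes + OVERHEAD_BYTES)

for payload in (128, 256, 512, 1024):
    print(f"max payload {payload:4d} B -> efficiency {link_efficiency(payload):.1%}")

# Bigger packets raise efficiency (throughput), but the time for the first byte
# of a request to cross the link -- the latency -- stays exactly the same.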
 
It'll still cost something to implement the Thunderbolt chip, which isn't inherently integrated, and passing that cost on to devices will not exactly bring prices down in the current year.
Motherboard vendors have been taking a bullet for USB 3.0 controllers since 2010. You are still going to be looking at premium offerings to recover costs for a back panel of USB 3.0 ports, ThunderBolt, or DisplayPort.


Too many changes too fast will impede the growth of TB. If folks want the costs to come down the volume has to go substantially up. Vendors aren't going to adopt it if it is a constantly moving target that requires design changes every 12-18 months. It just doesn't have that kind of inertia to justify that.

That said, I wouldn't look for this PCI-e v3.0 bump at that two-year mark. I'd bet that is 4-6 years out. The major problem is that they are going to have to come up with something that is much faster than PCI-e v3.0 to transport it without inserting additional latencies into the datastream. That means TB will have to go faster than 3.0. Even the PCI-e folks aren't planning to go faster than v3.0 in the mainstream; v4.0 is going to be a special-case, short-distance variant. It isn't going to be a 'wake up tomorrow and flip the switch' issue for TB to jump past something like 4x v3.0 worth of bandwidth.
We already have people clamoring today for more speed. I am sure peripheral vendors see the potential of the connector, but users are stuck in a chicken/egg situation. I do not think we are at the scale of the Osborne effect, though.
 

So we didn't have enough confusion in the world between bits and bytes, and now someone invents the "transfer" as a 5:1 ratio??? WTF!? That should be stricken from the history of the universe! What a crock of proverbial cow dung. :eek:
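Assuming the "transfer" being complained about is PCIe's GT/s figure, it counts raw symbols on the wire, line-code overhead included, which is where the odd-looking ratios to bytes come from. A quick conversion for a single PCIe 2.0 lane (these are the published rates):

Code:
# One PCIe 2.0 lane: 5 GT/s raw, 8b/10b line code (10 bits on the wire per data byte).
raw_gt_s     = 5.0                 # gigatransfers (raw symbols) per second
usable_gbit  = raw_gt_s * 8 / 10   # 8b/10b: only 8 of every 10 bits are data -> 4 Gbit/s
usable_gbyte = usable_gbit / 8     # 8 bits per byte -> 0.5 GB/s per lane, per direction

print(f"{raw_gt_s} GT/s -> {usable_gbit} Gbit/s -> {usable_gbyte} GB/s per lane")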

The USB standards committee nuked the USB + TB combo solution. They disapproved it.

Who cares what they want? It should be about what's best for the consumer and society in general, not some committee looking only to promote its own interests. All they really had to do was create a dual port with two different shapes on it that interlock. In other words, a USB port with an additional small vertical component that is only used for Thunderbolt devices. This way they could put multiple ports on a laptop and let the user decide what to use them for, and yet computers with only one or the other would still fit their respective devices.

But putting ThunderStruck on a port that is hardly used anywhere in the known Universe and then attaching it to video (worse than useless on a Mac Pro, for example) compounds the mistake many times over. Not only does this potentially choke Thunderbolt's potential, because it has to carry video that it doesn't need to carry when a dedicated video port would do, but it needlessly adds a video standard that is then going to be missing, or a hassle and a half, on any machine with video cards (like the Mac Pro). Such a machine would then either lose its Thunderbolt connections if you bought a different video card without Thunderbolt on it, or be left with a non-standard port that doesn't carry video but should (and is thus useless with monitor hubs, etc.). This is what happens when you start mixing bandwidth between unrelated devices.

Worse yet is what happens when you only have ONE Thunderbolt port that is ALSO your video output port. Your monitor has to be the last device (unless it has its own hub), so any devices you might need to hot connect or remove mean unplugging your monitor back and forth with the other devices in between, since someone thought that daisy-chaining was a GREAT idea... :rolleyes: DisplayPort monitors that aren't Thunderbolt will definitely have to be last in the chain, since they can't pass it at all, period, even if they have a second DisplayPort to pass just straight video. The standards were not designed together and were just arbitrarily linked by Apple. Bad move.

Let's face it. As soon as Apple gets USB 3.0 with Ivy Bridge, it's game over for Thunderbolt except as a high-end interface (just like with FireWire). Incompatible ports + MDP ports that almost no one else uses + no reasonably priced hubs + no reasonably priced devices = FAIL. USB 3.0 is 100% backwards compatible with USB 2.x and 1.x, so you don't need any other ports on a mobile device. Plug in a hub and you're golden, with as many ports as you need or as the available bandwidth can sustain. USB 3.0 can replace USB 2.x and 1.x, so it's a no-brainer, low-cost addition to virtually ALL computers in the near future, while Thunderbolt is something that no one really needs and has no real device support beyond a few high-end things like mega-fast RAID arrays, and it costs an arm and a leg for all existing hubs (tied to single monitor choices only from Apple). Little support = fail, no matter how good the technology (witness FW800 in its day; it was vastly superior to USB 2.x, but it got little support by comparison because it wasn't needed by most people, and devices supporting it cost $100-200 more than the comparable USB 2.0 or eSATA versions due to the need for an expensive on-board controller).
 
LightningBolt

They should have called it LightningBolt. It sounds better and makes more sense. You don't have a bolt of thunder; you have a bolt of lightning. Lightning (speed of light) is faster than thunder (speed of sound).
 
Who cares?

Who cares about Thunderbolt? Not 1% of Mac owners have a Thunderbolt accessory, other than a Thunderbolt Display. Instead of increasing the speed of it, why don't they decrease the cost of actually owning something that utilizes it first? The least expensive external Thunderbolt drive sold on the online Apple store is the $449 1TB 7200 RPM LaCie drive. I'd rather spend that $449 on upgrading the primary SSD.
 
I want a 1TB TB drive to run my VM off of. With a 250GB SSD, I need the space.
 
Moar speed! I'm actually quite the Apple sceptic, but when it comes to Thunderbolt, I love the concept of it and wish it would be used by other vendors too. Data and media both ways at the same time at such speeds. Awesome.
 
Most here are arguing the usefulness of TB based on usable speed and price, when the reality is that TB is most useful for mobility, where speed vs. price is less relevant.

The ability to scale speed/storage and cost restrictions (based on packaging) is where I think TB can be most useful.

In terms of Apple's implementation strategy, people, by decrying the delay and lack of availability of peripherals, are giving Apple more credit than it deserves. Apple has a lot of hard decisions to make, most of which depend on what Intel does, because there are serious limitations on what you can fit in a confined space while packaging it in a way that is both economical and efficient. This issue isn't lost on Apple; just look at how far prices have come down for the MacBook Pro and the MacBook Air.

TB offers a platform where a tiny MacBook Air can have power similar to a Mac Pro's. For this purpose, COST is not the highest priority; mobility is.

I think everyone's opinion on TB is correct and what's lost in their arguments is the basis of TB's application.

Basically, it's Performance, Price or Packaging; you can have only two.

-H
 
You're going to use benchmarks of a two-year-old graphics card to prove your point? Really?
Yes, because bandwidth requirements have fundamentally not changed much. It's been re-demonstrated multiple times, though less formally. I assume you're quite inexperienced with hardware, as this is really common knowledge on enthusiast forums.

Plus, the age of the card still utterly fails to defeat the following point I made before:

It's still faster than every existing mobile GPU - including the absolute top end like that found in the iMac. Any 'standard' laptop will include a far weaker GPU than that. Even with a significant performance loss, an external GPU would still perform significantly better. And as I've also stated before, you would be able to use a cheaper card - like a 460/560/(unreleased) 660 - and still get a very hefty boost. Such a card would also not be quite as bandwidth-heavy.

This also ignores any and all attempts to partially re-engineer either the GPUs or the drivers themselves to be more efficient in their bandwidth use. So far, there's been no need for AMD or NVIDIA to really consider that. And it's part of the reason why Intel does not bother to provide more than 16 lanes for GPUs in the consumer line of processors (e.g. SB), despite those regularly needing to run Crossfire/SLI configurations. The cards just don't care.
 
So Intel is struggling to expand PCIe to be nearly as fast as TB? And TB is just a precursor to Light Peak, which will be 10 times the speed.

Seems a little odd.
 
Why would you want Intel to stop improving their products while they wait for other manufacturers to implement the design?

Improving something that isn't broken while ignoring all of the actual weaknesses serves no purpose in my view. Focus should be on making the already speedy ThunderBlunder cheaper and easier to implement so that it gains enough devices and peripherals for the speed to actually become an issue at some point. You'd think this sort of common sense reasoning would be obvious.

Intel processors aren't broken, yet Intel is still working to make them faster. As for making it easier to implement, I do believe Intel has been working to better integrate it in hardware.

I've never understood the mentality that a company should focus on a single feature to the exclusion of everything else. There is a good possibility that the engineering disciplines needed to build a faster interconnect are different from those needed to better integrate the components. Should one group of engineers sit idle while waiting for the other?
 