While I agree about USB and Thunderbolt being complementary, display capability was not in any way an afterthought.

If you look at a diagram of an LGA 1155 platform, you can see that 90% of the CPU's I/O is reserved for graphics or display applications, and the PCH has 17.28 Gbit/s of FDI bandwidth but only 16 Gbit/s for the DMI (although the DMI is full-duplex). Furthermore, HD Graphics 4000 occupies 1/3 of the die area of an Ivy Bridge CPU. Display is clearly very important to Intel's customers.
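For a quick sanity check on those two figures, here's a back-of-envelope sketch in Python (assuming the commonly cited link parameters rather than quoting a datasheet: DMI 2.0 as 4 lanes at 5 GT/s and FDI as 2 links of 4 lanes at 2.7 GT/s, both 8b/10b encoded):

# Back-of-envelope check of the PCH link bandwidth figures above.
# Assumptions (commonly cited, not pulled from a datasheet): DMI 2.0 = 4 lanes
# at 5 GT/s, FDI = 2 links x 4 lanes at 2.7 GT/s, both 8b/10b encoded.
ENCODING = 8 / 10                        # 8b/10b line code, 80% efficiency

dmi_gbps = 4 * 5.0 * ENCODING            # 16.0 Gbit/s each way (DMI is full-duplex)
fdi_gbps = 2 * 4 * 2.7 * ENCODING        # 17.28 Gbit/s toward the display pipes

print(f"DMI: {dmi_gbps:.2f} Gbit/s, FDI: {fdi_gbps:.2f} Gbit/s")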

You say DisplayPort over Thunderbolt was not an afterthought, but then launch into a rant about the Intel HD 4000 rather than Thunderbolt's implementation of the DisplayPort protocol. The reason the PCH has 17.28 Gbit/s of FDI bandwidth is that, as you'll note, that's double the bandwidth of a single VESA DisplayPort 1.1a link; since Ivy Bridge processors allow 2 monitor connections from different ports, it needs twice the bandwidth. This has nothing to do with DisplayPort over Thunderbolt. I repeat. You're off in left field. Far.

Again, Intel got lazy and adopted an old revision of a VESA-published specification. They didn't even have to tack this on to Thunderbolt, and nothing in your post shows that it isn't tacked on to Thunderbolt, only that it isn't "tacked on" to Ivy Bridge (we know that, but the Intel HD 4000 still sucks, sorry).
 
Hmmm ... Seems like someone could make fiber optic cables incorporating the optical transducers within the connectors which would work with existing equipment and allow longer, more noise immune distances between devices.


Those are still using the copper PHY connections - not optical connections.

Translating 10 Gbps copper to optical and then back to 10 Gbps copper is not the same as a true optical connection.
 
There is no "optical" T-Bolt today - only copper.

There is no system with an optical T-Bolt port - only copper.

There is no peripheral with an optical T-Bolt port - only copper.

If T-Bolt survives to T-Bolt 2.0 with optical connections, it will most certainly be incompatible with copper T-Bolt 1.0.

Actually, Thunderbolt over an optical connection was never really meant to be about bandwidth. The original plan was to have an optical interconnect at the same 10 Gbps that is available today (there is no magic "100 Gbps" fiber; come on, we're doing 16 Gbps over Fibre Channel, and if 100 Gbps were feasible, you know enterprise vendors would be frothing at the mouth trying to sell us HBAs for a couple of $10k each).

The reason Intel went copper was twofold:

- Costs. (copper signals are much cheaper to generate than using optical transceivers on both ends to convert electrical signals to light signals)
- Power delivery (so you can power devices over the interconnect like USB).

Again, source is Jason Ziller, Intel guy for Thunderbolt (everyone really needs to read this article):

Thunderbolt also runs across a copper connection. As initially envisioned in Light Peak, the Thunderbolt technology was supposed to run across an optical connection, although Thunderbolt was shifted back to copper for reasons of cost, according to Jason Ziller, director of Thunderbolt planning and marketing. That would have placed the onus of designing, buying and integrating the optical transceiver on the OEMs, a burden they were unwilling to bear, Ziller said.

[...]

Although an all-optical version of Light Peak is still on the roadmap, the copper connection does have one advantage: Power can be passed along it, and Thunderbolt can provide up to 10 watts to an external drive to power it. The optical version of Light Peak will only be used if extra-long cables are needed, Ziller said.


----------

Those are still using the copper PHY connections - not optical connections.

Translating 10 Gbps copper to optical and then back to 10 Gbps copper is not the same as a true optical connection.

This is the same optical interconnect that was intended with Light Peak; the optical transceiver was always meant to be on the OEM side. People are mistakenly thinking optical = 100 Gbps. That is a false premise. Light Peak was first designed as an optical interconnect, with 10 Gbps speeds, with the transceivers in the cabling, like this cable.
 
There is no "optical" T-Bolt today - only copper.

There is no system with an optical T-Bolt port - only copper.

There is no peripheral with an optical T-Bolt port - only copper.

If T-Bolt survives to T-Bolt 2.0 with optical connections, it will most certainly be incompatible with copper T-Bolt 1.0.

Actually, a few weeks ago they said fiber T-Bolt was being made and that it was compatible. It's on MacRumors, a few pages back on the front page.
 
Hmmm ... Seems like someone could make fiber optic cables incorporating the optical transducers within the connectors which would work with existing equipment and allow longer, more noise immune distances between devices.

It does indeed, and in fact Sumitomo Electric already has. AidenShaw is just being pedantic.

There is no "optical" T-Bolt today - only copper.

There is no system with an optical T-Bolt port - only copper.

There is no peripheral with an optical T-Bolt port - only copper.

If T-Bolt survives to T-Bolt 2.0 with optical connections, it will most certainly be incompatible with copper T-Bolt 1.0.

There are optical Thunderbolt cables, and they work just fine with all of the Thunderbolt generations released thus far.

There are no Thunderbolt ports which utilize an optical interface. There is no generally deployed equipment which utilizes ports with optical interfaces for channel speeds at or above 10 Gbit/s, period. All equipment uses copper or SFP+/QSFP/CXP ports into which an appropriate transceiver module is inserted. Sometimes the transmission media is separate, but often the transceiver module is part of a cable assembly. In a consumer device, an optical interface makes zero sense, and embedding the optical transceiver in the host/device even less. Keeping the transceivers separate from the controller and embedding them in the cable connectors does make perfect sense. This is "the right way" to do things. There is no foreseeable need to switch away from the current copper port at 20 Gbit/s or even 40 Gbit/s. The most likely factor to mandate a change in the connector will be its overall size being too large in relation to devices within a few years.

Ask yourself AidenShaw, why do you seek the optical connection? Is it for the interface's glory, or for yours?
 
Actually, Thunderbolt over an optical connection was never really meant to be about bandwidth. The original plan was to have an optical interconnect at the same 10 Gbps that is available today (there is no magic "100 Gbps" fiber; come on, we're doing 16 Gbps over Fibre Channel, and if 100 Gbps were feasible, you know enterprise vendors would be frothing at the mouth trying to sell us HBAs for a couple of $10k each).

The reason Intel went copper was twofold:

- Costs. (copper signals are much cheaper to generate than using optical transceivers on both ends to convert electrical signals to light signals)
- Power delivery (so you can power devices over the interconnect like USB).

Again, source is Jason Ziller, Intel guy for Thunderbolt (everyone really needs to read this article):



----------



This is the same optical interconnect that was intended with Light Peak; the optical transceiver was always meant to be on the OEM side. People are mistakenly thinking optical = 100 Gbps. That is a false premise. Light Peak was first designed as an optical interconnect, with 10 Gbps speeds, with the transceivers in the cabling, like this cable.

Thank you for supporting and adding additional info....

The "optical = 100 Gbps" crowd is strong, and wrong.
 
It does indeed, and in fact Sumitomo Electric already has. AidenShaw is just being pedantic.

Or trying to be accurate....


There is no generally deployed equipment which utilizes ports with optical interfaces for channel speeds at or above 10 Gbit/s, period. All equipment uses copper or SFP+/QSFP/CXP ports into which an appropriate transceiver module is inserted.

Funny, since I just cleaned out a bunch of junk in the lab and tossed $27K worth of GBICs into the e-Waste bin. Both SMF and MMF units.

But, the point is that devices which use SFP (or other) transceivers are not intended to connect to anything directly - by design the devices run a lower level protocol and expect plugin modules to provide the PHY layer.

T-Bolt is very different - the PHY layer is part of the port silicon, no additional transceiver is needed.

A T-Bolt cable with Cu-optical transceivers on each end has no common counterpart in the enterprise networking space.
 
Cool, but what am I going to plug into the port? :D

If/when 100Gb thunderbolt (or 40Gb or whatever) is available, external video cards will be a lot more attractive, and perform pretty much as well as internal PCIe video, for one example. Even 10Gb thunderbolt isn't a huge limit, as cards have their own VRAM. Maybe also some additional storage?
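For rough scale on that, here's a quick sketch (assuming PCIe 2.0 lanes at 5 GT/s with 8b/10b encoding; exact protocol overheads are ignored, so treat these as ballpark figures):

# Ballpark link bandwidth available to an external GPU vs. internal slots.
# Assumption: PCIe 2.0 lanes at 5 GT/s with 8b/10b encoding (~80% efficiency);
# protocol overheads are ignored.
pcie2_lane_gbps = 5.0 * 8 / 10           # ~4 Gbit/s usable per PCIe 2.0 lane

thunderbolt_gbps = 10.0                  # one first-gen Thunderbolt channel
pcie2_x4_gbps = 4 * pcie2_lane_gbps      # ~16 Gbit/s (typical x4 slot)
pcie2_x16_gbps = 16 * pcie2_lane_gbps    # ~64 Gbit/s (desktop GPU slot)

print(thunderbolt_gbps, pcie2_x4_gbps, pcie2_x16_gbps)

So a first-gen channel is a fraction of a desktop x16 slot, which is exactly why on-card VRAM (and a future 40Gb/100Gb link) matters for the e-GPU idea.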

The end game for me is to have an extremely small/lightweight laptop with great battery life for when i want to be portable, and a dock (preferably in the form of, say, an updated thunderbolt display) which has a mid-high end desktop class GPU in it for when i return to my desk.

External box with no worry about power consumption = no limit to the e-GPU my laptop could use.


I plug in power, i plug in thunderbolt - done. My network, keyboard, etc are all plugged into my thunderbolt display/dock.


Give me an example of any technology that was not embraced by consumers and still succeeded. TB was designed for gadgets that are mainly used by retail consumers, and yet it doesn't need consumer adoption?

There are plenty of examples. KnightWRX listed a bunch of them.

Bolded part of your comment is an assumption, and not necessarily correct.

Thunderbolt was designed as an external high-speed PCIe bus. Professionals are more likely to have a use for it initially; however, it does make standardized consumer devices such as docks possible without needing proprietary edge connectors on your machine.

As stated many times in this thread: if you're looking for $100 thunderbolt external hard drives, you aren't going to see them. A single external hard drive doesn't need the bandwidth anyway.

An 8 disk RAID enclosure? Yes, thunderbolt will work a lot better than USB and be worth the increase in price.
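To put rough numbers on that (assuming around 150 MB/s sequential per modern 3.5-inch disk, which is a ballpark guess rather than a spec):

# Ballpark throughput of an 8-disk striped enclosure vs. the host interfaces.
# Assumption: ~150 MB/s sequential per drive is a rough figure, not a spec.
disks = 8
per_disk_MBps = 150
array_gbps = disks * per_disk_MBps * 8 / 1000    # ~9.6 Gbit/s aggregate

usb3_usable_gbps = 5.0 * 8 / 10                  # USB 3.0: 5 Gbit/s signalling, 8b/10b -> ~4 Gbit/s
thunderbolt_gbps = 10.0                          # one Thunderbolt channel

print(array_gbps, usb3_usable_gbps, thunderbolt_gbps)

The array alone can saturate USB 3.0 a couple of times over, while it just about fits inside one Thunderbolt channel.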


Most people's computing needs these days could be served by an iPad. An original iPad at that. It doesn't mean there is no market for thunderbolt and other expensive high end technologies for those people or uses that actually need it.
 
You say DisplayPort over Thunderbolt was not an afterthought, but then launch into a rant about the Intel HD 4000 rather than Thunderbolt's implementation of the DisplayPort protocol. The reason the PCH has 17.28 Gbit/s of FDI bandwidth is that, as you'll note, that's double the bandwidth of a single VESA DisplayPort 1.1a link; since Ivy Bridge processors allow 2 monitor connections from different ports, it needs twice the bandwidth. This has nothing to do with DisplayPort over Thunderbolt. I repeat. You're off in left field. Far.

Again, Intel got lazy and adopted an old revision of a VESA-published specification. They didn't even have to tack this on to Thunderbolt, and nothing in your post shows that it isn't tacked on to Thunderbolt, only that it isn't "tacked on" to Ivy Bridge (we know that, but the Intel HD 4000 still sucks, sorry).

Sorry, I copy and pasted most of that rant from somewhere else and didn't do a good enough job at editing for relevance. FDI is 2 sets of 4 lanes operating at 2.7 GT/s as per the DP 1.1a spec. The eDP connection coming off of the processor is also the same DP 1.1a. Intel has yet to release a DP 1.2 capable platform. Apple was the primary interested buyer, and the target platforms were primarily MacBook Airs and 13-inch MacBook Pros running just HD Graphics 3000/4000. You need to look at things in context. Thunderbolt is a way to move everything that would normally hang off of the PCH outside the box, not just external PCIe, which LGA 1155 platforms have virtually none of to spare.

Thunderbolt and the ATD allowed for daisy-chaining of displays almost 18 months before the first DisplayPort 1.2 MST enabled displays hit the market. It did so 5 months before the first GPUs (the higher end AMD Radeon 6000 models) received DP 1.2 certification. The only Thunderbolt equipped Mac to have one of those was the 27-inch iMac with Radeon HD 6970M. AFAIK, none of the NVIDIA 600 series mobile GPUs used by Apple in 2012 support DP 1.2. And in 6 months, Intel will release chipsets and Thunderbolt controllers that support DP 1.2 anyway. So in reality, your argument that the DP aspect of 1st and 2nd gen Thunderbolt implementations was somehow limited is rhetorical at best. It has been more capable than anything else available short of a full desktop GPU, and continues to be so.

Just because the spec was ratified 3 years ago doesn't mean that the necessary silicon magically popped out of the air and validated itself at that point. Once again, the first DP 1.2 HBR and MST capable displays only became available 2 weeks ago!
 
Really?

And where did you get that idea?

The lack of blu-ray purchases I see amongst my peers, the lack of blu-ray selection vs DVD when renting from the local rental store - many of which are closing their doors due to lack of demand.

Yes, people have existing libraries. People own a lot of dead technology.

That doesn't mean it isn't dead or on life support.


Your issue of storage is also a reason for the decline of optical media:

- if i purchase, i don't need to store it, don't need to worry about damage and am less worried about physical theft of my collection
- if i rent, i don't need to worry about late fees due to failing to return - the media simply expires x days after rental

If you have decent speed broadband (and an increasing number do) then keeping your media library largely in the cloud (whether that happens to be itunes, netflix or some warez site) is a lot more convenient.
 
Sorry, I copy and pasted most of that rant from somewhere else and didn't do a good enough job at editing for relevance. FDI is 2 sets of 4 lanes operating at 2.7 GT/s as per the DP 1.1a spec. The eDP connection coming off of the processor is also the same DP 1.1a. Intel has yet to release a DP 1.2 capable platform. Apple was the primary interested buyer, and the target platforms were primarily MacBook Airs and 13-inch MacBook Pros running just HD Graphics 3000/4000. You need to look at things in context. Thunderbolt is a way to move everything that would normally hang off of the PCH outside the box, not just external PCIe, which LGA 1155 platforms have virtually none of to spare.

But again, the fact that the silicon has the bandwidth doesn't mean that it wasn't tacked on to Thunderbolt as an afterthought. The simple fact is, DP doesn't provide any added value to Thunderbolt aside from not using 2 different ports (which means you get to unplug any non-Thunderbolt enabled monitors to plug in devices to the daisy chain, which sucks).

The Ivy Bridge silicon supports such bandwidth purely because it needs it to drive 2 DP 1.1a displays, as that is a feature of the HD 4000. That is it. It has nothing to do with Thunderbolt, since Thunderbolt is not even part of the PCH or the Ivy Bridge die. It's a completely separate controller.

They didn't design any part of the architecture for Thunderbolt specifically, your link is a stretch at best.

Thunderbolt and the ATD allowed for daisy-chaining of displays almost 18 months before the first DisplayPort 1.2 MST enabled displays hit the market. It did so 5 months before the first GPUs (the higher end AMD Radeon 6000 models) received DP 1.2 certification. The only Thunderbolt equipped Mac to have one of those was the 27-inch iMac with Radeon HD 6970M. AFAIK, none of the NVIDIA 600 series mobile GPUs used by Apple in 2012 support DP 1.2. And in 6 months, Intel will release chipsets and Thunderbolt controllers that support DP 1.2 anyway. So in reality, your argument that the DP aspect of 1st and 2nd gen Thunderbolt implementations was somehow limited is rhetorical at best. It has been more capable than anything else available short of a full desktop GPU, and continues to be so.

Just because the spec was ratified 3 years ago doesn't mean that the necessary silicon magically popped out of the air and validated itself at that point. Once again, the first DP 1.2 HBR and MST capable displays only became available 2 weeks ago!

And the spec came a full 14 months before the first silicon for Thunderbolt hit. No one was asking Intel to make DP 1.2 monitors, just making it part of Thunderbolt from day 1 would have at least guaranteed Macs moving forward would have gotten it.

The Radeon 6000 was released in October 2010 with DP 1.2 support. Apple shipped the first TB enabled Macs in February 2011. If AMD got DP 1.2 support out, it means Apple/Intel should also have gotten it in. They didn't. That sucks for TB Macs that are now stuck without DP 1.2 support and thus no 4K support.
 
Funny, since I just cleaned out a bunch of junk in the lab and tossed $27K worth of GBICs into the e-Waste bin. Both SMF and MMF units.

But, the point is that devices which use SFP (or other) transceivers are not intended to connect to anything directly - by design the devices run a lower level protocol and expect plugin modules to provide the PHY layer.

T-Bolt is very different - the PHY layer is part of the port silicon, no additional transceiver is needed.

A T-Bolt cable with Cu-optical transceivers on each end has no common counterpart in the enterprise networking space.

I didn't include GBICs or SFP in my list because they aren't even capable of 10 Gbit/s per channel. But what is funny is that you use this stuff and still don't understand it.

Thunderbolt controllers do not contain PHY transceivers any more than switches with SFP+ ports do. From the very beginning, the transceivers were developed by other parties, the IP was not Intel's and it was manufactured on a totally different process anyway.

Thunderbolt is a direct consumer productization of the technologies commonly used in the enterprise networking space. An UltraBook doesn't have room for an SFP+ port, so you use a much smaller, friction fit consumer friendly connector. You'll have to use active cables because there is high likelihood of "noisy" ports due to the limited shielding. So you make the transceiver modules smaller and embed them in the cable connectors. Now you have Thunderbolt.

Hmmm, a Thunderbolt cable with optical transceivers embedded in the ends has no common counterpart in the enterprise networking space except for the myriad of pre-assembled cables that look like this:

[image: 1020022355.JPG, a pre-assembled cable with the transceivers molded into the connectors]
 
Thunderbolt doesn't have to succeed in the consumer market to be a success. That Apple is implementing it in consumer-level machines does not mean it is intended as a consumer-level technology, nor that it's designed in a way to reflect consumer-level pricing.

Then why are they bothering sinking so much of their R&D budget and manufacturing costs into something that's intended for pros on a consumer laptop? Why is it on the consumer-level iMac? Shouldn't the iMacs have Fibre Channel too? Surely, the Mac Pro should be the first to get it if it's for pros, but it's going to be the last...

In reality, it's pretty certain it is not just targeted at pros, especially as Apple's catering to the pro market seems to be waning lately. The majority of people don't see anything wrong with USB; TB is just expensive with few benefits to most. The spec should have been launched alongside a good 10 or so reasonably priced peripherals to help kick-start adoption.
 
Absolutely! Those fools who like USB have to put up with a CPU spike as they choose from thousands of unique low-cost devices, while us TB pure-breds can choose from a dozen or so devices, many of which do the same thing!

USB cables are too damn cheap too! Why, if I don't have to pay $49 for a cable it just isn't worth it.

Hoo-yeah!

We rock!


:rolleyes:
Yeah, CPU spikes can crash your machine or introduce artifacts into your data, especially if you are recording music or video to disc.
 
The lack of blu-ray purchases I see amongst my peers, the lack of blu-ray selection vs DVD when renting from the local rental store - many of which are closing their doors due to lack of demand.

Yes, people have existing libraries. People own a lot of dead technology.

That doesn't mean it isn't dead or on life support.


Your issue of storage is also a reason for the decline of optical media:

- if i purchase, i don't need to store it, don't need to worry about damage and am less worried about physical theft of my collection
- if i rent, i don't need to worry about late fees due to failing to return - the media simply expires x days after rental

If you have decent speed broadband (and an increasing number do) then keeping your media library largely in the cloud (whether that happens to be itunes, netflix or some warez site) is a lot more convenient.

OK, I'll answer bit by bit...

I live in a suburb of 50K people or so and I don't think we have a rental store of any kind left. However the electronics stores, Walmart and a few other retailers still have large areas dedicated to optical media. Must still be selling, or they would have replaced the stock with something else that does sell.

What defines a dead technology? I would say if they no longer make it and support it. They still make record players and LPs. Cassette tapes are no longer for sale, though you can still buy players. Same-same VHS. So I could agree those are dead. All optical media players are still for sale (save LaserDisc), as are their media. And still selling. I would not call them dead yet.

If I buy it, at least for now, I have to store it. There is little, as yet, cloud support for video media - that is changing, yes, but the quality of the media is not there yet. Plus I can't get them to store my video media I already own. iTunes will do it for music but not movies. I have owned optical media for a quarter century and have yet to replace a broken/scratched disc - unlike the odd tape or vinyl record I've worn out from overuse. I take care of them. In addition, all this cloud storage must be paid for. Streaming quality is still not as good as the disc. I have Netflix. Picture quality is ok. Audio is worse. Internet radio bit rates are low. MP3s are better, but not much. I have HD cable, and video quality is very good and audio is good. But on my PS3 with my home theatre pumping out the noise, movies are better looking and better sounding. Period.

My issue with storage is the reason I still buy optical media. It looks and sounds better, and I can't buy it over the ether with that quality and store it locally. So I disagree there too. If I could buy it over the ether and had a big enough HD to hold it, I wouldn't buy discs anymore.

When I buy a movie or album, it's mine. All mine. To use as often as I want, whenever I want, for one price. I refuse to pay extra for storage or rental. It's another bloody bill. How many do we have now? It used to be heat, electricity, phone and cable (optional) when I was 20. Now add in internet, cell phone, Netflix account, Sirius account, extra satellite channels, etc, etc, etc.

And if we all start streaming EVERYTHING, will we run out of bandwidth?

I admit, I'd love to cloud everything. But the price, in terms of quality and money is, for now, too high.
 
But again, the fact that the silicon has the bandwidth doesn't mean that it wasn't tacked on to Thunderbolt as an afterthought. The simple fact is, DP doesn't provide any added value to Thunderbolt aside from not using 2 different ports (which means you get to unplug any non-Thunderbolt enabled monitors to plug in devices to the daisy chain, which sucks).

The Ivy Bridge silicon supports such bandwidth purely because it needs it to drive 2 DP 1.1a displays, as that is a feature of the HD 4000. That is it. It has nothing to do with Thunderbolt, since Thunderbolt is not even part of the PCH or the Ivy Bridge die. It's a completely separate controller.

They didn't design any part of the architecture for Thunderbolt specifically, your link is a stretch at best.

Actually IVB can drive 3 pixel pipelines, but anywho, the point I was trying to make has been totally lost here. Display is a major priority for some of Intel's key customers. One of those customers is Apple. Display in the form of DisplayPort was an integral part of Light Peak from nigh on day one.

And the spec came a full 14 months before the first silicon for Thunderbolt hit. No one was asking Intel to make DP 1.2 monitors, just making it part of Thunderbolt from day 1 would have at least guaranteed Macs moving forward would have gotten it.

The Radeon 6000 was released in October 2010 with DP 1.2 support. Apple shipped the first TB enabled Macs in February 2011. If AMD got DP 1.2 support out, it means Apple/Intel should also have gotten it in. They didn't. That sucks for TB Macs that are now stuck without DP 1.2 support and thus no 4K support.

Ahh... So the spec came down 14 months before products with Thunderbolt controllers hit store shelves... So the silicon was already finished final tape-out and on its way to the fabs when? Intel's DP 1.2 stuff is likely just finished validation and entering production now.

AMD is far and away the most on-top-of-it when it comes to DisplayPort. Even still, DP 1.2 validation and driver support for the Radeon 6000s did not happen until December of 2011 and it was limited to certain high end parts, only one of which ever appeared in a Thunderbolt equipped Mac. Furthermore, it was largely a hollow victory for AMD since the display industry moves so much more slowly. There is not a single 4K panel on the market that supports DP 1.2 HBR connections. So, like everybody else, you'll just have to drive yours with two DP 1.1a connections, which is totally possible on any 2-port Thunderbolt Mac.
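For context, the rough pixel math (active pixels only at 24 bits per pixel; blanking overhead is ignored, so the real requirement is somewhat higher than this):

# Why one DP 1.1a link can't carry 4K at 60 Hz on its own.
# Rough numbers: active pixels only at 24 bpp; blanking overhead is ignored,
# so the real requirement is a bit higher than this.
width, height, refresh_hz, bpp = 3840, 2160, 60, 24
needed_gbps = width * height * refresh_hz * bpp / 1e9    # ~11.9 Gbit/s

dp11a_link_gbps = 4 * 2.7 * 8 / 10                        # 4 lanes x 2.7 GT/s, 8b/10b = 8.64 Gbit/s

print(needed_gbps, dp11a_link_gbps)   # one link falls short; two links cover it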

So the only Thunderbolt Macs that have shipped with a GPU capable of DisplayPort 1.2 output are the 2011 27-inch iMacs with Radeon HD 6970Ms, and there is no display yet in existence that they cannot drive, so what is it that sucks exactly?
 
TB has been dying since the day it was released... mainly due to 1) insane costs, 2) an extremely (and I mean extremely) limited number of devices (like 4), 3) Mac-only, 4) high-end Mac-only.

1) Thunderbolt is meant for power users & professionals, not so much consumers.

2) Only 4 devices?!?

All these are Thunderbolt enabled: Red Rocket Chassis from mLogic, Drobo mini, Buffalo Technology MiniStation Thunderbolt HD-PATU3, Promise Pegasus R4, LaCie 2big 6, G-Technologies G-RAID Thunderbolt 8 GB, Elgato Thunderbolt SSD 240 GB, Seagate GoFlex Desk Thunderbolt, Western Digital My Book Thunderbolt Duo, BlackMagic ultraStudio 3D Capture, Matrox MXO2 LE Max, AJA io XT, Thunderbolted Intensity Extreme, ATTO ThunderStream SC 3808D, Drobo 5D, Matrox DS1 Thunderbolt docking station, Sonnet Echo ExpressCard Pro & Echo ExpressCard, Sonnet Echo Express SE, Sonnet xMac mini Server.

Soon to be released: Belkin Thunderbolt express dock.

Probably more in development.

3) Intel will eventually put it in PCs; some already have it.

Acer's Aspire S5 ultrabook, Lenovo's ThinkPad Edge S430, Asus's G55 gaming laptop

4) Mac Mini

Thunderbolt dead? It's only just started.
 
And if we all start streaming EVERYTHING, will we run out of bandwidth?

My video files are on GbE over whole-house Cat6 cabling. With a switched GbE fabric - no bandwidth issues. (All variants of WiFi suck...)

My ISP guarantees about half the bandwidth of a single BD. Fail.
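Rough numbers behind that complaint (taking ~35 Mbit/s as a typical Blu-ray bitrate and ~54 Mbit/s as the spec ceiling; those are commonly cited ballpark figures, not something I measured):

# Home GbE vs. ISP service vs. Blu-ray bitrates, very roughly.
# Assumptions: typical BD around 35 Mbit/s, spec ceiling ~54 Mbit/s; both are
# commonly cited ballpark figures, not measurements.
gbe_mbps = 1000                    # wired GbE inside the house
bd_typical_mbps = 35
bd_max_mbps = 54
isp_mbps = bd_typical_mbps / 2     # "about half the bandwidth of a single BD"

print(gbe_mbps / bd_max_mbps)      # GbE carries many full-rate BD streams at once
print(isp_mbps)                    # ~17 Mbit/s: not enough for even one full-rate disc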
 
Yeah, CPU spikes can crash your machine or introduce artifacts into your data, especially if you are recording music or video to disc.

CPU spikes should not crash your machine (any OS with basic multi-tasking will handle them fine), nor do they introduce artifacts into your data. What can happen are dropped frames when recording video or lost samples when recording audio.

However, real-time video capture is not something a lot of people do... most video is recorded and then merely transferred for editing, in which case TB has little value compared to existing interfaces such as eSATA or USB 3.0 or Fibre Channel or even Gigabit Ethernet.


And if we all start streaming EVERYTHING, will we run out of bandwidth?

I admit, I'd love to cloud everything. But the price, in terms of quality and money is, for now, too high.

No, there is plenty of bandwidth - but the current limiting factor is the last mile from the ISP to the home. ISPs are reluctant to upgrade end users to super-high-speed technologies such as fiber optics.
 
Orly? http://market.freak-stuff.com/products/58319/360534920558

So the fact that I trashed $27K of 1Gbps and 4Gbps "GBICs" was a mistake because my new 16 Gbps SAN doesn't exist?

Get real....

Umm, no, I never said it was a mistake. Nor did I ever deny the existence of your or anybody else's SAN. All I was saying is that my initial statement was in regards to 10 Gbit/s or more per channel technologies, which doesn't include GBICs or SFP.

Regardless, I still don't understand how you use this stuff on the daily but don't grasp the underlying technology or understand the design choices made in the development of Thunderbolt.

As for the pic of the SFP+ module, I guess you're trying to say that it's cooler to buy your connectors separate from your transmission media? Well I do that plenty myself, but that type of thing doesn't fly so well in the consumer space. Actually, the transition to tested, pre-built cable assemblies is pretty common across the board for short runs these days.

edit: I see that perhaps I was merely being too strict for you with my nomenclature in terms of GBICs and SFP modules being distinct from SFP+?

second edit: I finally get what you were on about. You thought I was saying that GBIC, SFP, SFP+, QSFP, and CXP modules didn't support connections for optical transmission media. Of course they do. But the socket connection to the switch/HBA/NIC is always made with good ol' fashioned copper, i.e. they're all just like a Thunderbolt port.
 
And if we all start streaming EVERYTHING, will we run out of bandwidth?

I admit, I'd love to cloud everything. But the price, in terms of quality and money is, for now, too high.

We've kinda gone off topic but...

You can get decent high def video over 4-6 megabit. Sure blu-ray is better, but for most people, even DVD is "good enough".

By "dead" perhaps I am a little ahead of the curve. I live in a new house and haven't even bothered to install a TV antenna or satellite. I just stream over the internet. I have a heap of DVDs and CDs as well, but in terms of new content? No way. No more optical media for me.

Even a task as simple as finding the disc to watch is just way easier with streaming media or digital download.

Typically when I buy a DVD i may watch it perhaps 1-2 times, then it goes into the cupboard pretty much never to be seen again. It takes up space. I can't watch it on my tablet when i'm away from home.

For me, the convenience of streaming or downloaded media far outweighs the marginal quality improvement by keeping optical media around.

I suspect I'm not alone.
 
Orly? http://market.freak-stuff.com/products/58319/360534920558

So the fact that I trashed $27K of 1Gbps and 4Gbps "GBICs" was a mistake because my new 16 Gbps SAN doesn't exist?

Get real....

Yup, I've got a bunch of ten gigabit SFPs in my Cisco 4507RE sitting here at work :)

One of my SFP ports...

per-4507-1#sh int ten1/1
TenGigabitEthernet1/1 is up, line protocol is up (connected)
Hardware is Ten Gigabit Ethernet Port, address is 5057.a832.eb9c (bia 5057.a832.eb9c)
Description: ** LACP1 to FAS2240-c1 **
MTU 1500 bytes, BW 10000000 Kbit/sec, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 10Gb/s, link type is auto, media type is 10GBase-CU 5M
input flow-control is on, output flow-control is on

Have a heap of them running LACP to my SAN and Cisco UCS environment, which also runs 10 gig ports for its fabric. Note: it knows what cable type is connected, as the copper 10 gig cables have transceivers in them. Just like thunderbolt.

And yeah, thunderbolt is just a lower cost variant of the same technology we've had in the networking world for years. It is why the cables are so expensive - they have chips in them, the actual port in your machine itself is abstracted from the physical cable media type.
 
I am now taking bets as to which will actually hit the market first.

1.) Superspeed USB 3.0 at 1:5 odds

2.) An affordable Thunderbolt hard drive priced in line with a comparable USB 3.0 drive, OR any other Thunderbolt device besides a monitor, at 1000:1 odds each.

3.) Belkin's Thunderbolt Express Dock, at 25,000:1 odds.

4.) A new Mac Pro, at 1,000,000,000:1 odds *

* speed bump does not count, must include at least one new technology like USB3 or T-Bolt.
 