IMO, for professionals looking for more performance, Sandy Bridge is irrelevant. Look at this sneak peek of SB performance and you will see why:

http://forum.coolaler.com/showthread.php?t=240578

Performance is in line with what is on the market right now. If you extrapolate the data, SB may be 1-2% faster at most at CPU-heavy tasks.

SB is not worth getting if all you want is performance. However, if you're looking for better efficiency and more features, only then is it worth taking a look, IMO. Intel's Nehalems will be neck and neck with the SB CPUs two years from now in terms of performance.
 
IMO, for professionals looking for more performance, Sandy Bridge is irrelevant. Look at this sneak peek of SB performance and you will see why:

http://forum.coolaler.com/showthread.php?t=240578

Performance is in line with what is on the market right now. If you extrapolate the data, SB may be 1-2% faster at most at CPU-heavy tasks.

SB is not worth getting if all you want is performance. However, if you're looking for better efficiency and more features, only then is it worth taking a look, IMO. Intel's Nehalems will be neck and neck with the SB CPUs two years from now in terms of performance.

How is that? Sandy Bridge doesn't need much, if any, software support, unlike adding cores, which does. Nehalem provided up to 30% better performance than the Core architecture did.

For example, in this benchmark the 2.5GHz SB beats the 3.33GHz i7. Power efficiency means better performance, because more cores and faster clocks can be used.
 
Of course Nehalem/Westmere will still be incredibly powerful as far as CPU performance goes after Sandy Bridge is released, but it will no longer have the latest technology, and we're talking about groundbreaking things like USB3, SATAIII, LightPeak and PCI Express 3.0 (which both doubles the bandwidth over 2.0 and provides many more lanes per CPU, giving you more PCI Express ports in the computer), where we'll eventually have devices and cards that require these ports. Wouldn't you rather have a computer that contains these ports? Anyone who doesn't at least look over the implications of buying now versus waiting is foolish.

This is a completely new architecture right around the corner, with multiple new standards and technologies, things that will just be in more and more demand as modern devices are released that require USB3, SATAIII, LightPeak or PCI Express 3.0. It's your money and I'm not saying you shouldn't buy a Westmere Mac now. Re-read the last portion of the 1st post, where you see the different scenarios, and you'll see that I said it's perfectly fine to buy a Westmere now as long as you know what you're getting into. And yes, Apple had early access in 2009; check post #28 here, the last paragraph, for more on that.



You misread the portion you quoted; I said "Sandy Bridge is a big deal and will bring loads of the latest technology; whereas Westmere is just a Nehalem that has been reduced in size (giving them slightly better performance and allowing them to fit more cores inside each processor).". The important word is in the portion you yourself quoted: "and". ;) I was talking about performance per-core only being "slightly better"; in other words, the die-shrink leads to slightly more effective electrical paths giving slightly better performance per-core AND the reduced size ALSO allows more cores per processor.

Now, when it comes to technology; as mentioned in my reply to chrmjenkins in this post (above the reply to you), Sandy Bridge brings very important new technology that will eventually be required by devices, and then your Nehalem/Westmere will officially be obsolete. That is why post #1 of this thread talks about the different scenarios we're all going to have to face, and just wants to inform people so that everyone can make the call on what they want to do. For a more elaborate reply on this technology-point of what Sandy Bridge brings, see the reply to chrmjenkins in this very post.

Lastly, regarding "we never know when Apple will use them", that is a non-issue, see the last paragraph of post #28 here.

There are already Westmere boards that have SATA 3, USB3, 7 PCI-E 16-lane slots, etc. PCI-E 3.0 boards are right around the corner. These things will by no means be new when Sandy Bridge comes out. By then, a lot of the features you speak of as being new will have already been out for a year, integrated into motherboards or available as add-ons with PCI-E cards. Sandy Bridge is going to be great, and there will be bus bandwidth improvements for sure as well as a genuinely new processor, but as far as features go there's not a whole lot new.
 
IMO, for professionals looking for more performance, Sandy Bridge is irrelevant. Look at this sneak peek of SB performance and you will see why:

http://forum.coolaler.com/showthread.php?t=240578

Performance is in line with what is on the market right now. If you extrapolate the data, SB may be 1-2% faster at most at CPU-heavy tasks.

SB is not worth getting if all you want is performance. However, if you're looking for better efficiency and more features, only then is it worth taking a look, IMO. Intel's Nehalems will be neck and neck with the SB CPUs two years from now in terms of performance.

Not once has this thread touted PERFORMANCE as the main reason to wait for Sandy Bridge (it will be faster, though). From post one, and consistently throughout the thread, the issue has been the same. I'll quote a summary from a previous reply to give you an overview of the issue:

Sandy Bridge is imminent and will bring no less than FOUR entirely new standards and evolutions (USB3, SATAIII, PCI Express 3.0 and LightPeak), which will make all of today's computers obsolete, as these new technologies and ports will be required by more and more modern devices as time goes on. That is why it's something that should be taken into account before spending 3 years of saved-up cash on a fully loaded Westmere Mac, when you could buy second-hand, get a refurb, or build a Hackintosh to tide you over until the real quantum leap with Sandy Bridge. Unless you are made of money, that will be a much wiser choice. I happen to be made of money and I still won't be buying any fully kitted-out Westmere; it's just not a wise choice with such a quantum leap coming up quite soon.

The best thing to do is just to get something to get by with while waiting; and the current machines are very powerful, so a refurb/second-hand/Hackintosh will be the best choice for most people. Those that need the 4 extra cores (from 2x4 (8) to 2x6 (12)) are free to buy a Westmere now, if they really need it, but even most editors would get by excellently with the 2009 Mac Pro.

There are already Westmere boards that have SATA 3, USB3, 7 PCI-E 16-lane slots, etc. PCI-E 3.0 boards are right around the corner.

I think I've already mentioned what you're saying in this thread. Current implementations exist, yes; however, they are unofficial non-Intel controller chipsets. The Intel solution has not been released yet, and therefore they will not be in the Mac Pro. Nor will there be Mac OS X drivers for those unofficial chips if you build a Hackintosh with current motherboards. One such board is the EVGA SR-2, and your only chance at getting support for your precious USB3 and SATA3 ports is to try to port open-source Linux or UNIX drivers to Mac OS X, which is something the Hackintosh community does at times, but don't bet on being able to boot from a SATA3-connected disk. Basically, NO, these things don't exist until Intel officially brings them out, which they'll be doing with Sandy Bridge and the corresponding new reference motherboards.

As for your last comment, "PCI Express 3.0 boards are just around the corner": no, they are not. PCI Express is dependent on the CPU. For instance, each Nehalem/Westmere Xeon provides thirty-two (32) "1x" lanes of PCI Express 2.0; with two CPUs that's sixty-four (64) "1x" lanes, or a total of "64x" of bandwidth. This is then split across the various connectors on the board in any way the manufacturer prefers. Apple, for instance, chose to go for a 4x16 (=64) setup, but could just as well have gone for 2x16 (=32) + 4x8 (=32) = 64 (which would have given six ports, and still only used 64 lanes). Other manufacturers, like EVGA, use Nvidia NF200 lane doublers, which are basically intelligent queuing/message-passthrough devices that let you connect multiple fast devices to a single lane, and therefore artificially increase the number of lanes and let you create configurations such as the EVGA SR-2's seven (7) 16x slots.

For PCI Express 3.0, you will need the new Sandy Bridge CPUs to take care of the increased demands, such as the doubled transfer rate and more lanes from the CPU. Please think and be informed before you speak.
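
To make the lane arithmetic above easier to follow, here is a minimal Python sketch of the budget check. The 32-lanes-per-Xeon figure and the example slot layouts come from the post above; the little function itself is just illustrative, not any vendor's tool.

```python
# Sketch of the PCI Express lane arithmetic described above.
# Assumptions from the post: 32 PCIe 2.0 lanes per Nehalem/Westmere Xeon,
# two Xeons per board. The slot layouts below are only examples.

LANES_PER_CPU = 32
CPUS = 2
TOTAL_LANES = LANES_PER_CPU * CPUS   # 64 lanes to distribute across slots

def fits_budget(slot_widths):
    """True if the listed slot widths fit within the raw lane budget."""
    return sum(slot_widths) <= TOTAL_LANES

print(fits_budget([16, 16, 16, 16]))      # 4 x16        = 64 lanes  -> True
print(fits_budget([16, 16, 8, 8, 8, 8]))  # 2 x16 + 4 x8 = 64 lanes  -> True
print(fits_budget([16] * 7))              # 7 x16        = 112 lanes -> False
                                          # (needs a lane multiplier like the NF200)
```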

Question about Light Peak: will there be a way to get Light Peak on older Mac Pros when it becomes available?

Yes, Intel has designed some Light Peak PCI Express cards that you will be able to use in existing computers, but they probably won't be as fast as a built-in solution.
 
How is that? Sandy Bridge doesn't need much, if any, software support, unlike adding cores, which does. Nehalem provided up to 30% better performance than the Core architecture did.

For example, in this benchmark the 2.5GHz SB beats the 3.33GHz i7. Power efficiency means better performance, because more cores and faster clocks can be used.

In another Everest benchmark, the SB CPU scores much lower. We don't know what part of the CPU they are benchmarking (cache latency, etc).
 
Intel's USB 3.0 implementation is still scheduled for 2012 on their controllers. Several other USB 3.0 controller vendors stepped up during Computex as well in addition to NEC, VIA, and Texas Instruments.

(LGA 2011) processors are going to bring PCIe 3.0 but SATA 6 Gbps is going to be available across the board on all sockets.
 
Dual- and quad-core low-end models come first, before the 6-core and 8-core high-end models in Q2 2011. It still raises the question, though: why get Westmere now? Remember that Westmere is just a "tick" 32nm die-shrink of Nehalem with two more cores (for a total of 6) in the highest-end CPUs. Why not wait for the "tock" (new architecture) in Q2 2011 and get USB3, LightPeak, SATA3, PCI Express 3.0 (twice the bandwidth), native support for 1600 MHz RAM, 8 cores, and the higher performance of the new architecture?

Why? Because once it's available, the hexa 3.33GHz part is an absolutely huge performance bump beyond what is currently available in the MP lineup. If someone truly needs to be using a MP, then there's a pretty good chance that the upgrade from whatever box they have now, to a 6 or 12 core will pay for itself before the new architecture becomes available.

I upgraded from a quad 2.66 i7 to a hexa 3.33 i7 in my Linux/Win7 video encoding box and the performance increase is huge.
 
Sandy Bridge is imminent and will bring no less than FOUR entirely new standards and evolutions (USB3, SATAIII, PCI Express 3.0 and LightPeak), which will make all of today's computers obsolete, as these new technologies and ports will be required by more and more modern devices as time goes on.

Ok, that is just plain silly.

USB 3 and SATA 6 are already flooding the PC market. As for PCI-E 3.0, we aren't even close to saturating PCI-E 1.1 in many cases.

As for Lightpeak, I believe it is useless, especially with the growing USB 3 and eSATA markets. If you don't believe that, then you're still in luck. Read this:

"Intel has designed a prototype PCI Express card for desktop PCs as an add-on. This would mean many people wouldn't need to buy a new motherboard for the new cable type. The card has two optical buses powering 4 ports. Note that such a card might not be able to keep up with the 40Gbit/s bandwidth of four Light Peak ports. Most desktop motherboards in 2010 have one or more PCIe 16x slot and a few PCIe 1x slots and a few Standard PCI slots. A PCIe 1x slot is limited to 4Gbit/s; a PCIe 16x slot would be enough"

And there you have it. It would need merely a single PCI-E 16x slot.
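
For what it's worth, here is a quick Python back-of-the-envelope check of the quoted figures. It takes the quote's 4 Gbit/s-per-lane number at face value (that figure corresponds to PCIe 2.0 signalling after encoding overhead); the slot widths are just the common ones.

```python
# Rough check of the quoted numbers: four first-generation Light Peak ports
# at 10 Gbit/s each versus the bandwidth of a single PCIe slot, assuming the
# quote's 4 Gbit/s per lane (per direction).

GBIT_PER_LANE = 4
GBIT_PER_PORT = 10
PORTS_ON_CARD = 4

needed = PORTS_ON_CARD * GBIT_PER_PORT   # 40 Gbit/s aggregate
for width in (1, 4, 8, 16):
    slot = GBIT_PER_LANE * width
    verdict = "covers" if slot >= needed else "falls short of"
    print(f"x{width} slot: {slot} Gbit/s, {verdict} the {needed} Gbit/s "
          f"the four ports could demand")
```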

What does Lightpeak even have to do with SB? You are linking technologies that are not interrelated at all, and then using them as a basis to convince people not to buy any computers in the near future.
 
Price vs Performance... Mac Pros... I was WRONG!

So, I've bought and evangelized Macs for years and years and years...

I was waiting for the new Mac Pros because the current hardware pricing for the performance wasn't worth it...

I was sooo wrong. I just didn't realize in which direction and just HOW much I was wrong.

It's sooooo much worse than I ever thought... wow... I must have been absolutely RETARDED to EVER buy ANY hardware from APPLE.

:confused::eek::eek::confused::eek::confused::rolleyes:

Good God Man, What Was I Thinking???????

I almost bought a Mac Pro dual 2GHz 8-core / 16-thread box... for roughly $3500 after shipping, tax, etc.

Instead?

I bought a Core i5 750 motherboard/CPU combo, 4 GB of RAM, and a 1GB GTS 250 from Fry's. Spent ~$400...

And I'm running OS X 10.6.3 on it... and I'm simply wiping the floor with that $3500 machine.

It's rock-hard stable, faster than a bat out of hell, overclocked to 4.2GHz, and I literally can't get over how badass OS X runs with a vanilla kernel on this thing.

I'm getting WELL over 500 in XBench, and with only 4 cores (only 2 are being identified by Cinebench), my Cinebench scores are almost 6 for the CPU and near 40 for the GPU... With Geekbench, I'm at near 12000, not to mention I'm able to encode a 1 1/2 hour movie with Handbrake in high definition (720p) in about 15 minutes flat. Ripping the movie from the disc takes roughly 3-4 minutes.

FOR $400!!!!!!!!!!!!!!!!

I can ONLY imagine how fast it would be if I bought something other than a POS Mid-Level CPU and GPU!!!!

So thank you Apple for making a great OS, but your hardware has GOT TO GO BABY! :)

It will be a loooooooong time in Hades before I spend money on Apple hardware again.

Thoughts? Am I crazy?

[Attached image: xbench.jpg]
 
As for Lightpeak, I believe it is useless, especially with the growing USB 3 and eSATA markets. If you don't believe that, then you're still in luck. Read this:

Will the technology behind Lightpeak allow for compatibility with other digital interconnect interfaces? If so, then I still think Apple will take this route, just as it did with Mini DisplayPort, amidst an industry dominated by HDMI.
 
--> Post #56 <--

Eidorian: See reply #56. USB3+SATAIII will be provided by Intel for OEMs as an all-in-one chip that they can use on their Sandy Bridge motherboards. The thing you're talking about (2012) refers to the first REFERENCE BOARDS slated to have that AS A STANDARD INSTALLATION, but nothing prevents OEMs like Apple from putting Intel's all-in-one chip on their boards in the meantime.

brentsg: See reply #56, this is not about performance, it's about new standards. As for Westmere (6 cores) vs Nehalem (4 cores), the performance difference is negligible since most applications cannot even use more than 1 or 2 cores simultaneously. Come back when programs are written to use "n" cores without bottlenecks and I'll say your NEHALEM will finally be put to full use (let alone the WESTMERE). Heck, stuff like 3D Studio Max crashes if you have over 8 cores (or something like that, I forgot the exact number), and most applications still only use one or two cores.
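
As a rough illustration of why extra cores buy little when software stays mostly serial, here is an Amdahl's-law sketch in Python; the parallel fractions are illustrative assumptions, not measurements of any real application.

```python
# Amdahl's law: best-case speedup over one core when only a fraction of the
# work can run in parallel. Fractions below are illustrative assumptions.

def amdahl_speedup(parallel_fraction, cores):
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for p in (0.25, 0.50, 0.90):
    s4 = amdahl_speedup(p, 4)   # quad-core Nehalem
    s6 = amdahl_speedup(p, 6)   # hex-core Westmere
    print(f"parallel fraction {p:.0%}: 4 cores -> {s4:.2f}x, "
          f"6 cores -> {s6:.2f}x (extra gain {s6 / s4:.2f}x)")
```

Unless the parallel fraction is very high, the jump from 4 to 6 cores barely registers.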

Salavat23: See reply #56 regarding the USB3/SATAIII motherboards that are already "flooding the market" and why that does not matter to Macintosh users, not even to Hackintosh users. Hint: Intel's official chipsets are not out; those motherboards all use 3rd party non-Intel chipsets that completely lack drivers for Mac OS X. Read post #56 for further information.

Asylum Design: Good job mate.

Icaras: LightPeak is something you would have in addition to USB3 and SATA III, and is meant for connecting super high resolution screens, networking, storage, etc., basically anything that benefits from huge bandwidth. It offers a massive amount of bandwidth and will be excellent for a lot of purposes. It all travels over one tiny cable as well.


I'm going to bed soon. I hope people will keep behaving. We've been having a good discussion so far!
 
Sigh, I said EXACTLY this: http://www.macnn.com/articles/10/06/10/promises.better.future.ahead/

In the other thread, but some of you lot seem dead set on being as negative as you can. Yes, I understand your frustration, but at the end of the day Apple is making s*** loads of money off 'iToys', so can you blame them? I mean, they are a business after all.
But fret if you want. I would expect a very nice update soon, however, and I think it will offer some unique features. I mean, NO ONE predicted Apple would make their own version of Nvidia's Optimus for the MacBook Pros.

And I would also like to add that I, for one, am VERY grateful for this latest iToy and totally love it; I cannot wait to get one. So my sympathy for you is a lot less than it was before Tuesday!! Sorry....

Thoughts? Am I crazy?

I love the name of your startup disk :D:D
 
Intel's Tick-Tock cycle... Both parts of the cycle have merit.

You will always see some significant architectural improvements on the Tock swing of the cycle. However, those CPUs will be at a premium compared to the following die-shrink Tick.

Thus buyers looking for the most value should buy now (Gulftown/Westmere) or the next Tick (22nm Ivy Bridge). The buyers who want to be on the bleeding edge (for a premium) should buy on the Tock.

As far as what I/O technologies will have the most impact...

I believe SATAIII is the most anticipated, since SSDs are already pushing SATAII to the limit.

USB3 is nice to have, but the only peripherals which can exploit that kind of speed are fast drives, and there's eSATA for that job already. Nevertheless, it's a welcome improvement. LightPeak is a cable-simplification play, not a performance driver, since everything it's touted to replace is not a bottleneck now anyway. PCIe 3.0 is also a nice evolution of the bus, but I don't anticipate any performance gains coming out of it for a few generations of graphics cards, which are increasingly thermally limited, not bus constrained.
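
To put rough numbers on those claims, here is a small Python sketch comparing approximate usable throughput (after 8b/10b encoding overhead) with an illustrative 2010-era SSD; the exact figures are assumptions for scale only, not benchmarks.

```python
# Approximate usable throughput of the interfaces discussed above.
# Usable MB/s ~= line rate (Gbit/s) * 0.8 (8b/10b overhead) * 1000 / 8.
# The SSD figure is an illustrative sequential-read speed.

INTERFACES_GBIT = {
    "SATA II / eSATA (3 Gbit/s)": 3,
    "USB 3.0 (5 Gbit/s)": 5,
    "SATA III (6 Gbit/s)": 6,
}
SSD_READ_MB_S = 270

for name, gbit in INTERFACES_GBIT.items():
    usable = gbit * 0.8 * 1000 / 8
    headroom = usable - SSD_READ_MB_S
    print(f"{name}: ~{usable:.0f} MB/s usable, "
          f"{headroom:+.0f} MB/s headroom over a {SSD_READ_MB_S} MB/s SSD")
```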

I think it's very possible that most people will be on a second computer after Sandy Bridge before any of these technologies (except SATAIII) are adding value.
 
Link please.

http://www.guru3d.com/news/intel-not-planning-usb-30-until-2012/

Our sources here in Taipei tell us that Intel has no plans to integrate USB 3 into its chipsets until 2012 at the earliest - there are no new platforms due this year, and next year's roadmaps currently show none featuring USB 3. Apparently Intel plans to make a USB 3/SATA 6Gbps all-in-one chip for optional use by motherboard manufacturers on its products, however it’s currently finding it difficult to get the pin-count down to an appropriate size.

It appears that the quote comes from the April conference where Intel simultaneously announced Sandy Bridge's production date (Q4 2010). Note that "integrate into its chipsets" means building it right into their reference motherboards; they won't do THAT until 2012, but in the meantime they'll provide it to OEM motherboard manufacturers as an all-in-one USB3/SATAIII addon chipset.

I really need to go to bed now, hope there aren't more questions. ;)
 
Icaras: LightPeak is something you would have in addition to USB3 and SATA III, and is meant for connecting super high resolution screens, networking, storage, etc., basically anything that benefits from huge bandwidth. It offers a massive amount of bandwidth and will be excellent for a lot of purposes. It all travels over one tiny cable as well.


I'm well aware of Lightpeak, but my question was whether it will mimic how HDMI, DVI, DP, and mDP behave with one another in that you can easily adapt between different types of cables.
 
I'm well aware of Lightpeak, but my question was whether it will mimic how HDMI, DVI, DP, and mDP behave with one another in that you can easily adapt between different types of cables.

Ahh I suspected that you meant that from your wording. Nope, it's an entirely new connector that uses an optical signal rather than electricity.

It's possible that some manufacturer will release some box that connects to LightPeak and sends out a DVI/HDMI/DP/mDP signal, although that would be in the far-far-far future if the day comes when LightPeak is all there is on the computer and you need to connect some old monitor. ;)

The current display connectors are here to stay for a long time, and I can see Mini DisplayPort gaining momentum thanks to stuff like ATI Eyefinity's 6-output cards (made possible thanks to the small size of mDP). I expect we'll see more and more monitors that feature both Mini DisplayPort and DVI connectors and work with either type of graphics card, until the transition is complete. If I'm not mistaken, DisplayPort also costs a lot less for manufacturers to license (compared to DVI and HDMI), so there is incentive to adopt it.
 
Ahh I suspected that you meant that from your wording. Nope, it's an entirely new connector that uses an optical signal rather than electricity.

It's possible that some manufacturer will release some box that connects to LightPeak and sends out a DVI/HDMI/DP/mDP signal, although that would be in the far-far-far future if the day comes when LightPeak is all there is on the computer and you need to connect some old monitor. ;)

Hmm, but you can currently change an optical cable into a mini 1/8" digital cable just via a very simple small adapter on the connector itself. Why couldn't we expect that with Lightpeak? Going from lightpeak to USB, Firewire, DVI, HDMI, DP, mDP, etc. via a simple adapter?
 
Hmm, but you can currently change an optical cable into a mini 1/8" digital cable just via a very simple small adapter on the connector itself. Why couldn't we expect that with Lightpeak? Going from lightpeak to USB, Firewire, DVI, HDMI, DP, mDP, etc. via a simple adapter?

I'll answer this, but it's the last question for the day. ;) It seems as though you are thinking of typical digital S/PDIF audio connections in your example? Yes, most computers and audio interfaces offer both an electrical RCA plug AND a Toslink optical plug, allowing you to hook up digitally no matter which type of connector your audio receiver uses.

Well, that's all fine and dandy for something low-bandwidth like S/PDIF.

With LightPeak we are talking 10 Gbps for starters, and they are planning to scale it up to 100 Gbps. It's also mainly intended for high-bandwidth transfers like networking and super high resolution displays (with more pixels than the bandwidth of current display cables allows for). Current connectors for USB/FireWire/eSATA devices, audio or display connectors like DisplayPort/miniDisplayPort and DVI/HDMI are not going away, so this conversion you are talking about is a complete non-issue. Yes, eventually they plan to COMPLETELY replace ALL of our device connections with this single Optical wire, but that's in the far future. Initially its biggest use is likely to be networking.
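
To give a sense of the scale involved, here is a small Python sketch of the display-bandwidth arithmetic (uncompressed bandwidth = width x height x refresh rate x bits per pixel). The cable capacities are approximate payload figures used only for comparison, and the 3840x2160 panel is a hypothetical example.

```python
# Uncompressed video bandwidth = width * height * refresh rate * bits/pixel.
# Link capacities below are approximate payload figures, for scale only.

def stream_gbit(width, height, hz, bits_per_pixel=24):
    return width * height * hz * bits_per_pixel / 1e9

displays = {
    "2560x1600 @ 60 Hz": (2560, 1600, 60),
    "3840x2160 @ 60 Hz (hypothetical)": (3840, 2160, 60),
}
links_gbit = {
    "dual-link DVI (~7.9 Gbit/s)": 7.9,
    "Light Peak gen 1 (10 Gbit/s)": 10,
    "Light Peak target (100 Gbit/s)": 100,
}

for name, (w, h, hz) in displays.items():
    need = stream_gbit(w, h, hz)
    fits = [link for link, cap in links_gbit.items() if cap >= need]
    print(f"{name}: ~{need:.1f} Gbit/s uncompressed; fits on: {'; '.join(fits)}")
```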

So, why Optical?

Well, Intel chose an Optical transfer method due to the inherent problems with Electrical transfers; those being lower speeds, higher rates of errors and corruption, shorter possible cable lengths, crosstalk between wires, and the fact that electrical transfers require DIFFERENT WIRE CONFIGURATIONS depending on what the protocol is. The last reason is why we have different cables for HDMI, DVI, DP, Audio, USB, FireWire, etc., instead of a single cable for everything. That's because each electrical protocol needs its own collection of individual wires to transfer the data it needs.

With Optical, ALL of these problems go away and you can transfer as many protocols as you want at once over a single wire, since it's all sent digitally with an Optical connection. The fact that it is Optical is the reason that it is able to transfer so much data over a single tube. Electrical would never be able to do that and would require multiple wires to reach the same bandwidth. So if you wanted to make something called "ElectricalPeak" and transfer at 10 Gbps, you would have to design a cable with loads of wires in it, all designed for transferring data simultaneously (in parallel) over every wire in order to achieve the same 10 Gbps bandwidth that LightPeak gets over a single wire. And if you're going for LightPeak's upper limit of 100 Gbps with an electrical solution? Forget about it. ;) LightPeak's Optical solution does all of that over a single wire, it makes no sense to stick to electrical cables anymore.

So, for all the reasons above, particularly the fact that we will still have all the old connectors on our computers for the foreseeable future, there will not be converters from LightPeak into existing connectors like USB and FireWire. You would have to design a converter box that accepts the LightPeak signal, decodes it, grabs the packets relating to its transfer, sends them to some converter chip that outputs the relevant electrical signal (such as USB or FireWire), and then goes from there. It would make very little financial sense to create such a box since our computers will STILL have all the old connectors for a very long time. LightPeak devices require advanced circuitry to decode and pass through the light signals and will therefore initially be reserved for niche uses such as networking and ultra-resolution displays.
 
http://www.guru3d.com/news/intel-not-planning-usb-30-until-2012/



It appears that the quote comes from the April conference where Intel simultaneously announced Sandy Bridge's production date (Q4 2010). Note that "integrate into its chipsets" means building it right into their reference motherboards; they won't do THAT until 2012, but in the meantime they'll provide it to OEM motherboard manufacturers as an all-in-one USB3/SATAIII addon chipset.

I really need to go to bed now, hope there aren't more questions. ;)
You've only reinforced my point.

Intel has no USB 3.0 offerings until 2012 on the hardware it provides. This leaves many other manufacturers open to provide solutions.
 
I'll answer this, but it's the last question for the day. ;) It seems as though you are thinking of typical digital S/PDIF audio connections in your example? Yes, most computers and audio interfaces offer both an electrical RCA plug AND a Toslink optical plug, allowing you to hook up digitally no matter which type of connector your audio receiver uses.

Well, that's all fine and dandy for something low-bandwidth like S/PDIF.

With LightPeak we are talking 10 Gbps for starters, and they are planning to scale it up to 100 Gbps. It's also mainly intended for high-bandwidth transfers like networking and super high resolution displays (with more pixels than the bandwidth of current display cables allows for). Current connectors for USB/FireWire/eSATA devices, audio or display connectors like DisplayPort/miniDisplayPort and DVI/HDMI are not going away, so this is a complete non-issue. Yes, eventually they plan to COMPLETELY replace ALL of our device connections with this single Optical wire, but that's in the far future. Initially its biggest use is likely to be networking.

So, why Optical?

Well, Intel chose an Optical transfer method due to the inherent problems with Electrical transfers; those being lower speeds, higher rates of errors and corruption, shorter possible cable lengths, crosstalk between wires, and the fact that electrical transfers require DIFFERENT WIRE CONFIGURATIONS depending on what the protocol is. The last reason is why we have different cables for HDMI, DVI, DP, Audio, USB, FireWire, etc., instead of a single cable for everything. That's because each electrical protocol needs its own collection of individual wires to transfer the data it needs.

With Optical, ALL of these problems go away and you can transfer as many protocols as you want at once over a single wire, since it's all sent digitally with an Optical connection. The fact that it is Optical is the reason that it is able to transfer so much data over a single tube. Electrical would never be able to do that and would require multiple wires to reach the same bandwidth. So if you wanted to make something called "ElectricalPeak" and transfer at 10 Gbps, you would have to design a cable with loads of wires in it, all designed for transferring data simultaneously (in parallel) over every wire in order to achieve the same 10 Gbps bandwidth that LightPeak gets over a single wire. And if you're going for LightPeak's upper limit of 100 Gbps with an electrical solution? Forget about it. ;) LightPeak's Optical solution does all of that over a single wire, it makes no sense to stick to electrical cables anymore.

So, for all the reasons above, particularly the fact that we will still have all the old connectors on our computers for the foreseeable future, there will not be converters for existing types of connections like USB and FireWire. They are not competing yet, mainly because LightPeak devices require advanced circuitry to decode and pass through the light signals and will therefore initially be reserved for niche uses such as networking and ultra-resolution displays.

Thanks for the thorough explanation. I wasn't well versed in the fine details of Lightpeak, so I figured the same practices used today for adapting different cable protocols would work just as easily in the future with Lightpeak. But I guess that isn't the case.
 
Yes, eventually they plan to COMPLETELY replace ALL of our device connections with this single Optical wire, but that's in the far future. Initially its biggest use is likely to be networking.

I thought a big point of LightPeak was that it would be used on laptops and replace all the connections you currently need to hook up your display/audio/power/USB peripherals, presumably by keeping a little converter box at your desk that everything would connect to with its usual connector. Is this not expected in the near future at all? (I was really looking forward to that the next time I buy a laptop.)
 