I'm disappointed it only runs at 10 Gbit/s, and that it will take 10 years to get to 100. In terms of optical communications, that is pretty pathetic (although granted not for such a small unit). Hopefully, with some further miniaturisation and research, Intel will be able to kick it up to a Tbit/s before too long.
When you bring out a new product, it obviously needs to be better than the existing alternatives, but you don't want it to be too much better. Twice as fast is good - 100 times as fast is bad. The reasons are partly customer psychology, partly smart business practices.
If the new product is too far ahead of the current market, people often can't get their head around it, or see the value in it, or they assume that something, somewhere, must be wrong, because it just couldn't be that much better for such a low price. It's not rational - it's instinctive, and most marketers know it.
Also, why give everyone 100 Gbit/s speeds today, when you can sell them 10 Gbit/s for 3-5 years, get everyone to upgrade to 50 Gbit/s for years 6-10, then get them to upgrade to 100 Gbit/s after that? It's three bites at the sales cherry instead of one.
...it's going to take a hell of a lot of marketing and arm-twisting to get the entire peripheral industry to adopt Light Peak.
...why did Apple (who has never been afraid to develop a new standard) have Intel develop it?
This is the really interesting development, and it suggests that Apple have learnt from their mistakes with FireWire.
If you want your new idea to become a universal connector for everything, what better way than to have it built onto every Intel chipset that ships around the world?
Apple tried keeping FW closer to themselves, and it failed (relatively speaking), so now they are ensuring that Intel will make Light Peak a standard for them.
I suspect also that this technology requires integration right onto the main chipset - which means they had to get Intel involved. There needs to be a Light Peak router (as seen in the demo videos and documentation), and that needs to live on the chipset, I think. That way it can pass data between any pair of components - HDD to main bus, graphics card to monitor, HDD to another HDD, etc. If you want to replace all other internal and external cables, then you have to be on the chipset. And who makes the most chipsets in the world?
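To make the role of that router concrete: conceptually, all it has to do is forward frames between whichever components are attached to it. Here's a toy sketch in Python of that idea - the class and port names are my own invention, purely to illustrate the architecture, not anything from Intel's documentation:

```python
# Toy model of an on-chipset Light Peak router: it simply forwards frames
# between whichever components are attached to it. Port names are invented.

class LightPeakRouter:
    def __init__(self):
        self.ports = {}  # port name -> handler that receives frames

    def attach(self, name, handler):
        """Register a component (HDD, graphics card, display...) on a port."""
        self.ports[name] = handler

    def send(self, src, dst, frame):
        """Forward a frame from one attached component to another."""
        if dst not in self.ports:
            raise ValueError(f"no such port: {dst}")
        self.ports[dst](src, frame)


# HDD-to-HDD copies and GPU-to-monitor traffic all cross the same router.
router = LightPeakRouter()
router.attach("hdd0", lambda src, frame: print(f"hdd0: {len(frame)} bytes from {src}"))
router.attach("display", lambda src, frame: print(f"display: {len(frame)} bytes from {src}"))

router.send("hdd1", "hdd0", b"\x00" * 4096)      # disk to disk
router.send("gpu", "display", b"\x00" * 65536)   # graphics card to monitor
```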
Finally, the way Light Peak is an encapsulation technology for any protocol you fancy is excellent - that gives it an indefinite lifespan. As higher-layer networking protocols develop, the lower-level Light Peak technology just keeps soldiering on underneath.
For those wondering what I mean, think of Light Peak as a bus (the going-to-school kind). Any number of different people can climb onto a bus and ride to their destination. There could be a nun, a businessman and a road sweeper, all on the same bus.
Light Peak is like the bus and the passengers are Ethernet, SCSI, SATA, whatever. They can all be parceled up and transported inside the Light Peak 'bus' and when they get off at the other end, they are still Ethernet, SCSI and SATA. They just became bus passengers for a while. New protocols are like new passengers - Light Peak will still happily carry them to where they want to go, whoever they are and whenever they come along.
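If a sketch in code makes the bus analogy clearer: encapsulation just means wrapping whichever protocol's bytes in a Light Peak frame with a tag saying what's inside, and unwrapping them unchanged at the far end. The frame layout below is invented for illustration - it is not Intel's actual format:

```python
import struct

# Invented protocol IDs and frame layout - not Intel's real format.
PROTOCOL_IDS = {"ethernet": 1, "scsi": 2, "sata": 3}
PROTOCOL_NAMES = {v: k for k, v in PROTOCOL_IDS.items()}

def encapsulate(protocol, payload):
    """Wrap any protocol's bytes in a generic frame: [protocol id][length][payload]."""
    return struct.pack("!BI", PROTOCOL_IDS[protocol], len(payload)) + payload

def decapsulate(frame):
    """Unwrap at the far end: the payload comes back exactly as it went in."""
    proto_id, length = struct.unpack("!BI", frame[:5])
    return PROTOCOL_NAMES[proto_id], frame[5:5 + length]

# An Ethernet 'passenger' and a SATA 'passenger' ride the same link:
for proto, data in [("ethernet", b"\xaa" * 64), ("sata", b"\x55" * 512)]:
    name, payload = decapsulate(encapsulate(proto, data))
    assert (name, payload) == (proto, data)
    print(f"{name}: {len(payload)} bytes carried and delivered unchanged")
```

New protocols only need a new entry in the table; the carrying mechanism underneath doesn't change.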
Why go through all that work to take electrical impulses, convert them into optical impulses, and then most likely have to convert them back at the other end, if you are going to run copper alongside the optical? Sure, optical is faster now, but if history shows us anything, they will be able to get copper up to that speed in a short time. The only thing I see as a benefit of optical over copper is distance, but 90% of the time, when I am at my desk, this isn't a problem.
Limits are being hit with copper due to the fundamental laws of physics, at least as we currently understand them. The speeds we are getting out of cables (and out of individual processors, for that matter) are getting so high that the clock cycles are starting to break down.
i.e. the voltage changes within each clock cycle aren't able to rise and fall as far as they need to before the next pulse is already coming along, so they get muddied up with each other - which is really, really bad.
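To put rough numbers on that (strictly back-of-the-envelope, with an assumed edge time rather than a measured one): the faster the link, the shorter the window each bit gets, and the rise/fall time of the signal edges eats a growing share of it.

```python
# Back-of-the-envelope: how much time does each bit get on the wire?
# The 30 ps edge time below is an assumed figure, purely for illustration.

def bit_period_ps(gbit_per_s):
    """Time available per bit, in picoseconds, at a given line rate."""
    return 1e12 / (gbit_per_s * 1e9)

EDGE_TIME_PS = 30  # assumed combined rise + fall time of the signal

for rate in (1, 10, 100):
    period = bit_period_ps(rate)
    print(f"{rate:>3} Gbit/s -> {period:7.1f} ps per bit, "
          f"edges take {100 * EDGE_TIME_PS / period:5.1f}% of it")
```

At the assumed 30 ps edge time, the edges would be longer than the bit itself at 100 Gbit/s - which is exactly the "muddied up" situation described above.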
That's one of the reasons we are moving to multi-core machines instead: if you can't make one processor faster, use two of them.
