I'd never heard of this technology before reading this article (although identical concepts have been explored in the sci fi I've read). If it does become a physical reality, then that would certainly change how computers work.

I'm also thinking about how thrilled Apple would be - at last they could try to find a way to hide a single access port on their laptops instead of putting up with the half-dozen or so that are pretty much required for any decent machine. =P
 
Uh, like many in this thread have already said, how hard would it really be to run a copper line for power and package it in the same cable...?

It's unlikely that a purely fiber system would do that, and I read NOTHING to indicate that it's in the design. What they can do and what people here seem to think they SHOULD do is irrelevant. Like I said, the system is designed for mobile use, and it doesn't need a copper wire for that. I can almost guarantee there will be no copper in the system. The whole point of optical connectors is to NOT use copper, so you can run long lengths of cable. You cannot run power through a small copper wire over a long length; it would overheat after a very short distance.
 
It's unlikely that a purely fiber system would do that, and I read NOTHING to indicate that it's in the design. What they can do and what people here seem to think they SHOULD do is irrelevant. Like I said, the system is designed for mobile use, and it doesn't need a copper wire for that. I can almost guarantee there will be no copper in the system. The whole point of optical connectors is to NOT use copper, so you can run long lengths of cable. You cannot run power through a small copper wire over a long length; it would overheat after a very short distance.
I guess your guarantees aren't worth that much, then.
http://blogs.intel.com/technology/2009/09/lighting_fast_-_high_speed_opt.php
The whole point of optical is to gain speed.
You can easily transport power for 100 meters; it's done everywhere with Power over Ethernet.
Essentially, mobile use will need power. There's no problem having your server or TV plugged in for power through a second cable, but it's very inconvenient to plug in two cables with a mobile phone or HDD every time you need it.
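For what it's worth, here's a back-of-the-envelope sketch of why both claims have a point; the wire gauges, voltages and loads are assumptions for illustration, not anything from the Light Peak design:

```python
# Rough copper power-run arithmetic. The gauge resistances are standard
# textbook figures; the loads and voltages are assumptions for illustration.
OHMS_PER_M = {24: 0.0842, 28: 0.2129}   # single-conductor resistance per metre

def power_run(length_m, awg, load_watts, supply_volts):
    """Return (load current in A, volts lost in the cable, watts dissipated in it)."""
    r = OHMS_PER_M[awg] * length_m * 2   # out-and-back conductor path
    i = load_watts / supply_volts        # approximate load current
    return i, i * r, i * i * r

# PoE-style delivery: ~13 W at 48 V over 100 m of 24 AWG conductors
print(power_run(100, 24, 13, 48))    # ~0.27 A, ~4.6 V drop, ~1.2 W of heat -> workable

# USB-style delivery: 2.5 W at 5 V over 100 m of thin 28 AWG wire
print(power_run(100, 28, 2.5, 5))    # ~0.5 A, ~21 V of drop -> far more than the 5 V budget
```

High voltage and low current is what lets PoE cover 100 m; a 5 V, bus-powered scheme over thin copper in the same jacket really is only practical over short runs, which is probably the distinction both posts are circling.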
 
I'm concerned that the demo shows a USB-type connector. Those are too large and thick.

Also, am I the only person who wants a lockable connector?


Don't worry. When it makes its way to the Mac, you'll find that it has a Light Peak Mini connector, or even a Light Peak Micro connector (for which you can buy an adapter to connect it to the rest of the world). :)

What worries me is that these kinds of projects may give a reason to delay, or even skip, USB 3 altogether. They weren't among the USB 3 promoters, at least.
 
I'd never heard of this technology before reading this article (although identical concepts have been explored in the sci fi I've read). If it does become a physical reality, then that would certainly change how computers work.

Everybody seems to have developed some mass blind spot here - and forgotten about TOSlink (S/PDIF) connections - although they've probably already got some in their audio gear or on their Macs!! OK, it's a different protocol, but it's the same technology.

This idea sounds not so much new as overdue, if you ask me. TOSlink is ten-plus years old, and hence proven, cheap (plastic fibre for short runs) and mature; it's about time it was developed for a wider application. Glad to see it finally being proposed for applications that can really push it; audio (even multichannel) was just a stroll in the park for optical...
 
What worries me is that these kinds of projects may give a reason to delay, or even skip, USB 3 altogether. They weren't among the USB 3 promoters, at least.

Not necessarily... assuming USB 3.0 doesn't have any significant per-unit cost over USB 2.0, there are plenty of card readers, printers and the like that can benefit from increased speed AND still be sold at a relatively low cost to the consumer. Light Peak looks to be starting at a minimum of $10 per unit, so that means your $20 card reader just went up 50% in price.

Light Peak will start on high-end devices, where the extra speed = $$ saved for companies, and work its way down to generic consumer devices over time.
 
OK, it's a different protocol, but it's the same technology.

Actually, it's not. TOSlink uses an incoherent red LED, and started at about 3 Mbps. (Note that even today no home theatre system can run better than heavily compressed 5.1 over TOSlink; you need HDMI or better connections for 7.1 192 kHz/24-bit true HD sound.)

Light Peak uses coherent lasers.
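For a rough sense of scale, here is back-of-the-envelope arithmetic only (nothing vendor-specific); it shows why stereo PCM fit the original S/PDIF payload while uncompressed 7.1 does not:

```python
# Raw PCM bit rate for a given channel count, sample rate, and bit depth.
def pcm_mbps(channels, sample_rate_hz, bits):
    return channels * sample_rate_hz * bits / 1e6

print(pcm_mbps(2, 48_000, 16))    # ~1.5 Mbps: stereo PCM, comfortable at the ~3 Mbps above
print(pcm_mbps(8, 192_000, 24))   # ~36.9 Mbps: uncompressed 7.1 at 192 kHz/24-bit,
                                  # roughly ten times that original payload
```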
 
TOSlink (S/PDIF) connections - although they've probably already got some in their audio gear or on their Macs!! OK, it's a different protocol, but it's the same technology.

I wonder how practical it would be to use the TOSlink audio ports for new things. Plenty of people don't use them at all, or use them in analog mode.

It might be a good breeding ground for a Light Peak "ecosystem".

Rocketman
 
Not necessarily... assuming USB 3.0 doesn't have any significant per-unit cost over USB 2.0, there are plenty of card readers, printers and the like that can benefit from increased speed AND still be sold at a relatively low cost to the consumer.
I also fear that Apple will skip USB 3. Price isn't always the deciding factor for Apple. They skipped eSATA, which has almost zero additional cost, and Mac users have suffered connections many times slower for years now...
 
I also fear that Apple will skip USB 3. Price isn't always the deciding factor for Apple. They skipped eSATA, which has almost zero additional cost, and Mac users have suffered connections many times slower for years now...

The difference is that USB is a widely adopted consumer device interface likely to have massive inertia of its own. LP is a CPU-side interface meant to minimize the plug count while increasing total bandwidth across that group of plugs. That means the other end of that wire has to have a way to interface with "many plugs". Rather than the one-to-one plug style of the Mini DisplayPort converters Apple offers, I hope it is a one-to-many scheme, which Apple, to date, rarely offers and should. The whole idea is to concatenate many "styles" of I/O onto one port. Therefore it should accept many "styles" of input to do so.

The good news is that a single dongle of diminutive size and mass can accept a wide range and number of plugs. A second-tier dongle could dock to the first, or have its own LP connection, to receive older or less popular plugs that still have to be supported. If the dongle system is itself an ecosystem, third parties could supply special capabilities while staying compatible with the high-I/O, low-plug-count idea on the device end.

Ethernet and LP I/O would make the systems universal.

Rocketman

3 layers times 6 plugs each? ONE LP point. Choose only the ones you need.
 
I guess your guarantees aren't worth that much, then.
http://blogs.intel.com/technology/2009/09/lighting_fast_-_high_speed_opt.php
The whole point of optical is to gain speed.
You can easily transport power for 100 meters; it's done everywhere with Power over Ethernet.
Essentially, mobile use will need power. There's no problem having your server or TV plugged in for power through a second cable, but it's very inconvenient to plug in two cables with a mobile phone or HDD every time you need it.

The site you linked says absolutely NOTHING about it providing power. You might actually try linking to the CNET report, which does briefly talk about it.

In any case, I fail to see how this is superior to 10 Gigabit Ethernet, which already exists, has the same speed, can also be deployed to 100 meters (longer with optical fiber connections, which are available) and carries power, as you say. Why reinvent the wheel? Oh yeah...$$$$
 
The site you linked says absolutely NOTHING about it providing power. You might actually try linking to the CNET report, which does briefly talk about it.

In any case, I fail to see how this is superior to 10 Gigabit Ethernet, which already exists, has the same speed, can also be deployed to 100 meters (longer with optical fiber connections, which are available) and carries power, as you say. Why reinvent the wheel? Oh yeah...$$$$

You want to be technical? You can send multiple signals down a single fiber optic cable by simply shooting it at a different angle, as long as it is within the critical angle of the given medium. This will give us bandwidth above any other form. So theoretically we could send 20 streams of data down the one cable. (Well, not just theoretically; companies already do it.) Let's see copper do that: multiply its bandwidth by simply adding another laser and sensor. (Oh wait, it can't.)

By your logic we shouldn't of changed from the Abacus, or be researching Quantum/Bio-Computing. It's people like you that stifle science and technology. Refuse to change because its "enough".
 
You want to be technical? You can send multiple signals down a single fiber optic cable by simply shooting it at a different angle, as long as it is within the critical angle of the given medium. This will give us bandwidth above any other form. So theoretically we could send 20 streams of data down the one cable. (Well, not just theoretically; companies already do it.) Let's see copper do that: multiply its bandwidth by simply adding another laser and sensor. (Oh wait, it can't.)

By your logic we shouldn't of changed from the Abacus, or be researching Quantum/Bio-Computing. It's people like you that stifle science and technology. Refuse to change because its "enough".

It's "shouldn't have", not "shouldn't of".
 
The difference is that USB is a widely adopted consumer device interface likely to have massive inertia of its own.
I guess that if Apple wants to ignore USB 3, it will be because they want to create demand for LP, not for technical reasons.
LP will get cheaper if it gets massively adopted, but Apple isn't big enough to make this happen on their own. It might end up similar to FW, where massive adoption never came and Apple has the slowest USB ports in the industry: "If you need speed, take FireWire (and if you need more speed, buy a PC with eSATA...)"
 
The site you linked says absolutely NOTHING about it providing power. You might actually try linking to the CNET report, which does briefly talk about it.

In any case, I fail to see how this is superior to 10 Gigabit Ethernet, which already exists, has the same speed, can also be deployed to 100 meters (longer with optical fiber connections, which are available) and carries power, as you say. Why reinvent the wheel? Oh yeah...$$$$
The blogger in my link is one of the developers, and in the discussion he says that they are considering copper for power.
LP's long-term advantage over Ethernet has two cornerstones:
1) You can connect everything to it. I guess that you won't see displays, scanners, iPhones, keyboards, mice, etc. with Ethernet.
2) A roadmap to 100G, which is needed for multiple high-resolution displays. Ethernet (or any other tech) will have a very hard time doing this in copper over 100 m.
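To put a rough number on point 2; this is purely illustrative arithmetic, assuming uncompressed 24-bit colour and ignoring blanking and link overhead:

```python
# Uncompressed bandwidth needed to drive a display, ignoring blanking/overhead.
def display_gbps(width, height, refresh_hz, bits_per_pixel=24):
    return width * height * refresh_hz * bits_per_pixel / 1e9

single = display_gbps(2560, 1600, 60)   # ~5.9 Gbps for one 30-inch-class panel
print(single, 3 * single)               # three such panels already blow past 10 Gbps
```

So a 10 Gbps first generation is fine for one display plus peripherals, but multi-display setups are exactly where a 100G roadmap starts to matter.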
 
So say you have HDMI (HDCP) connected via this LP tech. How do you keep someone from snooping the data, seeing as this sounds like one huge bus that everything rides on top of? It would seem to me that the whole HDMI (HDCP) handshake could be compromised. Same question for DisplayPort (DPCP).
 
You want to be technical? You can send multiple signals down a single fiber optic cable by simply shooting it at a different angle, as long as it is within the critical angle of the given medium.

Although I agree with the spirit of your post, and the fact that this statement is "true", it is highly impractical. If you have signals coming in from lots of different directions through the same fibre, you have to line all the detectors up at the other end. Which is fine, until someone touches the fibre, or moves a device, and screws up the alignment.

The reasons fibre optics are better than electrical are (in increasing importance):
1) Electrical signals are attenuated much, much more quickly than infrared light traveling through an optical fibre
2) Electrical data transfer speeds are limited by inductance and capacitance in the wires. Basically, the electrical effect has to reach the end before a new one can be sent, so only one bit of data can exist in a single line at once.
3) Optical data transfer rates are only limited by the optical bandwidth of the fibre (how many different wavelengths you can stick down) and quantum mechanics. In other words, not only can you have more than one light pulse (data bit) traveling down the fibre from a laser, but you can also have many lasers operating at different wavelengths independently transferring data (see the sketch below).

So, basically, optical communication is superior in every way except cost and the amount of equipment required.
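To make point 3 concrete in the crudest possible terms (the channel counts and per-wavelength rates below are made up for illustration; they are not Light Peak figures):

```python
# Wavelength-division multiplexing in one line: total capacity scales with the
# number of independent wavelengths, each with its own laser and detector.
def aggregate_gbps(wavelengths, gbps_per_wavelength):
    return wavelengths * gbps_per_wavelength

print(aggregate_gbps(1, 10))     # 10 Gbps: a single-channel link
print(aggregate_gbps(8, 12.5))   # 100 Gbps over the same fibre, just by adding colours
```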
 
Although I agree with the spirit of your post, and the fact that this statement is "true", it is highly impractical. If you have signals coming in from lots of different directions through the same fibre, you have to line all the detectors up at the other end. Which is fine, until someone touches the fibre, or moves a device, and screws up the alignment.

LOL, no. Just no.

Multiple remotes don't need to be pointed at exactly the same spot either. It's about telling which packet goes where. It's not some semi-analogue connection done with light; it's light and no light (i.e. 1s and 0s) sending information. It's not one receptor per stream; it's one "tube/channel" which sends data. It's done digitally, so the data itself says where it goes. Just like a WAV/AIFF file, where you have little-endian and big-endian: one has the information bits at the beginning (well, at each "word", rather), the other at the end, and both tell the recipient how to read the PCM ("raw" audio data).

This is no different. You don't need to have multiple receptors in the way you suggest. That's dumb.
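A minimal sketch of the packet-addressing idea; the framing here is invented purely for illustration and is not the actual Light Peak protocol:

```python
# Toy demux: one serial stream, many logical destinations.
# Each chunk carries a destination tag, so a single receiver can sort
# interleaved traffic back out; no per-device detector is needed.
frames = [
    ("display", b"pixel data..."),
    ("disk",    b"block 42..."),
    ("display", b"more pixels..."),
]

streams = {}
for destination, payload in frames:   # everything arrives on one "fibre"
    streams[destination] = streams.get(destination, b"") + payload

print(streams["display"])   # b'pixel data...more pixels...'
print(streams["disk"])      # b'block 42...'
```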

With that said, I prefer coaxial digital (think AES/EBU and S/PDIF) rather than optical digital for shorter runs. The reason is you have at least two fewer parts to go wrong. With optical, at the sending end you have a widget to turn electrical impulses into light pulses, and at the other end you have the opposite, so the equipment can use the signal. This ups the risk of faults. I prefer to keep it simple if at all possible. Simple and rugged.

Although, I wouldn't mind having this sort of thing in my house, along with the Cat cables in the walls.

Edit #2:
I just realised what exactly you were responding to: namely the suggestion to shoot the shyte at different angles (man, that would mean you'd have to precisely coil your cable...).

I guess I responded to something else, but I'll let the post stand. If nothing else, then as a monument to my idiocy at times :p
 
The blogger in my link is one of the developers, and in the discussion he says that they are considering copper for power.
LP's long-term advantage over Ethernet has two cornerstones:
1) You can connect everything to it. I guess that you won't see displays, scanners, iPhones, keyboards, mice, etc. with Ethernet.
2) A roadmap to 100G, which is needed for multiple high-resolution displays. Ethernet (or any other tech) will have a very hard time doing this in copper over 100 m.

100 Gb Ethernet over copper is coming, and there is already iSCSI, which lets you send SCSI commands over Ethernet. There are a few iSCSI SANs out there. Not as fast as fiber and Brocade switches, but good enough for some uses.

As for scanners, there are corporate scanner/copiers with Ethernet built in that will convert to PDF and email you whatever you scan in. On the consumer side there are some all-in-one scanner/printers that work over Wi-Fi.

There is absolutely no use for Ethernet mice and keyboards, just like there is no use for Light Peak ones.

And what is the point of putting the display on a different connector than it has now?
 
I just realised what exactly you were responding to: namely the suggestion to shoot the shyte at different angles (man, that would mean you'd have to precisely coil your cable...).

Indeed. If you move the device, or the temperature changes slightly, you're completely b-worded.

I guess I responded to something else, but I'll let the post stand. If nothing else, then as a monument to my idiocy at times :p

I'm, um, not quite sure what your post is on about. Without knowing what you were thinking about at the time, it makes little sense to me. You should rewrite it so the question is in there too. Are you trying to say that you don't need different receptors for data destined for different devices (i.e. display, storage, printer, etc.)?
 
You want to be technical? You can send multiple signals down a single fiber optic cable by simply shooting it at a different angle, as long as it is within the critical angle of the given medium. This will give us bandwidth above any other form. So theoretically we could send 20 streams of data down the one cable. (Well, not just theoretically; companies already do it.) Let's see copper do that: multiply its bandwidth by simply adding another laser and sensor. (Oh wait, it can't.)

You're just stating gobbledygook there. Theoretical means jack. We're talking about Light Peak here, not some theoretical optical format you're making up in your head, and Light Peak is 10 Gigabit just like 10 Gigabit Ethernet. 100 Gigabit Ethernet is already in the works. Both work at 100 meters, same as Light Peak. So once again, I reiterate: WTF is the point in coming up with a NEW standard when one ALREADY EXISTS? The cable in question could be made to handle more than traditional Ethernet. It's all about drivers, and Light Peak will need them too. So, like I said, the ONLY reason to do it is because Apple/Intel want THEIR format to be a standard. There's HUGE MONEY in controlling licensing fees, etc. Apple had hoped FireWire would be the standard, not USB 2.x, for just those reasons.

By your logic we shouldn't of changed from the Abacus, or be researching Quantum/Bio-Computing. It's people like you that stifle science and technology. Refuse to change because its "enough".

WTF are you talking about??? My logic uses fiber to advantage when it's useful. Light Peak gets 100 meters with fiber. Ethernet can use optical fiber connections to go 3x further than Light Peak: 10 Gigabit Ethernet's range is 100 meters with copper, and if optical fiber connections are used instead, it has a range of 300 meters (3x longer than Light Peak).

So by your logic, we should use an inferior standard because it's by Apple rather than use what already exists and already works. I don't think Steve Jobs needs to be any richer.

The blogger in my link is one of the developers, and in the discussion he says that they are considering copper for power.
LP's long-term advantage over Ethernet has two cornerstones:
1) You can connect everything to it. I guess that you won't see displays, scanners, iPhones, keyboards, mice, etc. with Ethernet.
2) A roadmap to 100G, which is needed for multiple high-resolution displays. Ethernet (or any other tech) will have a very hard time doing this in copper over 100 m.

1> This is a software/driver issue, not a connector/cable issue.

2> There's already a roadmap to 100G Ethernet, and it's well on its way, unlike Light Peak, which isn't even available at 10G. 10G Ethernet is already available. It could be adapted to do what Light Peak proposes to do, with extra jacks and new driver handling. The technology is proven and it's already deployed. The Cat6 cables used for 10G are backwards compatible as well (I'm using them in my 1 Gigabit Ethernet setup; they work fine). They're cheap and readily available. I can only imagine how much Apple would charge for a Light Peak cable that's only a meter long... Just look at the prices at Best Buy for even basic RCA cables and you can see what I mean. Since Apple/Intel could restrict the supply, they could charge anything they want.
 
You're just stating gobbledygook there. Theoretical means jack. We're talking about Light Peak here, not some theoretical optical format you're making up in your head, and Light Peak is 10 Gigabit just like 10 Gigabit Ethernet. 100 Gigabit Ethernet is already in the works. Both work at 100 meters, same as Light Peak. So once again, I reiterate: WTF is the point in coming up with a NEW standard when one ALREADY EXISTS? The cable in question could be made to handle more than traditional Ethernet. It's all about drivers, and Light Peak will need them too. So, like I said, the ONLY reason to do it is because Apple/Intel want THEIR format to be a standard. There's HUGE MONEY in controlling licensing fees, etc. Apple had hoped FireWire would be the standard, not USB 2.x, for just those reasons.

WTF are you talking about??? My logic uses fiber to advantage when it's useful. Light Peak gets 100 meters with fiber. Ethernet can use optical fiber connections to go 3x further than Light Peak: 10 Gigabit Ethernet's range is 100 meters with copper, and if optical fiber connections are used instead, it has a range of 300 meters (3x longer than Light Peak).

So by your logic, we should use an inferior standard because it's by Apple rather than use what already exists and already works. I don't think Steve Jobs needs to be any richer.



1> This is a software/driver issue, not a connector/cable issue.

2> There's already a roadmap to 100G Ethernet, and it's well on its way, unlike Light Peak, which isn't even available at 10G. 10G Ethernet is already available. It could be adapted to do what Light Peak proposes to do, with extra jacks and new driver handling. The technology is proven and it's already deployed. The Cat6 cables used for 10G are backwards compatible as well (I'm using them in my 1 Gigabit Ethernet setup; they work fine). They're cheap and readily available. I can only imagine how much Apple would charge for a Light Peak cable that's only a meter long... Just look at the prices at Best Buy for even basic RCA cables and you can see what I mean. Since Apple/Intel could restrict the supply, they could charge anything they want.

It's not theoretical if comm companies already do it, and it doesn't have to be perfectly aligned. It's digital; generic sensors suffice.

I'm not even going to bother arguing further. I didn't say you had to use Light Peak, but you shouldn't be shunning it blindly either. I don't blindly shun Windows even though I prefer Unix OSes. As far as we can tell it's still a draft format ATM; anything can change.

A. It gets nowhere.

B. You obviously have objections to anything made by Apple, as made clear by your posts. It's competition; you should be welcoming it.
 
I also fear that Apple will skip USB 3. Price isn't always the deciding factor for Apple. They skipped eSATA, which has almost zero additional cost, and Mac users have suffered connections many times slower for years now...

Well, USB 2.0 vs eSATA is a pretty easy win for USB 2.0 given how many devices run USB 2.0; eSATA seems to be just an external drive connection and not much else.

USB 3.0 vs Light Peak is different: most device manufacturers would consider USB 3.0 a wise progression, as people are already familiar with USB in general and more accepting of it than of a new technology like Light Peak, and for the average user USB 3.0 provides all they need. So near term, USB 3.0 is a pretty safe bet.

Long term though, say in 3-5 years, as Light Peak evolves and printers, cameras, card readers, etc. start taking advantage of the speeds, we might see the changeover occurring.

I could see Apple going to Light Peak earlier, as the cost on the PC side isn't breaking the bank, but most likely with one Light Peak port, and then expansion can be based on hubs for the early adopters. Until you see device manufacturers commit to products, there is no need to do much with the technology.

Hopefully they make the system ports USB 3.0/Light Peak compatible, as opposed to needing special ports; that would make life much easier. Something like my HP laptop, which has an eSATA/USB 2.0 combo port.
 