But LP isn't a standard unless they get a quorum of folks to buy in. That means getting the peripheral makers and other semiconductor folks on board too. So far there's lots of Apple, Sony, and Intel, but not much of a visible standardization effort. If Intel is the sole-source supplier, I'm not sure this is going to get widespread buy-in, especially if Intel is jacking up prices on other, standardized parts to make headroom for their proprietary stuff.
Actually, there's a fair number of partners, notably the Manufacturing Partners listed by Vylen.
With that much of an entwined infrastructure, LP definitely has the manufacturing (and design specification) side covered. Given end-user product manufacturing from the likes of Hon Hai Precision, there's a serious chance it will show up in shipping products, ultimately setting the stage for full-fledged adoption, i.e. the specification being accepted as a de facto standard (since LP wasn't created by a body like the IEEE).
After all, Sony finally won out with Blu-ray and is the "last man standing" in the HD disc format war, and it wasn't accepted purely because it was the technologically superior specification. Sony's business planning and partnerships played a significant role (more the deciding factor, from what I recall).
But even this much backing/support can fail if something falls short, notably the financial aspect (too expensive) or if the actual products can't deliver. If that ends up being the case, it's just another attempt that lands in the waste bin of history. There are definitely a few others sitting in that bin already.
But this one seems to have been fairly well planned out in terms of getting adopted and recognized as a standard, IMO. Now we need to see if it will actually deliver as expected.
FireWire is sitting on a PCIe connection. Apple could declare FW dead (like they did with Flash ... old tech that had its time) and swap in a USB 3.0 PCIe controller in its place.
Parts can also be cut out of a design over cost, or for "falling off" in usability compared to other specifications. It happens rather often in the world of computers, as I'm certain you're well aware.
If they do a swap, there isn't much of an increase in board space or parts count. It isn't like Apple hasn't tried to kill off FW before on other Macs. Folks who still need it could also plug a FW card into a PCIe slot. Apple could make sure there were a couple of quality boards out there to fill the gap before introducing the change.
Where it can get tricky, though, is in the Tick part of the cycle, as the PCBs have already been designed and in production for about a year or so. Fitting newer parts is easy if the component package is identical, but in the case of USB 3.0 it's not (not even the same pin count). It would actually require a PCB redesign, which means more $$$. That makes it far less attractive for all but all-out high-end boards that users would be willing to pay extra for, and Apple's not in that mentality, going by their history with Intel parts.
It's far easier to include (add, or swap out) newer components in a Tock cycle, as you have to design a new board anyway.
Assuming LP is adopted by Apple, I would expect to see FW eliminated, so long as the bridge chips aren't horribly priced, which would make LP-to-FW adapters prohibitive (I wouldn't think so, as that would be detrimental to adoption, but it is possible).
LP is even less likely than USB 3.0 to be put into the core chipsets before 2012. In 2011 there will be far more USB 3.0 devices to plug into than LP devices. In fact, have any devices (not computers) been demoed? All I've seen so far are PCIe cards and giant dongles to the standard interfaces.
No, the initial parts will be additional components attached to the PCIe lanes, not within the chipset. As for devices, I've not seen any, and it's one of the issues that concerns me (i.e. the current lack of bridge chips to demo products with, as well as the potential for unwanted/detrimental CPU utilization).
I'm not trying to claim LP is a perfect solution, but it certainly has promise, so long as it actually delivers what's been promised. What's attractive to me is that it's not tied to a dedicated protocol, allowing it to carry a multitude of busses. But that's also the aspect that can make or break it.
Assuming it actually does work, I do expect full-blown adoption/proliferation will take time, with adoption starting in the workstation market more so than the server market (i.e. small clusters).
That could help out USB 3.0 in the short term, especially for consumer systems, as existing peripheral devices will already work with it. But it has its limitations too.
Again, a more telling sign is how many vendors have demoed peripheral prototypes with the newer stuff. I can certainly see FW3200 for aerospace and embedded apps, but that doesn't necessarily translate over to computer peripherals.
Apple slow-rolled the adoption of FW800 on the Mac platform; it wasn't until recently that all Macs went 800. The vast majority of PCs, if they have FireWire at all, are still stuck at FW400, and likewise cameras, etc. There is a smaller subset of devices that went 800, but there's little market pressure to push FW faster; otherwise all these 400 sockets wouldn't survive.
USB 3.0 has many of the historically differentiating features that FW offered (channel-like, bidirectional connections, isochronous transfers, speed, etc.). That's got to put FW on Jobs' "old tech" hit list sooner or later. If throwing FW under the bus (so Intel can run over it) gets Apple LP, then I suspect Apple would go for that deal.
FW parts have historically been more expensive than others, which is another part of its demise (i.e. FW HDD enclosures are more expensive than their USB counterparts). Performance does cost, but other interface technologies have caught up and are cheaper. It's really nothing more than simple economics (and what the accountants look hard at). If FW is offered at all (generally speaking), the S400 spec is chosen because it's cheaper.
I wouldn't be surprised at all if Apple wants to dump it in favor of an economically viable replacement, especially if that replacement also offers things like far faster throughput and the ability to consolidate connections (allowing other interface chips to be tossed out, actually rendering LP the cheapest solution). The latter is more of an issue with laptops, but Apple does well in that particular market.
Errrr.... Intel makes lots of Ethernet connection tech.
I was talking about the optical aspects necessary for LP (which is why I mentioned the lasers, transceivers, ... i.e. the optical side of the standard), not other standards such as Ethernet.
Intel's great at making chips, but they would take a financial beating if they tried to create LP completely on their own. There's too much R&D in areas they've never worked in before, which would translate into too much time and money spent on the project, and likely result in its being scrapped before completion.
Not sure how you do InfiniBand (IB) with some translation layer. The RDMA latencies have to be low or you lose one of the major advantages of IB. Not sure how you're going to bounce from an LP transceiver, trap up to the CPU for decoding/translating, then push to a memory address block without hitting more latency than if natively doing IB. IB practically requires non-blocking switch paths between the endpoints too, which again isn't very compatible with typical USB topologies.
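To put rough, order-of-magnitude numbers on that concern, here's a small Python sketch. Every figure in it is an assumed illustration (typical IB RDMA latency of a microsecond or two, plus guessed costs for a transceiver hop, a CPU trap, software translation, and a memory copy), not a measurement of any shipping LP hardware:

    # Illustrative latency budget: native IB RDMA vs. a hypothetical LP translation path.
    # Every figure here is an assumed order of magnitude, not a measured value.

    native_ib_rdma_us = 1.5          # typical end-to-end IB RDMA latency, ~1-2 microseconds

    translated_path_us = sum([
        0.5,   # LP transceiver / serdes hop (assumed)
        5.0,   # trap/interrupt up to the CPU (assumed)
        10.0,  # software decode + protocol translation (assumed)
        1.0,   # copy into the target memory block (assumed)
    ])

    print(f"native IB RDMA : ~{native_ib_rdma_us} us")
    print(f"LP translation : ~{translated_path_us} us "
          f"({translated_path_us / native_ib_rdma_us:.0f}x worse)")

Even with generous assumptions, the translated path comes out an order of magnitude slower, which is exactly the RDMA advantage you'd be giving up.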
What I meant with InfiniBand wasn't translation, but rather using LP (bonded LP ports) as a replacement to reach sufficient speeds.
Now, I don't see it as a full replacement (unless LP is actually capable of being used that way without protocol translation, which I haven't seen anything on), but it's viable for smaller clusters due to the cost, IMO (i.e. those that would otherwise be fine on 10GbE or FC, for example).
I figured that was implied in #2.

Guess not.
LP's advantage is that it's substantially faster than what it's being aimed at carrying/transporting. You can hide latency gaps and protocol overhead in that speed gap. As soon as you want to push past 80+% of LP's bandwidth, you're going to run into problems. A couple of PCIe 1x links, some USB 2.0, and a 720p video stream? Sure, that's basically a docking station for a laptop or mobile device.
If you're trying to run multiple higher-speed, isochronous protocols ... you're much more likely to run into trouble. You're also not going to get 10GbE, let alone 100GbE, out of it.
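As a back-of-the-envelope check on that "docking station" load, here's a quick Python sketch. The figures are assumptions for illustration (PCIe 1.x at ~2 Gb/s effective per lane after 8b/10b, USB 2.0 at 480 Mb/s, uncompressed 720p60 at 24 bits per pixel, and a first-generation 10 Gb/s LP channel), not anything from the LP spec:

    # Does a laptop-dock style load fit comfortably in one 10 Gb/s LP channel?
    # All figures below are assumptions for illustration, not LP specifications.

    LP_CHANNEL_GBPS = 10.0                        # assumed first-gen Light Peak channel rate
    pcie_1x_lane    = 2.0                         # PCIe 1.x effective Gb/s per lane (2.5 Gb/s raw, 8b/10b)
    usb2            = 0.48                        # USB 2.0 signaling rate, Gb/s
    video_720p60    = 1280 * 720 * 24 * 60 / 1e9  # uncompressed 720p60 stream, ~1.33 Gb/s

    load = 2 * pcie_1x_lane + usb2 + video_720p60
    print(f"aggregate load : {load:.2f} Gb/s")               # ~5.8 Gb/s
    print(f"utilization    : {load / LP_CHANNEL_GBPS:.0%}")  # ~58%, well under the ~80% caution

Roughly 6 Gb/s on a 10 Gb/s channel leaves plenty of slack to hide protocol overhead, which lines up with the 80% caution above.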
For a single line, no; the latency will prevent it. But if you bond ports, you can exceed the bandwidth required (on paper, before accounting for latency) and still hit a target requirement (i.e. you can get 10GbE out of 2x bonded 10 Gb/s LP ports).
This is dependent on whether or not LP can actually be bonded, but I'd think they thought this one through. If not, then it's really only suited to the consumer market after all, at least for now.
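A similarly rough sketch of the bonding math, again in Python. It assumes LP ports can be aggregated at all (which, as noted, isn't confirmed) and only compares raw capacity against the 10GbE line rate of 10.3125 Gb/s (64b/66b encoding), ignoring latency and aggregation overhead:

    # Bonding sketch: raw capacity only, ignores latency and aggregation overhead.
    # The ability to bond LP ports is an assumption, not a published LP capability.

    LP_PORT_GBPS = 10.0       # assumed per-port Light Peak rate
    TENGBE_LINE  = 10.3125    # 10GbE line rate (64b/66b encoded), Gb/s

    for ports in (1, 2):
        capacity = ports * LP_PORT_GBPS
        ok = capacity > TENGBE_LINE
        print(f"{ports} port(s): {capacity:.1f} Gb/s -> carries 10GbE? {ok}")
    # 1 port : 10.0 Gb/s -> False (no headroom at all, so overhead/latency kills it)
    # 2 ports: 20.0 Gb/s -> True  (~50% utilization, leaving room to hide overhead)

A single port sits right at the line rate with zero margin, which is why bonding (if it exists) is the only way this works.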
I think it depends much more on the applications that come to the machine. The iMac is always going to be a more unbalanced box. If there's more software that scales up with resources, then the market should do OK. The MP is more doomed if it's been propped up by "high status symbol" and gaming buyers rather than users who need it to run a business.
The XServe is probably in a much more precarious state. If it falls, then the MP is next in line. If the XServe stays and the MBA falls, then perhaps Apple will have more cycles to put more value into the smaller lineup.
Software will definitely have an impact on whether the MP (and the XServe) survives.
Unfortunately, the software always falls behind the hardware, and it's not helped by the need for backwards compatibility. Worse, Apple hasn't finished the Cocoa/Carbon transition work yet, and developers like to wait for others to do as much of the work as possible (i.e. graphics-related application suites). The slower release cycles for professional applications also tend to push things back (i.e. 3 to 5 years, versus consumer software that may update annually). Then there's of course software that can't benefit from SMP at all, such as a word processor.
I definitely agree the XServe is in more danger, and its loss could negatively affect the MP (i.e. R&D that's currently shared would be carried by the MP alone, pushing up prices and further reducing sales, to the point of unsustainability).
Personally, I wouldn't be offended to see the MBA go away if it would free up time to shorten the current development cycles (since they seem to keep adding new products without going on a hiring binge to keep up with the product refresh cycle). It just seems like too many projects and too few people.