Not refreshing the Mac Pro for over a year is inexcusable. Additionally, since the release of Snow Leopard and Adobe Photoshop CS5, it is now time for G5 tower owners to switch. Apple had a golden opportunity at WWDC to introduce a new hexacore Mac.

G5 PowerPC-based Mac towers were produced from 2003 to 2005, so the last model is five years old now. Early G5 Power Mac owners should have replaced them in 2008 if they wanted to stay current! If they're bleating about the lack of a hex-core now, why didn't they jump at the sight of the 2009 model with its 16 virtual cores?
Apple leave 12 to 24 months between OS refreshes, so why follow a shorter cycle for the hardware? By that roadmap, "overdue" is early 2011. By staggering the releases of the OS and the top-end hardware, they drive sales of each top-flight model twice; it's all about the Benjamins!

Apple don't have the insanely competitive PC-market pressure of other vendors making a Mac Pro to hurry them up. Dell and HP slice cents off every refresh's build cost to beat the other on profit margin, and pursue each other's spec upgrades as closely as they can. Meanwhile, supermarkets lay out Medion media towers on the cheap to catch the non-techie market. It's all driven by Windows-compatible competition, so who makes a Mac OS X machine to compete and chivvy Steve along? Oh yeah, they sued their asses....

The iPhone refreshes faster because of the mass of smartphones appearing to steal its crown; phone-using Joe Public isn't as brand-loyal to a phone OS, and Apple know it. If you want a Mac OS-running tower you've got one vendor or a Hackintosh, and they won't hurry to break that market up anytime soon.
 
The current MP has the worst memory design of them all, and it has been on the market the longest.
What does this mean?

It's so good that Apple can't find a reason to update it :p Seriously, there are a couple of Macs in need of an update: the Mac Pro and the MBA. There are always some updates in autumn, so let's keep our fingers crossed for that! All Macs except the Mini, and maybe the MacBook, could be updated some time during the autumn.
 
Light Peak is holding things up, I think. .... as NAS and SANs are really coming down in price, and there will be a big push from Apple and Intel to replace FireWire, Fibre Channel, 10 Gigabit Ethernet and even InfiniBand in a lot of applications.
...
.... That could save companies a lot of money and force Cisco to rethink their strategy.

Eh? There is little to indicate that Light Peak doesn't suffer from the same fundamental networking flaw that USB has: namely, that it is a hub-based model oriented toward hooking peripherals to a computer. There is one central device, and all communication is managed through it. The demos have been of pushing video from one place to another (e.g., multiple digital video streams), which in most devices goes in only one direction (from player to monitor). Don't get lost in the fact that they are pushing the demo video between two computers; one of them is just a hub (as in this video http://www.youtube.com/watch?v=nfGevFIVKw4&feature=channel).
One of the computers in these demos is always headless, and the images are coming out of the other computer, which is hooked to the displays.


Most of the LP descriptions contain a laundry list of connectors/buses that are supposed to be transported. PCI, USB, SCSI, SATA and HDMI are all generally hub-and-spoke based. Folks have thrown in stuff like Ethernet and FireWire in some lists, which aren't. However, those are typically hooked to the computer via PCI connections, so they could be mimicked by transporting the backend PCI channel to the host computer. Intel isn't a fan of FireWire anyway, so I'd be surprised if they bothered to deal with it.

You can't do NAS and SAN with a hub-and-spoke network constraint. You can do direct-attached storage (DAS). In the DAS market, Fibre Channel, 10GbE and InfiniBand don't make economic sense; they cost far too much to hook to just one machine in most situations.

If there is support for multi-host networks, then Intel is keeping awfully quiet about it. Nor is Light Peak going to replace everything. There is going to be a subset of protocols that it targets for encoding/decoding, so that they can be transported over the new, incompatible protocol and then decoded on the other side. There is lots of hand waving that it will do everything; there is no credible explanation for that.


So, Cisco is probably not all that scared. I suppose there is a lost opportunity for folks with NAS boxes with low host connection counts (1-2 computers), who may now switch to DAS, but that option was already there anyway (with eSATA, SAS, etc.).

Light Peak has the potential for very small shops (1-4 workstations) to buy/use high-speed storage boxes. However, they will still run into walls when it comes time to push those files between the boxes.

Unmanaged, 24-port, 1GbE switches are in the sub-$400 range. It isn't that expensive to deploy for a small group. Even a step up to one with link aggregation (to get closer to 2GbE bandwidth) is still only in the $1,300 range, which works out to around $100 per port pair.
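
For a rough sanity check of those per-port numbers, here's a quick sketch (the only inputs are the ballpark switch prices above; it just divides price by the number of port pairs):

```python
# Rough cost-per-port-pair math for small-shop GbE switches.
# Switch prices are the ballpark figures from the paragraph above.

def cost_per_port_pair(switch_price_usd, ports):
    """Price divided by the number of two-port pairs on the switch."""
    return switch_price_usd / (ports / 2)

print(cost_per_port_pair(400, 24))    # unmanaged 24-port GbE       -> ~$33/pair
print(cost_per_port_pair(1300, 24))   # 24-port w/ link aggregation -> ~$108/pair
```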

10GbE is still high, but the costs for that will drop over time also. 40GbE (and 100GbE) just passed this month as standards, so 10GbE is no longer top of the heap, and the costs will start to drop faster now. Once it isn't the "fastest" and is mature (the 10GbE standard passed in 2002, with mainstream gear from the 2005-6 era), prices will drop back into the reasonable range.

InfiniBand and 10GbE will likely kill off Fibre Channel. If anything, Apple needs a solution for one of those two, since they historically offered Fibre Channel on the high end. (It would also help if they figured out that the iSCSI protocol existed.)

All these expectations that Light Peak is going to cure every problem are off base.
 
I think we'll still be waiting come tomorrow. I think there is a high probability that something will be updated next month, around July 13th. Whether it's the Mac Pro that gets the next update is anyone's guess. However, it seems to me there is less evidence/rumor that the Air will be updated soon, other than the fact that it has been almost 13 months. I was really hoping for June 29th, but now I think it will be another "just wait" tomorrow.
 
Light Peak is holding things up, I think.
Actually, it's not, as LP parts aren't even due to ship until Q4 2010, which means that even if they're on time, it won't appear in systems until 2011.

The new CPUs are meant to work in existing LGA1366 boards with a microcode update. It's cheaper for Intel and vendors to take this approach (the chipset and resulting boards can run for two years, with only the CPU changing to improve system performance). In most cases, new tech, such as LP, USB 3.0, FW1600, SATA 6.0 Gb/s,... is expected to be added to the non-compliant boards via add-on cards.

Newer board designs will incorporate them when determined financially feasible (i.e. is it a budget board that relies solely on a bare-minimum part count = fewer features, or a high-end unit where additional semis are added for features prior to their inclusion by Intel in the chipset?). A good example is USB 3.0 not being added to Intel's chipsets until 2012 (formally announced not too long ago).

I don't know if it will really catch on and be able to replace everything that it potentially could, but I expect that that will be the intention.
It's definitely intended to be capable of replacing a fair few interfaces, but we'll have to see what "shakes out" (most of it will come down to cost).

The cost of high end network infrastructures is a large percentage of the overall cost of the potential customer base of the Pro machines and typically limits the usefulness and scalability of the product.
I'm not so sure about this, as I have the strong impression most are owned/operated by small shops (i.e. graphic design) and independents more than larger entities.

In the former group, they're less likely to be on such a high-speed network, and run more as independent (stand-alone) systems. The network might be there for a backup solution, if there's not a dedicated means per system. But comparing prices for small-scale solutions, the stand-alone approach is going to be cheaper.

For the latter group, yes, there are significant investments in networking equipment for the desired requirement. But this is shrinking for workstations from what I've seen. In these cases, it's more common to use clusters and then attach the workstation via say FC or Infiniband.

LP would be desirable here though, and is a big selling point IMO (makes small clusters a practical solution for SMB or independents with a high computing requirement).

It is expensive to even support Gigabit speeds for any reasonable number of machines and I can picture Intel really looking hard at getting into the networking game by bypassing the current status quo of Ethernet.
I agree. When I saw the video of LP's first demonstration, that's one of the things that immediately crossed my mind. This would give Intel a serious "leg up" over the competition.

Assuming LP hits their target pricing, it will be cheaper too per system (i.e. card and cable ~$100 per, not including the switch; not sure what the switch costs will be yet). But I suspect they're well aware that it needs to come in under 10G Ethernet to make it attractive. Assuming this happens, the enterprise community will bite.
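
Back-of-the-envelope, per attached system (the ~$100 LP figure is from above; the 10GbE numbers are placeholder assumptions for comparison only, not quotes):

```python
# Hypothetical per-node attach cost: Light Peak vs. 10G Ethernet (circa 2010).
# Only the $100/node LP figure comes from the post above; the 10GbE NIC and
# per-port switch costs are placeholder assumptions used for illustration.

LP_PER_NODE = 100    # LP card + cable, switch not included (from the post)
TENGBE_NIC  = 500    # assumed 10GbE adapter price
TENGBE_PORT = 400    # assumed per-port share of a 10GbE switch

for nodes in (2, 4, 8):
    lp    = nodes * LP_PER_NODE
    ten_g = nodes * (TENGBE_NIC + TENGBE_PORT)
    print(f"{nodes} nodes: LP ~${lp} vs. 10GbE ~${ten_g}")
```

If the real-world numbers land anywhere near that, the attach-cost gap is what would make small shops bite.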

...(duplex fiber? Seriously?)...
LP is actually Duplex as well (Up and Down are each 10Gb/s). It's just a lot thinner = smaller cable.

And as for need, of course they need it; it's just too expensive for consumers right now (artificially so, if you ask me). You want to switch back to 100 megabit Ethernet? How about 10 megabit? You can copy the same stuff, it just takes longer, right? You might argue that the difference between a second and ten is not very great, but given the choice I'm sure most will go with what is faster, and that's what LP is promising. Remember it's starting at 10 Gb/s, and it will scale from there. Moving around terabytes of data, especially locally, is becoming really common even for 'normal' people. Using cheap fiber cables (yes, there is such a thing!) and low-cost, high-volume components, LP could potentially change how data is moved.
It's not necessary for consumers to actually need 10G yet, but that doesn't mean they're not interested either.

For consumers, the data rate is usually governed by their ISP throughput anyway (i.e. the ISP signal distributed via a router to the other systems in the household). That's not to say they don't transfer large files between systems, but it's not critical that it be done within a fixed period (fast enough to move, say, 1TB within a set window, as you might need in an enterprise environment for nightly runs such as backups).
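
To put that speed argument into rough numbers for bigger transfers, here's a simple sketch that ignores protocol overhead and assumes the storage on both ends can keep up:

```python
# Rough time to move data at the link speeds being discussed, ignoring
# protocol overhead and assuming the disks on both ends can keep up.

LINKS_MBIT = {
    "10 Mb Ethernet":       10,
    "100 Mb Ethernet":      100,
    "1 GbE":                1000,
    "Light Peak (10 Gb/s)": 10000,
}

def hours_to_move(gigabytes, link_mbit):
    bits = gigabytes * 8 * 1000**3        # decimal GB -> bits
    return bits / (link_mbit * 1e6) / 3600

for name, speed in LINKS_MBIT.items():
    print(f"{name:22s} 1 TB in {hours_to_move(1000, speed):7.2f} h")
```

At 1GbE that's a bit over two hours for a terabyte; at 10Gb/s it's around 13 minutes, which is the difference between fitting in a nightly window and not once data sets get big.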

Also, you may recall that LP was developed in partnership with Apple (and may have actually been a 100% Apple project.) And it was demoed on Apple hardware by Intel, so I'm willing to bet we will see it there first. And as I said before probably on the next generation Pro's.
The hardware used in the demo wasn't Apple's, but appears to be Intel's own products and commodity components (i.e. PSU, CPU cooler with an LED fan,...). Apple's boards use proprietary connectors, even for the PSU (think of how HDDs and graphics cards get power on an Apple board).

Apple may have played a part (and I think they did, but their contribution is with software, which is why the system was running OS X), but Intel is the driving force behind LP, in conjunction with other partners. Apple would benefit from this type of arrangement, as they get OS X support developed in time, with most of the bugs worked out at the time of release.

And how much do you think LP will be to start off with? I'm not so sure Apple would be keen to include it; it could add $500+ to the machine cost - does that seem a reasonable number?
It won't be that expensive, as the parts are only expected to be ~$50 USD per LP connection. I'm not sure of switch costs yet, as fewer units sold means a higher R&D figure per unit sold in order to recoup those costs.

But it would be attractive for Apple as users would be better able to set up small clusters for render farms. As it's a cheaper way to go, that means more independents and small shops can implement such a solution. That means more system sales for Apple (rather than just one workstation per user). Those additional systems may be XServes (assuming they're willing to have a rack rather than just pedestal systems). Either way, more systems sold would increase Apple's sales figures.

Of course I don't want to go backwards - but I disagree that even "normal" people have a need for 10GbE. Will "normal" people have a RAID setup that is fast enough to support that? No. All they have are external USB or internal HDDs that can't go any faster than 1GbE at this point in time.
Good point, as that much throughput means there must be another part of the system that can push data at such a rate. Ultimately, there's additional costs over the network, and it's all expensive. This means it's out of reach for the average consumer at this time.

Consumer products are all made with a single driving compromise; low cost.
 
I think we'll still be waiting come tomorrow. I think there is a high probability that something will be updated next month, around July 13th. Whether it's the Mac Pro that gets the next update is anyone's guess. However, it seems to me there is less evidence/rumor that the Air will be updated soon, other than the fact that it has been almost 13 months. I was really hoping for June 29th, but now I think it will be another "just wait" tomorrow.

At least "next month" is closer than before. ;)
 
Sorry, but 24 hours will be coming and going and still no....

2010 mac pro :(

Best bet possibly July-September.


That's the position I'm in. If the Mac Pro isn't updated 6/29 I'm buying a used Intel-based tower and holding off a few more years before getting a new one.
 
Also, you may recall that LP was developed in partnership with Apple (and may have actually been a 100% Apple project.) And it was demoed on Apple hardware by Intel, so I'm willing to bet we will see it there first. And as I said before probably on the next generation Pro's.

Actually... it wasn't Apple hardware at all. It was Apple software, however, aka a hackintosh.
 
G5 PowerPC-based Mac towers were produced from 2003 to 2005, so the last model is five years old now. Early G5 Power Mac owners should have replaced them in 2008 if they wanted to stay current! If they're bleating about the lack of a hex-core now, why didn't they jump at the sight of the 2009 model with its 16 virtual cores?

Oh, I guarantee you, I was salivating over the idea of that computer. Heck, I still am! I'd buy one in a heartbeat, except, well, I'd be paying top dollar for a 15-month-old product. Why didn't I buy one? I had the money. My dual 2GHz G5 has been running pretty strong on 2 GB of RAM and 1.5 TB of hard drive space. Even Snow Leopard, which effectively made the machine obsolete, didn't really kill it. Aperture still runs, though loading in 600+ 21MP RAW files basically takes all night, and I can edit them faster than the previews are updated.

It's been six years since I got the computer and it's only now not really current anymore. (My PC using friends tend to start feeling their computers' age after three years, and have literally dumped them in the trash after 6.) And that's the thing. I buy the Mac Pro because it LASTS. I get the best bang out of my buck with it. Buying a 15 month old computer, especially for the same price as back then, is basically shooting myself in the foot.

As much as I know that a longer wait more or less correlates with better tech in the box, I am starting to feel the need to upgrade. I can build a hackintosh NOW. And I think I'm not the only one in this boat. We'd buy, but the product just isn't there. Steve Jobs probably does have something big coming up, but I can only wait so long before it's time to give up.
 
Still don't think...

Nano,

I still don't think, based on everything you told me, that the Mac Pro is going to make use of Light Peak... the iMac and the laptops will get it before the big behemoth will. By then the Xserve may very well be EOL.

I simply don't believe the pro market is Apple's bread and butter anymore, as out of the ashes of APPLE COMPUTER, INC. (1984-2006) arose a brand new, prosumer (consumer) empire catering to the laymen and not the professionals out there... It's happening... and it's happening right now!

Personally, I firmly believe that the MacBook/MacBook Pro and all mobile computing devices are the future... while the desktop is dying. By 2020, I think even the strongest laptop will play the toughest high-res games better than a desktop could... Am I right?

I walked into Best Buy and saw many, many laptops and very, very few desktops - signs that the desktop market is FALLING.



 
2010 mac pro :(

Best bet possibly July-September.

Nope. If the great and powerful Jobs doesn't deliver on 6/29 I'm done waiting. My first computer was an Apple IIe and I'm sticking with Apple, just going with a used one this time.
 
Nano,

I still don't think, based on everything you told me, that the Mac Pro is going to make use of Light Peak... the iMac and the laptops will get it before the big behemoth will. By then the Xserve may very well be EOL.

I simply don't believe the pro market is Apple's bread and butter anymore, as out of the ashes of APPLE COMPUTER, INC. (1984-2006) arose a brand new, prosumer (consumer) empire catering to the laymen and not the professionals out there... It's happening... and it's happening right now!

Personally, I firmly believe that the MacBook/MacBook Pro and all mobile computing devices are the future... while the desktop is dying. By 2020, I think even the strongest laptop will play the toughest high-res games better than a desktop could... Am I right?

I walked into Best Buy and saw many, many laptops and very, very few desktops - signs that the desktop market is FALLING.
There are reasons it could benefit the MP (workstation use, which a laptop isn't capable of yet).

But if they aren't planning on sticking with the MP once the Xeon line moves towards more than 8 cores (i.e. clusters for cloud computing are a largely anticipated use/aim of the future Xeon parts), and go with Enthusiast Desktop parts instead (i.e. the i7-980X type of parts, as they'll have P/Ns with 8 cores per die at that point), then it won't make much sense for them to add it to one or two systems.

Remember, most software hasn't caught up with current hardware. Few applications can actually manage to use all the cores available. More should be changed over, assuming it will actually benefit them, but there's reasons it's slow. Development time required (i.e. may require new compilers and/or major sections of code to be re-written), budgets, and backwards compatibility certainly come to mind.

It will be able to benefit the iMac line, and the inclusion of LP could allow them to consider it as a replacement for the MP at the juncture described above (if Apple does decide to EOL the MP).
 
Eh? There is little to indicate that Light Peak doesn't suffer from the same fundamental networking flaw that USB has: namely, that it is a hub-based model oriented toward hooking peripherals to a computer. There is one central device, and all communication is managed through it. The demos have been of pushing video from one place to another (e.g., multiple digital video streams), which in most devices goes in only one direction (from player to monitor). Don't get lost in the fact that they are pushing the demo video between two computers; one of them is just a hub (as in this video http://www.youtube.com/watch?v=nfGevFIVKw4&feature=channel).
One of the computers in these demos is always headless, and the images are coming out of the other computer, which is hooked to the displays.

There is no evidence that this is the case; it is claimed to be fully bidirectional and able to encapsulate the FireWire protocol specifically, which would lead one to believe this is not an issue.


Intel isn't a fan of Firewire anyway so I'd be surprised if they bothered to deal with it.

They specifically refer to it.

You can't do NAS and SAN with a hub-spoke network constraint.

Of course you can; you mean to say a master/slave arrangement. USB's limitation is that there is only ever one master and everything else is a dumb slave to the master controller. Again, this is unsubstantiated fear on your part, and highly unlikely to be how it is actually implemented.

You can do direct attached storage (DAS). In the DAS market, Fiber Channel, 10GB Ethernet , and Infiniband don't make economic sense. They cost far too much to hook to just one machine in most situations.

Which is exactly why I think Apple and Intel want to solve this issue with low cost LP.

If there is support for multi-host networks, then Intel is keeping awfully quiet about it. Nor is Light Peak going to replace everything. There is going to be a subset of protocols that it targets for encoding/decoding, so that they can be transported over the new, incompatible protocol and then decoded on the other side. There is lots of hand waving that it will do everything; there is no credible explanation for that.

They clearly intend for this to be a low-level transport encapsulating other protocols. Many things that once were done in hardware over proprietary buses can now be done in software over standard transports: iSCSI, Fibre Channel over Ethernet (FCoE), and so forth. Where iSCSI was once done in hardware using expensive initiators, we now have open-source software stacks that can outperform most of the dedicated hardware that was formerly used.

So, Cisco is probably not all that scared. I suppose there is a lost opportunity for folks with NAS boxes with low host connection counts (1-2 computers), who may now switch to DAS, but that option was already there anyway (with eSATA, SAS, etc.).

If low-cost, high-speed switches come out that allow people to create networks, it will fundamentally impact Cisco's core business. They are in the best position to capitalize on this, so I don't think they are scared; I'm just saying that people are willing to compromise a lot of functionality to get 90% of what they need at 15% of what it would normally cost. Building out a small 10G copper network right now is really, really expensive. There are a lot of professionals who would do it right now if they could afford it, even if it had some limitations. If TCP/IP can be encapsulated (and I can almost guarantee it can, as it is already done over FireWire) then all bets are off.

Light Peak has the potential for very small shops (1-4 workstations) to buy/use high-speed storage boxes. However, they will still run into walls when it comes time to push those files between the boxes.

That's exactly where it will start, and I think that describes the target market for Pros, right?

Unmanaged, 24-port, 1GbE switches are in the sub-$400 range. It isn't that expensive to deploy for a small group. Even a step up to one with link aggregation (to get closer to 2GbE bandwidth) is still only in the $1,300 range, which works out to around $100 per port pair.

Yeah, I wish aggregation worked that well. But regardless, that's still a lot of money, and it is much slower than 10G, which is what we are talking about. As I said, I use 10G here at my office (I meant my equipment closet, not my house ;-) ) and it makes a big difference. But it is by no means plug and play. The Cisco gear I use needs lots of configuration, interconnectivity between hardware is finicky, and there is lots of TCP stack tuning. It's not at all prosumer-friendly. And it's really, really expensive.

10GbE is still high, but the costs for that will drop over time also. 40GbE (and 100GbE) just passed this month as standards, so 10GbE is no longer top of the heap, and the costs will start to drop faster now. Once it isn't the "fastest" and is mature (the 10GbE standard passed in 2002, with mainstream gear from the 2005-6 era), prices will drop back into the reasonable range.

Actually this is what bugs me. The pricing has not dropped in any significant way. 10G has been around for a while and the companies have a lot of reasons to keep the pricing high. Smaller companies can't undercut pricing since there are a ton of patents around getting copper to do the unnatural things required to move that much data. And Ethernet itself is basically a stupid way to move this kind of data since it is inherently lossy (packets are droppable except with 10GbE!) The barrier to entry is high, the costs are high, the demand isn't there because it's not consumer friendly or priced right. Sound like an opportunity?

Infiniband and 10GbE will likely kill off Fibre Channel. If anything Apple needs a solution for one of those two since they historically offered Fibre Channel on the high end. (would help also if they figured out that the iSCSI protocol existed. )

Fibre Channel is actually two things, a protocol and a transport layer. They are easily separated, and if you can guarantee reliability (no dropping of packets) you can transport it happily over other layers like lossless 10GbE. Ethernet is an ancient transport designed for different times. LP has the opportunity to not replicate some of the design compromises they had to make.

All these expectations that Light Peak is going to cure every problem are off base.

A man can dream can't he? :D

Assuming LP hits their target pricing, it will be cheaper too per system (i.e. card and cable ~$100 per, not including the switch; not sure what the switch costs will be yet). But I suspect they're well aware that it needs to come in under 10G Ethernet to make it attractive. Assuming this happens, the enterprise community will bite.

Yeah, the thing about Ethernet switches being so expensive is that the protocol itself is so freaking patent-encumbered and complex. Speed solves lots of issues, and a simple addressing scheme, commodity-priced interfaces and a naturally robust, low-noise, lossless and high-speed transport all add up to cheap switches.

LP is actually Duplex as well (Up and Down are each 10Gb/s). It's just a lot thinner = smaller cable.

Yeah, actually I missed the fact they are using two fibers, but at least it's in a single cable :)

The hardware used in the demo wasn't Apple's, but appears to be Intel's own products and commodity components (i.e. PSU, CPU cooler with an LED fan,...). Apple's boards use proprietary connectors, even for the PSU (think of how HDDs and graphics cards get power on an Apple board).

Actually... it wasn't Apple hardware at all. It was Apple software, however, aka a hackintosh.

It was reported to be an Apple prototype board, so that's what I was saying. It certainly was running OS X, and Intel is often contracted to do hardware designs for Apple, so I don't know. I think it's not really that interesting, as there is very little unique about Apple hardware since they use standard chipsets and so forth. My main point was to refute an earlier post that Apple would somehow be at least a year behind in deploying LP; I contend they will be first.

Nano,

I still don't think, based on everything you told me, that the Mac Pro is going to make use of Light Peak... the iMac and the laptops will get it before the big behemoth will. By then the Xserve may very well be EOL.

I simply don't believe the pro market is Apple's bread and butter anymore, as out of the ashes of APPLE COMPUTER, INC. (1984-2006) arose a brand new, prosumer (consumer) empire catering to the laymen and not the professionals out there... It's happening... and it's happening right now!

Personally, I firmly believe that the MacBook/MacBook Pro and all mobile computing devices are the future... while the desktop is dying. By 2020, I think even the strongest laptop will play the toughest high-res games better than a desktop could... Am I right?

I walked into Best Buy and saw many, many laptops and very, very few desktops - signs that the desktop market is FALLING.

Actually, I was the one predicting LP being the issue. And saying that desktops are dying based on what is stocked at Best Buy is like saying that whales are extinct on the basis of what you see in your bathtub. Okay, that was a little harsh :eek: Basically consumers think that they want laptops. They of course usually leave them on their desks and never actually run them off of the batteries, but that's just consumers for you. They dream of blogging at the coffee shop, the envy of everyone around. Then they try it once, realize they look like a dork, can't see the screen, can't actually get anything done or drink their coffee, and never move the stupid thing again. Okay, rant over.

Desktops, and more importantly "workstations", are bought by the corporate world and delivered from Dell and Apple by UPS. We don't go down to Best Buy...
 
If low-cost, high-speed switches come out that allow people to create networks, it will fundamentally impact Cisco's core business. They are in the best position to capitalize on this, so I don't think they are scared; I'm just saying that people are willing to compromise a lot of functionality to get 90% of what they need at 15% of what it would normally cost. Building out a small 10G copper network right now is really, really expensive. There are a lot of professionals who would do it right now if they could afford it, even if it had some limitations. If TCP/IP can be encapsulated (and I can almost guarantee it can, as it is already done over FireWire) then all bets are off.
I agree. Cisco's business depends primarily on the enterprise market, which is all that can generally afford 10G Ethernet gear. It's still pricey, despite the length of time it's been available.

That's exactly where it will start, and I think that describes the target market for Pros, right?
This market segment (small business/independents) is where a substantial amount of potential exists IMO; from inexpensive connections to high speed storage (DAS) to connecting nodes in a small cluster (makes it financially viable, as the networking gear can cost more than the systems used).


But it is by no means plug and play. The Cisco gear I use needs lots of configuration and interconnectivity between hardware is finicky and there is lots of TCP stack tuning. It's not at all ProSumer friendly. And it's really really expensive.
Cisco's interface is ancient (throwback to the '80's), and definitely not easy to deal with for most users.

Actually this is what bugs me. The pricing has not dropped in any significant way. 10G has been around for a while and the companies have a lot of reasons to keep the pricing high. Smaller companies can't undercut pricing since there are a ton of patents around getting copper to do the unnatural things required to move that much data. And Ethernet itself is basically a stupid way to move this kind of data since it is inherently lossy (packets are droppable except with 10GbE!) The barrier to entry is high, the costs are high, the demand isn't there because it's not consumer friendly or priced right. Sound like an opportunity?
Technology wise, we've reached the limit of copper. Optical is the only other way to go faster (and reliability will also improve, so long as the transceivers are designed properly). If the conversion (optical to electrical) is jittery for example, it won't work worth a crap (reliability suffers significantly, no matter the protocol used).

Yeah, the thing about Ethernet switches being so expensive is that the protocol itself is so freaking patent-encumbered and complex. Speed solves lots of issues, and a simple addressing scheme, commodity-priced interfaces and a naturally robust, low-noise, lossless and high-speed transport all add up to cheap switches.
If LP performs as advertised, Intel will have a "gold mine" on their hands. :D And they know it. :p

Yeah, actually I missed the fact they are using two fibers, but at least it's in a single cable :)
It's also more flexible, allowing for tighter bends without signal loss. :) Despite its small diameter, it's fairly robust as well, as I understand it (I've not had my hands on one yet though).

It was reported to be an Apple prototype board, so that's what I was saying. It certainly was running OS X, and Intel is often contracted to do hardware designs for Apple, so I don't know. I think it's not really that interesting, as there is very little unique about Apple hardware since they use standard chipsets and so forth. My main point was to refute an earlier post that Apple would somehow be at least a year behind in deploying LP; I contend they will be first.
The Evaluation Board (LP)?
Or the main board?

I ask, as the Evaluation board actually had a small bit of breadboard attached to it (small part count; looked like it might be a filter).

Basically consumers think that they want laptops. They of course usually leave them on their desks and never actually run them off of the batteries, but that's just consumers for you.
I've had the same rant, so you're not alone. :p
 
Nope. If the great and powerful Jobs doesn't deliver on 6/29 I'm done waiting. My first computer was an Apple IIe and I'm sticking with Apple, just going with a used one this time.

Another Apple IIe? :p That was a wicked machine in its time... I still have mine in the garage... if they don't update the Mac Pro, I'm gonna go fire it up and do some rendering on it! :D

Just wait... the new Mac Pro is currently passing through Steve Jobs' colon...

I hope that gave you all some terrible terrible imagery.

:eek: I'm betting on a case redesign then... much more streamlined! :p :D
 
There is no evidence that this is the case, it is claimed to be fully bidirectional and able to encapsulate the FireWire protocol specifically which would lead one to believe this is not an issue.

Bidirectional doesn't say anything about topology. It just means traffic flows in both directions along the arcs of whatever the topology is. For instance, USB 3.0 is full duplex over two signaling pairs..... just like LP. [In fact, it's unclear just how much of LP is recycled USB 3.0 optical. Not 100% the same, but unclear just how different.]

There is also nothing on Intel's site about encapsulating FireWire that I could find. Allusions to a "one ring to rule them all" capability, but that is just hand waving. Most of the talk about it sweeping up FireWire is from the commentary, not the publicly available technical docs. Conceptually, it could if they added it to the protocol and incorporated the coders/decoders into the hardware. However, that just drives up costs. The notion that it "does everything for free" smells somewhat like selling "free lunches". I think Light Peak will do a strict subset of protocols. They just don't want to say what the subset is right now, because folks will run out and fill in the blanks with all kinds of stuff. That adds a "magical" quality to Light Peak (as opposed to being yet another new protocol which isn't compatible at the link level with anything right now).


They specifically refer to it.

Where? I'm talking about official Intel docs/statements, not the interpretive articles written about it.



Of course you can; you mean to say a master/slave arrangement. USB's limitation is that there is only ever one master and everything else is a dumb slave to the master controller. Again, this is unsubstantiated fear on your part, and highly unlikely to be how it is actually implemented.

OK, we're using different terms. USB and SATA are cheaper than FireWire and SAS in part because they dump the concept of each device on the network being an equal peer. Since LP is an attempt to do "more" for less money and on smaller devices, it would not be surprising if it dumps the elements that give Ethernet/FireWire topological flexibility. Historically, this is what has been done, which doesn't make it highly unlikely. Putting a fully capable master controller into every single network node means a network that has a higher cost.


They clearly intend for this to be a low-level transport encapsulating other protocols. Many things that once were done in hardware over proprietary buses can now be done in software over standard transports: iSCSI, Fibre Channel over Ethernet (FCoE), and so forth.

Not as efficiently. What 10GbE allows you to do is blow away lots of overhead to get back to speeds similar to what the older protocols were delivering. The bigger bang for the buck is that you now only have to buy one set of networking gear.


Where iSCSI was once done in hardware using expensive initiators, we now have open-source software stacks that can outperform most of the dedicated hardware that was formerly used.

Again... what is the CPU utilization with this 100% software stack ?


Building out a small 10G copper network right now is really, really expensive.

So go fiber. Most of the same optical tech for LP could be applied to 10G/40G/100G Ethernet too, without any of the Light Peak overhead. (For example, 10GBASE-SR uses VCSEL lasers just like Light Peak, on slightly narrower fiber.) Going over 5Gb/s on copper is generally going to turn most devices from FCC Class B (OK for home use) into Class A (not generally OK for home use... noisy, clanky radios in part).

Light Peak is useful because many of the legacy (as of now) connection protocols are stuck at a speed limit. Therefore, you can use increases in speed to do cross-box transport without introducing noticeable latency, if you can multiplex/demultiplex with very low overhead.
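
To make that multiplex/demultiplex point concrete, the trick is just tagging each frame with a protocol ID on one side and splitting the stream back out on the other. A toy sketch follows; the field sizes and protocol IDs are invented for illustration and reflect nothing about Intel's actual framing:

```python
import struct

# Toy protocol mux/demux over one fast link. The 1-byte protocol ID and
# 4-byte length prefix are made up for illustration only; this is not
# Light Peak's actual wire format.

PROTO_IDS = {"pcie": 0x01, "usb": 0x02, "displayport": 0x03}   # example subset

def mux(frames):
    """frames: iterable of (protocol_name, payload_bytes) -> one byte stream."""
    out = bytearray()
    for proto, payload in frames:
        out += struct.pack(">BI", PROTO_IDS[proto], len(payload)) + payload
    return bytes(out)

def demux(stream):
    """Split a muxed byte stream back into (protocol_name, payload) frames."""
    by_id = {v: k for k, v in PROTO_IDS.items()}
    frames, offset = [], 0
    while offset < len(stream):
        proto_id, length = struct.unpack_from(">BI", stream, offset)
        offset += struct.calcsize(">BI")
        frames.append((by_id[proto_id], stream[offset:offset + length]))
        offset += length
    return frames

wire = mux([("usb", b"keyboard report"), ("displayport", b"one video slice")])
assert demux(wire) == [("usb", b"keyboard report"), ("displayport", b"one video slice")]
```

How much of that encode/decode ends up in silicon, and for which subset of protocols, is exactly where the cost argument in this thread lives.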


Actually this is what bugs me. The pricing has not dropped in any significant way.

There has been no faster standard, so no pressure. 10G is too slow for you? Go buy 40G. Oh, there isn't any 40G. The cycles are about the same as for 10 -> 100 -> 1000 -> 10000 Mb/s: 5-7 years to full maturation.


And Ethernet itself is basically a stupid way to move this kind of data since it is inherently lossy (packets are droppable except with 10GbE!)

Ethernet & TCP/IP are designed as long-distance transmission mechanisms. Of course they have to deal with loss.


The barrier to entry is high, the costs are high, the demand isn't there because it's not consumer friendly or priced right. Sound like an opportunity?

The barrier to entry is going down. The costs are coming down; just look at what has happened to the 1G connector/switch market over the last 4 years. Who can't ship a 1Gb switch right now if they want to? You can get a Netgear home router with 1Gb on it for $99. When 1G was top of the heap? No way.



Unmanaged switches are plug-and-play. Much of the tweaking and fussing with switches is because you want to do something custom. That won't change with Light Peak if you want to do custom and fault-tolerant topologies. In fact, it won't be very surprising if Light Peak is simpler because you only get a simple single-root-tree, daisy-chain network. There won't be any switches, just repeater nodes (hubs) and some lightweight routing.


Ethernet is an ancient transport designed for different times.

Ethernet means you don't have to have a switch. There haven't been any switches in the Light Peak demos either.

It was reported to be an Apple prototype board,

There were no Apple markings on the board. People assumed it was an Apple board primarily because it was running Mac OS X. I'd bet Intel can legally run an R&D hackintosh if they want to through an R&D agreement between the companies. That is the easiest way to allow Intel to boot up the OS on early CPU prototypes if they want to.
 
Light Peak has the potential for very small shops (1-4 workstations) to buy/use high-speed storage boxes. However, they will still run into walls when it comes time to push those files between the boxes.
Not to mention that NASes that fit a small shop's budget can't even saturate 1G Ethernet in RAID5. With link aggregation, 1G will be fast enough for many years still...
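
Rough numbers on that (the ~70 MB/s RAID5 figure below is just an assumed ballpark for a budget NAS, not a measurement):

```python
# Can a budget NAS even fill a GbE link? The RAID5 throughput below is an
# assumed ballpark for a 2010-era consumer NAS, used for illustration only.

GBE_PAYLOAD_MB_S = 1000 / 8 * 0.95   # ~119 MB/s after rough framing overhead
NAS_RAID5_MB_S   = 70                # assumed sustained RAID5 read, budget NAS

print(f"GbE link utilisation: {NAS_RAID5_MB_S / GBE_PAYLOAD_MB_S:.0%}")   # ~59%
```

If the box can't fill one link, a second aggregated link buys nothing, which is the point about 1G being enough for a while.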
 
In most cases, new tech, such as ...FW1600...
Has any chipset supporting FW1600 even been announced or planned?
Consumer products are all made with a single driving compromise; low cost.
This is why USB 3 will be the most cost-effective (speed/money). The only thing Apple can do to push LP to the consumer market is to not put USB 3 in new Macs. That would be a dead end, because the price of LP hubs & peripherals would scare consumers away from Macs.

For "pro" Macs there has to be a reason why Apple won't give us eSATA. It might be that they don't want pros to buy a lot of eSATA hardware when Apple introduces the next fast connection. But will that be USB 3, FW1600 or LP?
At least when most new peripherals are USB 3 next year, they'll have to add it to Macs or start losing customers. Will there be only USB 3, or USB 3 + FW1600, or USB 3 + LP?
Or will they neglect USB 3 like Blu-ray?
The majority of Mac users can't buy HD movies from iTunes, or don't want to because of the better picture quality of BD, but Apple doesn't care; they just sell more iPhones & iPads. Everyone else will use USB 3, and LP is too expensive for consumers. Apple will just sell more iPhones & iPads?

Hopefully, AFAIK, crossgrading CS is still free...
 