Most of those are parts suppliers, not peripheral vendors; mostly optical transceiver vendors, in fact. Frankly, the laser needed for 10G Ethernet or InfiniBand isn't all that different from the one Light Peak needs: electrical 1's and 0's come in and light goes out, or light comes in and 1's and 0's go out. That is not a protocol controller. Where are the multiple implementors of the protocol controller? (This was a dust-up with USB 3.0 also, where Intel wanted to ship before anyone else had a chance to implement the standard.)
I know most are suppliers on the component end, but the one that got my attention is Hon Hai Precision, which is a major end-product manufacturer.

Specifically, the volume of OEM/ODM work they do for other major vendors (Apple and HP, for example). The products they sell under their own Foxconn label will only get Intel so far. I see the ability to use the OEM/ODM supplier as a means of persuading other vendors to take the plunge, whether it's installed on the main board products they sell to these vendors, or offered as an add-on card they'd supply as an optional product for those systems.

Likewise vendors who are actually going to put parts in the devices that end users will buy. Unless end users can buy complete devices to plug in, it is not going anywhere. If end users are going to have to buy dongles, that's going to limit adoption. I know Apple likes selling dongles, but Mini DisplayPort isn't exactly taking the market by storm.
I figured this aspect was covered in a previous post. At any rate, there are potential problems (which I do think will surface, though I'm not sure to what degree yet, as there are still voids in the available information). Namely, the supporting semiconductors necessary to make anything work with it becoming available, save connecting two LP-equipped systems together, as was done in the demo.

USB 3.0 does have the advantage here, as it's already backwards compatible with devices users already own. For the newer standard, the necessary parts already exist, so it's not a situation where Part A is there but Part B is vapor, which is useless for most, if not all, practical purposes.

Flash is a standard by that criterion. A real open standard is one where there are multiple implementors. As long as there is just one implementor of the controller, it isn't very standard. One of the short-term limitations on USB 3.0 was that only NEC managed to get something working at first. There are other implementors coming online now.
There's definitely a difference between a standard that began as an open standard (or became one shortly after it was created), and one that has its IP held by a company or group of companies.

LP's not based on an open standard, so the companies involved would have to agree to open it before any other parts suppliers could get involved without a licensing fee. Unlike with NEC, where the standard was open and they were simply the first out with a working part. Others are following, but no license fees are involved.

Sony isn't the only implementor of BR. Nor is there just one controller for the whole market.
No, but they were pushing it hard, and are what most consumers associate with it (I was trying to keep the example simple, as the posts are getting long). Few may actually realize that others such as Panasonic were ever involved with it from the beginning.

Second, Apple's decoupling of the PCI planes from the CPU package sockets and the high-speed PCI controller means they can change the PCB with zero impact on the sockets. It will still save money to run for two years with minor tweaks. However, it's money, not socket implementation, that is driving that.
Money indeed. It was a good move, as there are fewer separate assemblies between the SP and DP systems. It shortens the parts bin requirements for easier production, and more importantly, saves costs.

Eh? Tick is the "shrink with same socket" phase.
Typing too fast. Not much different than typing MB/s instead of Mb/s mistake.

To me it makes more sense to build into the motherboard a tech that has the more widespread set of devices. The number of USB 3.0 devices is going to greatly outnumber the number of LP ones. The exception would be where two boxes need to snap together (laptop with docking station). Unless there is going to be some Lego-block change to the MP, I just don't see it as pressing.
Other standards are entrenched, and won't be easily moved. I actually see LP making the largest impact on laptop systems as a means of simplification combined with a cost reduction.

For desktop systems, not as much. There would need to be some other reason, namely inexpensive speed, to get them to buy the necessary pieces to make it all work (assuming those pieces actually show up; this late in the game, none of the peripheral components have been announced that I've seen).

Workstations, I hope, will benefit by being able to exceed 1GbE, for example, for at most similar funds. It would assist in the creation of small, lower-cost clusters IMO if this proves itself out (i.e. it makes clusters more available to independents or SMB markets that could utilize that much performance).
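To put rough numbers on the "exceed 1GbE" point, here's a quick back-of-the-envelope sketch. It assumes the 10Gb/s line rate Intel has quoted for LP and ignores protocol overhead entirely, so treat it as a best case rather than a benchmark:

```python
# Back-of-the-envelope transfer times for a 50 GB dataset at different line
# rates. Assumes the 10 Gb/s figure quoted for Light Peak and ignores all
# protocol/encoding overhead, so these are best-case numbers only.

def transfer_minutes(dataset_gb: float, line_rate_gbps: float) -> float:
    """Minutes to move dataset_gb gigabytes at line_rate_gbps gigabits/s."""
    bytes_per_second = line_rate_gbps * 1e9 / 8
    return (dataset_gb * 1e9) / bytes_per_second / 60

for label, rate_gbps in [("1GbE", 1.0), ("10GbE", 10.0), ("LP (quoted 10Gb/s)", 10.0)]:
    print(f"{label:20s} 50 GB in {transfer_minutes(50, rate_gbps):5.1f} min")
```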

I don't buy that. There has to be a protocol. It may be a simple one, just oriented toward transporting data packets from one machine to another with some simple QoS/isochronous abilities, but there has to be something.
I actually agree here, and do suspect it's that simple, as they've been silent on those details.

What I expect is that they want to differentiate this from other standards (which are much more complex).
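For what it's worth, the kind of "thin" framing I have in mind would be little more than a channel ID, an isochronous/QoS flag, and a raw payload. The sketch below is purely hypothetical; Intel hasn't published LP protocol details, so every field name and size here is an assumption for illustration, not anything from a spec:

```python
# Hypothetical "thin" transport frame: channel ID, isochronous flag, payload.
# None of these fields come from a published Light Peak specification.
import struct

HEADER = struct.Struct(">BBH")  # channel (1 byte), flags (1 byte), length (2 bytes)
FLAG_ISOCHRONOUS = 0x01         # e.g. display/audio streams that need steady latency

def pack_frame(channel: int, payload: bytes, isochronous: bool = False) -> bytes:
    flags = FLAG_ISOCHRONOUS if isochronous else 0
    return HEADER.pack(channel, flags, len(payload)) + payload

def unpack_frame(frame: bytes) -> tuple[int, bool, bytes]:
    channel, flags, length = HEADER.unpack_from(frame)
    payload = frame[HEADER.size:HEADER.size + length]
    return channel, bool(flags & FLAG_ISOCHRONOUS), payload

# A display stream and a bulk storage transfer multiplexed over one link:
for frame in (pack_frame(1, b"pixel data", isochronous=True),
              pack_frame(2, b"file data")):
    print(unpack_frame(frame))
```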

I meant Intel makes actual Ethernet NICs ("Ethernet connectors"). I wasn't talking about making RJ-45 jacks or PHY implementations. They should have labs with lasers, fiber cabling, etc. to test those devices, since they're part of their products.
I know that they make NICs, and they'd have labs with the necessary equipment for validation testing.

But that can be bought. I think of it this way: why build a scope, frequency generator, and so on, when companies like Tektronix, Agilent, and LeCroy already produce them?

Lasers, fibre optics, ... are areas their partners specialize in. IIRC, there's a significant amount of work being put into the lasers, namely getting the materials and manufacturing in at a low cost.

So now the "so lightweight you can't notice it" protocol supports bonded ports. I'm not holding my breath.
I previously mentioned that the ability to bond is hoped-for speculation on my part, something that could allow it to be used for more than just a way to simplify laptop connectors.

I think there is ample opportunity for Ethernet and InfiniBand to adopt the laser transceivers and wider fiber cable to lower adapter costs. Optical connectors don't have to be crazy high in price.
Assuming Intel and their partners are willing to license them the technology.

But it is increasingly an old, tired excuse at this point. The Mac Pro in 2006 had 4 cores. Any app that can't go 4-way in some sections by now is a slacker in the Pro space. It has been 4 fraking years. That's going to include a major upgrade cycle for all but the most glacially slow development. Some iMacs can go 4-way. There's a decent chance all iMacs could be 4-way by the end of this Fall (unless Apple sticks a 2-core/IGP part into the lower end).
But they are slow at it, as it's expensive (time consuming, as it's almost certainly a major re-write). Those that haven't done it seem to be waiting for someone else to do most of the work for them (i.e. waiting for tools, OS support, ...). Basically they're just adding new features to the same old, tired base code that's been recycled for years (finding ways of milking the product with as little development time as possible).

It sucks, but I don't see anything motivating most software developers that've been sitting on the fence, to get off and go to work. :(
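To be clear about what "going 4-way" involves in the easy case, here's a toy sketch: an embarrassingly parallel loop spread over four workers. The hard part in real pro apps is that their work usually isn't this cleanly separable, which is exactly why the rewrites take so long (render_tile below is just a hypothetical stand-in):

```python
# Toy sketch of "going 4-way": spread independent work units across 4 cores.
# render_tile is a hypothetical stand-in for real per-tile work (filtering,
# encoding, etc.); real applications rarely split this cleanly.
from multiprocessing import Pool

def render_tile(tile_id: int) -> int:
    return sum(i * i for i in range(200_000 + tile_id))

if __name__ == "__main__":
    tiles = list(range(16))
    with Pool(processes=4) as pool:      # one worker per core on a quad
        results = pool.map(render_tile, tiles)
    print(f"processed {len(results)} tiles on 4 workers")
```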

It is relatively easy to start more than one program at a time. Even more so where there are ones that "do something" and don't require user interaction. Apps that primarily sit and wait for the user to actively do something often don't really need an MP. That isn't the MP's problem, nor should it particularly be a constraint on the MP's design criteria.
It was just to illustrate a point in general. Yes, multiple instances of the same application or simultaneous use of multiple applications can make better use of a system. But that won't change things if an application can't benefit from SMP, which is what the MP and XServe are really meant for (where the performance will really become apparent).
 
Answer = additional part cost and added complexity (further increases cost).
This makes no sense.
Most laptops that are cheaper than the cheapest Mac have eSATA, and adding that one chip to the MP would raise the costs & "complexity" by about 0.1%.
Apple is almost infinite in greed, but I don't think that even they are that picky.

Anyway, when USB 3.0 isn't enough, this will walk all over LP before LP is even on the market:
http://hdbaset.org/
All you need in the computer is one RJ-45, it works with existing Cat5e/6 cabling, and the speeds are enough for homes and 99% of businesses...
If LP ever comes to market it will be as common as FC is now, which means many times more expensive than mass-adopted options.
 
This makes no sense.
Most laptops that are cheaper than the cheapest Mac have eSATA, and adding that one chip to the MP would raise the costs & "complexity" by about 0.1%.
Apple is almost infinite in greed, but I don't think that even they are that picky.

Anyway, when USB 3.0 isn't enough, this will walk all over LP before LP is even on the market:
http://hdbaset.org/
All you need in the computer is one RJ-45, it works with existing Cat5e/6 cabling, and the speeds are enough for homes and 99% of businesses...
If LP ever comes to market it will be as common as FC is now, which means many times more expensive than mass-adopted options.
Apple has a lot in common with budget box systems (desktop, laptop, ...), in that they try to rely on the chipset for connectivity as much as possible, despite what they actually charge for the system.

Where the complexity comes in is the fact that a USB 3.0 chip isn't a direct drop-in replacement (different package = not the same pin-out). That would force a board revision (new PCBs) to make it work. That's really where the cost comes in.

As for HDBaseT, it's aimed specifically at video (not a replacement for Ethernet, though it can run packetized data), and I'm not sure that it will become ubiquitous either. SDI isn't for mainstream use, as stated by Heilage, and has licensing issues involved.
 
All you need in the computer is one RJ-45, it works with existing Cat5e/6 cabling, and the speeds are enough for homes and 99% of businesses...
Hi
Except...
That Apple has chosen to fit a half-cocked Broadcom 5764 Ethernet controller chip in the 27" i5/i7 iMac that doesn't support advanced features such as jumbo frames to enhance bandwidth...

And Apple chose to fit a half-cocked Ethernet controller chip (Intel Hartwell) in the 2009 Nehalem Mac Pro which totally craps out (= disconnects the network) at over 60MB/s, thereby requiring a PCIe slot solution for high-bandwidth (= video) pro usage.

Short-termism or what?
A lot more than 1% of Mac Pros are sold to professional video-edit facilities :(
And that's why a replacement Mac Pro for 2010+ needs a re-engineered motherboard revision.
 
Where the complexity comes in is the fact that a USB 3.0 chip isn't a direct drop-in replacement (different package = not the same pin-out). That would force a board revision (new PCBs) to make it work. That's really where the cost comes in.
They'll have to make a new board for the new MP anyway, so where's the additional cost?
Btw, does the X58 have FW integrated?
 
They'll have to make a new board for the new MP anyway, so where's the additional cost?
Btw, does the X58 have FW integrated?
Not with the 2010 systems. Intel designed the socket and chipset to work with both halves of the Tick-Tock cycle, as it's cheaper. So the existing boards and other parts (non-CPU) will work with the newer parts after a microcode update. Very cost effective this way, as R&D and manufacturing costs (i.e. tooling) are spread out over 2 years instead of one.

Apple had to put the FW chip in, as that's not built into an Intel chipset (they've a history with FW, and there's been a need for something faster than USB 2.0, though they took their time making the entire line S800 compliant IIRC).
 
As for HDBaseT, it's aimed specifically at video (not a replacement for Ethernet, though it can run packetized data), and I'm not sure that it will become ubiquitous either. SDI isn't for mainstream use, as stated by Heilage, and has licensing issues involved.

No, HDBaseT is aimed at video and Ethernet. A significant number of TVs are already getting an Internet jack. This is in part driven by

http://en.wikipedia.org/wiki/Digital_Living_Network_Alliance

HDBaseT can be used to provide 100BaseT alongside the exact same capabilities that DLNA enables. This connector takes it a step further in that it can also transfer large video/audio files on the same cable (if you bother to buy Cat6 cable in the first place; it shouldn't be that hard to start telling folks to use it with their new TVs).

The significant factor is that the major TV vendors are the ones pushing this.
Whether they are peripherals or not depends on what you treat as the hub in the system: the data or the display.

I'm kind of curious how they ship high power over RJ-45 and still get the high throughput. It will probably need a special "video Ethernet" to regular Ethernet bridge so that TVs can still get to the Internet and interact with the other home network computers, but that's a reasonable price to pay if you can now transfer large files between media appliance boxes easily.

It has a similar "but the plug looks the same" problem to the one LP will have if it reuses a USB-like connector. Colored cables and sockets can help with that.

This solution runs into a brick wall when you need an even faster Internet connection (although you could just put in a second RJ-45 for that) and even larger video/audio files. However, in a world where most folks only ship 1080p content over short distances, it works. It doesn't depend upon some large, widely distributed infrastructure being deployed in order to work.

So you can easily build a smart connection between a TV and a DVR box and pump mainstream Internet content through the same connection. The TV focuses on display and decoding streams while the external box focuses on storage and serving the data. Plug it in and it will work, if all the vendors make them play nice together.

It doesn't have to replace LP. LP is the one with the big anchor around its neck, because it has to replace everything to live up to the hype.
 
Apple had to put the FW chip in, as that's not built into an Intel chipset (they've a history with FW, and there's been a need for something faster than USB 2.0, though they took their time making the entire line S800 compliant IIRC).
So it wouldn't be any more complicated to add another chip for USB 3.0. And this is why many manufacturers are already doing it. I don't know how long Apple will keep Macs away from anything faster than FW800. Still 2 more years before LP, which will flop anyway?
 
LP is the one with the big anchor around its neck, because it has to replace everything to live up to the hype.
This is why LP will fail in the mass market. Nothing can replace everything anymore. Too much infrastructure is already installed. And new things will need mass adoption from the consumer market to succeed. Otherwise the price point will be too high.
 
So it wouldn't be any more complicated to add another chip for USB 3.0. And this is why many manufacturers are already doing it. I don't know how long Apple will keep Macs away from anything faster than FW800. Still 2 more years before LP, which will flop anyway?
The FW chip is already installed on the existing boards. Adding anything that's not a direct drop-in replacement means the PCBs have to be reworked. This is the point you seem to either be missing, or underestimating the time and money involved.

The newer CPUs don't need new boards at all, just a change to the firmware so they'll work.

If this were a new architecture, then it would make sense to add anything new, assuming it obtained financial approval.

This is why LP will fail in the mass market. Nothing can replace everything anymore. Too much infrastructure is already installed. And new things will need mass adoption from the consumer market to succeed. Otherwise the price point will be too high.
It's intended to be cheap, which will help. The biggest issue IMO will be the peripheral chips needed to make other standards work over it. Users can't adopt it if it doesn't exist, no matter how low the cost is on the system end.

I see potential, but will wait and see what actually materializes, as it's easier to screw up than get it right (missing parts, high costs,...). There's also a fair bit of missing information, and that will determine what it's really going to be usable for (i.e. consumer use only, or will it extend to lower cost solutions for independent and SMB sized organizations).
 
This is why LP will fail in the mass market. Nothing can replace everything anymore.

I honestly don't think the technical folks constructing LP are trying to make it suck in everything. The "everything" spin is more a marketing ploy to get folks to buy into yet another new standard. Few folks want a new one with all-new connectors and dongles to keep things connected. However, if you hold out the illusion that everything will just hook up and magically work, folks will play along. If it can devour everything it must be great, right? Just let people fill in the blanks for themselves.

There does need to be something in the home and over-5Gb space. If Intel can get enough LP deployed that it is a viable player, they could own that. In the meantime, between now and when that is a more pressing need, they can transport slower legacy stuff over one wire. In the short term, aggregating lots of different streams together is the only way to drive the high bandwidth requirement in the generic commercial marketplace.

It is also a different approach than USB 3.0, which only tackles one protocol. I don't think Intel is trying to kill off USB 3.0 with LP as much as trying to get USB 4.0 moving so that it can be adopted. One way of doing that is to make USB 4.0 tackle a different set of problems initially. Or at least divert USB 4.0 into making quality improvements that don't involve speed.
That would open the door to there being two "universal" domains: one over and one under 5Gb. Whether they both live under the "USB" umbrella is an open question.

Note that USB never was going to kill off Ethernet or other more long distance connections. There is an implied "universe" of devices it was going to be applied to that didn't actually include everything.
 
Adding anything that's not a direct drop-in replacement means the PCBs have to be reworked.

The newer CPUs don't need new boards at all, just a change to the firmware so they'll work.
Well, the MP's memory design needs to be reworked anyway.
Or it will be interesting to see what kind of welcome the new MP gets if it still has the worst design of all new workstations. Surely Apple could afford a new board design more often than every 3 years...
 
This is the point you seem to either be missing, or underestimating the time and money involved.

Some folks may think the run rate is high enough that you can distribute the costs. For example, at 1M Mac Pros, even if the PCB adjustment is $2M, that is only $2 a device. Apple's margins on a Mac Pro are at least around 25-30+%. 30% of $2,500 is $750 and 25% is $625. $2 is about a 0.27% change in margin (or 0.32% in the 25% case). That isn't a big difference on the corporate balance sheet (it's still going to register as an overall 30+% corporate margin) versus putting something of additional value on the box to justify the higher-than-average price.

I don't think the run rate is anywhere near that high. It is probably closer to a tenth of that. So the percentage change would be high given those conservatively low margin estimates. The other benefit, though, is that low run rates won't affect the corporate or Mac numbers, since they're relatively small.
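To make the run-rate sensitivity explicit, here's the same arithmetic in a few lines, using the rough figures from this thread ($2M rework, $2,500 MSRP, 25-30% margins) rather than any actual Apple numbers:

```python
# How a fixed PCB rework cost moves per-unit cost and margin at different
# run rates. All figures are the rough ones used in this thread, not
# Apple's actual costs or volumes.
MSRP = 2500.0
REWORK_COST = 2_000_000.0

for units in (1_000_000, 100_000):
    per_unit = REWORK_COST / units
    for margin in (0.30, 0.25):
        gross_profit = MSRP * margin
        print(f"{units:>9,} units, {margin:.0%} margin: "
              f"${per_unit:6.2f}/unit = {per_unit / gross_profit:.2%} of gross profit")
```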


If Intel came to Apple over a year ago and said USB 3.0 is not coming for 3-4 years instead of 1-2 years, then it would demonstrate that Apple is more responsive than other large workstation vendors if they made the shift now rather than later. If it's going to be several years before USB 3.0 is in the chipset, then far more discrete solutions are going to get deployed than were envisioned when core chipsets were going to get them "real soon".
Going with the "we only implement what Intel gives us" approach makes it much more difficult for Apple to justify their price premiums for "thoughtful" design.

If Apple sticks to about a 12-month cycle for renewal, then at this point there will be other workstations released with USB 3.0 onboard before the 2011 rev of the Mac Pro comes out. Additionally, since the CPU/chipset is on a separate PCB, Apple could make a new one that would take both current and new daughter cards with this one update (i.e., this part of the board is not hard-synced to Intel CPU socket updates). In that case the cost would just have moved from one year to another. There would be no impact on amortization.


In short, it is somewhat of a myopic viewpoint to just look at what happens over 3-4 quarters when perhaps making a change now would be more beneficial over 8-24 quarters. Apple has enough money that they don't have to optimize for each quarter. They can make long-term strategic moves that may temporarily cost more money, but have far more long-term upside. For example, putting the right set of parts on a board given all of the advance information they are given.

Personally, if Intel told me well in advance that they were going to deliver the 3620 and 3640 late, push USB 3.0 back multiple years (this on top of previous meetings with Intel where the date kept sliding backwards), and stick me with IGP processors in my other Macs that were significantly slower than what was shipping now... I would have rejuggled the rollout for this year and next year also.

However, Apple could take the Scrooge McDuck route. It is only going to give the folks complaining about "lack of value" more fuel to add to the fire. Apple is going to be the last system vendor, by years, to support USB 3.0 if they wait until it is in the Intel chipset (Linux support is already in production; Windows 7 should get it in the next service pack if I recall correctly). Apparently shoveling $5-10 more into their overflowing pockets is worth losing customers over time.
 
Originally Posted by nanofrog
Adding anything that's not a direct drop-in replacement means the PCBs have to be reworked.

The newer CPUs don't need new boards at all, just a change to the firmware so they'll work.

Well, the MP's memory design needs to be reworked anyway.
Or it will be interesting to see what kind of welcome the new MP gets if it still has the worst design of all new workstations. Surely Apple could afford a new board design more often than every 3 years...

This is interesting. So what nanofrog is saying is that the 2010 single proc MPs will still only have 4 RAM slots??:eek:

Apple better not fill three of them up with 1GB chips for the base model!
 
Well, the MP's memory design needs to be reworked anyway.
Or it will be interesting to see what kind of welcome the new MP gets if it still has the worst design of all new workstations. Surely Apple could afford a new board design more often than every 3 years...
Technically speaking, they could do a PCB rework for things like DIMM slots and add features such as USB 3.0 (personally, I'd like to see a SAS RAID chip added, as it would better compete with other offerings = adding value to the system over previous MPs). But given the financial aspects (they want the highest margins possible = lowest production costs possible for the target MSRP), I just don't see any of that happening. :(

To keep the margins static (same fixed %), the prices would have to increase even further to recoup the additional costs. And I'm not so sure how much more buyers would tolerate, given they're using the same basic technology as systems produced by other vendors (same Intel CPU family and chipsets in the workstation segment). And that's just to keep up with other choices in the market, not necessarily exceed them (6x DIMM slots per CPU certainly comes to mind).

Personally, I think there's better odds on a new case, as the existing boards can be stuffed in. Then recycle it for the next model (internals reworked as necessary to fit new board configurations), as they've done for years.

I don't think the run rate is anywhere near that high. It is probably closer to a tenth of that. So the percentage change would be high given those conservatively low margin estimates. The other benefit, though, is that low run rates won't affect the corporate or Mac numbers, since they're relatively small.
I'm figuring things on ~100K units. Unfortunately, I'm not sure if the PCB rework will actually come in around the $2M range though, and there are contract issues to take into account as well (i.e. did they negotiate for a 1 or 2 yr production run?).

Either higher costs or contract reasons could be major sticking points that make it unjustifiable due to the financial impact.

For example, if the rework and production increase (based on 100K units) is $20M, the cost per system increases by $200 per unit (design for both 6x DIMMs per CPU and USB 3.0, validation testing, and component acquisition). That would get their attention, as it's a substantial rework (mainly due to the DIMM slot portion within the space constraints).
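Putting rough numbers on that, and on the earlier point about keeping the margin percentage static: if the rework really did cost $20M over ~100K units, the sticker price would have to rise by more than the per-unit cost to hold the margin. Both the $20M figure and the 30% margin below are assumptions for illustration, not known numbers:

```python
# Illustration only: a $20M rework spread over ~100K units, and the MSRP
# increase needed if the margin percentage is held constant instead of
# absorbing the cost. Both inputs are assumptions from this thread.
REWORK_COST = 20_000_000.0
UNITS = 100_000
MARGIN = 0.30  # margin as a fraction of selling price

extra_cost_per_unit = REWORK_COST / UNITS                       # $200.00
price_bump_to_hold_margin = extra_cost_per_unit / (1 - MARGIN)  # price = cost / (1 - margin)

print(f"Extra cost per unit:               ${extra_cost_per_unit:,.2f}")
print(f"MSRP increase to hold 30% margin:  ${price_bump_to_hold_margin:,.2f}")
```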

If Intel came to Apple over a year ago and said USB 3.0 is not coming for 3-4 years instead of 1-2 years, then it would demonstrate that Apple is more responsive than other large workstation vendors if they made the shift now rather than later. If it's going to be several years before USB 3.0 is in the chipset, then far more discrete solutions are going to get deployed than were envisioned when core chipsets were going to get them "real soon".
Going with the "we only implement what Intel gives us" approach makes it much more difficult for Apple to justify their price premiums for "thoughtful" design.
I'm not sure about the Apple-Intel relationship situation, but I've the impression it's more of the latter (based on the chipset methodology to minimize costs).

Additionally, since the CPU/chipset is on a separate PCB, Apple could make a new one that would take both current and new daughter cards with this one update (i.e., this part of the board is not hard-synced to Intel CPU socket updates). In that case the cost would just have moved from one year to another. There would be no impact on amortization.
This was a smart move IMO for reducing SP and DP system cost, and as you indicate, it could prevent the need for a complete redesign with each new architecture (assuming of course the main connector between the daughterboard and main board doesn't end up short on pin count; not enough traces for PCIe lanes, for example).

In short, it is somewhat of a myopic viewpoint to just look at what happens over 3-4 quarters when perhaps making a change now would be more beneficial over 8-24 quarters. Apple has enough money that they don't have to optimize for each quarter. They can make long-term strategic moves that may temporarily cost more money, but have far more long-term upside. For example, putting the right set of parts on a board given all of the advance information they are given.
I agree with this, but with the changes that occur, there are limitations as to how far projections can be made.

Personally, if Intel told me well in advance that they were going to deliver the 3620 and 3640 late, push USB 3.0 back multiple years (this on top of previous meetings with Intel where the date kept sliding backwards), and stick me with IGP processors in my other Macs that were significantly slower than what was shipping now... I would have rejuggled the rollout for this year and next year also.
Understandable, and I agree.

Unfortunately, I don't think Intel came completely clean on this (i.e. chipset support delay for USB 3.0 not mentioned at that time).

It's another reason that could explain Apple's negotiations with AMD, as they can get consumer chips cheaper, and continue using discrete graphics for improved performance. But I don't see it working out as well in the workstation segment (though not impossible it could happen).

However, Apple could take the Scrooge McDuck route. It is only going to give the folks complaining about "lack of value" more fuel to add to the fire. Apple is going to be the last system vendor, by years, to support USB 3.0 if they wait until it is in the Intel chipset (Linux support is already in production; Windows 7 should get it in the next service pack if I recall correctly). Apparently shoveling $5-10 more into their overflowing pockets is worth losing customers over time.
This seems to be the approach they pursue though, going by what's happened with the Intel based systems (i.e. no support for things like SAS controllers/built-in hardware RAID). They did offer a much better value however, as the prices were lower (better cost/performance).

Perhaps the market's smaller than even we're estimating, and they're just milking what they can out of it rather than investing a single additional cent past bare minimum to retain customers, let alone increase them. Further, software issues such as the lack of FCP being updated to what users really want/need, and the tumultuous relations with Adobe, are causing users to look to other solutions for their needs.

Hard to say for sure, but from what I understand from various posts, there are an increasing number of software alternatives, which opens up the possible use of other systems.

This is interesting. So what nanofrog is saying is that the 2010 single proc MPs will still only have 4 RAM slots??:eek:
Unfortunately, I do think they'll only have 4x DIMMs per CPU (see above). :rolleyes: :(

I'm not sure if they will continue with 1GB sticks in the base configurations or not, but I wouldn't totally discount that being the case.
 
Maybe the Mac Pro could get a SSD that does not take up a drive bay, similar to the SSD option on the XServe. That would be nice because I can stick the OS and applications on that drive, and keep my data on the hard disks.
 
Well, you really shouldn't use Dell as the comparison. Apple really doesn't follow others and has been great about setting its own path. Just because it doesn't release the product that you happen to be in the market for right now doesn't mean they're being left in the dust. It definitely can't be easy leading the pack in every sector. It will be here soon dude, we're all going nuts. Just be glad you're actually able to drop this much loot on a computer when people can't even afford to meet mortgage/rent in the current state of things. Jeez, I got off on a tangent. Where the **** are my hexacores, Steve!!
 
This is interesting. So what nanofrog is saying is that the 2010 single proc MPs will still only have 4 RAM slots??:eek:

Apple better not fill three of them up with 1GB chips for the base model!

Why wouldn't they? And you can probably expect a "beefy" 160 gig drive standard....... :rolleyes:
 
Wrong. The iMac DOES support jumbo frames. :) Do your research please
Hi
The current 27"/21.5" Core 2 Duo, and older Core 2 Duo iMacs (and the Mac Mini etc) all support jumbo frames.

But the 27" i5/i7 iMac earlier in the year failed to work with a major networking company's SAN software, and their research team could only suggest potential users downgrade to a 27" Core 2 Duo...

So if you have some sort of proof that jumbo frames are implementable on your i7 iMac, or a link to updated info on this flaw being rectified, there are lots of people who would love to hear it:
http://forums.creativecow.net/readpost/8/1093375

At the moment you're my only research lead ;)
 
Hi
The current 27"/21.5" Core 2 Duo, and older Core 2 Duo iMacs (and the Mac Mini etc) all support jumbo frames.

But the 27" i5/i7 iMac earlier in the year failed to work with a major networking company's SAN software, and their research team could only suggest potential users downgrade to a 27" Core 2 Duo...

So if you have some sort of proof that jumbo frames are implementable on your i7 iMac, or a link to updated info on this flaw being rectified, there are lots of people who would love to hear it:
http://forums.creativecow.net/readpost/8/1093375

Hah. You got me there. Turns out my source was dumber than I am ;)

Is it a sw or hw limitation?

P.S. Sorry to get your hopes up
 