Apple doesn't design or manufacture their own gear, though (they do produce the industrial design and a spec sheet); the actual circuit design is ODM'ed out to another company (Hon Hai Precision does the majority of it, but they've used Intel for the Mac Pro boards in the past).
That's pretty sad if Apple can't even lay out a circuit board with in-house personnel. That whole "Designed in Cupertino" on the box is a grossly superficial statement of the situation if so. I know they go to outside vendors for production, and I can see them going to the same or similar folks to create prototype boards. However, if they can't even do the ECAD design and simulation for the boards, and are just picking parts out of a parts bin/catalog and saying "don't use the cheap ones"... WTF. How is that "designed in Cupertino"?
OK, if you are saying they are committed to using the exact same board for two years in a row, then yeah, no new stuff. However, this EVGA board is new; they managed to do it and not go bankrupt. In those other years, if nothing outside of the core chipset needed changing, they could keep the same board for two years. But this isn't much different from the transition from FW400 to FW800. That isn't in the core chipset either.
Apple could use the NEC solution, just like 90% of the boards you'd see if you walked into a decent motherboard store. Perhaps they don't like committing longer term to the follow-ons to that.
USB 2.0 and 3.0 pin-out differences seem likely to be the least of the change worries. 3.0 is substantially faster, and I'm not sure how the trace lines are going to work at those speeds; I would be surprised if 3.0 didn't force layout changes. I assumed there would be differences. I didn't assume they had to use the exact same board for two years. I figured the run rate Apple has on Mac Pros is high enough to pay for the PCB R&D inside of a year. If it really takes two years... then that is surprising.
Likewise, even the NEC solution has problems when tapped into the single PCIe slot it is in now, if you push it really hard. If later designs had 3.0 solutions hooked into two different PCIe channels, you'd be able to get more aggregate throughput out of the machine. [ Same as when there were two independent FW800 channels, versus the current designs which fall back to FW400 once you plug in a complicated FW network. Likewise why boards with serious Gb Ethernet have two independent controllers. Plugging several 2.0 or 1.0 USB devices into a 3.0 controller is going to have an impact on throughput. ] If you're trying to deliver a box with top-end aggregate I/O bandwidth, you may not want to depend on just the core chipset's 3.0 even when it does arrive.
Just because the socket and support chipset are the same doesn't necessarily mean they have to freeze the entire layout of the rest of the motherboard.
If the motherboard is frozen for two years, there is ZERO reason not to deploy it right now.
The comment is based on 2 specific facts.
1. Intel's roadmap is pushing cores per CPU, and the pricing is out of bounds for workstation use (they're filling the requests for servers/clusters with high core counts - it's all about efficiency).
A significant percentage of Mac Pros are deployed as servers.
Over half of the "new" Gulftown-generation Xeon lineup are 4-core models.
Likewise, Intel has now speed-bumped the rest of the 3500 line used in the single-processor-package models.
The price range lineup is approximately the same (for the parts Apple is likely to use) as it was last year. Apple can add a few 6-core models at the top of the line, which those folks can probably leverage. There is no need or requirement for Apple to deploy 6 cores across the board.
2. Software is behind for the most part. There are a few applications that can use more than 8 cores, but they're rare (simulations, for example), and usually aren't available for OS X.
There are fewer major "making money" software titles in daily use on the upper end of the Mac Pro spectrum that cannot leverage additional horsepower than there are on the lower end.
It also depends upon what the user does. If the user interacts with applications serially, it is hard to keep the workload up. However, if a single user interacts with several programs at once (start a long operation in one, then move to another, eventually cycle back to the first; rinse and repeat), then you can easily get a workload that is higher than what any single application can generate. (It is certainly much easier to do when you focus multiple users onto a single box.)
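To make that concrete, here is a minimal GCD sketch (modern Swift; the job names and the busy-work loop are hypothetical stand-ins) of several independent long-running tasks sharing one box. No single job needs to be highly parallel for the aggregate load to climb:

```swift
import Foundation

// Three independent long-running jobs on one concurrent queue --
// a stand-in for a user who starts a long operation in one app,
// switches to another, and cycles back. Job names are made up.
let queue = DispatchQueue(label: "jobs", attributes: .concurrent)
let group = DispatchGroup()

for job in ["export", "render", "index"] {
    queue.async(group: group) {
        // Each job alone might keep only a core or two busy,
        // but together they raise the machine's aggregate load.
        var sum = 0.0
        for i in 1...20_000_000 { sum += Double(i).squareRoot() }
        print("\(job) finished (checksum \(sum))")
    }
}
group.wait()  // block until all three jobs are done
```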
It doesn't make sense to sweep the folks who buy iMacs into the pool in a Mac Pro discussion. "Few" relative to the overall Mac OS X market is not as material as "few" in the submarket of those spending over $2,500 on a computer. Those folks are likely paying more for their software. If you're paying lots more for your software and it can't keep up... the hardware isn't where you should be questioning where the money is going.
Finally, the software doesn't go past 8 cores because more than 8 isn't commonly deployed. GCD starts folks in the direction of not hard-coding caps into the software. Even those not leveraging GCD will raise their caps if deployed machines with more than 8 cores start to appear in reasonable numbers. If you are injecting hard-coded caps, there is no reason to set them higher than what exists.
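As an illustration of the GCD point, a minimal sketch (Swift; `processChunk` and `chunkCount` are hypothetical names): express the work as independent iterations and let the system pick the degree of parallelism, rather than baking in a cap.

```swift
import Foundation

// Hypothetical unit of work; stands in for whatever the app parallelizes.
func processChunk(_ index: Int) {
    // ... real work for chunk `index` ...
}

let chunkCount = 64  // hypothetical; many more chunks than cores

// GCD picks how many iterations run concurrently based on the
// hardware present at run time, so the same binary scales from a
// 4-core box to a 12-core one without a recompile.
DispatchQueue.concurrentPerform(iterations: chunkCount) { index in
    processChunk(index)
}

// The hard-coded alternative argued against above:
//   let workers = min(ProcessInfo.processInfo.activeProcessorCount, 8)
// A cap like that is frozen at whatever core counts were shipping
// when the software was written.
```

The design point: when the cap lives in the scheduler rather than the source, hardware with more cores gets used the day it ships.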