Apple doesn't design or manufacture their own gear, though (they do produce the industrial design and a spec sheet). The actual circuit design is ODM'ed out to another company (Hon Hai Precision does the majority of it, but they've used Intel for the MP boards in the past).

That's pretty sad if Apple can't even lay out a circuit board with in-house personnel. If so, that whole "Designed in Cupertino" on the box is a grossly superficial description of the situation. I know they go to outside vendors for production, and I can see them going to the same or similar folks to create prototype boards. However, if they can't even do the ECAD design and simulation for the boards, and are just picking parts out of a parts bin/catalog and saying "don't use the cheap ones"... WTF. How is that designed in Cupertino?

OK, if you are saying they are committed to using the exact same board for two years in a row, then yeah, no new stuff. However, this EVGA board is new, and they managed to do it without going bankrupt. In years where nothing outside of the core chipset needed changing, they could keep the same board for two years. But this isn't much different from the transition from FW400 to FW800; that isn't in the core chipset either.

Apple could use the NEC solution, just like 90% of the boards you can walk into a decent motherboard store and see. Perhaps they don't like committing longer term to the follow-ons to that.

The pin-out differences between USB 2.0 and 3.0 seem likely to be the least of the change worries. 3.0 is substantially faster, and I'm not sure how the trace lines are going to work at substantially faster speeds; I would be surprised if 3.0 didn't force layout changes. I assumed there would be differences. I didn't assume they had to use the exact same board for two years. I figured the run rate Apple has on Mac Pros is high enough to pay for the PCB R&D inside of a year. If it really takes two years... then that is surprising.

Likewise, even the NEC solution has problems on the single PCI-e lane it taps into now if you push it real hard. If later designs hook 3.0 solutions into two different PCI-e channels, you'd be able to get more aggregate throughput out of the machine. [Same as when there were two independent FW800 channels, versus the current designs which fall back to FW400 once you plug in a complicated FW network. Likewise, it's why boards with serious Gb Ethernet have two independent controllers. Plugging several 2.0 or 1.0 USB devices into a 3.0 controller is going to have an impact on throughput.] If you're trying to deliver a box with top-end aggregate I/O bandwidth, you may not want to be dependent on just the core chipset's 3.0, even when it does arrive.


Just because the socket and support chipset are the same doesn't necessarily mean they have to freeze the entire layout of the rest of the motherboard.

If the motherboard is frozen for two years, there is ZERO reason not to deploy right now.

The comment is based on 2 specific facts.

1. Intel's roadmap is pushing cores per CPU, and the pricing is out of bounds for workstation use (they're filling the requests for servers/clusters with high core counts - it's all about efficiency).

A significant percentage of Mac Pros are deployed as servers.

Over half of the "new" Gulftown Xeon lineup are 4-core models.
Likewise, Intel has now speed-bumped the rest of the 3500 line used in the single-processor-package models.

The price-range lineup (for the parts Apple is likely to use) is approximately the same as it was last year. Apple can add a few models at the top of the line with 6 cores that those folks can probably leverage. There is no need or requirement for Apple to deploy 6 cores across the board.


2. Software is behind for the most part. There are a few applications that can use more than 8 cores, but it's rare (simulations, for example), and they usually aren't available for OS X.


There are fewer major "making money" software titles in daily use at the upper end of the Mac Pro spectrum that cannot leverage additional horsepower than there are at the lower levels.

It also depends upon what the user does. If the user interacts with applications serially, it is hard to keep the workload up. However, if a single user interacts with several programs at once (start a long operation in one, move to another, eventually cycle back to the first; rinse and repeat), then you can easily get a workload that is higher than what any single application can generate. (It's certainly much easier to do when you focus multiple users onto a single box.)


It doesn't make sense to sweep the folks who buy iMacs into the pool of folks being talked about in a Mac Pro discussion. "Few relative to the overall Mac OS X market" is not as material as "few in the submarket of those spending over $2,500 on a computer." Those folks are likely paying more for their software, and if you're paying lots more for your software and it can't keep up... the hardware is not where you should be questioning where you throw your money.

Finally, the software doesn't go past 8 cores because more than 8 isn't commonly deployed. GCD starts folks in the direction of not hard-coding caps into the software. Even those not leveraging GCD will raise their caps if deployed machines with more than 8 cores start to appear in reasonable numbers. If you are injecting hard-coded caps, there is no reason to set them higher than what exists.
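
To make that concrete, here's a minimal sketch of the pattern in modern Swift (purely illustrative; chunkWork is a hypothetical stand-in for real per-chunk computation): query the core count at run time and let GCD scale, rather than baking in a cap.

```swift
import Dispatch
import Foundation

// Ask the OS how many cores are available instead of hard-coding a cap.
// An 8-core Mac Pro reports 8 (16 with Hyper-Threading); a 12-core
// machine scales up with no code change.
let cores = ProcessInfo.processInfo.activeProcessorCount

// Hypothetical stand-in for one chunk of real work.
func chunkWork(_ chunk: Int) {
    var acc = 0.0
    for i in 1...1_000_000 { acc += 1.0 / Double(i + chunk) }
    print("chunk \(chunk) done (acc = \(acc))")
}

// Split the job into more chunks than cores so GCD can load-balance;
// concurrentPerform blocks until every iteration has finished.
DispatchQueue.concurrentPerform(iterations: cores * 4) { chunk in
    chunkWork(chunk)
}
```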
 
The 5600 series Xeons can run two DIMMs per channel at full speed (unlike the 5500 series), so having fewer than six RAM slots per processor looks stingy. (Having said that, Apple restricts the RAM speed anyway.)

I'm not sure if those reports mean you can run two DIMMs at 1333MHz (as opposed to only being able to do it at speeds below that), or that you incur no memory speed hit with two (or three) DIMMs per channel. From the several I've read, I think it is the first. I'm not sure how Intel could avoid taking some hit switching between DIMMs (unless they were required to be installed in pairs and always set up to talk to two). If you have a pointer to detailed info that says otherwise, that would be cool, because I haven't found one yet.

Apple is likely going to stick with 1066MHz, so 1333MHz is moot. (The new 5600 adds support for lower-voltage memory, so it isn't Apple alone trying to fit into tighter thermal envelopes.) Again, this indicates they tweaked the timing/voltage tolerances on the controllers, not that they changed the multiple-DIMMs-per-channel implications.
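
For a rough sense of what's at stake, here's a back-of-the-envelope sketch (assuming the usual 64-bit DDR3 channel and three channels per socket; these are theoretical peaks, and real throughput lands well below them):

```swift
// Theoretical peak memory bandwidth per socket for triple-channel DDR3:
// transfers per second x 8 bytes per transfer x 3 channels.
let bytesPerTransfer = 8.0   // 64-bit wide channel
let channels = 3.0

let peak1066 = 1.066e9 * bytesPerTransfer * channels / 1e9   // ~25.6 GB/s
let peak1333 = 1.333e9 * bytesPerTransfer * channels / 1e9   // ~32.0 GB/s

print("DDR3-1066: \(peak1066) GB/s, DDR3-1333: \(peak1333) GB/s")
```

So staying at 1066 gives up roughly 6 GB/s of theoretical peak per socket, which matters even less if populating multiple DIMMs per channel forces a step down anyway.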


Assuming there is no speed hit with dual DIMMs...

The expedient solution would be to make the box about 1-1.5 inches wider and redo the daughterboard with the two extra slots.

The problem is that the single-CPU-package version seems likely to stick with the 3500 series (since it got speed-bumped across the lineup Apple was using). Only if they flip to the 5600 for both single- and dual-package versions would the new design be leveraged across the line.

I'm not sure they will do that. They obviously didn't go with the "extra wide" container back in the 3500/5500 era.
 
Not likely before Intel has support for it on its own chipset.

To roll it in across the whole product line, I agree it won't happen until it's in the chipset.

However, the Mac Pro motherboard has some wiggle room. If they wanted to do a "swap out", just drop the discrete USB controller where the FireWire chip is now. It isn't like Apple hasn't demonstrated before how they are itching to toss FireWire; they've kicked FW to the curb elsewhere. The NEC USB 3.0 chip hooks to PCI-e just like the FireWire controller does, so it isn't that large of a board change. (I think it would be premature to do that, but FireWire isn't in the Intel core chipsets either.) It isn't like the Mac Pro board is hard pressed for space or extra PCI-e lanes.

I would guess, though, that the "villagers" would come out in force with pitchforks, torches, and tar if Apple dropped FireWire off the Mac Pro willy-nilly like that. I'm not recommending they do it; it just illustrates that you can add stuff to a full-sized board that isn't on the chipset.


Likewise, Intel's update rate on their server-class support chips is slower than on the mainstream support chipsets. If Apple waits until it's in the support chips, the MBP and iMac might see USB 3.0 before the Mac Pro does (depending upon timing... the non-server versions might come out unaligned with the MBP/iMac releases and get delayed). If that happened, it would be goofy.

However, that would give Apple a good excuse to kick support for USB 3.0 drivers out into the future. If Mac OS X supported both discrete USB 3.0 chips and Intel's 3.0 implementation, it would be less likely that USB 3.0 PCI-e cards (when they arrive) would end up with some flaky 3rd-party driver.
 
That's pretty sad if Apple can't even lay out a circuit board with in-house personnel. If so, that whole "Designed in Cupertino" on the box is a grossly superficial description of the situation. I know they go to outside vendors for production, and I can see them going to the same or similar folks to create prototype boards. However, if they can't even do the ECAD design and simulation for the boards, and are just picking parts out of a parts bin/catalog and saying "don't use the cheap ones"... WTF. How is that designed in Cupertino?
I doubt they even issue a true BOM, since they're not producing the schematic, let alone the PCB work. Just a short list in a spec sheet (HRS), such as the CPU socket (which dictates the chipset), and the rest is left up to the ODM.

It seems that the industrial design aspect is enough for them to put the "Designed in Cupertino" on the label. :rolleyes:

OK, if you are saying they are committed to using the exact same board for two years in a row, then yeah, no new stuff. However, this EVGA board is new, and they managed to do it without going bankrupt. In years where nothing outside of the core chipset needed changing, they could keep the same board for two years. But this isn't much different from the transition from FW400 to FW800; that isn't in the core chipset either.
That's EVGA though, and I'd be surprised if Apple is willing to spend that much on a board (remember, Apple charges ~$800 USD for a board whose roughly equivalent counterparts <save the PCIe slot count available on DP boards> go for less than half that). BTW, that EVGA board is HUGE (15" x 13.6").

Also, keep in mind that Apple's sales numbers for the MP aren't that high (low production volume compared to similar systems by other vendors). It would be doable if they still did the hardware development, but it would force them to reduce their margins to keep the MSRP at a level potential buyers would go for. At some point, there is such a thing as too expensive, despite the marketing... especially if the purchase is corporate, as the accounting dept. is likely to have already calculated a purchase limit (they're all about money in my experience, not which system is perceived as the best choice, no matter the reason - that's their job).

Apple could use the NEC solution, just like 90% of the boards you can walk into a decent motherboard store and see. Perhaps they don't like committing longer term to the follow-ons to that.
I presume you're talking about USB 3.0 chips. Assuming this is the case, there are two providers I'm aware of: NEC and Marvell.

The pin-out differences between USB 2.0 and 3.0 seem likely to be the least of the change worries. 3.0 is substantially faster, and I'm not sure how the trace lines are going to work at substantially faster speeds; I would be surprised if 3.0 didn't force layout changes. I assumed there would be differences. I didn't assume they had to use the exact same board for two years. I figured the run rate Apple has on Mac Pros is high enough to pay for the PCB R&D inside of a year. If it really takes two years... then that is surprising.
Let's ignore the technical aspects of USB 3.0 for the moment. In simple terms, the reason they want to use the same board for 2 years is purely financial. That's one of the reasons Intel uses the current Tick-Tock cycle (a 2-year socket and chipset lifecycle). It's possible to do redesigns each year to allow for newer tech, but it's more expensive (and even fractions of pennies matter; once you multiply by production quantities, they turn into real numbers that the business side looks at very hard).

They most likely don't want to take on the expense of a redesign (even though it's more of a modification) to include USB 3.0 and SATA 6.0Gb/s parts. The PCB work isn't exactly inexpensive, and their contracts with the ODM may interfere with this as well (I've no idea of the specifics, particularly production quantities and delivery dates). It's possible they could surprise us, but I wouldn't bet on it.

Then there's a bit on the technical side. Given the PCIe interface issue with existing USB 3.0 chips, the technology is immature, and that would cause hesitation for anyone looking to sell solid performance rather than specs. Other workstation and server systems are likely to forgo such parts right now, until the issue is solved (the tech matures).

At least that's how we would have looked at it at HP when I was still there.

Just because the socket and support chipset are the same doesn't necessarily mean they have to freeze the entire layout of the rest of the motherboard.
No, it's not forced. But there are other considerations to take into account, as mentioned above. It's not as simple as you might think.

I can't remember the number of times I've seen a good idea shot down on financials alone before it ever had the chance to become a prototype.

If the motherboard is frozen for two years, there is ZERO reason not to deploy right now.
They may not want it to interfere with the launch of other products (it detracts attention, and could be seen as potentially reducing sales). Then there are other issues, such as resources, logistics,... to be considered as well. They run rather lean on personnel, as I understand it.

A significant percentage of Mac Pros are deployed as servers.
Where are you getting this? :confused:

I ask because I've never seen a breakdown, but going by the market they're aimed at, it's workstation use. The available software that can take advantage of such systems seems to bear that out, IMO.

The price-range lineup (for the parts Apple is likely to use) is approximately the same as it was last year. Apple can add a few models at the top of the line with 6 cores that those folks can probably leverage. There is no need or requirement for Apple to deploy 6 cores across the board.
I wasn't expecting hex-core CPUs across the board, as there's only one SP hex-core currently available (the W3680), and not all of the DP parts available are hex-core either. Some are still quad-core (32nm, though).

There are fewer major "making money" software titles in daily use at the upper end of the Mac Pro spectrum that cannot leverage additional horsepower than there are at the lower levels.
They'd have to be developed to do so, however. For example, Photoshop, as well as other Adobe apps, still can't use all the available cores (i.e. multi-threading is usually only 2 cores, as I understand it).

It also depends upon what the user does.
Of course it does. But there's also a dependency on the capabilities of the specific software.

For example, I can't force my software to use more cores than it can handle, but I can attempt to run multiple instances if the workflow will allow it.
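
Something like this, a minimal sketch of that process-level fallback (the /usr/local/bin/render tool and the input file names are made up for illustration; whether a workflow splits this cleanly depends entirely on the software):

```swift
import Foundation

// Process-level parallelism: if one instance of a tool only keeps a
// couple of cores busy, launching several instances over independent
// inputs can raise total utilization.
let inputs = ["scene1.dat", "scene2.dat", "scene3.dat", "scene4.dat"]

let jobs: [Process] = inputs.compactMap { input -> Process? in
    let job = Process()
    job.executableURL = URL(fileURLWithPath: "/usr/local/bin/render") // hypothetical tool
    job.arguments = [input]
    do {
        try job.run()
        return job
    } catch {
        print("failed to launch job for \(input): \(error)")
        return nil
    }
}

// Block until every instance has exited.
jobs.forEach { $0.waitUntilExit() }
```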

Finally, the software doesn't go past 8 cores because more than 8 isn't commonly deployed. GCD starts folks in the direction of not hard-coding caps into the software. Even those not leveraging GCD will raise their caps if deployed machines with more than 8 cores start to appear in reasonable numbers. If you are injecting hard-coded caps, there is no reason to set them higher than what exists.
To me, what matters is what the software does. If it can utilize more than a quad-core system, then it should be able to scale. It's up to the developer, of course, but users would be able to take advantage of it if they already own such a system, or might be willing to budget the funds for a new system to improve profitability over a fixed period of time.

The development schedules aren't typically as rapid as for consumer applications either, and that influences what features are available and when, and thus how it affects users. Ultimately, software follows hardware. There's just no way around this. :(
 
Why the delay?

I'm no longer in the market for a Mac Pro, having purchased a refurbished Dell T5500 at a bargain price after Apple's huge price hike on the '09 models, so I'm a bit of an interloper on this forum.

But I'm puzzled about Apple's delay on the '10 model. On the 15th of March, Dell released a new BIOS for the T5500 that upgrades it to accept 5600-series chips, so I presume these will be a simple slot-in.

So Apple could have released a processor upgrade very simply, and presumably quickly. Why haven't they? Either they're not getting supplies from Intel that quickly, or they're planning a more substantial upgrade than just a processor change.
 
Yes, but not all 6-core Xeons will be at that price point. I was referencing the current SP boxes, and the ridiculous profit margins Apple collects on each one they sell.

You are right. The cheapest DP-capable 6-core Xeon 5600 series processor is only $996. Much cheaper ;)

The only 6-core UP Xeon 3600-series processor is the W3680, which goes for $999.

I will, however, agree that the current Mac Pro is ridiculously priced for what you get. Especially the 8-core version, as the newly released Xeon E5620 (2.40GHz) is only $378.
 
it should be easy ;)

I looked into dual-socket Hackintoshes, and I saw only a couple of posts on InsanelyMac about (mostly partial) success with various Supermicro boards.

I guess you are the real "netkas" and I would love for you to take the EVGA board and hackintosh it.

We may be able to arrange PayPal donations for that.
 