A MacBook Pro is not big enough. To me a Mac Pro is huge and needed, just not affordable at the moment.

I have 2 Voyager SATA caddies with the pop-out drive system and move through 6 drives, plus 4 FireWire 800 drives, a CF card reader on the FireWire 400 port, and all the USB ports are used. What a mess :mad:

No complaining about the Mac Pro please :confused:

Need one - can't afford one :(
 
"Kind" of wrong.

I already have 5 drives in my MP and am adding a sixth internal drive next week; you can use the free SATA ports on the mobo. Just to set the record straight.

Running a RAID 1+0 setup, with the fifth drive in slot no. 1 holding a backup system and Boot Camp.

Yes; however, the consumer only sees 4. Remember, the ICH10R chip natively supports 6 SATA ports, but Apple only gives you 4 bays. You've got to find space for the rest.
 
I'm almost certain they'd sell more units and make more profit if they changed it to one smaller/simpler enclosure (i.e. prosumer) and one bigger high-end enclosure.

Not necessarily. Cherry-picking all of the potential upsides isn't a balanced analysis of what will happen to the product line-up ecosystem. This ignores several very real factors.

1. You are going to create a more expensive super-max model (driving pricing up). Once you prune off the super-max enclosure, the number of units sold will drop from current Mac Pro numbers. That means the number of users over which you can amortize costs goes down (fewer people paying for more custom designs). You also have increased support costs, because now you have more products to cover and more configurations to test/certify/diagnose. You'll also have yet another team which is largely constructing the same thing (CPUs/memory/etc.); the only minor variations here are a couple of extra support chips and more/less wire routing. More parallel design teams, more costs.

So most likely the "overhead tax" assigned to the "super-max" users will go up much more than it will for the "mini-tower" folks.

The spin here is that magically more users are going to flow in to fill the gap. That isn't particularly likely. Apple will still have 4-6% of the overall market. If the "super-max" crowd is 1% of the overall PC market, then 6% * 1% is 0.06% of the market. That's not a big group to chase after for a multibillion-dollar-a-year corporation. For Bubba Gump's custom PC internet business... yeah, that's a lot; for Apple, no.

Even if you do get folks to flow into the "super-max" market, you are also going to get folks flowing out. Right now Apple has decent pricing for folks who are in the middle of the targeted market. Those folks would see the box increase in cost for value they don't really leverage.

If you look at HP/Dell/Lenovo, their super-max boxes are priced higher. The farther up their product lines you go, the more Apple's mark-up margins start to match theirs or even look a bit better. The same diminishing-small-market-segment factor drives up prices on their side too, and they sell many more units (because folks want Windows and, to a lesser extent but significantly in this specific hardware market, Linux).



2. The mini-tower model is going to cannibalize a subset of the iMac market. Again, that doesn't increase overall profits at all.

There's an additional, secondary effect of decreasing the number of LCD panels sold by Apple, which will drive cost increases along parts of other lines since it impacts their economies of scale. Even more costs if it forces Apple to jump back into the much more broad-scale commodity monitor business.

Even though the base Mac Pros sell in lower numbers because they're priced a bit high compared to mini-tower alternatives, their development costs are very low because they're just an extremely minor change from the mid/upper Mac Pros. So as long as they grow the Mac Pro chassis base some, they are lowering aggregate costs (hence making more profit).

Decouple the base-level Mac Pros from the upper-end versions by turning them into mini-towers, and the price will likely have to fall back. You'd have to increase volume just to tread water. The profit per unit is lower, and you've increased cannibalization by a significant amount.

Likewise, the support/certification/diagnosis queue is bigger and more expensive because you've split one product into two.


Don't hold your breath waiting to see Apple do this.

Yeah, sure, Apple could build a super-max workstation. They could also build:
2U/4U servers
grid-in-a-cargo-container boxes
a smaller-screen MacBook to crack the sub-$800 barrier
a Toughbook-like ruggedized laptop
PCI expansion boxes
RAID boxes (oh wait, they did, and canceled that)
etc., etc., etc.


In short, Apple operates differently than most of the PC vendors. Many of the PC vendors have super-low-margin models at the low extreme of the product lineup and super-high-margin models at the high extreme. They try to blend the lows with the highs to get an acceptable overall rate.

Apple puts approximately the same margin on everything, low and high. The high-end stuff isn't there to prop up the low-end stuff, and the low-end stuff isn't there to drag down the high-end stuff. Apple's Mac business is oriented toward finding the sub-8% of the personal computer market that wants to buy the subset of possible products they make, and selling to just those folks. They aren't even trying to sell something to everyone.


So when folks come up with laments of "well, they are missing me"... that doesn't matter. "Well, there are 10,000 people like me"... still doesn't matter. Apple spins like they are a small company, but if there aren't significant economies of scale in it, they aren't doing it. 10,000 folks isn't huge scale.
As the overall PC market grows by tens of millions a year, 8% becomes an increasingly larger number (relative to the 10K-100K range). In that sense, Apple can stretch the claim of "small" while still not trying to fill the smaller niches in the market.
 
Why is it nobody here is even thinking about an expansion chassis for the Mac Pro? Magma makes a bunch of chassis that'll work, giving you more PCIe slots and drive bays as well. Hell, the Pro Tools people use them all the time.
There's Cyclone as well.

But there's a downside: you're pushing everything across a single slot in the workstation (potential to throttle, plus added latency). Nor are they inexpensive. But they still have their uses.
 
1. The Mac Pro supports a 1kW PSU, so there would be no need for a bigger one. All Apple really has to do is add more power connectors for expansion cards.

2. Bays: agreed, and make them SSD-compatible.

3. A larger enclosure is nice for more stuff. However, it is not necessarily needed; you can become more space-efficient. See the RAM risers. You could make the RAM modules attach like on a regular PC and save space.

4. Extreme users (like I said) go with custom builds and a variety of options. They do not pick Apple.

Extreme users, in my experience, rarely actually build their own boxes. Extreme *hobbyists* go with custom builds; extreme *users* make money on their hardware (or the hardware is paid for by grants, which you justify with publications) and don't want to waste time building boxes. (Not *always* true; there's a cluster at MIT right now I got shown last year that is motherboards and PSUs zip-tied to wire shelving, because it fit the needs of the users, but that's not the normal situation.)

Saying extreme users don't go Mac is foolish. Extreme users pick Apple when the situation is appropriate for it. For example, my workstation is a Mac Pro, but my *research* runs are on Dell, Cray, HP, IBM, or Supermicro compute nodes, not Xserves. I do my analysis and some test runs on my workstation, but Apple doesn't make a product that works well for high-density HPC.

If you want a good example, go to the annual supercomputing conference (lots of extreme users, lots of extreme hardware on the conference floor). You'll see tons of MacBook Pros in people's hands, and plenty of Mac Pros running vis boxes and front-end interfaces. The *clusters* are not Apple. As I said before, the right tool for the right situation.
 
not enough PCI slots (to have all the expansion cards), not enough power (to drive multiple high end graphics cards), not enough drive bays

What are the root causes driving the need for all of these drive bays and PCI slots?


Historically, you had the following:

Cards which fit into NuBus/PCI/PCIe slots had either limited bandwidth at the connector or limited bandwidth to the CPU/memory. So "more" cards were needed because you couldn't go 'wide' enough.

You had drives with latency and bandwidth problems relative to the speed of the CPU and, to a slightly lesser extent, memory. So you needed more spindles to mask these problems.


That gets set as some generic rule that "power" is measured by the number of sockets and slots, and that more are required to get work done.

The modern trend line is that you can do more with fewer slots:

Instead of one monitor or even two, graphics cards can drive 3 (or more... some of the AMD/ATI cards can technically drive 6 but run out of edge space on the card for connectors).

PCIe 2.0 and the upcoming PCIe 3.0 uncork the bandwidth problem to the CPU, unless you're talking about bleeding-edge InfiniBand cards.

Drives... use fewer spindles... that is the space waster. It isn't as necessary now for a larger population of users.


Users also have to pick a solution. Some of this is also from dragging old legacy stuff into the future. You had a big direct-attached storage RAID setup but are now putting in a Fibre Channel SAN setup, and you want the Mac Pro to do both... pick a side. If you're putting bulk storage on the network... do it. Go SAN and/or NAS. The "do everything" approach is where you run out of slots.

There is a small subset of users whose problems keep growing faster than the slot/socket improvements come. Sure, the Mac Pro isn't for them. However, for many folks, their workload isn't expanding faster than the hardware is getting better. The objective for the Mac Pro marketing folks at Apple is to find more of those folks, not to try to find more of the folks the solution doesn't target. There are going to be at least as many folks who used to require an extreme custom workstation but can now use a more mainstream Mac Pro with modern gear, as there are folks who still can't merge into the Mac Pro product stream.
 
Isn't the chipset 36-lane?

The X58 chip has 40 lanes. However, x4 of those are relegated to the DMI, the connection between the ICH10R bridge and the X58. That leaves 36 lanes open, which translates into either x16 + x16 (making x32) plus x4, or, as Apple has it, x4 + x4 + x16 with another one unknown (claimed to be x16).
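As a quick back-of-the-envelope check, here's a minimal Python sketch (mine, not anything from Intel's documentation) of that lane budget. It shows why four fully dedicated slots at x16 + x16 + x4 + x4 can't fit, and why something has to give (as later posts explain, Apple ends up sharing one x4 between the last two slots through a switch):

```python
# Minimal sketch of the X58 lane budget described above (40 lanes, x4 to DMI).
TOTAL_LANES = 40
DMI_LANES = 4                        # reserved for the link to the ICH10R
AVAILABLE = TOTAL_LANES - DMI_LANES  # 36 lanes left for the PCIe slots

def check(widths):
    """Print whether a set of slot widths fits in the remaining budget."""
    used = sum(widths)
    layout = " + ".join(f"x{w}" for w in widths)
    verdict = "fits" if used <= AVAILABLE else "over budget"
    print(f"{layout} = {used} of {AVAILABLE} lanes -> {verdict}")

check([16, 16, 4])      # the dedicated electrical links in the 2009/10 Mac Pro
check([16, 16, 4, 4])   # four fully dedicated slots would need 40 lanes
```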
 
The X58 chip has 40 lanes. However, x4 of those are relegated to the DMI, the connection between the ICH10R bridge and the X58. That leaves 36 lanes open, which translates into either x16 + x16 (making x32) plus x4, or, as Apple has it, x4 + x4 + x16 with another one unknown (claimed to be x16).

Yes, I think in the Mac you would be able to configure:

x16+x16+x4
x16+x8+x4+x4

With the AMD 6100 and only one SR5690 bridge (for 2 CPUs), you can actually configure:

x16+x16+x4+x4+x2
x16+x8+x8+x4+x4+x2
x8+x8+x8+x8+x4+x4+x2

The last 6 lanes I showed as x4+x2 can actually be configured into up to 6 slots in any combination of x4, x2, and x1.

There's an additional x4 going to the southbridge.
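Just as a sanity check (a small Python sketch of my own, not anything from AMD's docs), all of those SR5690 combinations carve up the same pool of lanes:

```python
# Minimal sketch: verify the SR5690 slot configurations listed above all
# draw on the same lane pool (the x4 to the southbridge is separate).
configs = [
    [16, 16, 4, 4, 2],
    [16, 8, 8, 4, 4, 2],
    [8, 8, 8, 8, 4, 4, 2],
]

for widths in configs:
    layout = " + ".join(f"x{w}" for w in widths)
    print(f"{layout} = {sum(widths)} lanes")  # each combination sums to 42 lanes
```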
 
x1 links would be pitiful if you happen to have a RAID or Fibre Channel card in there. Besides, Apple guarantees (on the Technical Specs page) two x4 PCIe slots with physical x16 support.

Yes, but it looks like you wouldn't be able to configure 2 x4 if you already configured 2 x16.
 
x16+x16+x4
This is the configuration used in the 2009/10 MPs. Those x4 lanes are "shared" by slots 3 and 4 by using a PCIe switch (located on the backplane board, which is where the slots are soldered).

Slots 1 and 2 are dedicated.
 
This is the configuration used in the 2009/10 MPs. Those x4 lanes are "shared" by slots 3 and 4 by using a PCIe switch (located on the backplane board, which is where the slots are soldered).

Slots 1 and 2 are dedicated.

I imagine x16+x16+x1+x1 is possible, otherwise having those 2 slower slots instead of just 1 makes no sense.
 
I imagine x16+x16+x1+x1 is possible, otherwise having those 2 slower slots instead of just 1 makes no sense.
It's possible, but a waste of lanes. And though dedicated, those slots would be slower than the shared x4 (even accounting for the latency added by the switch).

As for the x16 + x16 + x4 configuration, I managed to get pics and part numbers off the backplane board, which revealed the PCIe switch.
 
The X58 chip has 40 lanes. However, x4 of those are relegated to the DMI, the connection between the ICH10R bridge and the X58. That leaves 36 lanes open, which translates into either x16 + x16 (making x32) plus x4, or, as Apple has it, x4 + x4 + x16 with another one unknown (claimed to be x16).

Fair enough, I didn't think about the internal allocation.
 
PCIe 2.0 and the upcoming PCIe 3.0 uncork the bandwidth problem to the CPU, unless you're talking about bleeding-edge InfiniBand cards.

PCIe 3.0 is double the bandwidth of the current PCIe 2.0. In other words, here is how much a PCIe x1 link transfers over its 3 revisions.

At x1 speed, or in other words per lane:
PCIe 1.0 @ 250MB/s
PCIe 2.0 @ 500MB/s
PCIe 3.0 @ 1000MB/s

PCIe 3.0's extra bandwidth means we can now have x8 electrical lanes in a physical x16 slot and still have enough bandwidth for a GPU or high-speed expansion card. Which means LGA 1156 owners now have an easier time with multi-GPU setups without bottlenecks.

However, as for Apple's systems, PCIe 3.0 will mainly keep newer GPUs from bottlenecking at the x4 links (that would be 4GB/s vs. 2GB/s).
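If it helps, here's a tiny Python sketch (using the per-lane figures quoted above, not measured numbers) that makes those comparisons concrete:

```python
# Per-lane, per-direction throughput for each PCIe revision, as listed above.
PER_LANE_MB_S = {"1.0": 250, "2.0": 500, "3.0": 1000}

def link_bandwidth(revision, lanes):
    """Aggregate bandwidth in MB/s for a link of the given revision and width."""
    return PER_LANE_MB_S[revision] * lanes

# An x8 PCIe 3.0 link matches an x16 PCIe 2.0 link (8000 MB/s each)...
print(link_bandwidth("3.0", 8), link_bandwidth("2.0", 16))
# ...and an x4 slot goes from 2 GB/s at PCIe 2.0 to 4 GB/s at PCIe 3.0.
print(link_bandwidth("2.0", 4), link_bandwidth("3.0", 4))
```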
 
PCIe 3.0 is double the bandwidth of the current PCIe 2.0. In other words, here is how much a PCIe x1 link transfers over its 3 revisions.

Or Apple could get rid of the switch, so the last two slots don't have to share an x4: x8 + x8 + x4 + x4. For GPUs that aren't blocked on the current x16s, this is really the same situation using fewer wires. If you put sufficient memory on the GPU side of the PCIe bus, is there often a need for high speed? [Putting aside graphics cards swapping frame buffer data; that's driven by having separated the computations.]

I suspect, though, that if they have switched the last two slots, they'll keep that setup, because the stuff jammed into the last two x4 slots will more likely be v1.0 and v2.0 gear and not bleeding-edge v3.0 stuff. So the relatively marginal latency increase matters even less in most cases.

However, even if it doubles, PCIe is just treading water against InfiniBand (IB):

http://www.theregister.co.uk/2010/06/29/infiniband_roadmap/page2.html

It is making very similar changes at just as fast a rate (if not faster).
For the relatively pedestrian FC and Ethernet cards available for Mac OS X, it does get better with the 2x speed-up. Perhaps a 1x IB card could show up, though, because it might get by better in one of the x4 slots. It is more a software issue, though, I think.
 
This is the configuration used in the 2009/10 MPs. Those x4 lanes are "shared" by slots 3 and 4 by using a PCIe switch (located on the backplane board, which is where the slots are soldered).

Slots 1 and 2 are dedicated.
So, since I have the Apple RAID card in the top x4 slot, would it be better to use the free x16 slot for my eSATA card, or leave it in the 3rd (x4) slot it's in now? I wasn't aware of it being a shared slot.
 