The server market is the backbone of the business market. Macs will stay niche in enterprise as long as that backbone isn't there, and it needs to come back stronger than last time.

I'm going to have to disagree with that. Apple can be a great contender in the enterprise market without even touching server space.

There are plenty of client-side areas Apple can compete in. God knows none of the current "Enterprise" companies are going to deliver a well-polished client side, especially in the mobile space.

If you need a "Certified Engineer" and $10k worth of training to set up your software, you're doing something wrong.
 
Actually, you can get by with a mid/high-level iMac for most graphic design needs (Photoshop, Illustrator, etc.) these days, and even average video editing needs.

Mac Pros are really now for higher-end video and 3D applications, or for those who really need to get their work done and rendered fast.

Actually, if you go blow for blow, building a PC with the same specs (and that means the same specs across the board, down to the minor details), they really aren't that far off. Some magazine did that once and the two came within $300 of each other.

oOo, cool, I wasn't aware! You learn something new every day... Just goes to show how much and how fast technology is progressing today! Surprised to hear about the PC build too - then again, I really only check Newegg and piece together all the parts that I would want, hahaha.

I agree on MBPs being somewhat overkill for some people - I bought mine just before the Sandy Bridge ones came out, because my old BlackBook just couldn't handle HD iPhone 4 video like I thought it could (again, I don't even do much, just edit family vacation videos and Photoshop pictures in bulk). With my MBP, editing is super swift and rendering takes no time at all - I can't even imagine how much faster the new Sandy Bridge ones must be!
 
Some design changes I'd like to see (I'm fine with all the rest):

- Dust filters
- Thunderbolt ports, front and back (instead of one of the FireWire ports)
- USB 3.0 replacing the USB 2.0 ports
- PSU on the bottom to keep it cool
- HDDs on the bottom to keep them cool too
- At least one dedicated SSD bay

How does having the PSU on the bottom keep it cool?...

Hot air rises, so the heat generated by the PSU will just rise and fill up the case.

Unless I'm missing something or the laws of physics have changed in recent years?
 
Direct Attached Storage is a pain to manage: "Hey, server XY needs more storage space... oh wait, its array is full, we need to purchase a new array for it... too bad we can't use YZ's array, which only has 2 bays occupied...".

Centralized storage arrays with LUNs solve all of these issues. Running out of storage? Present a new LUN, plug it into whatever volume manager you use, and grow your existing filesystem, all with zero downtime and without even having to physically connect anything to the box.
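
For illustration, here's a minimal sketch of what that grow operation can look like on a Linux box with LVM (the device, volume group, and logical volume names are all made up, and the exact tools depend on your volume manager):

[CODE]
#!/usr/bin/env python
# Hypothetical sketch: grow a mounted filesystem after the SAN presents a new LUN.
# /dev/sdc, vg_data and lv_data are made-up names for illustration only.
import subprocess

def run(cmd):
    print("+ " + " ".join(cmd))
    subprocess.check_call(cmd)

new_lun = "/dev/sdc"  # the freshly presented LUN as seen by the host

run(["pvcreate", new_lun])                    # initialize the LUN for LVM
run(["vgextend", "vg_data", new_lun])         # add it to the volume group
run(["lvextend", "-l", "+100%FREE",
     "/dev/vg_data/lv_data"])                 # grow the logical volume
run(["resize2fs", "/dev/vg_data/lv_data"])    # grow ext3/ext4 online, zero downtime
[/CODE]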

For data centers, Thunderbolt is a non-contender.

That's the nice thing about the EqualLogic, right? ;)

The only issue I currently have with throughput is being limited to 4 gigs when there are 30-some-odd VMs running in our 3-host cluster. I would love to be on Fibre Channel, but between state budget cuts and a PITA systems guy, it ain't happening.

On Thunderbolt, though, I truly believe it will be a non-starter. Sure, it's cool for those of us who know about it, but people in general won't know and won't really care either way. Honestly, consumers should already be above 10Gbps, because the physical hardware is already there; it's just a matter of market elasticity.
 
I need:
8 internal bays.
More PCIe slots.
Thunderbolt.
Keep the dual optical bays.
More RAM slots.
Built-in Fibre Channel (this is a stretch).
That should be a Mac Pro. What you guys want is that magic headless iMac. I want more, not less.
Working in video, I need the most horsepower possible. 32 cores would be nice.

At home I can live with my iMac, but editing on it is a pain. A mini Mac Pro might work there, but it will still cost $2k and people will bitch.
For work I can justify spending $8,000 on a high-powered PRO machine.

Exactly. These are workstations; if you want something small with limited expandability, buy an iMac.
 
That's the nice thing about the EqualLogic, right? ;)

The only issue I currently have with throughput is being limited to 4 gigs when there are 30-some-odd VMs running in our 3-host cluster. I would love to be on Fibre Channel, but between state budget cuts and a PITA systems guy, it ain't happening.

On Thunderbolt, though, I truly believe it will be a non-starter. Sure, it's cool for those of us who know about it, but people in general won't know and won't really care either way. Honestly, consumers should already be above 10Gbps, because the physical hardware is already there; it's just a matter of market elasticity.

You do realise you can switch your multipath policy to something like Round Robin or least-used link and use both of your fabrics at the same time, doubling your bandwidth (8 Gbps in your 4 Gbps port configuration, or 16 in an 8 Gbps FC configuration), right? Actually, you should have a look at what it is set to; some versions of ESX and ESXi are completely braindead and set the default policy to use Fabric 1 only (versions prior to 4.x didn't have a supported configuration for using both paths at the same time; the support was experimental, I believe).
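
For what it's worth, here's a rough sketch of checking and flipping that policy on 4.x (the naa identifier is made up, and the esxcli syntax changed between 4.x and 5.x, so treat this as a pointer rather than gospel):

[CODE]
#!/usr/bin/env python
# Rough sketch for ESX/ESXi 4.x: set a LUN's path selection policy to Round Robin.
# The naa ID below is hypothetical; on 5.x the namespace is "esxcli storage nmp".
import subprocess

device = "naa.60060160a1b2c3d4e5f60718293a4b5c"  # made-up LUN identifier

# Show the current policy (look for "Path Selection Policy" in the output).
subprocess.check_call(["esxcli", "nmp", "device", "list", "--device", device])

# Switch to Round Robin so I/O is spread across both fabrics.
subprocess.check_call(["esxcli", "nmp", "device", "setpolicy",
                       "--device", device, "--psp", "VMW_PSP_RR"])
[/CODE]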

Or you can run FCoE or FCIP and use dual 10 Gbps links for FC on the cheap (I do realise HBAs can be pricey). Or heck, iSCSI over 10 Gbps links...

Also, looking at the current I/O statistics for one of our biggest ESXi boxes (about 20 VMs), I see we average about... 10 Mbps over the fiber. ;) Servers aren't constantly writing at full bandwidth anyhow, and the convenience of centralized SAN management trumps Direct Attached Storage any day of the week in a data center environment.

Heck, I wish our DMZ servers could be attached to the SAN (stupid security policies) so that I could actually grow the filesystems the file repository sits on... especially seeing how Sun (now Oracle) wants to charge us over $10k for about 72 GB of disks, just because the hardware is EOL'd and lacks the 2nd controller that would let us use its free drive bays...

Thunderbolt brings me back to those days. It's just not something I'd ever consider for data center use; it's not going to replace iSCSI or Fibre Channel. It's a complete non-contender in that space. Consumer space or workstations? Yeah, sure, it seems it could replace FireWire and USB disks, if the price and availability of actual peripherals are good. That last part remains to be seen.
 
It may make a lot of sense, but quietly cooling two CPUs, a high-end GPU, 8 DIMMs and multiple drives in such a form factor makes me a little dubious. That, and it seems to be pure hearsay on the part of 9to5Mac.

Mods, please don't lock this; discussion of Mac Pro-related articles in the main news section is really hard to have, as 90% of the posts are by people who have little interest in or knowledge of the topic.
I like the idea (it exists with other cases, and the ones I'm thinking of, such as offerings from SuperMicro, work very well).

My concern, though, seems to be the same as yours: specifically, packing a workstation into a 3U enclosure. 4U or even 5U is fine, as there's sufficient space for full-height PCIe cards and cooling (3U seems too tight for a workstation that has to be planned thermally with all slots filled).

Yet another sign Apple is going to kill the Mac Pro.

You'll see! With Final Cut Pro on its deathbed, there is no way the Mac Pro is sticking around!

/s
I get the sarcasm. My issue isn't with the concept of a case that's usable as both a tower and a rackmount, though.

As far as the MP's continuation goes, it's to do with the direction Intel is taking to meet the enterprise customer requirements/requests I've noticed (more cores than most workstation software can utilize, with the price going up as a result). Add in Apple's margin on smaller unit sales vs. other workstation vendors, and it doesn't look good.

TB further complicates the issue, particularly once a single-die consumer desktop CPU ships with 8 cores (not too far away), as the iMac could then be considered a replacement (not ideal, but functional enough for quite a few users).

Keep in mind, creative professionals don't actually need ECC, as the software isn't based on recursion (worst case, bits flipped by radiation cause a bad pixel here and there, not damage to the entire image).
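
As a toy illustration of that point (nothing to do with any real imaging pipeline), the same single-bit flip is a rounding error in pixel data but a disaster in a value the rest of a computation depends on:

[CODE]
#!/usr/bin/env python
# Toy example: one flipped bit means very different things in different data.

pixel = 200                  # an 8-bit colour channel value
print(pixel ^ (1 << 3))      # 192: the pixel is slightly off, the image is fine

count = 10                   # the same flip in a loop bound / index
print(count ^ (1 << 30))     # 1073741834: the computation goes off the rails
[/CODE]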

Doubtful; this is a key switcher market... it would be crazy to axe the very thing that will continue to switch PC builders/gamers over the next 5 years... this is a key ingredient in Apple taking over the industry with time.
Not so much lately, given the pricing since 2009 (enthusiast users are being forced out by the costs). Even professionals (i.e. independents and SMBs, particularly the S for small) are feeling the pinch, going by posts here on MR.

I think the iMac will take care of gamers...
This is what Apple expects them to buy, from what I can tell (i.e. the SP MP is ~$1000 USD more than an equivalent PC).

You are essentially using a PC with EFI firmware and the OS X operating system. The only advantage over a hackintosh is that it's all fine-tuned, modified and tested under one roof...
Exactly.

From an electronics POV, the MP is made of the same equipment used in PC equivalents. Apple uses the case to distinguish it physically, and the firmware to lock OS X to the machine.

The desktop market has been exhausted and its time has passed anyway, so now it's all about mobile and portable computing.
This has been claimed for a while, and in developed nations it has its validity.

But when you look at less developed nations, desktops still outsell laptops due to more bang for the buck (i.e. look at China; people there are less likely to have more than one system, so they choose the desktop for more power at a lower cost = higher desktop sales currently). This will change over time, but by then citizens of developed nations may be so poor that we have to dump laptops and devices for desktops again. :eek: :D :p


- Dust filters
Definitely, given the cost of the MP.

How does having the PSU on the bottom keep it cool?...

Hot air rises, so the heat generated by the PSU will just rise and fill up the case.

Unless I'm missing something or the laws of physics have changed in recent years?
The PSU doesn't run as hot as the CPU or GPU (and hot air from the boards rising into the PSU doesn't do it any favors). Hot air off the PSU's heat sinks can be exhausted before it ever rises to the boards. More of a win-win.

Of course, by using baffling (separating the case into chambers), it won't matter that much thermally anyway.

But even with baffles, layouts are improved with the PSU located on the bottom, IMO.
 
You do realise you can switch your multipath policy to something like Round Robin or least-used link and use both of your fabrics at the same time, doubling your bandwidth (8 Gbps in your 4 Gbps port configuration, or 16 in an 8 Gbps FC configuration), right? Actually, you should have a look at what it is set to; some versions of ESX and ESXi are completely braindead and set the default policy to use Fabric 1 only (versions prior to 4.x didn't have a supported configuration for using both paths at the same time; the support was experimental, I believe).

Or you can run FCoE or FCIP and use dual 10 Gbps links for FC on the cheap (I do realise HBAs can be pricey). Or heck, iSCSI over 10 Gbps links...

Also, looking at the current I/O statistics for one of our biggest ESXi boxes (about 20 VMs), I see we average about... 10 Mbps over the fiber. ;) Servers aren't constantly writing at full bandwidth anyhow, and the convenience of centralized SAN management trumps Direct Attached Storage any day of the week in a data center environment.

Heck, I wish our DMZ servers could be attached to the SAN (stupid security policies) so that I could actually grow the filesystems the file repository sits on... especially seeing how Sun (now Oracle) wants to charge us over $10k for about 72 GB of disks, just because the hardware is EOL'd and lacks the 2nd controller that would let us use its free drive bays...

If I'm not mistaken, our PE 2970s are 10gig (x4); I don't know about the EqualLogic (I'm just the "helper" on most of this anyway), but I assume it is 10gig as well. What makes this funny to me is that we should be able to use one physical link per server back to the EqualLogic and get greater throughput, versus going from the 2950 to a 3com 4500 (a 1gig switch; a 10gig switch would be a great start here) and back to the EqualLogic - though that would throw out redundancy. (The lead systems guy threw in a 4400 by mistake initially.)

Most of our VMs are file-sharing servers rather than processing servers, so I would think the higher the transmit speed, the better. We also recently added an R715, and it likes to take the brunt of the load from the cluster (16 physical cores/32GB RAM in it vs. 8/32GB in the 2950s), so greater throughput would help it too. We are on 4.1; I will definitely look into the multipath policy, thanks :)
 
How does having the PSU on the bottom keep it cool?...

Hot air rises, so the heat generated by the PSU will just rise and fill up the case.

Unless I'm missing something or the laws of physics have changed in recent years?

The PSU is cooled by air inside the case: if it's at the top of the case, it will get mostly hot air; if it's at the bottom, only fresh air. Plus, the heat generated by any Mac Pro PSU goes out the back. Missed that, didn't you? :D

The G5 had a better PSU location, just not a better form factor for it.
 
PSU at the bottom???

Some design changes I'd like to see (I'm fine with all the rest):

- Dust filters
- Thunderbolt ports, front and back (instead of one of the FireWire ports)
- USB 3.0 replacing the USB 2.0 ports
- PSU on the bottom to keep it cool
- HDDs on the bottom to keep them cool too
- At least one dedicated SSD bay

How exactly is a PSU at the bottom going to aid cooling? Heat rises... so anything above the PSU gets even hotter; this is why PSUs are traditionally at the top of the case.
 
How exactly is a PSU at the bottom going to aid cooling? Heat rises... so anything above the PSU gets even hotter; this is why PSUs are traditionally at the top of the case.


It aids the PSU's cooling: a cooler PSU = a longer lifetime and less fan noise.

...so at the BOTTOM of the case there is cool air (at least cooler than at the top, since hot air rises). That means if the PSU is at the bottom, it takes in cool air and expels hot air out the back of the case (NOT inside the case, meaning NO hot air coming out of the PSU will stay in the case and make things even hotter).

Hope I was clear, since English is not my primary language, but I think it's quite simple :)
 
...so at the BOTTOM of the case there is cool air (at least cooler than at the top, since hot air rises). That means if the PSU is at the bottom, it takes in cool air and expels hot air out the back of the case (NOT inside the case, meaning NO hot air coming out of the PSU will stay in the case and make things even hotter).

Since the Mac Pro has separate compartments for the processors, expansion cards, and PSU/optical drives, it doesn't matter where the PSU is whatsoever!
 
This is great news, but it won't replace the desktop model altogether. I would say the majority of users would want it on their desktop rather than in a rack, although, come to think of it, if it meant a dedicated and modular storage rack then I'd probably go that route. I would imagine you could just take off the handles, or they'd be optional altogether. Either that, or they abandon handles and have internal hand slots, for want of a better phrase. I'm sure the designers at Apple could utilize the slots for cooling chimneys or something.

I was actually hoping it would be bigger and have more internal bays; 2 more would be fantastic. But I guess this just opens up the possibility of a dedicated and modular expansion rack for storage via Thunderbolt.

As for the Mac Pro dying, it just isn't going to happen. Anyone who has used an iMac in professional situations, where they hit the wall of its limitations, knows that they need a Mac Pro. While iMacs have always been tempting, I would never buy another one for what I do; the last one I owned was a 500MHz Graphite DV SE. They won't open up iMacs, because they need to separate the amateur/consumer and pro markets.
 
I think the iMac will take care of gamers and builders... the Mac Pro is NOT a gaming device; it is a high-class workstation designed for using and manipulating multi-threaded pro video and audio apps.

Yeah, but it is the fastest "gaming" device Apple sells, so...
The stock 5770 is faster than the iMac 27"'s graphics. The iMac has always had pathetic graphics; it is never paired with a card that can actually play a game at the native res of the screen you're playing on. You always have to drop the res to get any AA or heavy shadows, which is unacceptable for any respectable gamer. I play on a smaller screen specifically because I want everything dot-to-dot. The graphics chip is also soldered to the motherboard, which is never going to be cool.
On the Mac side my Pro functions as a fancy-pants Xeon workstation, and on the Windows side it performs as a Core i7 980X and 5870 should, and that is a damn respectable gaming rig. The less capable machine is never seen as OK for gaming. For me to consider the iMac 27", they'd have to ship it with at least a 5870, if not a 6970, to ever hope to run games at 2560x1440. The heat alone would melt that case in a couple of months :)
 
Since the Mac Pro has separate compartments for the processors, expansion cards, and PSU/optical drives, it doesn't matter where the PSU is whatsoever!

I think that is not quite true... hot air rises, and the top compartment is not fully closed. Last time I looked, there were vents for the HDDs in the "shelf" they slide into that allow hot air to rise into the top compartment, keeping the drives a bit cooler and the PSU even warmer.

Feel free to correct me on this, since I've only opened the case once, and that was about a year ago.

EDIT: found a pic that illustrates this: http://images.anandtech.com/reviews/mac/MacPro2010/_DSC2953.jpg

While the PSU will get fresh air from the front, it will also get the hot air that rises from the bottom, hot HDDs included.
 
If not this year then soon, I predict Apple will revamp the MP into a modular system tied together using TB. Of course, I hope they'll wait until the 100Gbps TB spec is ratified and in use, otherwise it will be a step backwards. But overall I think it could be a serious improvement for the MP. You buy the "brain" you want (a mini à la i3/i5, a middle brain with a desktop i5/i7, and a "pro" brain with 1 or 2 Xeons). The brain would be CPU, RAM, USB, and TB (and perhaps wireless and Ethernet). You then buy storage containers and video containers as you need them.

This system would be easily and quickly standardized (commoditized), continuing Apple's tight fist of control while letting them spin off the lowest-margin, fastest-changing areas of video processors and storage.

I personally think it would work a bit like RED's cameras, ushering in a new era of embedded and server-room technology. You could have a fanless I/O station and/or monitor sitting on your desk, with all the fans and heavy-lifting equipment isolated somewhere else.
 
I think that is not quite true... hot air rises, and the top compartment is not fully closed. Last time I looked, there were vents for the HDDs in the "shelf" they slide into that allow hot air to rise into the top compartment, keeping the drives a bit cooler and the PSU even warmer.

Feel free to correct me on this, since I've only opened the case once, and that was about a year ago.


While the PSU will get fresh air from the front, it will also get the hot air that rises from the bottom, hot HDDs included.

I'm not exactly sure why Apple put those "vents" in the plate; they sure don't go through the whole panel, though. The compartment above is indeed closed apart from a few tiny holes.
 
I'm not exactly sure why Apple put those "vents" in the plate; they sure don't go through the whole panel, though. The compartment above is indeed closed apart from a few tiny holes.
There's not a lot of venting on the back (nor the ability to install a fan in push mode), so it's likely a means of moving additional heat out of the PCIe zone and pulling it out through the PSU (the air isn't as hot once mixed with the cool air drawn in from the front of the case past the ODDs, so it shouldn't be hot enough to damage the PSU).

Just a thought anyway... ;)
 