Given that you only have one kind of building block, when this "depends" swings to an architecture that does require something other than a 1U, then depending solely upon 1Us is a problem. If you want to narrow the scope solely to 1U problem/solution pairs, then having only a 1U is fine.... but that is circular. Not all system architectures naturally fit into a 1U solution.

Never said it fit ALL architectures. I was opposing the opposite end of the absolute, that it NEVER fit in an enterprise. I don't like absolutes like "never" or "always", but people here are really good at it. Sometimes 1U machines work well. Sometimes they don't. They do for us.

Just like in the consumer market, Apple is a niche player in the server market. For companies that want to buy from just one vendor, no matter what the problem/solution, that is a limitation.

I certainly don't dispute this. However just as with the rest of their line, if you like it and it fits your structure, then it's fine.

Exactly how many active virtual machines do you get running in a 1U/ESX box? I can perhaps see consolidating stuff that was only at < 10% utilization at max load anyway. However, if you had several 1U boxes at 40-50% capacity, how do you consolidate those into a single 1U box?

VMWare quotes 10 VM's per core on ESX capability.

We have 8-core 1U's, and generally run 5 VM's per machine so far, without an issue. Web servers, mail, Domain controllers, app servers, TS servers, BES server, XMPP server.

Entire VM images are backed up; if the physical server goes down, they can be brought up on any ESX or Workstation machine, and there's differential backups for as much drive space as you want to dedicate to it. So if something goes haywire, you can go back as far as you want to a good version of the VM.

Can't really do that too easily in the "real world".
 
Why not?

It really depends on the system architecture as to whether or not solely 1U's are acceptable. As I said before, a few 1U's, a SAN, and VMWare ESX negates the need for large individual servers in a lot of cases, particularly in small to medium-sized businesses.

Not everything is a suitable candidate for virtualisation. We've got into ESX 3.5 farms in a big way at work but we still have some blades and some 4U servers and I can't see any of those going soon. We're discussing buying a few more 1Us when we buy 4 new 4U servers and a new blade chassis this year, too.
 
Not everything is a suitable candidate for virtualisation. We've got into ESX 3.5 farms in a big way at work but we still have some blades and some 4U servers and I can't see any of those going soon. We're discussing buying a few more 1Us when we buy 4 new 4U servers and a new blade chassis this year, too.

Oh for crissakes, where did I say everything was? Where did I speak in absolutes? I know it isn't in all cases. But it is in many cases, including ours.

Everyone's different. I don't dispute that. I realize that. Never said otherwise.

Here's me:

It really depends on the system architecture
in a lot of cases

Notice how it doesn't say:


It fits all system architectures
in any case
 
Of course you can. The number of physical servers doesn't necessarily equal the number of deployed servers. You can put more virtual machines with moderate workloads on a 4U box than you can on a 1U box. De facto, you can turn a 4U box into something like the equivalent of a blade chassis. What matters is whether you can run the 4U box at 80+% utilization all the time. If each server runs at < 20% utilization, the problem is not 1U vs. 4U; it is consolidation.
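
To put some hypothetical numbers on that consolidation point, here is a toy sketch in Python (CPU only, made-up utilizations, and it assumes the source boxes and the target box have equal capacity):

# Made-up utilizations; the point is the arithmetic, not the numbers.
def fits(workloads_pct, headroom_pct=80):
    """True if the summed load stays under the headroom target on one equal-sized box."""
    return sum(workloads_pct) <= headroom_pct

idle_boxes = [8, 12, 5, 15, 10]   # five servers loafing along at under 20%
busy_boxes = [45, 50, 40]         # three servers at 40-50%

print(fits(idle_boxes))   # True  -> easy consolidation win on a single box
print(fits(busy_boxes))   # False -> 135% of one box; you need a bigger host (or more of them)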

If you tried selling virtualised servers as dedicated servers then you would basically be lying to your customers. A dedicated server is just that, an entire computer dedicated to one customer.

Virtualization may let a workload collapse onto fewer CPUs, but it does not collapse memory and/or I/O. Typically you can get more I/O and/or memory out of a 4U box than a 1U one, especially when you have multiple VMs going at it at the same time.



Similarly, a datacenter isn't solely composed of web servers. DB servers? Analytics? Data warehouses? Do those always fit on 1U boxes? (You can cluster these things to some extent, but if you are offering them up as services you have individualized workloads, so you may not necessarily be able to cluster just a single virtual instance.)


P.S. This is somewhat moot anyway, because Xserve boxes aren't really prime ESX (and competitors') targets. It's somewhat of a catch-22: with no 4U (or big boxes) they won't ever drum up a big enough market to make themselves a target.

I'm guessing you do a lot of work with virtualised servers :).

Virtualisation is fine for some tasks but for web hosts they generally only get used for VPS accounts. Shared accounts can be managed by individual computers or a cluster and dedicated servers must be a whole computer dedicated to one customer anyway.

I am not disputing the fact that you can do some cool stuff with virtualisation, I am just saying that for quite a few tasks 1U servers are better than 4U servers.
 
If you tried selling virtualised servers as dedicated servers then you would basically be lying to your customers. A dedicated server is just that, an entire computer dedicated to one customer.

I disagree. Though a hosting company would need to be totally transparent about this, I believe dedicated just means fully configurable and that you have full admin/rebooting privileges, which you get with a VM. As long as you have a good quota of allocated resources, you can run 6 very happy single-vCPU VMs on an octo.

I'm very interested that Apple has for the first time claimed that SSDs are fast, not just durable. This is great, because it means they should start trying to put fast drives in our laptops sometime in the future, which is why we buy SSD anyway.
 
I disagree. Though a hosting company would need to be totally transparent about this, I believe dedicated just means fully configurable and that you have full admin/rebooting privileges, which you get with a VM. As long as you have a good quota of allocated resources, you can run 6 very happy single-vCPU VMs on an octo.

Nope, that is called a VPS (or sometimes a VDS) - virtual private server or virtual dedicated server. It is a completely different product.

A dedicated server is just that, dedicated.
 
Nope, that is called a VPS (or sometimes a VDS) - virtual private server or virtual dedicated server. It is a completely different product.

A dedicated server is just that, dedicated.

It depends on how you look at it, and probably where you live, as well as who the host is. These ultra-cheap hosting sites probably fudge it a bit when they say "dedicated server".
 
Nope, that is called a VPS (or sometimes a VDS) - virtual private server or virtual dedicated server. It is a completely different product.

A dedicated server is just that, dedicated.

It's not really important. You can call things what you like as long as the customer knows what they are getting. Thinking of servers in terms of physical boxes is somewhat outdated and has little use for the customer. Some people want their very own box in somebody else's room though and you don't get much more dedicated than a single box.

Personally I'd rather rent 8 dedicated cores over two machines and share the other 8 with another client. Sure you're twice as likely to have a failure, but you have an extra level of redundancy. Win.
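
The arithmetic behind that trade-off, with a purely hypothetical per-box availability, assuming failures are independent and that either box alone can carry the load:

per_box = 0.99                          # hypothetical availability of one machine

single_box_up = per_box                 # all eggs in one basket
one_of_two_up = 1 - (1 - per_box) ** 2  # service survives if either box is up
some_failure  = 1 - per_box ** 2        # chance of dealing with a failure somewhere

print(f"single box up:           {single_box_up:.4f}")   # 0.9900
print(f"at least one of two up:  {one_of_two_up:.4f}")   # 0.9999
print(f"some failure somewhere:  {some_failure:.4f}")    # 0.0199, roughly twice as likely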

On a new note, will the Apple website start to speed up a bit as they bring in new Xserves? Probably not :-(
 
Especially as you get an unlimited client version of OS X Server for it. Try buying an unlimited client version of Windows Server (I know you can't) and see how much that costs you...

Stop kidding yourself. Of course you can, the things are called "connector" or "CPU" licenses, depending on the product. Besides that, you can install Windows on a variety of servers that Apple's engineers are not even allowed to dream of - and the Microsoft BackOffice platform is much more versatile than OS X Server. I also strongly doubt that Apple's support for their servers gets anywhere near what Dell provides.
 
True, I don't disagree; I was just making the point that Apple are pretty competitive in the server world.

If that were true, we might actually be seeing Apple servers "in the wild". But as it is, I have never seen a company that's using them, and I've seen quite a few server rooms and companies in my life.

The list of reasons why Apple isn't - and cannot be - a real or significant player in the server market is too long to put here.
 
Not really. It is far more effective at reducing energy costs when the bulk of your storage is off in the NAS/SAN (i.e. off the server). That way the OS drive, which isn't doing much (you shouldn't be heavily swapping most of the time), does nothing when there is nothing to do.
Which is exactly the point with the boot drive sitting in the chassis separate from the magnetic drives. It is a low power device capable of responding quickly when needed. When not needed it sips power.

As for swapping and logging, I'm really hoping Apple isn't doing this to an SSD.
http://www.apple.com/xserve/performance.html [ go down to the disk section]

The SSD drive, if you want to pull/push a lot of data, is right in the same ballpark as the SATA drive. This is a SATA SSD drive. You can get nice random I/O response, but your OS should NOT be doing tons of random probes at the disk.
What the OS does with the disk is very dependent upon use. For some applications the SSD could be a win if the boot volume and applications run from it.
SSDs are much better as a cache for the spinning drives, or for doing tons of random I/O with block sizes that match their read/write block size.
I think you are missing two important points. One is that an SSD can lower your power budget. Some apps also thrive on platforms that have data and code storage separated.
An SSD in a PCI-e slot, which isn't bounded by SATA/SAS speeds, can have a bigger impact (if Mac OS had drivers for them).
I can't disagree with the idea that PCI-Express is a better place for SSD storage and by extension that SATA is pretty much a dead end. The problem is as you point out no Mac hardware. In the meantime we are very much on the bleeding edge of SSD systems and as such better hardware is arriving every few months. Most of this hardware is on SATA so ideally you would be able to readily implement it on an Apple server.

As to SSD performance in a server, that requires knowing the specifics to determine the value. In any event, for the right app an SSD can be a performance advantage even on SATA.
http://news.cnet.com/8301-13924_3-10212989-64.html

So you may not want to put them into Apple drive modules. The IOPS of SSDs are in a different class than what is usually attached to a SATA/SAS bus. It is an open question whether you really want to use that bus for the latest/greatest SSD tech in the future. It is nice to package them as SATA since that is a driverless option, but it puts limitations on the drive.
Long term, yes, there is the fact that SATA is a dead interface. It simply can't keep up with the fastest SSD technology. However, until a standard card format comes around (supporting hot swap, for example) we will have to live with SATA and its limitations. Thankfully SSDs on SATA do offer advantages over the spinning storage formats and often can be justified at current prices.
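
To put rough numbers on why the interface matters (every figure below is a hypothetical ballpark, not the spec of any particular drive): throughput is roughly IOPS times block size, and a 3 Gb/s SATA-2 link tops out around 300 MB/s after encoding overhead.

SATA2_LIMIT_MBS = 300  # ~3 Gb/s link, roughly 300 MB/s usable

def throughput_mbs(iops, block_size_kb):
    """Throughput implied by an IOPS figure at a given block size."""
    return iops * block_size_kb / 1024

sata_ssd = throughput_mbs(iops=30_000, block_size_kb=4)    # ~117 MB/s, fits on SATA
pcie_ssd = throughput_mbs(iops=120_000, block_size_kb=4)   # ~469 MB/s, would saturate SATA-2

print(f"SATA-class SSD, 4K random: {sata_ssd:.0f} MB/s")
print(f"PCIe-class SSD, 4K random: {pcie_ssd:.0f} MB/s (> {SATA2_LIMIT_MBS} MB/s link limit)")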
Dell and HP have similar market segmentation for the drives that go into their server class boxes also.

If the limitation leads to a substantially higher likelihood of server uptime, then most folks make that trade-off: stick to the smaller subset of drives that the hardware vendor says is OK.
I would fully expect that a vendor would have tested and validated drives available. The thing that bothers me is the apparent restriction on running alternative drives. At some point the person implementing the alternative drive will have a better idea than Apple as to its suitability. Frankly, excessive restrictions just limit the potential applications for the hardware.
In a temporary, need-a-physical-keyboard-for-some-deep-maintenance context, what is wrong with daisy-chaining the mouse off the keyboard?
1. Speed.
2. A device might already be mounted.
In normal operation nothing should be hooked to the USB ports on the front.
That is your opinion and in my mind reflects a narrow view as to what a piece of server hardware can be put to use as.
If you want to do KVM, that is better put in the back with the rest of the wires that permanently hang out the back of the box.
KVMs certainly work, but not all operations adopt that approach. Some places simply use a stand on wheels to place a keyboard at the troubled station.
As for storage. The DVD ROM drive? If it is a specific file(s) that all the machines need after being restored, burn a disk and walk it around.
A good choice if it fits on a DVD and all your machines reliably read the disc. In any event, between USB dongles and USB drives, the use of DVDs/CDs for such file transfers has fallen off rapidly.
Dramatic improvements? Not very likely. Some incremental gains of a couple of percent, yeah. But when has a new OS been 25-50% faster than the last one (that wasn't a complete dog)?

That is a good question, and frankly no one knows how well Mac OS X will respond to Apple's speed-ups. Personally I believe there is a lot of room in Mac OS for speed-ups. How that would translate into a benefit on a server is harder to define.

Given how Linux performs on similar hardware I suspect that there is much Apple could do to enhance server performance. That doesn't even include working OpenCL into the equation.


Dave
 
Thinking of servers in terms of physical boxes is somewhat outdated and has little use for the customer.

Not really. If you have a website that will bring a VPS down because it requires too many resources then you obviously need to start looking at dedicated hardware.

Plus the new rules coming into effect regarding merchants processing credit cards require them to have dedicated hardware.

Stop kidding yourself. Of course you can, the things are called "connector" or "CPU" licenses, depending on the product. Besides that, you can install Windows on a variety of servers that Apple's engineers are not even allowed to dream of - and the Microsoft BackOffice platform is much more versatile than OS X Server. I also strongly doubt that Apple's support for their servers gets anywhere near what Dell provides.

Hardware support has always been Apple's weakness, I'll give you that.

The point I was making was that price wise Windows is way up there. I seem to remember that the last time I looked the cost of a 2 CPU version of Windows was much higher than the cost of the unlimited client version of Mac OS X Server.

True, I don't disagree; I was just making the point that Apple are pretty competitive in the server world.

The list of reasons why Apple isn't - and cannot be - a real or significant player in the server market is too long to put here.

I think saying "cannot" is pushing the realms of truth a little. Yes, they need to fix a few things and become more serious about the enterprise. But if you look at it from a purely cost-based perspective, the Xserve is extremely competitively priced, the OS is cheap in comparison to Windows (and some Linux support contracts, I might add), plus there are some innovative features coming in the future that will help with performance (Grand Central is going to be big when it hits OS X Server).
 
Stop kidding yourself. Of course you can, the things are called "connector" or "CPU" licenses, depending on the product. Besides that, you can install Windows on a variety of servers that Apple's engineers are not even allowed to dream of - and the Microsoft BackOffice platform is much more versatile than OS X Server. I also strongly doubt that Apple's support for their servers gets anywhere near what Dell provides.

And in my experience you'll need Dell's excellent support, because server components on Dells fail a lot!
 
Long term, yes, there is the fact that SATA is a dead interface. It simply can't keep up with the fastest SSD technology. However, until a standard card format comes around (supporting hot swap, for example) we will have to live with SATA and its limitations. Thankfully SSDs on SATA do offer advantages over the spinning storage formats and often can be justified at current prices.

Dead? Lolwut? :D

The claim that "it simply can't keep up with the fastest SSD technology" sounds rather subjective, because interfaces get updated every now and then to support more bandwidth...I don't see how SATA is any different. If it were truly dead that would mean it has been replaced by something far better, but I don't think it has. :cool:
 
Until I can get SSDs for the same price as an equivalent-sized SATA drive (which itself is impossible, because there's no such thing as a 2TB SSD), SATA ain't dyin'.

EDIT

Perhaps I misread; he was apparently talking about the interface itself, not necessarily the drives.
 
Never said it fit ALL architectures. I was opposing the opposite end of the absolute, that it NEVER fit in an enterprise. I don't like absolutes like "never" or "always", but people here are really good at it. Sometimes 1U machines work well. Sometimes they don't. They do for us.

Nice backtrack. How about we go back to what you wrote the first time back in post 86 and what was quoted from post 85.

polaris20 said:
bartzilla said:
I don't think there's much wrong with 1U servers per se. Not so good as your sole server option though, that's for sure.
Why not? ...

The 'not' in "Why not?" negates the 'not' in "Not so good as your sole server option...". Which is equivalent to asserting that: it is good as your sole server option. Sole server option is an absolute. If you are against absolutes, then your comment should have been "I agree. There are a subset of situations where 1U is the right choice ... " as opposed to "Why not?".

You appear to be in the mindset that it is OK because the 1U solution space is all you are considering. For the set of problems where you need to virtualize and the resource requirements are a significant fraction of a 1U box, other options should be on the table; i.e., not good as the sole server option.



VMWare quotes 10 VM's per core on ESX capability.

VMWare doesn't do this. I imagine there is a "rule of thumb ... 10" or "up to 10 generally", but I suspect you have removed this from its context or have only consolidated extremely lightweight workloads. (A number like that gets thrown around, but it is typically followed by the caveat about having to consider the workload: http://communities.vmware.com/thread/170137 ) Virtualization is more than counting cores/CPUs. You are virtualizing cores, memory, and I/O. Documents like http://www.vmware.com/pdf/vi3_301_201_config_max.pdf (there may be newer ones; that's just the first one a search found) list all three components. You'd need to size your consolidated workload on all 3 dimensions. In short, it is at least a 3-dimensional problem (4 if doing consolidation across geographic regions with different peak workloads). You have to account for all 3 dimensions in your sizing efforts, not just one. Where 1Us tend to crap out on VMs for machines that don't have minimalistic memory requirements is that they can't scale up that large. 4Us generally can. [These days core counts are doubling over previous generations.] Especially when they are NUMA boxes where you can get 4 different banks of DIMM slots and typically a higher number of high-bandwidth PCI slots with multiple PCI hubs.

Sure, if you are consolidating several 1U boxes that were all running at <5% CPU, <10% memory, and <10% I/O on average. For example, several "servers" that could have been happily running on a bunch of Mac minis. However, if 1-3 of those peak out at 60% utilization, your service levels will crap out when 2 of them peak at the same time. You also have less flexibility with whatever bounded allocations VMWare allows so that service levels across machines are not impacted, primarily because you just have fewer resources to allocate.
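
To make the three-dimensions point concrete, here is a minimal sizing sketch with entirely made-up host and workload numbers (real sizing would use measured peaks and the published configuration maximums):

HOST_1U = {"cpu_ghz": 24, "mem_gb": 32,  "io_mbs": 400}    # hypothetical 1U host
HOST_4U = {"cpu_ghz": 96, "mem_gb": 256, "io_mbs": 1600}   # hypothetical 4U host

def fits(vms, host, headroom=0.8):
    """True only if the summed peak demand fits the host in every dimension."""
    return all(
        sum(vm[dim] for vm in vms) <= host[dim] * headroom
        for dim in host
    )

# Ten lightweight VMs vs. four memory/I/O-hungry ones (peak figures, invented).
light = [{"cpu_ghz": 1, "mem_gb": 2,  "io_mbs": 20}] * 10
heavy = [{"cpu_ghz": 4, "mem_gb": 24, "io_mbs": 200}] * 4

print(fits(light, HOST_1U))   # True  -- the classic "mac mini"-class consolidation
print(fits(heavy, HOST_1U))   # False -- memory and I/O blow past the 1U long before CPU does
print(fits(heavy, HOST_4U))   # True  -- the same workload fits the bigger box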
 
Nice backtrack. How about we go back to what you wrote the first time back in post 86 and what was quoted from post 85.



The 'not' in "Why not?" negates the 'not' in "Not so good as your sole server option...". Which is equivalent to asserting that: it is good as your sole server option. Sole server option is an absolute. If you are against absolutes, then your comment should have been "I agree. There are a subset of situations where 1U is the right choice ... " as opposed to "Why not?".

You appear to be in the mindset that it is OK because the 1U solution space is all you are considering. For the set of problems where you need to virtualize and the resource requirements are a significant fraction of a 1U box, other options should be on the table; i.e., not good as the sole server option.





VMWare doesn't do this. I imagine there is a "rule of thumb ... 10" or "up to 10 generally", but I suspect you have removed this from its context or have only consolidated extremely lightweight workloads. (A number like that gets thrown around, but it is typically followed by the caveat about having to consider the workload: http://communities.vmware.com/thread/170137 ) Virtualization is more than counting cores/CPUs. You are virtualizing cores, memory, and I/O. Documents like http://www.vmware.com/pdf/vi3_301_201_config_max.pdf (there may be newer ones; that's just the first one a search found) list all three components. You'd need to size your consolidated workload on all 3 dimensions. In short, it is at least a 3-dimensional problem (4 if doing consolidation across geographic regions with different peak workloads). You have to account for all 3 dimensions in your sizing efforts, not just one. Where 1Us tend to crap out on VMs for machines that don't have minimalistic memory requirements is that they can't scale up that large. 4Us generally can. [These days core counts are doubling over previous generations.] Especially when they are NUMA boxes where you can get 4 different banks of DIMM slots and typically a higher number of high-bandwidth PCI slots with multiple PCI hubs.

Sure, if you are consolidating several 1U boxes that were all running at <5% CPU, <10% memory, and <10% I/O on average. For example, several "servers" that could have been happily running on a bunch of Mac minis. However, if 1-3 of those peak out at 60% utilization, your service levels will crap out when 2 of them peak at the same time. You also have less flexibility with whatever bounded allocations VMWare allows so that service levels across machines are not impacted, primarily because you just have fewer resources to allocate.

I said "Why not?" because the person being quoted said it's not good as a sole architecture. I disagreed because it can be, but I wanted to hear the reasoning. There is no backtracking involved; sorry you didn't understand that. I also think I made it abundantly clear in the following posts that I viewed it as a possible option for some, but not everyone. Please read the whole thread.

It's also convenient how you left out what I said immediately after I said "why not":

Me said:
It really depends on the system architecture as to whether or not solely 1U's are acceptable. As I said before, a few 1U's, a SAN, and VMWare ESX negates the need for large individual servers in a lot of cases, particularly in small to medium-sized businesses.

In post 95, you say:

Not all system architectures naturally fit into a 1U solution.
I don't disagree. For your company, your needs clearly dictate a 2U or 4U or blade. Good deal! I don't doubt you.

What part of that doesn't make sense as to my viewpoint?
And thanks for telling me everything I already knew about what it takes to virtualize. The lesson was unnecessary.

EDIT

Let me make myself clear, since you're apparently misreading me.

In some situations, such as ours (and other SMB's), it is my opinion that a 1U architecture as a sole format will work completely fine, especially coupled with virtualization.

It will, however, not work in all situations, apparently such as yours.

This is what I've meant from the get-go.
 
Not really. If you have a website that will bring a VPS down because it requires too many resources then you obviously need to start looking at dedicated hardware.

Well gee ... if you excessively oversubscribe the physical resources you will run into problems. If you point 10,000 users at a database running on a single 1U box, it will suck too.

One thing you can do is use a hypervisor that lets you set resource service-level commitments. Secondly, you can dynamically move VMs off to another machine to reduce the oversubscription. Take the slower-moving stuff off the machine, or move the load that is about to peak off to a machine with a lot more resources .... err, for instance, say, a 4U box. Then move it back after the peak subsides. Other than a couple of seconds of movement back and forth, the customer doesn't see the difference.
They will probably just think the browser is doing something funky for a second.

Unless you are hosting 100% merchants and hit some kind of Black Friday sales load, this works.
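
A toy sketch of that rebalancing logic, purely to illustrate the idea; the migrate() hook is a stand-in for whatever live-migration mechanism the hypervisor provides, not a real VMware API:

PEAK_THRESHOLD = 0.75   # hypothetical trigger point

def rebalance(vms, small_host, big_host, migrate):
    """Move any VM that is peaking on the small host over to the bigger box."""
    for vm in vms:
        if vm["host"] == small_host and vm["utilization"] > PEAK_THRESHOLD:
            migrate(vm, big_host)   # a few seconds of movement, then business as usual
            vm["host"] = big_host

# Example with fake data and a print() standing in for the real migration call.
vms = [{"name": "shop", "utilization": 0.90, "host": "1u-03"},
       {"name": "blog", "utilization": 0.10, "host": "1u-03"}]
rebalance(vms, "1u-03", "4u-01",
          migrate=lambda vm, host: print(f"moving {vm['name']} -> {host}"))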


Plus the new rules coming into effect regarding merchants processing credit cards require them to have dedicated hardware.

One, this is a niche problem (many sites punt processing off to an external site anyway ... primarily because locking down a server to the extent this implies is hard). Second, chuckle ... yeah, right, IBM mainframes and high-end Unix boxes are out of the processing business because they don't run the OS on raw iron. Please, do tell Visa and Mastercard, because their core operations run on virtual machines.

Perhaps dedicated I/O and some requirements on qualifying hypervisors, but I'd like to see these rules. Again, there are some somewhat insecure ways folks have carved out pseudo-virtual setups with stuff like BSD jails and Solaris Containers, but that's not quite the same thing. Staying away from the paravirtualization stuff, it is going to be pretty hard for one VM to see content from another VM.

Operations-wise, though, a host would be blocked because their sysadmins have access at the hypervisor level and could bypass security in the hosted OS. But that isn't because of the software; that is because the trust network is not controlled. If they are that anal about security, though, they shouldn't allow them physical access either. So not only dedicated, but secure from the host's personnel too.








I think saying "cannot" is pushing the realms of truth a little.

The other problem for Apple is how being a major player is measured. If it's in terms of revenue ... they'll likely never catch up [unless they jump up and buy Sun; not holding my breath]. In terms of units sold they can be decently in the game, primarily selling to the lower half of the small-to-medium business folks that just need a small IT "department".
 
The other problem for Apple is how being a major player is measured. If it's in terms of revenue ... they'll likely never catch up [unless they jump up and buy Sun; not holding my breath]. In terms of units sold they can be decently in the game, primarily selling to the lower half of the small-to-medium business folks that just need a small IT "department".

I don't think Apple will ever play in a market above what you're saying (smaller end of SMB), because they lack flexibility not only in the hardware, but also their licensing. I can't have a physical XServe running some stuff, with an ESX server running another instance of OS X Server as backup, because they won't allow it.

I can't even install ESX on a couple XServes to then run multiple instances of OS X Server, because even though it's physically possible, they won't allow it. That sucks.
 
I am not disputing the fact that you can do some cool stuff with virtualisation, I am just saying that for quite a few tasks 1U servers are better than 4U servers.

What I'm getting at is that a 1U is not necessarily more space efficient just because it is physically smaller. That's how things got into the explosion of boxes that folks ran in the late '90s to early 2000s. Thousands of boxes all running at <5% utilization are a waste of space, not a conservation of it.

Nor am I saying everyone has to use 4U all the time. It is just a tool in the tool belt to use where appropriate. And yeah virtualization with bad security and bad resource management has problems. Just as any other tool not properly utilized.

What appears here is that folks are taking experiences of consolidating "Mac mini" loads and generalizing that into 1Us being good for VM consolidation in general. I think that is a leap. In the past most of the x86 server vendors have capped the 1Us on memory and I/O bandwidth. It is better now with Nehalem-era (and AMD quad) 1Us, but the 4Us similarly improve too (and, chuckle, the Mac minis are better too).
 