
Cromulent

macrumors 604
Oct 2, 2006
6,802
1,096
The Land of Hope and Glory
Well gee ... if you excessively oversubscribe the physical resources you will run into problems. If you point 10,000 users at a database running on a single 1U box it will suck too.

Of course, that is when you would get a second one. Two 1U servers generally perform better than one 4U server anyway, plus they have the added advantage of being more fault tolerant (if one server falls over, the other can pick up the load temporarily while you bring it back online).

One thing you can do is use a hypervisor that lets you set resource service-level commitments. Secondly, you can dynamically move VMs off to another machine to reduce the oversubscription. Take the slower-moving stuff off the machine, or move the load that is about to peak over to a machine with a lot more resources ... say a 4U box. Then move it back after the peak subsides. Other than a couple of seconds during the move each way, the customer doesn't see the difference. They'll probably just think the browser is doing something funky for a second.
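To make that concrete, here is a minimal sketch of that kind of move using libvirt's Python bindings, assuming a libvirt-managed hypervisor (KVM or similar) rather than whatever the host actually runs; the hostnames and VM name are made up:

```python
import libvirt

# Hypothetical hosts: the oversubscribed 1U box and a roomier 4U box.
src = libvirt.open("qemu+ssh://small-1u.example.com/system")
dst = libvirt.open("qemu+ssh://big-4u.example.com/system")

# Hypothetical VM name for the load that is about to peak.
dom = src.lookupByName("customer-web-vm")

# Live migration: the guest keeps running; there is only a brief
# pause while the last dirty memory pages are copied across.
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

src.close()
dst.close()
```

Run the same thing in the other direction once the peak subsides.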

I've never argued against virtual machines, they have their place and they are certainly useful. All I have said is that they are not the answer to everything and there is certainly a large dedicated server market out there. Just spend any amount of time on the popular web hosting forums.

Most people realise that a VM, while good, has some rather important limitations. For a start, web hosts generally try to stuff as many VMs onto one server as they possibly can to maximise their profits, and this has a hugely detrimental effect on the performance you get. The only way to make sure you get the performance you require is to get a dedicated server; otherwise you have absolutely no idea what resources your host is allowing you.
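One rough way to see how oversubscribed your VPS actually is (on a Linux guest, at least) is to watch CPU "steal" time, i.e. the cycles the hypervisor handed to somebody else's VM. A quick sketch; the 5-second sample window is arbitrary:

```python
import time

def cpu_times():
    # First line of /proc/stat:
    # cpu user nice system idle iowait irq softirq steal ...
    with open("/proc/stat") as f:
        return [int(v) for v in f.readline().split()[1:]]

before = cpu_times()
time.sleep(5)
after = cpu_times()

delta = [b - a for a, b in zip(before, after)]
steal_pct = 100.0 * delta[7] / sum(delta)  # 8th field is 'steal'

print(f"CPU steal over 5s: {steal_pct:.1f}%")
# Consistently high steal means your neighbours are eating
# the CPU time your host never actually guaranteed you.
```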

One, this is a niche problem (many sites punt processing off to an external site anyway, primarily because locking down a server to the extent this implies is hard). Two, chuckle ... yeah right, IBM mainframes and high-end Unix boxes are out of the processing business because they don't run the OS on raw iron. Please, do tell Visa and Mastercard, because their core operations run on virtual machines.

You obviously missed the point here.

The reason you can't do credit card processing with VPSes is that multiple independent customers are using the same machine. Obviously Visa and Mastercard may well be using virtualised services on their mainframes, but the important point here is that only Visa and Mastercard respectively have access to those servers.

Perhaps dedicated I/O and some requirements on qualifying hypervisors, but I'd like to see these rules. Again, there are some somewhat insecure ways folks have carved out pseudo-virtual setups with stuff like BSD jails and Solaris Containers, but those are not quite the same thing. Staying away from the paravirtualization stuff, it is going to be pretty hard for one VM to see content from another VM.

Here is information about the rules:

http://www.pcicomplianceguide.org/

Operation-wise, though, a host would be blocked because their sysadmins have access at the hypervisor level and could bypass security in the hosted OS. But that isn't because of the software; that is because the trust network is not controlled. If they're that anal about security, though, they shouldn't allow them physical access either. So not only dedicated, but secure from the host's personnel too.

Yep, that is in the rules too. Only explicitly authorised personnel are allowed access to the servers, so you would need to colo your own hardware in a secure cage at a data centre.
 

bartzilla

macrumors 6502a
Aug 11, 2008
540
0
Oh for crissakes, where did I say everything was? Where did I speak in absolutes? I know it isn't in all cases. But it is in many cases, including ours.

Everyone's different. I don't dispute that. I realize that. Never said otherwise.

Here's me:




Notice how it doesn't say:

Bad day? You're jumping up and down defending yourself from an attack that never actually happened, there.
 

bartzilla

macrumors 6502a
Aug 11, 2008
540
0
And in my experience you'll need Dell's excellent support, because server components on Dells fail a lot!

I've had some odd experiences with Dell servers and reliability. From what I've seen, their servers tend to be either nearly completely reliable or "haunted". I'm not talking about product ranges here, but a rack of identical servers where some don't seem to fail for years on end while the one next to them always seems to have some minor nitpick error. Makes you wonder if one of them was just put together on a Friday afternoon.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,257
3,860
Which is exactly the point of the boot drive sitting in the chassis separate from the magnetic drives. It is a low-power device capable of responding quickly when needed. When not needed, it sips power.

As for swapping and logging, I'm really hoping Apple isn't doing this to an SSD.
If the SSD is the only drive in the box when shipped, where else would the swapping and logging go???

Swapping is a form of caching, so that's fine as long as the swap block size is some multiple of the SSD write block size, although you are wearing the drive out if you swap excessively. The bigger problem is the logs (security and all the normal Unix/BSD logs), which get written out in much smaller chunks and recycled; they are likely to chew up write lifetime at a rate larger than the raw amount of data written would suggest. Still, probably not much of a problem over a 3-year lifespan, as long as there aren't many other large write-lifetime consumers.
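Back-of-the-envelope version of that wear argument; every number below is an illustrative assumption, not a spec for Apple's drive:

```python
# Rough SSD wear estimate -- all numbers are illustrative guesses.
capacity_gb = 128            # drive size
pe_cycles = 10_000           # program/erase cycles per cell (MLC ballpark)
host_writes_gb_day = 5       # swap plus recycled logs
write_amplification = 4      # small log writes burn more flash than data written

endurance_gb = capacity_gb * pe_cycles
flash_writes_gb_day = host_writes_gb_day * write_amplification

years = endurance_gb / flash_writes_gb_day / 365
print(f"~{years:.0f} years to wear-out at this write rate")
# Comes out in the hundreds of years -- so even with ugly write
# amplification from small recycled logs, a 3-year service life
# is fine unless something much bigger is hammering the drive.
```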





What the OS does with the disk is very dependent upon use. For some applications the SSD could be a win if boot and applications run from it.

Launching applications is more of a sequential-read problem than a random-read one. Boot is slightly different because lots of different files are being read. But booting faster ... if you are booting your server a lot, you have bigger issues than the disk drives. You should be seeing uptimes measured in months if not years. Optimizing boot in those kinds of time frames is a curious priority ordering.




I think you are missing two important points. One is that a SSD can lower your power budget. Some apps thrive on platforms that have data and code storage separated.

How did I miss the point when I said power was a benefit right there at the top? And if your server ships with just one OS boot drive, an SSD internal to the box ... exactly where is all the data the applications consume being stored?

SSDs have been and will continue to be MORE expensive than hard drives. This is very similar to the hyperbole from several years ago when folks said tape drives were dead and all backup would be done to hard drives. Hard drives aren't likely to completely disappear. As long as folks' requirements for retained data keep going up into the terabyte-and-up range, you are never going to be able to store all of your data on SSDs. Just a subset. SSD still has an order of magnitude to come down in price.
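The order-of-magnitude claim is easy to sanity-check with rough era prices (both figures below are ballpark assumptions, not quotes):

```python
# Ballpark 2009-era street prices -- rough assumptions only.
ssd_per_gb = 3.00   # e.g. roughly $240 for an 80 GB SSD
hdd_per_gb = 0.10   # e.g. roughly $100 for a 1 TB SATA disk

print(f"SSD is ~{ssd_per_gb / hdd_per_gb:.0f}x the cost per GB")
# ~30x: an order of magnitude and change. Keeping terabytes of
# retained data on pure SSD is not economical at these prices.
```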


Read this article by Adam Leventhal
http://blogs.sun.com/ahl/entry/hybrid_storage_pools_in_cacm
http://mags.acm.org/communications/200807/?pg=49



I can't disagree with the idea that PCI-Express is a better place for SSD storage and by extension that SATA is pretty much a dead end.
Again, spinning hard drives are not dead. Neither are CDs and DVDs. Spinning media is going to be around for a long while. What I am saying is that SATA and SAS hold back maximum performance from an SSD, primarily because their upper limits are bounded by rotational limitations.

Drives spinning faster than 15K rpm are probably dead. But the slower-spinning ones can be made cheaper and denser than SSDs.

However, just as there is a market for both SAS and SATA drives, there will be tiers. Some folks who don't need max speed and are more price sensitive may go with the cheaper option and get SATA SSDs. But doing RAID 0 of SATA SSDs ... if the SATA interface is the bottleneck, chuck it; don't just buy more "drives".
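To put numbers on the bottleneck argument (the link and drive throughput figures below are rough, era-appropriate assumptions):

```python
# Where is the ceiling: the flash or the interface? Rough numbers.
sata2_link = 300      # MB/s usable on a 3 Gbit/s SATA link
fast_sata_ssd = 250   # MB/s sequential read, good 2009-era SATA SSD
pcie_flash = 700      # MB/s, PCIe flash card (Fusion-io class), assumed

print(f"headroom left on the SATA link: {sata2_link - fast_sata_ssd} MB/s")
# One good SSD already nearly saturates its SATA 2 link. RAID 0 adds
# links along with drives, but each drive stays capped at ~300 MB/s,
# while a single PCIe card clears that ceiling on its own.
```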


The problem is as you point out no Mac hardware. In the meantime we are very much on the bleeding edge of SSD systems and as such better hardware is arriving every few months. Most of this hardware is on SATA so ideally you would be able to readily implement it on an Apple server.

Don't need "mac hardware", need drivers. Fusion I/O cards work in Linux and Windows. If there was a mac driver (on card for EFI and in OS so can present as drive) would have the hardware also.

Again, you can use SATA. It certainly appears that Apple is sticking their SSD on a SATA interface. However, you are NOT going to get max performance out of it, and if you want to use it to speed up the other parts of the disk storage hierarchy, max performance is exactly what you'd want.



As to SSD performance in a server that requires knowing the specifics to determine value. In any event for the right app SSD can be a performance advantage even on SATA.

You never get something for nothing. The downside of SSDs is that they can wear out faster than hard drives, especially if you do an unusually large amount of writing.


Long term yes there is the fact that SATA is a dead interface. It simply can't keep up with the fastest SSD technology.

Spinning disk technology has had 2-3+ interfaces living in parallel over the years; right now those are SATA, SAS, and FC. Nothing says that SSD has to have just one. In fact, the opposite: SSD is going to have very similar segmented economic driving forces.



However until a standard card format comes around (supporting hot swap for example) we will have to live with SATA and its limitations.

You can hot-swap PCI cards now, in servers and OSes that support it. Apple doesn't, but that doesn't mean it can't be done right now. IBM, Sun, etc. boxes do it today.


The thing that bothers me is the apparent restriction on running alternative drives. At some point the person implementing the alternative drive will have a better idea than Apple as to its suitability. Frankly, excessive restrictions just limit the potential applications for the hardware.

You can run alternative hard drives now. Poke around the internet for directions. Just don't ask Apple for support if it doesn't work.






KVMs certainly work but not all operations adopt that approach. Some places simply use a stand on wheels to place a keyboard at the troubled station.
Right, and how exactly do you hook up the video monitor without a visit to the back end of the machine? The USB port on the front isn't the only port. There are more on the back. If you have to go to the back of the machine anyway to hook up the video, just use one of the ports back there while you are there. All of this is mainly about being too lazy to go to the back of the machine if you want to hook up more than one device.

Secondly, with a cart: if you are hauling a KVM around on one, how much harder is it to haul a USB port expander around on the same cart? Take the one port, multiply it by 4 or 5, and now you have even more ports than a standard server, if only in this temporary situation. If your DVD drive is flaky, plug one in on USB. Etc., etc.



With a more permanently hooked-up KVM setup, you probably don't have to go to the back to get "turned on". So you are left with no keyboard, no mouse, and just this flash drive. One port. Done.




A good choice if it cuts on a DVD and all your machines reliably read the disk.

You are in an exceptional situation: the server is hosed in some way. Otherwise you have the network to get any files there. Apple's tools and the OS recovery all ship on DVD ... how do you use those with flaky DVD drives?


Given how Linux performs on similar hardware I suspect that there is much Apple could do to enhance server performance. That doesn't even include working OpenCL into the equation.

OpenCL is not going to be some panacea. Sure, there may be some splashy demos that show speed-ups, but it isn't going to make MySQL or SQLite or Word, etc. go substantially faster. It's questionable whether it would speed up basic file or deep kernel operations either.

Secondly, if they're thinking of reaching parity with Linux, they'd need to make some hardcore changes to the Mac OS kernel. The Mach + glued-on FreeBSD setup has overhead that Linux has forgone precisely because of the performance downsides.
 

polaris20

macrumors 68020
Jul 13, 2008
2,491
753
Bad day? You're jumping up and down defending yourself from an attack that never actually happened, there.

eh, probably. My point apparently didn't come across too well to more than just you, so I apologize.

I've had some odd experiences with Dell servers and reliability. From what I've seen, their servers tend to be either nearly completely reliable or "haunted". I'm not talking about product ranges here, but a rack of identical servers where some don't seem to fail for years on end while the one next to them always seems to have some minor nitpick error. Makes you wonder if one of them was just put together on a Friday afternoon.

Dunno, but I've had a similar experience as a networking consultant in Chicago. A rack of 10 identical Dells at one client; all work great except one, which has had the RAID controller die, every drive die at least once, bad memory, and a bad PSU.

Then another client, same sort of thing. Different model, different year made. Some clients had more "haunted" Dells than others.

Of course it isn't a scientific study, but certainly more than enough to steer me clear of Dell for good.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,257
3,860
I don't think Apple will ever play in a market above what you're saying (smaller end of SMB), because they lack flexibility not only in the hardware, but also their licensing. I can't have a physical XServe running some stuff, with an ESX server running another instance of OS X Server as backup, because they won't allow it.
Where does it say they don't allow it? All the license outlines is that:
  1. You have to run it on Apple-labeled hardware
  2. You have to buy/license a copy of Server for every instance of Server you have running on said Apple-labeled hardware

The HUGE blocker is more that ESX doesn't boot (without hacks) and is not supported by VMware on the XServe. Apple hardware boots with EFI. The vast majority of the x86 server world is either booting BIOS or booting indirectly into a hypervisor (via firmware attached to the motherboard, but that too is bootstrapped by the BIOS). It is still the case that EFI hardware is exceptional. Maybe that will change if Windows 7 and the server version that comes after it boot EFI, but there is tons of inertia blocking that (PCs are still coming with PS/2 keyboard and parallel ports on them). When and if EFI becomes pervasive, that is going to give Apple some fits too (so they are certainly not in a hurry to pull the rest of the market that way; it does make Intel happy, though, because EFI was one of the initiatives they hoped folks would pick up, so Apple is a "good" partner).

Since ESXi is now free and Apple has all these extra quirks to get around, I doubt VMware is going to certify the XServe unless Apple subsidizes the effort. You'd think with billions in the bank that wouldn't be a problem, but I wouldn't hold my breath. I don't think it is a priority for them, and there is a sizable "dongle the OS to the hardware" faction inside of Apple.




I can't even install ESX on a couple XServes to then run multiple instances of OS X Server, because even though it's physically possible, they won't allow it. That sucks.

Given that you can only buy an XServe with a Mac OS X license, I can't see how you lose that license because you wipe the disk and reinstall the same OS with the OS recovery disk that Apple gave you. If your hard drive dies and you put in a new one, did you lose your Mac OS X Server license? Nope.
What is the difference? You bought the license and it is running on Apple-labeled hardware. What other requirements has Apple put forward that you have to meet? For your 2nd and 3rd copies, you bought those too, presumably. Again, where is the disconnect in the licensing terms?

As long as all the servers in your virtual machine cluster are Apple-labeled hardware and you bought as many copies as the maximum number of VMs you create and run (i.e., images that aren't purely backup images), where is the disconnect?

Doesn't Microsoft make folks buy each copy of the OS also? That isn't stopping it from running on ESX.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,257
3,860
Yep, that is in the rules too. Only explicitly authorised personnel are allowed access to the servers, so you would need to colo your own hardware in a secure cage at a data centre.

How is this a web hosting provider situation when you put your own hardware in a locked-down colo location at the site and the host's job is solely to provide internet bandwidth, power, and cold air?

It is now YOUR box (or boxes). All the host is doing is aggregating the utility providers into a single bill. That is not providing dedicated machines; there are no machine and/or sysadmin services there at all. Maybe you get a cheaper deal somehow by leasing the box through them (which could happen if they have a better bulk-rate relationship with the hardware vendors), so perhaps technically you don't "own" the box. But the "host" isn't doing much with the box itself in that case.
 

polaris20

macrumors 68020
Jul 13, 2008
2,491
753
Where does it say they don't allow it? All the license outlines is that:
  1. You have to run it on Apple-labeled hardware
  2. You have to buy/license a copy of Server for every instance of Server you have running on said Apple-labeled hardware

The HUGE blocker is more that ESX doesn't boot (without hacks) and is not supported by VMware on the XServe. Apple hardware boots with EFI. The vast majority of the x86 server world is either booting BIOS or booting indirectly into a hypervisor (via firmware attached to the motherboard, but that too is bootstrapped by the BIOS). It is still the case that EFI hardware is exceptional. Maybe that will change if Windows 7 and the server version that comes after it boot EFI, but there is tons of inertia blocking that (PCs are still coming with PS/2 keyboard and parallel ports on them). When and if EFI becomes pervasive, that is going to give Apple some fits too (so they are certainly not in a hurry to pull the rest of the market that way; it does make Intel happy, though, because EFI was one of the initiatives they hoped folks would pick up, so Apple is a "good" partner).

That is true, and I also forgot that Apple allows you to run OS X Server VMs on an XServe running Parallels Server. So yeah, it's more on VMware's part, not Apple's. But I doubt that'll change any time soon.

Since ESXi is now free and Apple has all these extra quirks to get around, I doubt VMware is going to certify the XServe unless Apple subsidizes the effort. You'd think with billions in the bank that wouldn't be a problem, but I wouldn't hold my breath. I don't think it is a priority for them, and there is a sizable "dongle the OS to the hardware" faction inside of Apple.

ESXi "installable" is pretty well neutered though compared to even the Foundation edition, as well as comparing it to XenServer. But that's another thread altogether.

Given that you can only buy an XServe with a Mac OS X license, I can't see how you lose that license because you wipe the disk and reinstall the same OS with the OS recovery disk that Apple gave you. If your hard drive dies and you put in a new one, did you lose your Mac OS X Server license? Nope.
What is the difference? You bought the license and it is running on Apple-labeled hardware. What other requirements has Apple put forward that you have to meet? For your 2nd and 3rd copies, you bought those too, presumably. Again, where is the disconnect in the licensing terms?

As long as all the servers in your virtual machine cluster are Apple-labeled hardware and you bought as many copies as the maximum number of VMs you create and run (i.e., images that aren't purely backup images), where is the disconnect?

Doesn't Microsoft make folks buy each copy of the OS also? That isn't stopping it from running on ESX.

You're correct, but I don't want to run the VM'd version of OS X on top of OS X on an XServe ... too inefficient. I want a bare-metal hypervisor running OS X VMs.

And MS allows unlimited VMs of Server 2008 on an individual machine, provided you buy the Datacenter edition, IIRC. Then again, that's a $6,000 license.
 

Cromulent

macrumors 604
Oct 2, 2006
6,802
1,096
The Land of Hope and Glory
How is this a web hosting provider situation when you put your own hardware in a locked-down colo location at the site and the host's job is solely to provide internet bandwidth, power, and cold air?

It is now YOUR box (or boxes). All the host is doing is aggregating the utility providers into a single bill. That is not providing dedicated machines; there are no machine and/or sysadmin services there at all. Maybe you get a cheaper deal somehow by leasing the box through them (which could happen if they have a better bulk-rate relationship with the hardware vendors), so perhaps technically you don't "own" the box. But the "host" isn't doing much with the box itself in that case.

That is a separate issue. It does not change the fact that many websites (this one included I would assume) are hosted on one or more dedicated servers they rent from a web host.

E-commerce sites are subject to the separate rules that I have already linked to.

Just check out this forum for dedicated server information: http://www.webhostingtalk.com/forumdisplay.php?f=2
 

AidenShaw

macrumors P6
Feb 8, 2003
18,667
4,676
The Peninsula
I'd assume they're 2.5".

I wouldn't assume that they're anything.

For internal use, there are bare boards with SATA connectors. Room is tight inside a 1U, and there's no value in having the form factor of a 1.8" or 2.5" standard drive. (And Apple might not want you to be able to install a standard laptop drive.)

[image: bare drive board with SATA connectors]



There are also SATA mini-PCIe drives, which plug into a motherboard connector.

[image: SATA mini-PCIe drive]



The XServe support docs aren't online yet to check for sure.
 

polaris20

macrumors 68020
Jul 13, 2008
2,491
753
I wouldn't assume that they're anything.

For internal use, there are bare boards with SATA connectors. Room is tight inside a 1U, and there's no value in having the form factor of a 1.8" or 2.5" standard drive. (And Apple might not want you to be able to install a standard laptop drive.)

[image: bare drive board with SATA connectors]



There are also SATA mini-PCIe drives, which plug into a motherboard connector.

[image: SATA mini-PCIe drive]


The XServe support docs aren't online yet to check for sure.

That would be why I assumed it was: the support docs aren't online, and it clearly doesn't go in the 3.5" slots. No need to be rude, Aiden.
 

polaris20

macrumors 68020
Jul 13, 2008
2,491
753
And yes I realize it could possibly be a PCIe solution, or something else attached directly to the logic board.
 