
jumpcutking

macrumors 6502
Original poster
Nov 6, 2020
300
182
I was tempted to name this thread, "GoFundMe: help me replace all of my servers..." :D

Hypothetically, has anyone seen impressive benchmarks on the M1 performing as a server? API/Web/Database/FTP/Email... etc?

Traditionally speaking, service providers use our favorite flavor of Linux and load it up with only the necessary packages, keeping the OS footprint low and free RAM high... however, I've developed on a Mac for years, and I've always wondered whether moving to a series of Mac servers is a good idea. Before, it wasn't an option: too much footprint and no real performance increase.

However, with the M1 and its separated computational architecture (that was impressive to type...), would it perform well under highly stressful environments? Could it stand up to the calculations of querying a database? Could it outperform an Intel chip in that arena? For all those "neural network" and AI algorithm developers... could it outperform a current server? Would it be worth the premium price tag, if the benefits of that computation were passed on to the user?

These questions haunt me as a service provider. Your thoughts?
 

pldelisle

macrumors 68020
May 4, 2020
2,248
1,505
Montreal, Quebec, Canada
Hypothetically, has anyone seen impressive benchmarks on the M1 performing as a server? API/Web/Database/FTP/Email... etc?
See https://www.phoronix.com/scan.php?page=article&item=apple-mac-m1&num=1
Traditionally speaking, service providers use our favorite flavor of Linux and load it up with only the necessary packages, keeping the OS footprint low and free RAM high.
Exactly. You get rid of the GUI. You can't do that on macOS, so you always have to deal with its very memory-hungry GUI, even when using it in server mode. I'm not aware of any way to boot macOS without the UI.

Too much footprint and no real performance increase.
You still have the huge memory footprint of the GUI.

would it perform well under highly stressful environments?
If everything is run natively without Rosetta 2, I don't see any reason why it wouldn't perform well and run stably, accepting the fact that a sizable proportion of the memory is used by the loaded GUI.

Could it outperform an Intel chip in that arena? For all those "neural network" and AI algorithm developers... could it outperform a current server?
No.
I'm a machine learning engineer, and I can say it doesn't outperform a true dedicated GPU (RTX 2080, RTX 3070, 3080, 3090, Tesla V100, Tesla A100); it's VERY far from outperforming them. The RTX 2080 alone has about 10 TFLOPS of FP32 compute and roughly 90 TFLOPS in FP16 matrix multiplication. The M1's GPU peaks at only about 2.5 TFLOPS FP32, while the Neural Engine is barely documented and not yet widely supported. MAYBE it could beat them in SOME cases, but in general the M1 remains a low-end accelerator. And for the moment, only TensorFlow 2.4 (via a special fork from Apple) is compatible with it. Where it shines is performance per watt: there is no other 2.5 TFLOPS GPU that consumes less than 20 W.
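
If you want to put rough numbers on FP32 throughput yourself, here's a minimal sketch using Accelerate's SGEMM. This exercises the CPU-side BLAS path, not the GPU, so it won't reproduce the GPU TFLOPS figures above; the matrix size and iteration count are arbitrary choices for illustration.

```swift
// Rough FP32 throughput estimate via repeated large SGEMM calls.
// A dense n x n matrix multiply costs about 2 * n^3 floating-point ops.
import Accelerate
import Foundation

let n = 2048
let iterations = 10
let a = [Float](repeating: 1.0, count: n * n)
let b = [Float](repeating: 2.0, count: n * n)
var c = [Float](repeating: 0.0, count: n * n)

let start = Date()
for _ in 0..<iterations {
    cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                Int32(n), Int32(n), Int32(n),
                1.0, a, Int32(n), b, Int32(n),
                0.0, &c, Int32(n))
}
let seconds = Date().timeIntervalSince(start)

let flops = 2.0 * pow(Double(n), 3) * Double(iterations)
print(String(format: "~%.1f GFLOPS FP32 (CPU, SGEMM)", flops / seconds / 1e9))
```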

Docker is also still not available, and you certainly won’t have access to other parts of the SoC inside a Linux container.

For a home personal server it could be just fine, though I'd still prefer a Synology NAS. For a production environment, it's not suitable.
 

jumpcutking

macrumors 6502
Original poster
Nov 6, 2020
300
182
I guess that depends on whether the stuff you need runs natively on ARM or under Rosetta 2.
That is so true. I already asked my database people... so when will you support M1?

See https://www.phoronix.com/scan.php?page=article&item=apple-mac-m1&num=1

Exactly. You get rid of the GUI. You can't do that on macOS, so you always have to deal with its very memory-hungry GUI, even when using it in server mode. I'm not aware of any way to boot macOS without the UI.


No.
I'm a machine learning engineer, and I can say it doesn't outperform a true dedicated GPU (RTX 2080, RTX 3070, 3080, 3090, Tesla V100, Tesla A100); it's VERY far from outperforming them. The RTX 2080 alone has about 10 TFLOPS of FP32 compute and roughly 90 TFLOPS in FP16 matrix multiplication. The M1's GPU peaks at only about 2.5 TFLOPS FP32, while the Neural Engine is barely documented and not yet widely supported. MAYBE it could beat them in SOME cases, but in general the M1 remains a low-end accelerator. And for the moment, only TensorFlow 2.4 (via a special fork from Apple) is compatible with it. Where it shines is performance per watt: there is no other 2.5 TFLOPS GPU that consumes less than 20 W.

For a home personal server it could be just fine, though I'd still prefer a Synology NAS. For a production environment, it's not suitable.

Very impressive results. If the GUI could be pared down (or maybe a Linux distro built for the M1!), I feel like running a complete datacenter on these new chips would be intriguing. Of course... I want to see more RAM options.

Save me on power, and compute the world... thank you, Apple!
 

jumpcutking

macrumors 6502
Original poster
Nov 6, 2020
300
182
Only macOS can fully exploit this chip. No Linux kernel (for now) could take full advantage of every part of the SoC. The integration with the OS is far too deep for any other OS to fully exploit the chip.
Interesting, I suppose I'm a little ignorant of how Apple Silicon works... maybe that's because I'm used to standard processors as opposed to ARM ones.
 

pldelisle

macrumors 68020
May 4, 2020
2,248
1,505
Montreal, Quebec, Canada
A SoC is a highly integrated chip with multiple accelerators on it. Each accelerator requires an API to access it (Metal for the GPU, for instance). Any other OS must implement every single API to let its kernel "talk" to the accelerator. It's not just an x86 core crunching instructions through an I/O bridge anymore; it's a tightly coupled arrangement of accelerators.
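
As a small concrete illustration (a sketch, assuming a macOS command-line tool that links Metal): even basic facts about the GPU are only reachable through the Metal API, not through a generic bus interface an arbitrary kernel could probe.

```swift
// Query the GPU through Metal, the sanctioned route to this
// accelerator on Apple Silicon.
import Metal

guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("No Metal device available")
}
print("GPU: \(device.name)")                                 // e.g. "Apple M1"
print("Unified memory: \(device.hasUnifiedMemory)")          // true on M1
print("Max threads per threadgroup: \(device.maxThreadsPerThreadgroup)")
```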
 

jumpcutking

macrumors 6502
Original poster
Nov 6, 2020
300
182
So they are just software drivers for the accelerators? In theory one could write accelerator support into their OS. But it sounds like there's hardware isolation and ambiguity... lots of trial and error to figure out what does what, I suppose.
 

jumpcutking

macrumors 6502
Original poster
Nov 6, 2020
300
182
Funny use of terminology. There are far more Arm processors in the world than x86 processors. So which one is really the “standard?”
Quad-Core Intel i7... lol! J/k. Non-ARM processors, I guess...
 

pldelisle

macrumors 68020
May 4, 2020
2,248
1,505
Montreal, Quebec, Canada
So they are just software drivers for the accelerators? In theory one could write accelerator support into their OS. But it sounds like there's hardware isolation and ambiguity... lots of trial and error to figure out what does what, I suppose.
In theory, yes. Will Apple allow it? I doubt it. Another OS would be able to use the processing cores, which are standard ARMv8 cores, and the GPU-related parts for encoding/decoding and image processing, but the rest, I doubt.
 

MacUser2525

Suspended
Mar 17, 2007
2,097
377
Canada
I was tempted to name this thread, "GoFundMe: help me replace all of my servers..." :D

Hypothetically, has anyone seen impressive benchmarks on the M1 performing as a server? API/Web/Database/FTP/Email... etc?

Traditionally speaking, service providers use our favorite flavor of Linux and load it up with only the necessary packages, keeping the OS footprint low and free RAM high... however, I've developed on a Mac for years, and I've always wondered whether moving to a series of Mac servers is a good idea. Before, it wasn't an option: too much footprint and no real performance increase.

However, with the M1 and its separated computational architecture (that was impressive to type...), would it perform well under highly stressful environments? Could it stand up to the calculations of querying a database? Could it outperform an Intel chip in that arena? For all those "neural network" and AI algorithm developers... could it outperform a current server? Would it be worth the premium price tag, if the benefits of that computation were passed on to the user?

These questions haunt me as a service provider. Your thoughts?

Smaller workloads are definitely possible, but the memory options would limit you to applications that use little RAM. There is no way around that with no upgrades possible; that would be the limiting factor. All those services do run on ARM now, but speaking from my experience replacing an Intel Haswell-based box with a Pi 4: the limits of the RAM, and of the USB 3 connection for the data and boot drive, mean it just does not cut it for getting data out fast. That is the limit of the interface, but even a directly connected SATA port would only double the rate. With an M1 you would get faster data movement over the Thunderbolt connections, but the RAM will still limit the processes that can run and their size. There is a reason all those servers have massive amounts of RAM in them: the programs need it to run efficiently. In short: small, lighter loads, sure; larger, not a hope in hell.
 

4sallypat

macrumors 68040
Sep 16, 2016
3,494
3,300
So Calif
M1 Mini for a server? Maybe in a year, after our organization vets the OS and apps, and if Apple comes out with a 10GbE or fiber port?

Currently I work at an organization where we have 20 of the 2018 Intel Mac minis, CTO/BTO with the optional 32GB RAM and 10GbE. These 20 minis are tied to each location's MDF for the Apple caching service (Apple apps & OS updates) as well as 12TB of local user storage for iPad & iPhone users (authenticated VLAN users only).
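
For anyone curious, the content caching service described here is scriptable via the built-in AssetCacheManagerUtil tool; here's a minimal sketch of checking it from Swift (the tool path and the status subcommand are standard macOS, the wrapper itself is just illustrative):

```swift
// Query the macOS content caching service status by shelling out to
// the built-in AssetCacheManagerUtil command-line tool.
import Foundation

let task = Process()
task.executableURL = URL(fileURLWithPath: "/usr/bin/AssetCacheManagerUtil")
task.arguments = ["status"]   // other subcommands: activate, deactivate, settings

do {
    try task.run()
    task.waitUntilExit()
} catch {
    print("Failed to run AssetCacheManagerUtil: \(error)")
}
```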
 

Yebubbleman

macrumors 603
May 20, 2010
5,789
2,379
Los Angeles, CA
I was tempted to name this thread, "GoFundMe: help me replace all of my servers..." :D

Hypothetically, has anyone seen impressive benchmarks on the M1 performing as a server? API/Web/Database/FTP/Email... etc?

Traditionally speaking, service providers use our favorite flavor of Linux and load it up with only the necessary packages, keeping the OS footprint low and free RAM high... however, I've developed on a Mac for years, and I've always wondered whether moving to a series of Mac servers is a good idea. Before, it wasn't an option: too much footprint and no real performance increase.

However, with the M1 and its separated computational architecture (that was impressive to type...), would it perform well under highly stressful environments? Could it stand up to the calculations of querying a database? Could it outperform an Intel chip in that arena? For all those "neural network" and AI algorithm developers... could it outperform a current server? Would it be worth the premium price tag, if the benefits of that computation were passed on to the user?

These questions haunt me as a service provider. Your thoughts?
What are you trying to serve? Seeing as you can only run macOS on bare metal and can only virtualize ARM64 VMs (on hypervisors that are not yet ready for prime time), I think it depends on what you want to host.
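
For context on the hypervisor situation: the supported route for ARM64 VMs on Big Sur is Apple's Virtualization.framework. A minimal configuration sketch follows; the kernel/initrd paths are hypothetical placeholders, and a usable VM would also need storage and network devices.

```swift
// Sketch of configuring an ARM64 Linux VM with Virtualization.framework
// (macOS 11+). Paths are placeholders; this only validates the config.
import Virtualization

let bootLoader = VZLinuxBootLoader(kernelURL: URL(fileURLWithPath: "/path/to/vmlinuz"))
bootLoader.initialRamdiskURL = URL(fileURLWithPath: "/path/to/initrd")
bootLoader.commandLine = "console=hvc0"

let config = VZVirtualMachineConfiguration()
config.bootLoader = bootLoader
config.cpuCount = 4
config.memorySize = 4 * 1024 * 1024 * 1024  // 4 GiB

do {
    try config.validate()
    print("VM configuration is valid")
} catch {
    print("Invalid VM configuration: \(error)")
}
```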
That is so true. I already asked my database people... so when will you support M1?



Very impressive results. If the GUI could be pared down (or maybe a Linux distro built for the M1!), I feel like running a complete datacenter on these new chips would be intriguing. Of course... I want to see more RAM options.

Save me on power, and compute the world... thank you, Apple!

You're not going to run a datacenter on 16GB of RAM. I don't care if it's RAM infused by Wozniak himself with holy light. One or two VMs, sure. Also, again, the only OS running bare metal on these Macs at the moment is macOS Big Sur.

Interesting, I suppose I'm a little ignorant of how Apple Silicon works... maybe that's because I'm used to standard processors as opposed to ARM ones.
Apple Silicon isn't even standard ARM. It's heavily modified ARM customized to Apple's designs and specifications. It's like saying you browse on the Chromium-based Microsoft Edge. Technically, it's Chromium, but it's heavily modified for Microsoft's designs and specifications and therefore isn't the same browser.

Also, x86 isn't "standard". It's just the most popular for desktops and notebooks. Your smartphone doesn't have x86 CPUs like Core i5 or Ryzen 7, for instance.
 

leman

macrumors Core
Oct 14, 2008
19,197
19,055
Most relevant server software has been running on ARM for a while. I don't see Apple machines as commercial servers, aside from software testing: too expensive, too awkward to maintain. Companies like Ampere make high-performance ARM server chips for serious applications; look into those. And more (Nuvia, ARM Neoverse V1/N2, etc.) are in development. I expect ARM server market share to increase significantly in the next couple of years.
 

pldelisle

macrumors 68020
May 4, 2020
2,248
1,505
Montreal, Quebec, Canada
Most relevant server software has been running on ARM for a while. I don't see Apple machines as commercial servers, aside from software testing: too expensive, too awkward to maintain. Companies like Ampere make high-performance ARM server chips for serious applications; look into those. And more (Nuvia, ARM Neoverse V1/N2, etc.) are in development. I expect ARM server market share to increase significantly in the next couple of years.
True. Apple's chips remain consumer-oriented, not server-grade.
 

jumpcutking

macrumors 6502
Original poster
Nov 6, 2020
300
182
All fair points. As far as virtualization goes, I'm thinking of avoiding it and running the apps natively on the OS.
 

leman

macrumors Core
Oct 14, 2008
19,197
19,055
The cores are HEAVILY customized. That's why Apple's designs outperform ARM's.

This is not necessarily a correct thing to say. It's not that the cores are customized; they are a completely independent, original design that does not use any ARM CPU IP. Apple's chips are fully custom devices that just happen to implement the standard ARM ISA (plus some additional features, but those are of no importance to general performance).
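
For what it's worth, you can see how macOS identifies these custom cores with a sysctl query; a small sketch (machdep.cpu.brand_string is a standard sysctl key on macOS):

```swift
// Read a string-valued sysctl key, e.g. the CPU brand string, which
// reports "Apple M1" rather than any ARM-designed core name.
import Foundation

func sysctlString(_ name: String) -> String? {
    var size = 0
    guard sysctlbyname(name, nil, &size, nil, 0) == 0 else { return nil }
    var buffer = [CChar](repeating: 0, count: size)
    guard sysctlbyname(name, &buffer, &size, nil, 0) == 0 else { return nil }
    return String(cString: buffer)
}

print(sysctlString("machdep.cpu.brand_string") ?? "unknown")
```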
 