You mean apart from no support in macOS to host containers, or run software within containers? AKA point 1 in the post you quoted.


I honestly don't know if you don't know what containers are or if you are trolling.

Virtualisation is literally about running an entire, virtualised computer. While it may know it's running on virtualised hardware (e.g. if it's paravirtualised), it's doing everything a regular OS on hardware does, from the kernel through to the software you install and run.

Containerisation is running programs in a different execution context, under the control of the host kernel. On Linux this is built around cgroups and namespaces. I don't know what the equivalent API in Windows is called, but it's the same concept: the kernel runs some programs in isolation from other programs.
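For the Linux side, here's a minimal, read-only sketch of those kernel primitives (assumes any ordinary Linux shell; no container runtime required, and the exact output varies by distro and cgroup version):

```shell
# Every Linux process already runs inside a set of namespaces and a
# cgroup; a container runtime just creates fresh, more isolated ones.
# Both commands below are unprivileged and read-only.

# The cgroup(s) this shell belongs to:
cat /proc/self/cgroup

# The namespaces it lives in (mnt, pid, net, ...):
ls -l /proc/self/ns

# With privileges, `unshare` drops a process into new namespaces,
# which is the core of what a container runtime does:
#   sudo unshare --pid --fork --mount-proc sh -c 'ps aux'
# Inside, ps would see only the processes in the new PID namespace.
```

That's the point being made above: this machinery lives in the kernel, and macOS's kernel simply doesn't expose an equivalent.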


.. So not containers at all, in any way, shape or form.


The main reason it really isn't likely yet is that it isn't possible yet: macOS has no containerisation support. It's nothing to do with market demand or investment cost.


Once again, you're conflating containers and virtualisation. Apple could send every person on the planet a gold-stamped letter saying "You are allowed to run as many virtualised copies of macOS as you want, on anything you can make them run on, including but not limited to a Whopper with cheese", and that would have zero impact on macOS being able to host containers.


No, they really ****ing aren't, and please stop making claims that you have zero idea about.
I think, perhaps, we're looking at this from different perspectives. At the moment I'm more interested in being able to use macOS in a Docker-like way, in the same pipelines, within an abstracted layer; I'm less concerned with whether it's truly, fully, properly containerized.

You can combine native virtualization and Docker to force a middle road now if you want: there's been some work done on running macOS virtualized within a Linux Docker container, and the newer Big Sur virtualization layers should make that a more viable path to do legally going forward without losing too much performance. (To meet licensing it has to run on Apple hardware, and for most end consumers, and so far entirely on ARM, that means running macOS as the base.) Likewise, I know there's some limited work being done on porting runc to macOS, which would be true native containerization, though it hasn't progressed very far and will need support from Apple to proceed.

AWS could provide a Docker container that ran macOS transparently, as if it were a standard Docker container on ECS, right now if their licensing allows it. It would have an extra layer virtualizing macOS, but to the end user it would work as expected. Yes, that's not what really makes a container different from a VM; functionally it's basically containerizing the underlying host shim with a VM on top. But to the end user, for many uses, including my hypothetical use as an ephemeral build container within a CI/CD pipeline, it wouldn't be practically different.
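As a sketch of what that hypothetical service would look like from the pipeline side (every name below, image, registry, and scheme, is invented for illustration; no such public macOS image exists today, and under the hood it would be wrapping a VM rather than a true container):

```
# Hypothetical CI build step -- all names here are made up.
docker run --rm \
    -v "$PWD":/src -w /src \
    example-registry/macos-bigsur-shim:latest \
    xcodebuild -scheme MyApp build
```

To the pipeline, that step looks identical to any other ephemeral build container, which is the "practically no different" point above.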

But yes, you are correct, and fully proper docker containers would be better - in my enthusiasm I may have overstated what I would hope can be done now. Maybe Apple working with Amazon on this will crack open a door to that in the future.
 
This just shows that Apple preference for proprietary everything has major drawbacks. As far as cloud computing is concerned (which is taking over more and more computing domains) Apple is years behind the industry norms. Eventually it may lead to total irrelevancy. Adding neural processing unit to iPhone processor can only help that much. It's not a replacement for the vast resources needed (and available in the cloud) for many AI (and other) applications.
 
At the moment I'm more interested in being able to use macOS in a Docker-like way, in the same pipelines, within an abstracted layer; I'm less concerned with whether it's truly, fully, properly containerized.
It sounds like what you want is just better orchestration.
 
This just shows that Apple preference for proprietary everything has major drawbacks. As far as cloud computing is concerned (which is taking over more and more computing domains) Apple is years behind the industry norms. Eventually it may lead to total irrelevancy. Adding neural processing unit to iPhone processor can only help that much. It's not a replacement for the vast resources needed (and available in the cloud) for many AI (and other) applications.

Apple owns the front-end. AWS owns the back-end. There's no real drawback there. In fact, it shows that Apple's active avoidance of the server market was probably a good thing, since the server market is pretty much going to be AWS/Azure and a bunch of smaller providers.

At some point AWS will become custom ARM chips running on custom AWS hardware. They're already eating their own dog food, except for storage for the most part, and building all their own hardware.
 
> Most AWS failures that companies have are because their staff screwed something up.

If a single company using AWS screws something up, we don't get "half the internet is down" posts on HN and tech news sites, where it turns out the root cause was one failing system, which ballooned into 35 failing systems, regardless of whether the customers use the original system that failed.

The only way to be truly resilient to outages is to be multi-DC and multi-vendor. And if you're multi-vendor, you'd be crazy to use vendor-specific services, because even when they're intended to do the same thing, they're never going to behave the same across vendors.

So then you're just using basic units of compute and storage... and at that point, AWS makes zero sense, cost-wise.

As I said, us-east-1 is a nightmare and should be avoided.

Edit: and as a note, it's hard to remove a SPOF. What about your power company? Have you tested your battery/UPS/failover lately? Has water condensed into your generator's gas tank? Will your front-end load balancers really fail back correctly? Did your data really sync?

Multi-vendor means another set of crap that can fail, and another set of resolution protocols you have to follow. Let's get real, for 99% of the world that level of uptime isn't worth it.
 
This will be useful. Sort of surprised apple let it happen
Why? What is your mental model of Apple - that they sit around stroking white cats and pondering further ways they can screw over their customers?

I do not understand this insane paranoia around Apple (and most companies)! No matter what they say, no matter what they do, any time anything good happens we get a chorus of "I'm amazed they allowed this -- but just you wait, it's a trick I tell you". What's the evil trick? They will make you enjoy using the products so that, OMG how can they be so wicked, you will want to buy more of them?
 
This just shows that Apple preference for proprietary everything has major drawbacks. As far as cloud computing is concerned (which is taking over more and more computing domains) Apple is years behind the industry norms. Eventually it may lead to total irrelevancy. Adding neural processing unit to iPhone processor can only help that much. It's not a replacement for the vast resources needed (and available in the cloud) for many AI (and other) applications.
Here's why
<whatever just happened>
shows that
<I have always been right about everything>
...
 
This will be useful. Sort of surprised apple let it happen
Why? The only possible result is more growth, sales, and profits for Apple! It's not at all like Hackintosh; no one can use this instead of buying an Apple for their personal use. Developers using it will still need a Mac dev machine to work with it.
 
My take on this is that Amazon will offer it with M2 Mac minis, when they become available.

With the key difference vs the M1 being the amount of DRAM that can be supported.

My best guess is that the M2 will include a PC-oriented DDR PHY Controller, instead of what Apple currently offers in ALL their A-series & M1 chips.

From a business perspective, it wouldn't make sense for Amazon to offer an M1 Mac service with ONLY 16 GB per box.

Unless they charged way more than 2x for it.

Disclaimer: EE who has worked @ Qualcomm on DDR PHY Controllers on their SnapDragon processors.
 
Disclaimer: I work at AWS. There's some more details in Jeff Barr's blogpost about this launch, including that "EC2 Mac instances with the Apple M1 chip are already in the works, and planned for 2021".
"10Gb/s VPC network bandwidth" suggests to me that the Intel Minis AWS bought use 10 Gb Ethernet. If 10 Gb is a minimum AWS requirement, the fact that AWS is planning to add M1-based instances indicates that 10 Gb Ethernet will be a future option on the Mini.
 