Can anyone chime in on why, from a software engineering perspective, it might be beneficial to replicate these systems locally?
I don't think direct cost is a great comparison because it's so variable, but as a rough guide: if you're at the point where 64GB opens up possibilities, you likely want to allocate ~25GB+ of memory across one or more instances. At that point you're in the range of $50-$150/month remotely (prices vary wildly) - going from the base model MBP16 up to 64GB is $800 USD, so you come out ahead cost-wise somewhere between 5 and 24 months in, depending on which remote instances you need, how long they run for, etc.
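Back-of-the-envelope version of that break-even, using only the example figures above (none of these are real quotes):
[CODE=ruby]
# Rough break-even: local RAM upgrade vs. renting equivalent remote capacity.
# Figures are the illustrative ones above, not real pricing.
upgrade_cost = 800.0   # USD, base MBP16 -> 64GB
monthly_high = 150.0   # USD/month, expensive end of comparable remote usage
monthly_low  = 50.0    # USD/month, cheap end

puts "At $150/month: ~#{(upgrade_cost / monthly_high).round(1)} months to break even"  # ~5.3
puts "At $50/month:  ~#{(upgrade_cost / monthly_low).round(1)} months to break even"   # 16.0
# The range stretches further (toward that ~24-month figure) if you only need
# the remote instances part of the time.
[/CODE]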
How much data does it need to store to work against? If you need to, say, process a 100GB data set on a remote machine, that data has to be accessible somehow, so you need to store it remotely, and unless you can upload 100GB in the blink of an eye (I know I certainly can't) you need to keep a permanent copy close to the virtual machine(s), and hopefully in a form they can access. (If you're curious, a 100GB EBS volume adds ~$12 a month to an EC2 instance, and 100GB of Linode Block Storage is ~$10 a month.)
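To put numbers on the upload point - the 20Mbit/s uplink here is purely an assumption for illustration, swap in your own:
[CODE=ruby]
# Why re-uploading a large data set every time isn't practical.
# The 20 Mbit/s uplink speed is an assumption, not a measurement.
data_gb       = 100
uplink_mbit_s = 20.0

seconds = data_gb * 8 * 1000.0 / uplink_mbit_s  # GB -> megabits, then divide by link rate
puts "One-off upload: ~#{(seconds / 3600).round(1)} hours"        # ~11.1 hours

# Versus just leaving it on remote block storage (rates from the figures above):
puts "100GB EBS volume:   ~$#{(data_gb * 0.12).round}/month"      # ~$12
puts "100GB Linode Block: ~$#{(data_gb * 0.10).round}/month"      # ~$10
[/CODE]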
In terms of technical benefits:
You have more choice locally - if you work on a project that uses, say, Vagrant or Docker for a dev env, trying to run that remotely with the same level of local interaction (i.e. project files updated instantly in the VM, port mappings, direct network access to the instance, etc.) is much harder, if it's possible at all.
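As a rough illustration of what I mean by "local interaction" - a minimal Vagrantfile sketch, where the box name, ports and IP are placeholders, not recommendations:
[CODE=ruby]
# Minimal sketch of the local-interaction features mentioned above.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"

  # Project files appear inside the VM immediately, no upload step.
  config.vm.synced_folder ".", "/home/vagrant/project"

  # Hit the app running in the VM from a local browser at localhost:8080.
  config.vm.network "forwarded_port", guest: 80, host: 8080

  # Direct network access to the instance on a host-only address.
  config.vm.network "private_network", ip: "192.168.56.10"
end
[/CODE]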
With local VM(s) you are 100% in control of the environment they run in - disk space available, memory available, CPU core count, network type, network addressing, even the type of hypervisor they run on, the management layer if any, etc.
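Pinning those resources yourself is a couple of lines in the same file - a sketch using the VirtualBox provider as an example (the same idea applies to VMware, Parallels, etc., and the numbers are placeholders):
[CODE=ruby]
# Sketch of controlling VM resources locally via the provider block.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"

  config.vm.provider "virtualbox" do |vb|
    vb.memory = 4096   # MB of RAM for this VM
    vb.cpus   = 2      # CPU core count
  end
end
[/CODE]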
Then consider how reliable/fast your network connection is. Do you work from the same place all the time?
How latency-critical is what you're working on?
Do you work on concurrent projects? On my aforementioned mini (but this would be usable on an MBP if it had enough memory) I have 40 Vagrantfiles - each one represents a separate environment for a project (a few projects have two Vagrantfiles, e.g. one for normal dev/build work and one to simulate specific test conditions) - and collectively they define about 50 VMs. I'd expect most people to work on fewer things than that, but is it literally one? If not, is it even remotely feasible to run different projects on a 'shared' remote VM?
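For reference, the "50 VMs from 40 Vagrantfiles" comes from multi-machine setups - a single Vagrantfile can define several VMs for one project, roughly like this (names, box and IPs are placeholders):
[CODE=ruby]
# Sketch of one Vagrantfile defining several VMs for a single project.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"

  config.vm.define "app" do |app|
    app.vm.network "private_network", ip: "192.168.56.11"
  end

  config.vm.define "db" do |db|
    db.vm.network "private_network", ip: "192.168.56.12"
  end
end
[/CODE]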
Without knowing what you do or how you (like to) work, it's a bit of a guessing game, but that's my experience at least, in a heavily VM-focussed workflow.
"no current cloud provider be it AWS, Azure, GCP …"
FYI (setting aside my view that local VMs are a better solution if you have enough memory): if you're just running a VM (i.e. an EC2 instance in AWS terminology) and you don't actively spin it up/down as you need it, a regular VPS host like Linode or Digital Ocean etc. is likely significantly cheaper.
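Rough illustration of why - the hourly and flat rates below are placeholders, not real prices for any particular provider:
[CODE=ruby]
# Why an always-on, hourly-billed cloud instance adds up compared with a
# flat-rate VPS. Both prices are placeholders, not quotes.
hourly_cents    = 15     # on-demand price in cents/hour (placeholder)
vps_flat_usd    = 40     # flat-rate VPS in USD/month (placeholder)
hours_per_month = 730

on_demand_usd = hourly_cents * hours_per_month / 100.0
puts "Always-on on-demand: ~$#{on_demand_usd.round}/month"   # ~$110
puts "Flat-rate VPS:        $#{vps_flat_usd}/month"
[/CODE]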