We have a decent GitHub Copilot subscription at work, which is a great help with coding (and other) tasks.
At home, I have a decent Mac Mini M4 Pro build (14-core CPU / 20-core GPU / 64GB RAM), and playing with local LLMs via LM Studio, it's quite astonishing what's already possible locally. Privacy and unlimited tokens are obviously the main factors here. But I consider myself a very lightweight AI user for personal purposes, so I haven't hit a limit on any cloud free plan yet.
In LM Studio, the newly available Qwen3 Coder Next 80B model also popped up, so I downloaded it yesterday. With any 80B model, be it Qwen3 Coder Next or Qwen3 Next 80B, my 64GB RAM Mac Mini M4 Pro hits yellow memory pressure VERY quickly. According to Activity Monitor, the 80B model / LM Studio alone sits somewhere at 42GB+ RAM usage, presumably depending on how many tokens you have pushed through. That leaves close to zero headroom for a typical dev setup with an IDE, one or another Docker container, etc. running alongside. IMHO, 64GB RAM is the absolute minimum for a local 80B model, before even counting other memory-intensive processes, so you may want 96GB+.
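The 42GB+ figure lines up with simple napkin math. A rough sketch, assuming a ~4-bit quantization (the exact quant format and runtime overhead vary):

```python
# Back-of-envelope memory estimate for an 80B-parameter model.
# These are rough assumptions for illustration, not measured LM Studio numbers.

params = 80e9            # 80B parameters
bytes_per_param = 0.5    # ~4-bit quantization (e.g. a Q4-style quant)

weights_gb = params * bytes_per_param / 1e9
print(f"weights alone: ~{weights_gb:.0f} GB")

# The KV cache and runtime overhead grow on top of this with context length,
# which is why observed usage (42GB+ here) exceeds the raw weight size.
```

So the weights alone land around 40GB before any context is processed, which explains why 64GB total RAM gets tight so fast.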
But local LLMs + surrounding tools are fun, and you can get quite a lot out of them, even image generation. Although LM Studio doesn't support image generation OOTB, you can have it generate some decent prompts, which you can then feed into DiffusionBee on a Mac. A recent LM Studio version even supports acting as the "runtime" / model host for a local Claude Code installation, so you can point your local Claude Code installation at a local model served by your running LM Studio. I haven't tried that yet, though.
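From what I've read, the rough shape of the Claude Code + LM Studio combo is something like the following. This is an untested sketch based on LM Studio's default server port (1234); check the LM Studio docs for the exact, current setup:

```shell
# Untested sketch: point Claude Code at LM Studio's local server.
# Assumes LM Studio's server is running on its default port (1234)
# and that your LM Studio version exposes a compatible API for Claude Code.
export ANTHROPIC_BASE_URL="http://localhost:1234"
export ANTHROPIC_AUTH_TOKEN="lm-studio"   # dummy value; no real API key needed locally

# Then start Claude Code with a locally loaded model, e.g.:
claude --model qwen3-coder-next-80b   # model name is an assumption; use whatever LM Studio shows
```

The environment variables redirect Claude Code's API traffic away from Anthropic's cloud to the local host, so nothing leaves the machine.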
But seeing what's possible with GitHub Copilot at work, and at home with some variant (the editions are confusing there, IMHO) of Microsoft Copilot included in our M365 Family subscription, which my kids use for various things, I'm torn between local models and jumping onto a cloud offering/subscription for personal usage. Especially so after installing the Claude Desktop app on my Mac yesterday, on the Free plan so far, which gives me access to Sonnet 4.5. I already got great input while looking for a student home for my kid for autumn 2026. But the same would likely be true with MS Copilot.
Claude doesn't support image generation though, so it's back to MS Copilot in the M365 Family subscription or the local LM Studio + DiffusionBee combo for that.
As for Claude privacy, I'm not sure. There is an opt-out by disabling the "Help improve Claude" toggle, but I can't say what still happens when I upload maybe personal documents in a chat, etc. Something similar is available for MS Copilot as well.
I'm torn between local and cloud, but I'm a very lightweight user for personal stuff at home, so maybe a free plan of something is good enough, unlike others who hit limits even on Claude's Max plan 😎. But for the price of an Apple 64GB-to-128GB RAM upgrade alone, you get quite a few months of subscription somewhere, especially if you aren't a super heavy user needing a maxed-out plan. There's also the question of how up-to-date the data in local LLMs is, if one needs to work on very recent stuff.
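To put rough numbers on that trade-off (both figures below are assumptions for illustration; prices vary by region and plan):

```python
# Napkin math: RAM upgrade cost vs. months of a cloud subscription.
# Both prices are assumed round numbers, not actual quotes.

ram_upgrade = 800        # assumed cost of a 64GB -> 128GB Apple RAM upgrade
plan_per_month = 20      # assumed mid-tier subscription price per month

months = ram_upgrade / plan_per_month
print(f"~{months:.0f} months of subscription for the upgrade price")
```

Under those assumptions, the upgrade money buys a few years of a mid-tier plan, which is a long time for a light user.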
One super bonus of a cloud plan for me would be having my chats available everywhere, on the go, on client devices like an iPad, etc. But as said, I'm torn between local and cloud; local, not least because I also want to utilize the 64GB RAM investment in my Mac Mini M4 Pro.