Just give it a go.
Install Ollama, learn how to pull the models you can't simply select in the GUI, and see what happens.
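For example, on Linux the whole setup is a couple of terminal commands. The model name and tag below are just placeholders; swap in whatever you actually want to try:

```
# Install Ollama on Linux (macOS and Windows have installers at ollama.com)
curl -fsSL https://ollama.com/install.sh | sh

# Pull any model from the library by name:tag, including ones a GUI doesn't list
ollama pull qwen2.5-coder:7b

# Chat with it interactively
ollama run qwen2.5-coder:7b

# See what you've downloaded so far
ollama list
```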
Like I said earlier, I've just spent a lot of time evaluating these things, and what ChatGPT tells you might not always match reality when you try it yourself. But nothing will break your machine.
Worst-case scenario is that it lags and Ollama takes forever to produce a result. Then you know, and you try a smaller model. Once you've dropped down to the size of models your machine can handle, it's smooth sailing as far as speed goes.
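If you'd rather sanity-check things than just feel the lag, Ollama can tell you whether a model actually fits on your GPU. The tags below are again just example sizes from one model family:

```
# While a model is loaded, check where it's running:
# "100% GPU" is what you want; any CPU share means it spilled over and will crawl
ollama ps

# If it's slow, drop down to a smaller variant and try again
ollama pull qwen2.5-coder:3b
ollama run qwen2.5-coder:3b
```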
Then you need to evaluate whether those models can consistently produce good-enough results for what you need. And that's a very personal thing that you just have to explore yourself.
In my case I needed JavaScript code for a specific environment, and the local models weren't good enough to produce full solutions from broader prompts that needed several multi-step pieces to fit together. But being a programmer myself, they'd still be good enough to speed up getting the individual parts of the solution I need. So I'll be happy working locally for some of the more sensitive parts, while still using third-party/online models for the more generic framework stuff etc.
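That "individual parts" workflow can be as simple as one-shot prompts from the terminal, no chat session needed. The model and prompt here are just an illustration of the kind of thing I mean:

```
# Pass the prompt as an argument and Ollama prints the response and exits
ollama run qwen2.5-coder:7b \
  "Write a plain JavaScript debounce function, no frameworks, with JSDoc comments."
```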