I am in the camp where I find LLMs very useful, especially for quickly finding stuff, since Google search sucks more and more. Also, used in the right way, and with an understanding of how they work, they make routine tasks a lot faster. Just restrain yourself when using them.
By design, LLMs hold a "compressed database" of a lot of information, basically the whole web plus some other unknown stuff. Commonly recurring patterns hallucinate very little. So as long as one works with that kind of stuff, it is quite awesome. I seldom write Python scripts from scratch anymore, and for web it seems very good (although I am not a web coder). For more niche knowledge there is more randomness, or it locks on to the patterns it has seen.
I mainly code in C99, and there it has been quite sucky. Still, for simple things it will produce very useful output, even if it is not up to date with the latest developments for APIs that change, etc.
My impression is that for coding, GPT-5 is a lot more solid, at least if you iterate/collaborate with it.
The biggest downside is that if you let it write too much, you will not be able to change the code manually, since you do not understand what it is doing.
My main gripe is that the really good models are online and owned by large corps. So I am also looking into running my own backend servers. A Mac is quite capable these days, especially for smaller models targeting well-defined tasks.
But tbh, I am very unsure what to invest in. At work we have 4090s and an H100 for on-prem ML training, but we do not use any LLMs atm; it has just been for my own experiments.
It is so odd (but I know the reasons, oc) that a Mac can run LLMs quite reasonably while it positively sucks at things like "classic CNN" visual models. I did some benchmarks on an M3 Max, a 4090, and an H100, and we are talking an order-of-magnitude difference.
Really hope this changes.
So for a personal professional backend, I am not at all sure what to get. For visual work, I would get a dual-5090 machine prebuilt by some integrator. For running LLMs on a budget? Well, that's hard. The M4 Max seems to be at a sweet spot right now; otherwise a Blackwell Pro with 96GB of RAM...