I don't think a base M5 Mac Mini will provide much more useful performance than a base M4 Mac Mini. The M5 Pro and Max should show a bigger improvement for local LLMs.
I have a few base M4 Mac Minis, an MBP M4 Pro, a Mac Studio M4 Max, and a Mac Studio M3 Ultra.
Compared to the hosted models you are likely running, like Claude Sonnet, locally hosted models are quite inferior. Using these lower-end models is even more problematic when running them in an agentic fashion. These are my opinions as a developer who has been running LLMs locally (though I'm beginning to use paid models more frequently).
I don't consider the models that can be run on the base Mac Mini to be very useful or performant for most serious work.
The minimum machine level for local LLMs, I feel, is the M4 Pro. My MBP M4 Pro has only 24GB of RAM, which is too little IMO.
I also feel the Mac Studio Mx Max is really the best deal going for running LLMs locally.
That being said, even my M3 Ultra with 96GB of RAM doesn't have enough memory to run the open models that begin to come close to (though are still a level below) the frontier models. I feel a high-quality, low-cost model like Gemini 3 Flash is a more cost-effective way to go than investing in hardware solely for LLM usage.
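To make the RAM point concrete, here's a rough back-of-the-envelope estimate I use. The formula and overhead factor are my own assumptions, not an official sizing guide; real usage also depends on context length and KV cache.

```python
# Rough rule of thumb (my assumption, not an official formula):
# loaded weights need roughly params * bytes-per-param, plus some
# overhead for the KV cache, runtime, and OS.

def model_ram_gb(params_billions: float, bits_per_weight: float,
                 overhead_factor: float = 1.2) -> float:
    """Estimate RAM needed to run a model, in GB."""
    bytes_per_param = bits_per_weight / 8
    return params_billions * bytes_per_param * overhead_factor

# A 70B model at 4-bit quantization:
print(round(model_ram_gb(70, 4), 1))   # 42.0 GB -> fits in 96GB
# A ~400B-class open model at 4-bit quantization:
print(round(model_ram_gb(405, 4), 1))  # 243.0 GB -> nowhere near 96GB
```

That gap is why the open models closest to frontier quality are out of reach even on a 96GB Ultra, while mid-size models fit comfortably.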
Don't get me wrong, I'm not saying it is not fun or useful to run models locally. Local models continue to improve, and with MoE models they are becoming much faster to run locally as well.
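A quick sketch of why MoE helps on this hardware: token-by-token decode speed is roughly bounded by memory bandwidth divided by the bytes of weights read per token, and an MoE model only reads its active parameters. The bandwidth figure and parameter counts below are illustrative assumptions, not benchmarks.

```python
# Crude decode-speed ceiling: memory bandwidth / bytes read per token.
# MoE models read only their *active* params each token, hence the speedup.

def tokens_per_sec(active_params_b: float, bits_per_weight: float,
                   mem_bandwidth_gbs: float) -> float:
    """Upper-bound tokens/sec from weight-read bandwidth alone."""
    gb_read_per_token = active_params_b * (bits_per_weight / 8)
    return mem_bandwidth_gbs / gb_read_per_token

BW = 546  # GB/s; roughly an M4 Max's memory bandwidth (my assumption)

# Dense 70B at 4-bit: all 70B params are read for every token.
print(round(tokens_per_sec(70, 4, BW), 1))   # ~15.6 tok/s ceiling
# MoE with ~13B active params at 4-bit (Mixtral-class, illustrative):
print(round(tokens_per_sec(13, 4, BW), 1))   # ~84.0 tok/s ceiling
```

Actual throughput is lower than these ceilings, but the ratio shows why MoE models of a given total size feel so much faster locally than dense ones.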
I'm just saying, from my perspective, that you should temper your expectations and perhaps not try to chase hardware that might not get you where you want to be.