Well, for your dev work, 24GB is fine.
But you also said local LLMs.
Take a look at a list of available models. Here is one (just a subset of everything out there):
https://ollama.com/library?sort=popular
Then see if there are any you want/need to use but cannot run on a 24GB machine.
For example, gpt-oss 20b is 14GB but gpt-oss 120b is 65GB.
Another example: deepseek-r1 14b is 9GB, 32b is 20GB, 70b is 43GB.
Llama3.1 8b fp16 is 16GB, 70b q4km is 43GB, 70b q8 is 75GB.
And so on. All of that counts on top of what the system and your other apps are already using.
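If you want a rough feel for the arithmetic, here is a minimal back-of-the-envelope sketch in Python. The system reserve and overhead numbers are my own assumptions, not anything official; the model sizes are just the ones quoted above.

```python
# Rough fit check: model weights have to share unified memory with a KV cache
# and whatever macOS and your apps are already using. All constants below are
# assumptions for illustration, not measured values.

MODELS_GB = {                # on-disk sizes quoted above
    "gpt-oss 20b": 14,
    "gpt-oss 120b": 65,
    "deepseek-r1 14b": 9,
    "deepseek-r1 32b": 20,
    "deepseek-r1 70b": 43,
    "llama3.1 8b fp16": 16,
    "llama3.1 70b q4km": 43,
    "llama3.1 70b q8": 75,
}

TOTAL_RAM_GB = 24            # the machine being considered
SYSTEM_RESERVE_GB = 8        # assumed headroom for macOS + dev tools
OVERHEAD = 1.2               # assumed ~20% extra for KV cache / context buffers

for name, size in MODELS_GB.items():
    needed = size * OVERHEAD
    fits = needed <= TOTAL_RAM_GB - SYSTEM_RESERVE_GB
    print(f"{name:20s} needs ~{needed:5.1f} GB -> {'fits' if fits else 'too big'} on {TOTAL_RAM_GB} GB")
```

Even when a model "fits" on paper like this, real-world speed varies a lot, which is why trying one in person matters.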
I would ask around and try a friend's 24GB machine for a couple of days to get a feel for what these models provide and at what speed. You cannot judge just from the size of the models: some you can squeeze into working, but slowly, and others you think will fit just fizzle.
FYI, I just traded in my 36GB M4 Studio for one with more RAM in order to run more local LLMs. At some point you have to settle on what you really want/need.