Agreed. Many power users are in that boat. We don't need a lot of cores, but we do need a lot of RAM. Unfortunately, even when spec'd with maximum RAM, the M-series machines offer a much lower ratio of RAM capacity to performance cores than Apple's last Intel-based desktops. For instance, the 2019 and 2020 27" iMacs could take 128 GB of RAM (officially 64 GB on the 2019) with an 8-performance-core processor (heck, you could get it even with the min-spec 6-core i5). With an M-series processor, that much RAM is only available on the top-of-the-line Ultra, with its 16 P + 4 E cores.
Not the M series, the M1 series.
Apple have multiple plans for increasing DRAM capacity substantially, and in a flexible way.
One direction has the DRAM attached to a "spine", a sliver of silicon that is attached to one or two points on the die. This allows the creation of a denser block of compute (2x2 or larger blocks are now possible) along with a flexible number of spines depending on how much DRAM you want.
Examples look like this:
The CPU "pairs" are items like an M1 Ultra. They are joined together by bridges (as you can see) which could be EMIB-like or just a simple BEOL RDL. You also see the "spines" allowing for crazy amounts of memory attachment.
The fascinating thing is that the design is somewhat mix-and-match at the packaging level, so Apple can grow to extremely large designs, if a customer asks for them, without having to spin new masks beyond the current Max-sized masks. (Those masks are also used to create Ultras; an Ultra is created as a BEOL layer on top of a reconstituted wafer of Maxes, not fabbed as a different part.)
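Purely as an illustration of the mix-and-match idea (all die counts, capacities, and channel numbers below are my own assumptions for the example, not anything from the patents), here's a sketch of how the same compute die could be scaled by packaging choices rather than by new masks:

```cpp
// Illustrative sketch only: models the "compute block + memory spines" packaging
// idea described above. All numbers (dies per block, GB per spine, channels per
// spine) are assumptions for the example, not real Apple specs.
#include <cstdio>

struct PackageConfig {
    int computeDies;      // e.g. 1 = Max-class, 2 = Ultra-style pair, 4 = 2x2 block
    int memorySpines;     // slivers of silicon carrying the DRAM
    int gbPerSpine;       // assumed DRAM capacity hanging off each spine
    int channelsPerSpine; // assumed memory channels provided by each spine
};

static void describe(const PackageConfig& c) {
    std::printf("%d compute die(s), %d spine(s): %d GB DRAM, %d channels\n",
                c.computeDies, c.memorySpines,
                c.memorySpines * c.gbPerSpine,
                c.memorySpines * c.channelsPerSpine);
}

int main() {
    // Same compute die throughout; capacity grows with the number of spines.
    describe({1, 2, 64, 8});   // hypothetical Max-class part
    describe({2, 4, 64, 8});   // hypothetical Ultra-class pair
    describe({4, 8, 64, 8});   // hypothetical 2x2 block with more spines
    return 0;
}
```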
On a different angle, Apple have done serious work on augmenting the current DRAM-only design with alternative types of RAM, possibly Optane-like, possibly HBM-like. In each case this is not just idle talk. For example, if you want to use an Optane-like memory (much larger capacity than DRAM, but also slower), you need to redesign the memory controller to cope with the fact that some memory requests will return much more slowly than others (and you don't want the slow requests delaying the fast ones). That redesigned memory controller work has been done. There are similar issues if you want to use 3D-stacked DRAM (like HBM): an optimal memory controller design looks rather different from just slapping in an existing memory controller (which will work, sure, but not optimally).
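To make the head-of-line-blocking point concrete, here's a toy sketch (my own illustration, not the design in any patent) of a controller that keeps fast and slow requests in separate queues, so a long-latency request never stalls the DRAM traffic behind it:

```cpp
// Toy illustration of the variable-latency point above: requests to the slow
// tier get their own queue so they never head-of-line-block fast DRAM requests.
// Latencies and the queue policy are assumptions for the example.
#include <cstdio>
#include <queue>
#include <string>

struct Request {
    std::string tag;
    int remainingCycles;   // cycles left until this request completes
};

struct TieredController {
    std::queue<Request> fastQueue;  // DRAM-class requests (short latency)
    std::queue<Request> slowQueue;  // Optane-like requests (long latency)

    void issue(const std::string& tag, bool slow) {
        // Assumed latencies: 10 cycles for DRAM, 100 for the slow tier.
        Request r{tag, slow ? 100 : 10};
        (slow ? slowQueue : fastQueue).push(r);
    }

    // One controller cycle: each queue makes independent progress, so a
    // long-latency request at the head of slowQueue cannot delay fastQueue.
    void tick(int cycle) {
        for (auto* q : {&fastQueue, &slowQueue}) {
            if (q->empty()) continue;
            if (--q->front().remainingCycles == 0) {
                std::printf("cycle %3d: completed %s\n", cycle, q->front().tag.c_str());
                q->pop();
            }
        }
    }
};

int main() {
    TieredController mc;
    mc.issue("slow-A", true);   // issued first, but must not block the others
    mc.issue("fast-B", false);
    mc.issue("fast-C", false);
    for (int cycle = 1; cycle <= 120; ++cycle) mc.tick(cycle);
    return 0;
}
```

With a single shared queue, slow-A would hold up fast-B and fast-C for 100 cycles; with split queues they complete at cycles 10 and 20, which is the whole point of redesigning the controller for mixed memory types.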
On a third angle, Apple have patented a new, scalable cache coherence protocol. Every company (IBM, Intel, AMD, ...) has had to go through this stage as they grow their designs; now it's Apple's turn. They did an astonishing job of growing MOESI from the simple initial two-core CPUs up to the M1 Ultra, but the current design will start to be a pain point as the chips grow larger, hence a new cache protocol (based on MOESI, but allowing for various more sophisticated shared states).
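To give a feel for what "more sophisticated shared states" could mean, here's a sketch with the classic MOESI states plus two hypothetical scope-aware shared states for a multi-die package. The extra states and the install rule are my own illustration of the general idea, not the actual protocol in the patent:

```cpp
// Classic MOESI states plus hypothetical scope-aware shared states of the kind
// a scalable, multi-die protocol might add. The extra states and the rule below
// are illustrative assumptions, not the patented protocol.
#include <cstdio>

enum class LineState {
    Modified, Owned, Exclusive, Shared, Invalid,  // classic MOESI
    SharedLocalDie,   // hypothetical: all sharers sit on the requester's die
    SharedCrossDie    // hypothetical: sharers on other dies, so invalidations
                      // must cross the die-to-die bridge
};

// Which state does a requesting cache install a line in after a read miss?
// Classic MOESI says Exclusive (no other sharers) or Shared; the scope-aware
// variant also records how far invalidation traffic would have to travel later.
LineState installAfterReadMiss(bool anyOtherSharer, bool sharersOnSameDieOnly) {
    if (!anyOtherSharer)      return LineState::Exclusive;
    if (sharersOnSameDieOnly) return LineState::SharedLocalDie;
    return LineState::SharedCrossDie;
}

static const char* name(LineState s) {
    switch (s) {
        case LineState::Modified:       return "Modified";
        case LineState::Owned:          return "Owned";
        case LineState::Exclusive:      return "Exclusive";
        case LineState::Shared:         return "Shared";
        case LineState::Invalid:        return "Invalid";
        case LineState::SharedLocalDie: return "SharedLocalDie";
        case LineState::SharedCrossDie: return "SharedCrossDie";
    }
    return "?";
}

int main() {
    std::printf("%s\n", name(installAfterReadMiss(false, false)));  // Exclusive
    std::printf("%s\n", name(installAfterReadMiss(true,  true)));   // SharedLocalDie
    std::printf("%s\n", name(installAfterReadMiss(true,  false)));  // SharedCrossDie
    return 0;
}
```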
Not all this stuff will come to pass. But there are enough pieces, all of which seem to fit together coherently, that the big picture seems clear. People think I am crazy when I evangelize this stuff, but honestly, the entire computing world (Intel, AMD, even IBM big iron) has no idea what's about to hit it over the next few years. Intel worrying about the next Zen is going to look absurd by 2027, as they start to lose data centers and supercomputers to Apple.
These are just a few of many patents suggesting the direction:
https://patents.google.com/patent/US20190319626A1 Systems and methods for implementing a scalable system
https://patents.google.com/patent/US20170242798A1 Methods for performing a memory resource retry
https://patents.google.com/patent/US20220083472A1 Scalable Cache Coherency Protocol