Memory bandwidth is nearly doubled, and inference speed tracks closely with bandwidth... so yes, that is addressed here by the M4. Storage speed has nothing to do with it.
Inference speed absolutely is lagging, which was my point.
Let me rephrase: there are two main areas where Apple Silicon Macs fall short of PCs: inference speed (and some associated ML/AI tasks, as long as the models can fit within a PC's memory space, which not all can), and storage speed. They are distinct things.
The M3 Max, with far more memory bandwidth, is still miles behind Nvidia's now two-year-old architecture in inference throughput. Apple needs to address this, and will probably do so with next year's offerings. Look up the benchmarks for yourself and you'll see it's not a small difference; it's enormous. The fastest Apple Silicon can't even match a four-year-old 3090 at inference, so they are way behind, which leads directly to my next point.
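To see why bandwidth matters so much here, a rough back-of-envelope sketch: when LLM decoding is memory-bandwidth-bound, each generated token requires streaming the full weight set from memory once, so tokens/sec is capped at roughly bandwidth divided by model size. The model size and bandwidth figures below are approximations of published peak specs, and this ignores compute limits, quantization overhead, and the KV cache, so it's only an upper-bound illustration, not a benchmark:

```python
# Rough ceiling for LLM decode speed when memory-bandwidth-bound:
# each token generated requires reading the full weight set once.

def decode_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper-bound tokens/sec = memory bandwidth / bytes streamed per token."""
    return bandwidth_gb_s / model_size_gb

# Hypothetical 7B-parameter model quantized to ~4 bits -> ~3.5 GB of weights.
model_gb = 3.5

# Approximate published peak bandwidths (GB/s); sustained figures are lower.
for name, bw in [("M3 Max (~400 GB/s)", 400.0), ("RTX 3090 (~936 GB/s)", 936.0)]:
    print(f"{name}: ~{decode_tokens_per_sec(bw, model_gb):.0f} tok/s ceiling")
```

Even this naive model predicts the 3090 is over twice as fast at decode, and real-world gaps are often larger because the GPU also has far more compute for prompt processing.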
The decision to limit the neural cores to whatever the base M chip has is short-sighted, and it looks like that will continue this generation.
I expect both things to be addressed within the next year or two, along with an MBP redesign; that will spur some upgrades. Maybe graphics performance too, since that is barely improved YoY this time.
M3 was truly a stopgap and the M4 makes that very apparent, but it doesn’t make those machines bad, they do have significantly improved graphics cores vs. M1/M2.
My point in mentioning any of this is that these drawbacks make it difficult to justify spending a ton of money if you care about any of these aspects or need them for your workflows, which not everyone does. That actually favors the Mini, because it can get you (or me, perhaps) to the next real advancement that addresses these things holistically.