First, I hate the Geekbench 5 scale... like mixing F with C.
Second, my main concern is RAM. The iPhone 8 still ships with 2GB of RAM, even though larger and more demanding apps can benefit from more memory than ever. Limiting it hinders performance and causes unnecessary paging and cache misses.
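To make the cache-miss point concrete, here is a toy sketch of a direct-mapped cache counting misses for two access patterns. All the parameters (64-byte lines, 64 sets, i.e. a 4 KiB cache) are illustrative assumptions, not the geometry of any real Apple SoC:

```python
# Toy direct-mapped cache model: counts misses for a stream of byte addresses.
# Parameters are made-up, chosen only to show the effect of access patterns.
def count_misses(addresses, line_size=64, num_sets=64):
    stored_tags = {}            # set index -> tag of the line cached there
    misses = 0
    for addr in addresses:
        line = addr // line_size
        index = line % num_sets
        tag = line // num_sets
        if stored_tags.get(index) != tag:   # miss: fetch line, evict old one
            misses += 1
            stored_tags[index] = tag
        # else: hit, nothing to do
    return misses

# Sequential scan of 64 KiB in 4-byte steps: one miss per 64-byte line.
sequential = range(0, 65536, 4)
print(count_misses(sequential))             # 1024 misses out of 16384 accesses

# Repeatedly striding by 4 KiB: every access maps to the same set, so each
# access evicts the line the previous one loaded -- every access misses.
strided = list(range(0, 65536, 4096)) * 1024
print(count_misses(strided))                # 16384 misses out of 16384 accesses
```

A paging event to flash storage is orders of magnitude more expensive again than a cache miss, which is why a tight RAM budget hurts twice.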
I hope Apple can remove the concept of RAM altogether, replacing it with enhanced L3 and distributed L4 cache on the chip, that is, on the processor die inside the A-series SoC. This could drastically improve battery life, memory throughput, and bus latency. It could also end up cheaper: it shrinks the PCB and eliminates the RAM supplier premium. Yes, it increases the die size, but that is actually a good thing, because as CPU cores become more performant, clock higher, and grow in number, the thermal limit (TDP) kicks in a lot faster. If we increase the die size (lower thermal density, so better cooling) while shrinking the lithography (7nm -> 5nm: less power, smaller transistors), we end up with a lot of unused silicon real estate. That is a waste of space, lowers yield (a bigger die is more likely to catch a dust particle, and fewer dies fit per wafer), and massively increases material cost.

But if you think about it, RAM is also silicon. To be precise, on-die cache would be SRAM, whose cells are built from the same transistors as the CPU logic (external DRAM actually uses a transistor-plus-capacitor cell on a different process, which is part of why it is denser but slower). If we use the extra space to scatter L3 and L4 cache blocks among the compute cores, that not only shortens the memory bus, and therefore the latency, but also evens out the thermal distribution across the die: memory is nowhere near as hot as the cores. Each cache block could also run at a variable clock frequency based on load; it would not have to match the clock speed of the cores it serves, which saves power dynamically. That is not possible with external RAM modules, even ones soldered onto the PCB.
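The yield and power claims above can be roughed out with textbook first-order models: Poisson yield, Y = exp(-D0 * A), and dynamic switching power, P = alpha * C * V^2 * f. The defect density and capacitance numbers below are placeholders I made up to show the direction of the tradeoff, not real fab or SoC figures:

```python
import math

# First-order Poisson yield model: Y = exp(-D0 * A).
# D0 (defects per mm^2) is a placeholder, not any real fab's number.
def poisson_yield(die_area_mm2, d0_per_mm2=0.001):
    return math.exp(-d0_per_mm2 * die_area_mm2)

# Dynamic switching power: P = activity * C * V^2 * f.
def dynamic_power_w(cap_farads, volts, freq_hz, activity=1.0):
    return activity * cap_farads * volts ** 2 * freq_hz

# Growing the die from 100 mm^2 to 150 mm^2 lowers yield...
print(round(poisson_yield(100), 3))    # ~0.905
print(round(poisson_yield(150), 3))    # ~0.861

# ...but clocking an idle cache block at half frequency halves its dynamic
# power, which is the upside of per-block variable clocking.
full = dynamic_power_w(1e-9, 0.8, 2e9)
half = dynamic_power_w(1e-9, 0.8, 1e9)
print(half / full)                      # 0.5
```

So the yield hit from a bigger die is real, and the bet is that per-block clock scaling of on-die memory claws back enough power to justify it.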