M3 is an architecture for an expensive, dead-end process (1st gen 3nm). Processes going forward cannot put as many transistors on a die - the chip size is decreasing faster than the process density increases.
This is incomprehensible nonsense. I can't even figure out what you might be trying to say here.
Just in case you actually meant something like what you said: Processes going forward will CONTINUE to put MORE transistors per area than previous processes... in general. However, as has been the case in the past, some specific processes will relax certain design rules (making things a few % larger) in exchange for better manufacturability (lower cost) or better performance.
N3E (the M4's process) is marginally less dense than N3B (used by the A17 Pro & M3) because it relaxes some design rules. That buys marginally better performance (though FinFlex complicates the comparison a lot, so you could conceivably end up with an even denser design, though likely at lower performance), and more importantly it lowers manufacturing cost and cycle time by a substantial amount. N3P will be denser, while N3X will be slightly less dense, but those are all offshoots of N3E. The next major node, N2, is coming in 2025, and it will be substantially DENSER than any N3 variant. And the process after that (A14? I forget what they're calling it) will be denser still.
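To put rough numbers on "a few % larger": here's a back-of-envelope sketch in Python. The relaxation percentages are purely illustrative (they are NOT actual N3B/N3E design-rule deltas); the point is just how a small relaxation in each dimension compounds into the density number.

```python
# Illustrative back-of-envelope only: the relaxation percentages below are
# made up to show the shape of the tradeoff, not actual TSMC N3B/N3E figures.

def relative_density(pitch_relax_x: float, pitch_relax_y: float) -> float:
    """Transistor density of a relaxed process relative to its baseline.

    A standard cell's footprint scales roughly with (metal pitch) x (cell
    height), so relaxing each dimension by a few percent reduces density by
    about the product of the two growth factors.
    """
    area_growth = (1.0 + pitch_relax_x) * (1.0 + pitch_relax_y)
    return 1.0 / area_growth

# Relaxing both dimensions by ~3% costs ~6% density:
print(f"{relative_density(0.03, 0.03):.3f}")  # -> 0.943
```

That's the whole tradeoff in one number: a few percent per dimension is enough to read as "marginally less dense" next to N3B, but it's small compared with what a real node shrink like N2 buys.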
Meanwhile, reticle size will not decrease (in case that's what you were thinking about).
This is why we aren't going to see last year's iPhone Pro chip in this year's iPhones. It is also likely why the Vision Pro, which they already know will go a long time before they can offer a meaningful upgrade (gated on cost/volume of the display technology), stayed with M2.
Interesting notion about the AVP. I don't think you're right about that - it would obviously be easy to respin it with an M4 instead of an M2, without doing much else - but it's definitely a plausible argument that they don't want to do that.
The M3 shipped quickly after the M2 because Apple had bought out that 1st generation 3nm node, but they need to focus on process changes around more (standardized) interconnects to be able to scale up complexity on future nodes and to also do things like mixed process SoCs, and to consume things like third-party in-chip parts.
Um. That's a tasty-looking word salad there, but it's sadly deficient in meaning. Once again I'm not going to try to guess what you're shooting at with the first part. As for "consume things like third-party in-chip parts"... are you talking about them buying IP blocks from other vendors like Renesas or Synopsys? Because AFAIK they aren't buying anyone's designs for anything on the SoC, nor have they since they moved to their own GPUs. And they certainly aren't going to be buying more in the future. They are constantly bringing MORE design in-house, not less.
You must not have watched the event video. The double-OLED screen needs the added processing power of the M4 chip.
No, not processing power (in the sense that term is generally used: something using the CPU, or possibly the GPU/NPU). However, it does need the new display engine.
I didn't know we have camera continuity. That's awesome if true. I'll be researching that this week!
Of course it's true. I wasn't fantasizing. I just used it a few weeks ago to scan in all my tax docs. However, as I wrote, that's on the Mac. They'd have to extend that feature to the iPad, and I don't think they have (though I haven't checked).