The M3 and A17 Pro are on N3B; the M4 is on N3E. N3P won't be shipping until late this year and likely won't appear until the M5, though Apple has many paths forward. N3B is the more expensive first-generation version (node) of the 3nm process, while N3E is a later, less expensive 3nm node. Apple used N3B for the A17 Pro and M3 chips to get onto 3nm sooner, but that seems to have been a limited run, and they are now moving their newer generations to the cheaper N3E (and eventually N3P).
If Apple can only run AI on its newest chips, that will severely limit its ability to get this in front of most of its customers. There doesn't seem to be anything in the M4 that is uniquely suited to AI, just more and faster components. At most, AI would run faster on newer chips and slower on older ones, which is a familiar pattern. There are also likely to be some AI processes that can only run server-side.
The big difference for AI seems to be that the M3's Neural Engine can't issue two INT8 ops in place of a single FP16 op. The M4 can, just like the A17 Pro, giving it roughly double the throughput for the majority of the tasks it will be doing (plus a few percent more from minor improvements or higher clocks).
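To see why dual-issue INT8 roughly doubles peak throughput, here's a back-of-the-envelope sketch. All of the numbers (core count, MACs per core, clock speed) are illustrative assumptions, not Apple's published specs; the point is only that replacing each FP16 MAC slot with two INT8 MACs per cycle doubles the theoretical peak.

```python
# Illustrative sketch: why dual-issue INT8 doubles Neural Engine peak throughput.
# All figures below are hypothetical, chosen only to show the arithmetic.

def peak_ops_per_second(units, macs_per_unit_per_cycle, clock_hz, ops_per_mac=2):
    """Theoretical peak ops/s for a MAC array (1 MAC = 1 multiply + 1 add = 2 ops)."""
    return units * macs_per_unit_per_cycle * clock_hz * ops_per_mac

# Hypothetical 16-core NPU at 1.4 GHz with 256 FP16 MAC slots per core.
fp16 = peak_ops_per_second(units=16, macs_per_unit_per_cycle=256, clock_hz=1.4e9)

# Dual-issue INT8: each FP16 slot executes two INT8 MACs per cycle instead.
int8 = peak_ops_per_second(units=16, macs_per_unit_per_cycle=512, clock_hz=1.4e9)

print(f"FP16 peak: {fp16 / 1e12:.2f} TOPS")   # 11.47 TOPS
print(f"INT8 peak: {int8 / 1e12:.2f} TOPS")   # 22.94 TOPS
print(f"Speedup:   {int8 / fp16:.1f}x")       # 2.0x
```

Real-world gains land below this 2x ceiling, since only the INT8-quantized portions of a model benefit, but it explains why marketing TOPS figures doubled between the M3 and M4 generations.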