How does synthetic benchmark performance help with real-world applications? For example, can it software-decode VP9 YouTube videos above the 1080p that even the newest iPad Pro is still limited to? For comparison, a six-year-old desktop CPU can do 4Kp30 VP9 YouTube software decoding.
Performance/watt isn't a selling point when the iPhone XR and even the XS Max trail the competition. Apple really needs to up its game.
Apple has made the decision not to support VP9 in favor of forcing the transition to AV1 (similar to how they didn't support Flash in order to force HTML5). So whether the hardware can decode it but the capability isn't exposed (as on their Intel desktops) or simply can't (the A series) is irrelevant.
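In practice, that decision shows up in the codec-negotiation step a web player runs before streaming. Here's a minimal sketch of that fallback logic — the preference order and the support-check callback are assumptions for illustration (a real browser player would call `MediaSource.isTypeSupported()`, stubbed here so the snippet is self-contained):

```javascript
// Hypothetical preference order: AV1 first (Apple's stated direction),
// then VP9, then H.264 as the universal fallback.
const CODEC_PREFERENCE = [
  'video/mp4; codecs="av01.0.05M.08"', // AV1
  'video/webm; codecs="vp9"',          // VP9
  'video/mp4; codecs="avc1.640028"',   // H.264 High
];

// Pick the first codec the platform reports as playable.
// isTypeSupported is a stand-in for MediaSource.isTypeSupported().
function pickCodec(isTypeSupported) {
  for (const mime of CODEC_PREFERENCE) {
    if (isTypeSupported(mime)) return mime;
  }
  return null; // nothing playable
}

// Simulated iOS-like platform: no VP9, no AV1 exposed,
// so the player falls back to H.264.
const iosLike = (mime) => mime.includes('avc1');
console.log(pickCodec(iosLike)); // → 'video/mp4; codecs="avc1.640028"'
```

The point being: if the platform never reports VP9 as supported, the server simply serves H.264 (or, eventually, AV1), so raw VP9 software-decode throughput never enters the picture on that device.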
And apparently that VP9 performance isn't helping in other areas: in the "real world," the A12 clobbers it in the 4K Adobe processing test here: https://www.tomsguide.com/us/snapdragon-855-benchmarks,news-29129.html
They are both good chips, with performance that would have been acceptable for desktops just a few years ago. Each one has areas where it beats the other, and in many cases what matters is how well the OS integrates with the silicon. In the end, it doesn't matter much. Are the masses really going to switch from iOS to Android (or vice versa) over a few benchmarks or specific use cases?
Personally, I like the new S10 series because it will force Apple to make a better product.