I found a paper Google published about how they achieve bokeh on their Pixel phones. It's pretty interesting, and it appears to be an even smarter solution than the dual-camera setup because it works just as well and saves space in the phone. They use the dual-pixel autofocus system to create a depth map, in conjunction with ML of course, but it's not just that the neural engine understands what's foreground and what isn't; there is still a depth map in play, just not from a second lens but derived from the AF system somehow. Fascinating stuff!

They're using machine learning to tell what the subject is, and everything else is background that can be blurred; that's my guess. The Google Pixel can fake bokeh pretty well with a single lens, so Apple, with its new A12 and the incredible computing power of the neural engine part of the chip, should be able to deliver a similar experience, or an even better one.
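Just to make the idea concrete, here's a minimal sketch (not Google's actual pipeline, and the function names and parameters are my own) of how an image plus a depth map can be turned into fake shallow depth of field: pixels are blurred in proportion to how far their depth is from the in-focus plane. In the real Pixel pipeline the depth map would come from dual-pixel disparity refined by a neural network; here it's just assumed to exist as an array.

```python
# Illustrative sketch only: synthetic bokeh from an image + depth map.
# Assumes `image` is an HxWx3 numpy array and `depth` is an HxW array.
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_bokeh(image, depth, focal_depth, max_sigma=8.0, layers=6):
    """Blur each pixel according to its distance from the focal plane,
    approximated with a small stack of pre-blurred copies of the image."""
    # Normalised "how out of focus is this pixel" map in [0, 1]
    defocus = np.abs(depth - focal_depth)
    defocus = defocus / (defocus.max() + 1e-6)

    # Pre-blur the whole image at a few strengths (0 = sharp, max_sigma = most blurred)
    sigmas = np.linspace(0.0, max_sigma, layers)
    stack = [gaussian_filter(image, sigma=(s, s, 0)) for s in sigmas]

    # Pick, per pixel, the layer whose blur matches its defocus amount
    idx = np.clip((defocus * (layers - 1)).round().astype(int), 0, layers - 1)
    out = np.zeros_like(image, dtype=np.float32)
    for i in range(layers):
        mask = (idx == i)[..., None]   # broadcast mask over colour channels
        out = np.where(mask, stack[i], out)
    return out.astype(image.dtype)
```

So the ML part mainly has to produce a good depth/segmentation map; the blur itself is comparatively cheap, which is presumably why a fast neural engine like the A12's matters so much here.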
Source: https://www.dpreview.com/news/48503...ulate-shallow-dof-from-a-single-mobile-camera