It's done in code / software, which can continually be tweaked, enhanced, and refined. Even something as simple as Google's homepage, which is essentially just a logo and a search field, is constantly updated.
The groundwork is there; give them time to work it out, since a much larger sample of images can now be produced and analyzed for improvement.
What? A much-hyped feature is "an extra credit project"? lol
I'm glad Siri is still in beta in your mind, even though it's been out for almost five years now. I really hope you were joking. It's sad that Apple apologists make all kinds of excuses for their half-baked features, but give other companies hell for less.
Computer vision is a non-trivial challenge, especially for the fine edge detail people expect from their portrait photos.
Personally, I just think they could have done more to make it compelling through post-processing options. If they're going for the Instagram crowd, offer custom bokeh shapes like hearts, stars, etc. Let people simulate changing f-stops to increase or decrease the blur after the fact. Maybe even let people create little rack-focus Live Photos.
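For the curious, the f-stop idea above is straightforward in principle: the size of the blur circle scales with the aperture diameter (focal length divided by f-number), so a synthetic depth-of-field filter just convolves the image with an aperture-shaped kernel whose radius is driven by the chosen f-stop. Swap the kernel mask and you get custom bokeh shapes. Here's a minimal NumPy sketch, where the function names and the radius mapping are illustrative assumptions, not Apple's actual implementation:

```python
import numpy as np

def bokeh_kernel(radius, shape="disk"):
    """Normalized aperture kernel. A heart or star bokeh would just
    use a different boolean mask here (illustrative shapes only)."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    if shape == "disk":
        mask = (x**2 + y**2) <= radius**2
    else:  # "square" as a stand-in for any custom mask
        mask = np.ones_like(x, dtype=bool)
    k = mask.astype(float)
    return k / k.sum()

def blur_radius(f_stop, max_radius=8, wide_open=1.8):
    """Map f-stop to a blur radius in pixels: smaller f-number
    (wider aperture) -> bigger blur circle. Purely a toy mapping."""
    return max(int(round(max_radius * wide_open / f_stop)), 0)

def apply_bokeh(img, f_stop, shape="disk"):
    """Blur a 2D grayscale image with an aperture-shaped kernel."""
    r = blur_radius(f_stop)
    if r == 0:
        return img.copy()
    k = bokeh_kernel(r, shape)
    pad = np.pad(img, r, mode="edge")
    out = np.zeros_like(img, dtype=float)
    # Direct correlation: shift-and-accumulate the padded image.
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += k[dy, dx] * pad[dy:dy + img.shape[0],
                                   dx:dx + img.shape[1]]
    return out
```

A real "change the f-stop later" feature would additionally weight the blur per pixel by a depth map, which is exactly where the edge-detail problem mentioned above comes in.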
There's a lot of potential that they've overlooked at this point.