They used machine learning to identify the acoustically important aspects of the room, which makes it possible to deterministically map the sound to the listening area in a way that gives the illusion of good room acoustics. Of course, deconstructing the audio signal into 7.1 channels is non-trivial. I can think of a few approaches to try, but none of them stand out as an obvious solution.
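To give a flavor of why it's non-trivial: the most naive approach I can think of is a passive matrix upmix, roughly what the old passive Dolby Surround decoders did, deriving a center channel from the sum of the stereo pair and the surrounds from the difference. Here's a rough NumPy sketch (the function name and channel layout are mine and purely illustrative, certainly not what Apple ships):

```python
import numpy as np

def passive_matrix_upmix(left: np.ndarray, right: np.ndarray) -> dict:
    """Crude passive-matrix upmix of a stereo signal to a 5.1-ish layout.

    Illustration only: the derived channels are simple sums/differences,
    not true discrete channels, so spatial separation is poor.
    """
    center = 0.5 * (left + right)       # shared (mid) content
    surround = 0.5 * (left - right)     # ambient (side) content
    lfe = center.copy()                 # would normally be low-pass filtered
    return {
        "front_left": left,
        "front_right": right,
        "center": center,
        "surround_left": surround,
        "surround_right": -surround,    # phase-inverted copy
        "lfe": lfe,
    }

# Example: one second of a stereo test signal at 48 kHz
sr = 48_000
t = np.arange(sr) / sr
left = np.sin(2 * np.pi * 440.0 * t)
right = np.sin(2 * np.pi * 554.0 * t)
channels = passive_matrix_upmix(left, right)
print({name: ch.shape for name, ch in channels.items()})
```

The derived channels are just recombinations of what was already in the stereo pair, so the separation is poor; anything smarter quickly lands you in source-separation territory, which is exactly where it stops being obvious.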
Previous attempts (at reasonable consumer prices) were largely limited to volume and (sometimes) EQ adjustments. Earlier DSP presets were novel, but rarely sounded like the space they claimed to replicate, e.g., a concert hall. Dolby Pro Logic and its variants are a notable exception, but they required a properly encoded source and a well-arranged room, neither of which the HomePod needs.
The Bang & Olufsen Beolab 90 looks the most similar to the HomePod in audio functionality, albeit without automated setup, and with enough power to get virtually any renter evicted, and, of course, over 120x the price of a pair of HomePods. (I would expect the drivers and amps of the Beolabs to be far superior to the HomePod, while B&O is probably well behind Apple with respect to ML.)
Incidentally, I live close to a B&O showroom, so I plan to check them out in the next few weeks to see how they compare to my HomePod (which I still haven't had the opportunity to unbox).
No. That doesn't address beamforming at all, or how Apple is employing the technology. It's not about "deconstructing the audio into 7.1 channels."
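For anyone who hasn't run into the term: beamforming means driving an array of transducers with per-element delays (and gains) so the radiated wavefronts add constructively in a chosen direction. A toy delay-and-sum sketch for a circular driver array (the geometry, names, and numbers here are made up for illustration, not anything Apple has published):

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at roughly room temperature

def delay_and_sum_feeds(signal: np.ndarray, sr: int,
                        n_drivers: int = 7, radius: float = 0.07,
                        steer_deg: float = 0.0) -> np.ndarray:
    """Produce one delayed copy of `signal` per driver so the radiated
    wavefronts add constructively toward `steer_deg` (far-field model).

    Toy circular-array model; real products use per-driver filters
    (fractional delays, gains, crossovers), not integer-sample shifts.
    """
    angles = 2 * np.pi * np.arange(n_drivers) / n_drivers  # driver positions on the ring
    steer = np.deg2rad(steer_deg)
    # Projection of each driver position onto the steering direction:
    # a larger projection means the driver is closer to the target direction.
    proj = radius * np.cos(angles - steer)                 # meters
    # Closer drivers fire later so all wavefronts arrive together.
    delays = (proj - proj.min()) / SPEED_OF_SOUND          # seconds, >= 0
    delay_samples = np.round(delays * sr).astype(int)

    out = np.zeros((n_drivers, len(signal) + int(delay_samples.max())))
    for i, d in enumerate(delay_samples):
        out[i, d:d + len(signal)] = signal / n_drivers     # crude gain normalization
    return out

# Example: steer a 1 kHz tone 30 degrees off the array's reference axis.
sr = 48_000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 1000.0 * t)
feeds = delay_and_sum_feeds(tone, sr, steer_deg=30.0)
print(feeds.shape)  # (7, n_samples)
```

The point is that the per-driver signals are shaped per direction, which is a very different problem from splitting a mix into more channels.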