I've never used magnetic tape or database/SQL, but I would think, now that we're a quarter of the way through the 21st century, that developers would have learned to abstract their application code as far as possible away from any assumptions about the underlying hardware. Is that not the case?

In the long term, I do see declining support for HDD, at the application level and, in Apple's case, at the filesystem level too. Developers optimize their programs for the hardware they have, and these days they tend not to have HDDs, so the algorithms they use are not HDD-aware. The same thing happened decades ago with tape storage: when tape was dominant, people developed all sorts of sorting algorithms optimized around the performance characteristics of tapes. Those still worked on HDD, but when people later developed new algorithms that assumed HDD, those were unusable on tapes.

I saw the SSD equivalent of that in an update to MS SQL Server several versions back. They changed the default sort algorithm to run more efficiently on SSD and in memory, but it is painful when it runs out of memory on HDD: roughly 50x slower. There's a "secret" (hard-to-find) flag to use the old sort algorithm, which restored performance for large databases stored on HDD in those situations, but you can see where things are going.
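For anyone who hasn't seen one of those tape-era algorithms, here's a minimal sketch (Python, with an illustrative chunk size; this is not the actual SQL Server implementation, just the classic shape of a storage-aware sort): an external merge sort, where every read and write is strictly sequential. That sequential discipline is what made it fast on tape and keeps it tolerable on HDD.

    import heapq
    import os
    import tempfile

    CHUNK_LINES = 100_000  # lines sorted in RAM per run; illustrative, not tuned

    def external_sort(input_path: str, output_path: str) -> None:
        """Sort a large text file using only sequential I/O.
        (Sketch: assumes the input ends with a newline.)"""
        run_paths = []
        # Phase 1: read fixed-size chunks, sort each in memory, and write
        # each back out as a sorted "run" -- purely sequential writes.
        with open(input_path) as src:
            while True:
                chunk = [line for _, line in zip(range(CHUNK_LINES), src)]
                if not chunk:
                    break
                chunk.sort()
                fd, path = tempfile.mkstemp(text=True)
                with os.fdopen(fd, "w") as run:
                    run.writelines(chunk)
                run_paths.append(path)
        # Phase 2: k-way merge of the sorted runs. Each run is consumed
        # front to back, so a tape drive (or disk head) never seeks backwards.
        runs = [open(p) for p in run_paths]
        try:
            with open(output_path, "w") as dst:
                dst.writelines(heapq.merge(*runs))
        finally:
            for f in runs:
                f.close()
            for p in run_paths:
                os.remove(p)

An algorithm written for SSD can instead assume random reads are nearly free and skip this careful run-and-merge discipline, which is exactly what falls apart when the data ends up back on a spinning disk.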
Why would a Mac app developer need to know or care whether the files they're working on are stored on an HFS+ HDD, an APFS SSD, a NAS, a cloud service, or, for that matter, on the moon?