casperes... thanks, once again, you offer some great advice!
As for data preservation.. I learned, decades ago, to keep redundant copies of important files in numerous different places. Way back in the day, I learned the hard way that an HDD could perform well for a long time and then fail without warning - and I learned that lesson well.
Since then, I save everything I do in multiple locations. These days, I save to the local drive and then send a copy up to my iCloud Drive. Then, when I jump on any of my other Macs, I do a quick save of the new files back down from the iCloud Drive to the local drive of that machine... so, I've got copies of everything important to me on five local Macs (with SSDs) AND up on the iCloud Drive. Taking no chances with data loss!
Very true.
I use iCloud Drive myself to preserve some data off-site, along with a Time Capsule, and data existing on both my Macs. But if, knock on wood, my house were to burn down, it would likely take everything not on iCloud all at once - hence why off-site storage is the ideal. That said, I feel like 99% of the data I have that isn't on iCloud is reproducible.
Anyway, just for the sake of completeness: if you or anyone else reading along feels they have enough backups elsewhere and wants to merge two drives under APFS, the way to do it is to boot from Internet Recovery, a USB installer or another bootable medium that is not part of the drives being fused, and run the command
diskutil apfs createContainer -main [disk1] -secondary [disk2]
where [disk1] is the fast one and [disk2] is the slow one. -main and -secondary can be omitted, in which case the system will try to assign the roles automatically, but it can sometimes make incorrect decisions.
You're right, I was just looking at the amount of used vs. unused memory. And at one point did notice what I seem to remember being about 500MB of data "swapped".
So, if I understand you right, that status, showing that perhaps 500MB was swapped, isn't necessarily an indicator that I need to get more memory?
Not necessarily, no. Even if the 500MB of swap is actively used, it may still not be an issue in real-world cases.
Here's an example situation. You have 30 tabs open in Safari. You switch over to Photos, where you're editing a photo. The tabs the system deems you're least likely to revisit soon are written to disk to free up more memory for the edit, if that becomes necessary. Now let's say it does become necessary, and the Safari tabs are removed from memory, so the system will have to retrieve them from disk when it needs them again.
Now when you go back to Safari, it's on the tab you last left it at, which, crucially, is still in memory. The memory manager sees Safari is in the foreground again, and before you even go to the tab that was paged to disk, it's already retrieving it, so it's back in memory before you consider going there.
To do this, it pages out some of the intermediate results of your photo edits - so maybe if you click undo 100 times it'll take a little extra time to load, but otherwise everything else stays in memory.
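The page-out/prefetch dance described above can be sketched as a toy LRU pager. This is a deliberate simplification, not Apple's actual algorithm; all names and capacities here are made up for illustration:

```python
from collections import OrderedDict

# Toy sketch of the paging behaviour described above (NOT Apple's real
# memory manager): least-recently-used pages get written to "disk" when
# RAM fills up, and an app's pages are prefetched when it returns to
# the foreground.

class ToyPager:
    def __init__(self, capacity: int):
        self.capacity = capacity   # number of pages that fit in "RAM"
        self.ram = OrderedDict()   # page -> data, kept in LRU order
        self.disk = {}             # pages that have been swapped out

    def touch(self, page, data=None):
        """Access a page, swapping it in from disk if needed."""
        if page in self.disk:
            self.ram[page] = self.disk.pop(page)   # swap in
        elif page not in self.ram:
            self.ram[page] = data                  # first allocation
        self.ram.move_to_end(page)                 # mark most recently used
        while len(self.ram) > self.capacity:       # evict LRU pages to disk
            old, val = self.ram.popitem(last=False)
            self.disk[old] = val

    def prefetch(self, pages):
        """Pull an app's pages back in before they're actually needed."""
        for page in pages:
            if page in self.disk:
                self.touch(page)

pager = ToyPager(capacity=3)
for tab in ("tab1", "tab2", "tab3"):
    pager.touch(tab, "safari")
pager.touch("photo-edit", "photos")   # oldest Safari tab gets paged out
# Back to Safari: prefetch before the user clicks the swapped-out tab.
pager.prefetch(["tab1", "tab2", "tab3"])
```

Note the side effect of the final prefetch: to make room for the Safari tabs, the pager evicts the photo-editing data - exactly the "intermediate results of your edits get paged out" trade described above.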
Hope this illustrates how intelligent memory management systems have gotten, especially in macOS and Linux (and increasingly under Windows, though still not quite to the same degree). The downside is that it's a tad harder to determine when more memory would really help in a real-world scenario - but that's why Apple made the memory pressure graph: to help people see how much the system is relying on pulling things from disk. If it goes yellow, more memory could speed things up to a certain degree; in the red, you barely have (or don't have) enough memory for your active apps.
If it's in the green, you'll probably see close to zero improvement - perhaps outside of caching a bit extra for "maybe you'll need this sometime in the future", but even then, with the speed of modern SSDs and the expectation that unlaunched apps load from cold storage anyway, it's likely not a massive difference either.
Speaking of SSD speed: RAM is still many times faster than an SSD, but if the data being retrieved is a small chunk here and there, the SSD can be fast enough that even when you don't have enough RAM and something needs to come off disk immediately, it happens quickly enough that you don't really notice.
For larger chunks of data it starts to matter more and more. I can't remember the newer speeds, but dual-channel DDR3-1333 at CL11 gives around 21.3GB/s, which is a heck of a lot more than even the best SSDs at around 3GB/s. But if what you're retrieving from disk is a 4MB chunk of data, the perceived effect may not be very big.
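A quick back-of-envelope calculation makes the point concrete. The bandwidth figures are assumptions for illustration - roughly 21.3GB/s for dual-channel DDR3-1333 and 3GB/s for a fast SSD of that era:

```python
# Rough transfer-time comparison, RAM vs SSD.
# Assumed bandwidths (illustrative, not measured):
RAM_GBPS = 21.3   # dual-channel DDR3-1333
SSD_GBPS = 3.0    # high-end SSD

def transfer_ms(size_mb: float, bandwidth_gbps: float) -> float:
    """Milliseconds to move size_mb at the given GB/s (1 GB = 1000 MB here)."""
    return size_mb / bandwidth_gbps

for size_mb in (4, 500, 4096):
    ram = transfer_ms(size_mb, RAM_GBPS)
    ssd = transfer_ms(size_mb, SSD_GBPS)
    print(f"{size_mb:>5} MB: RAM {ram:8.2f} ms, SSD {ssd:8.2f} ms, "
          f"extra wait {ssd - ram:8.2f} ms")
```

For a 4MB chunk the SSD costs you about a millisecond over RAM - imperceptible - while a multi-gigabyte swap-in crosses the one-second mark, which is exactly when you start to feel it.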
And remember, even if you have 500MB of VM data stored on disk: a) not all of it may be needed, or it may be accessed in a way you never notice - like the hidden prefetching in the example I gave. And b) it's likely not 500MB of data all required in one place. It can be tiny chunks spread across multiple processes - Spotify may swap 2MB, Safari 10MB, Photoshop 100MB and so on. And in some cases they may never need that memory again; it may be thumbnails that were allocated to virtual memory "just in case" or something.
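To put numbers on that last point, here's a hypothetical per-process breakdown (all figures made up, chosen only so they sum to the 500MB from the example) showing how a swap total can be scattered in small, possibly never-touched pieces rather than one block an app is waiting on:

```python
# Hypothetical swap breakdown - every number here is invented for
# illustration; real per-process swap varies wildly.
swapped_mb = {
    "Spotify": 2,
    "Safari": 10,
    "Photoshop": 100,
    "Mail": 38,
    "cached thumbnails (may never be read back)": 350,
}

total = sum(swapped_mb.values())
print(f"total swapped: {total} MB")
for name, mb in swapped_mb.items():
    print(f"  {name:>45}: {mb:3d} MB ({mb / total:4.0%})")
```

The headline "500MB swapped" says nothing about whether any single app ever has to wait on a large contiguous read.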