"Quarterly" makes it seem planned. It was more a serendipitous event.
I was working on the scripts, and two of the numbers involved are DAYS_PREV1 (number of days since the previous stats capture) and DAYS_NEXT (number of days to the next semi-annual capture). I saw they were converging, and the exact mid-point date was 03 October 2018. So I kept the captured data from that day, and continued working on the scripts to automate the production of this breathtaking panoply of thoroughly engaging tables. When those scripts were sufficiently tested, I used the new master script to process the data from 03 Oct, then edited the files to add all the supporting verbiage (Remarks don't write themselves), and made the posts.
One of the advantages of having nearly everything automated (i.e. scripted) is that it becomes so much simpler to collect the data and produce the tables. The actual data collection was automated for the 03 July capture; there's no way I was going to do that manually for 500+ users. Still, it took some last-minute hammering and tweaking to make sure it worked. I then spent the rest of the day at the beach.
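The capture itself is conceptually just a loop over member pages with a pause between requests. Here's a simplified sketch of that shape; the user list, URL pattern, and delay are just for illustration, not the actual capture script:
Code:
# Illustrative only: user_ids.txt, the URL pattern, and the 2-second pause are placeholders
stamp=$(date +%Y-%m-%d)
while read -r user_id; do
    # fetch one member page and append it to today's raw capture file
    curl -s "https://forums.macrumors.com/members/${user_id}/" >> "raw_${stamp}.html"
    sleep 2    # pause between requests so the server isn't hammered
done < user_ids.txt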
Turning the captured data into tables took roughly 1500 lines of assorted bash and awk scripts, plus several hours of tweaking and copy/pasting command lines into a Terminal.app window. If I'd spent the time in July to make it completely automated, it would have been closer to August before I got things posted. Even so, I had some bugs, so I ended up making corrections to the first tables I posted.
Now, however, there are about 1300 lines of bash & awk scripts, with effectively nothing done manually at all. After I've captured a new file of user data, here's what it now takes to produce all the tables:
Code:
bash 03c_all_tables_bash.txt 2018-10-08
This master script automatically determines where the new capture file is (by date), picks which of the dated previous captures to use for calculating the differential stats (delta-rank, post-rate, and delta post-rate), and then builds all the tables from the selected data. That process runs in well under 5 seconds, and other than pasting a line into Terminal, none of it is manual. The data capture itself takes ~20 minutes, but a lot of that is due to an intentional delay inserted to avoid slamming the MacRumors servers with a kazillion HTTP requests.
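To give a flavor of what's happening under the hood, here's a simplified sketch of the kind of logic involved: find the previous dated capture, work out the days between captures, then join old and new data per user to get the deltas. The captures/ directory, the one-TSV-per-date layout, and the field positions (user_id, rank, post count) are purely illustrative; they're not what 03c_all_tables_bash.txt actually does:
Code:
#!/bin/bash
# Illustrative sketch only: paths, file layout, and field positions are placeholders
new_date="$1"                                  # e.g. 2018-10-08
new_file="captures/${new_date}.tsv"

# pick the most recent dated capture that is older than the new one
prev_file=$(ls captures/*.tsv | sort | awk -v n="$new_file" '$0 < n' | tail -1)
prev_date=$(basename "$prev_file" .tsv)

# days between captures (macOS/BSD date syntax, since this runs in Terminal.app)
new_s=$(date -j -f "%Y-%m-%d" "$new_date" "+%s")
prev_s=$(date -j -f "%Y-%m-%d" "$prev_date" "+%s")
days=$(( (new_s - prev_s) / 86400 ))

# assume each capture line is: user_id <TAB> rank <TAB> post_count
# after the join: $1=id, $2=old_rank, $3=old_posts, $4=new_rank, $5=new_posts
join -t $'\t' -j 1 <(sort -k1,1 "$prev_file") <(sort -k1,1 "$new_file") |
awk -F '\t' -v OFS='\t' -v days="$days" '{
    delta_rank = $2 - $4             # positive = moved up the list
    post_rate  = ($5 - $3) / days    # posts per day since the previous capture
    print $1, delta_rank, post_rate
}'
The real master script does considerably more than this (all the tables, not just one delta column), but the general pattern is the same: everything is keyed off the date passed on the command line.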
There are some additional improvements I'm considering, and maybe some other things I could do with the data, but those will have to wait a bit. One thing I'd like to do is more graphing, but I want it automated (and to satisfy some other constraints), so I'm still pondering the best approach on that front.