Yes, the Ripjaw memory by itself runs at 1867 MHz. Just like the original Apple RAM.
I decided to do a little speed test to see how much performance I'm really losing by combining the sets and stepping down to 1600 MHz. I tested a scenario where any difference in memory speed should have a measurable impact: encoding a video with ffmpeg. I'm sure this is not the purest test of memory speed, but it's more relevant to my real-world usage. If there's not much difference, do I really care?
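For anyone who wants to repeat this, the timing harness is nothing fancy - run the same encode a few times and keep the fastest wall-clock time. Here's a minimal sketch of that idea; the input file name, codec, preset, and run count are placeholders, not my exact command:

```python
#!/usr/bin/env python3
"""Repeat the same ffmpeg encode several times and keep the fastest run."""
import subprocess
import time

# Placeholders -- substitute the real source clip and encode settings.
RUNS = 5
CMD = [
    "ffmpeg", "-y",          # overwrite the output without asking
    "-i", "input.mov",       # source clip (placeholder name)
    "-c:v", "libx264",       # assumed codec; any CPU-bound encode works
    "-preset", "medium",
    "output.mp4",
]

best = None
for i in range(RUNS):
    start = time.monotonic()
    subprocess.run(CMD, check=True, capture_output=True)
    elapsed = time.monotonic() - start
    print(f"run {i + 1}: {elapsed:.2f} s")
    best = elapsed if best is None else min(best, elapsed)

minutes, seconds = divmod(best, 60)
print(f"fastest: {int(minutes)}:{seconds:05.2f}")
```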
Below are my results. I encoded the same file 4-6 times in each configuration and took the fastest run. The test machine has the 4 GHz i7 and a 1 TB SSD. I also tested my 2012 Mac mini for reference (2.6 GHz quad-core i7, 16 GB memory at 1600 MHz, SATA SSD).
- Ripjaw 16 GB @1867 MHz: 4:22.65
- Apple 8 GB @1867 MHz: 4:22.75
- Combined 24 GB @1600 MHz: 4:27.33
- Mac mini 16 GB @1600 MHz: 6:54.19
Results for each configuration varied by ±1 second, so the first two are essentially equal and within the noise. The combined set was only about 1.8% slower. For a 2-hour encode, this translates to about 2 minutes longer.
Compared to the Mac mini I was using previously, that 1.8% is basically lost in the noise. The mini is about 58% slower - the hypothetical 2-hour encode would take an extra hour and 9 minutes on my old machine.
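Those projections are just the ratio of the measured times scaled up to a longer job. A quick sketch of the arithmetic, using the numbers above (Python here purely as a calculator):

```python
def secs(mm_ss: str) -> float:
    """Convert an 'M:SS.ss' time into seconds."""
    m, s = mm_ss.split(":")
    return int(m) * 60 + float(s)

fastest = secs("4:22.65")    # Ripjaw 16 GB @ 1867 MHz
combined = secs("4:27.33")   # combined 24 GB @ 1600 MHz
mini = secs("6:54.19")       # 2012 Mac mini

for label, t in [("combined", combined), ("mini", mini)]:
    slowdown = t / fastest - 1          # fractional slowdown vs. the fastest run
    extra = 120 * slowdown              # extra minutes on a 2-hour encode
    print(f"{label}: {slowdown:.1%} slower, ~{extra:.0f} min extra on a 2 h encode")
```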
Based on this, I think I'm OK with keeping the combined sets running at 1600 MHz. Eventually, when prices fall in a couple of years, I'll jump to 64 GB, hopefully at 1867 or 2133 MHz.
It might be interesting to test some other long encodes, such as with iMovie. I suspect this would depend more on the GPU than memory speed, but you don't really know until you try it. Even different ffmpeg options could stress memory speed differently, but in the end do I want to stress over every possible optimization or do I want to just use the machine?