> Has anyone run benchmarks on a clean install of macOS Ventura versus a clean install of macOS Big Sur, for example?

It makes sense to do it, but they have to be run on exactly the same machine under exactly the same conditions, or the test is void.
For example:
- If the test checks startup times immediately after the clean install has completed, the two OSes will be doing different things (including initial setup and calibration), so the test is invalid. Ventura will be doing things on startup that Big Sur doesn't, and there may be changes in the Ventura code that make those tasks quicker or slower.
- If not, and the test is done after a user has been set up, were the two users set up with exactly the same settings? What about features Ventura has that weren't available in Big Sur? Differences like that can mean extra things running in the background, for example extra security processes. That may make Ventura look slower, but it's just the OS doing what it needs to do for that user (see the sketch below for one way to record those conditions before timing anything).
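As a rough illustration of what "same conditions" means in practice, here is a minimal Swift sketch. The helper function, the `ps` snapshot, and the fixed workload are just assumptions for this example, not any standard benchmark: it records what is already running in the background and then times a fixed CPU-bound task, which is easier to compare across two installs than raw startup time.

```swift
import Foundation

// Hypothetical helper: run a command and capture its standard output.
func run(_ command: String, _ args: [String]) -> String {
    let task = Process()
    task.executableURL = URL(fileURLWithPath: command)
    task.arguments = args
    let pipe = Pipe()
    task.standardOutput = pipe
    try? task.run()
    let data = pipe.fileHandleForReading.readDataToEndOfFile()
    task.waitUntilExit()
    return String(data: data, encoding: .utf8) ?? ""
}

// Snapshot the background processes before timing anything, so the
// two installs can be compared like for like.
let processSnapshot = run("/bin/ps", ["-axo", "comm"])
print("Processes running before the benchmark:\n\(processSnapshot)")

// Time a fixed, CPU-bound workload rather than "startup time",
// which depends on whatever the OS happens to be doing at boot.
let start = Date()
var total = 0.0
for i in 1...50_000_000 {
    total += Double(i).squareRoot()
}
let elapsed = Date().timeIntervalSince(start)
print("Workload finished in \(elapsed) seconds (checksum \(total))")
```

Run it on each install with the same user setup; if the process snapshots differ noticeably, the timing numbers aren't really comparable.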
I have no inside knowledge, but I doubt Apple deliberately slows down its older machines. If Apple executives were telling their subordinates to write code that slows machines down on purpose, that would make Apple's employees complicit in a very dodgy, and likely illegal, scheme. And if that were the case, why has no ex-Apple employee ever mentioned it?