1.2 or 2.5 M hours MTBF is not really all that relevant: either figure is many, many more years of continuous use than the useful lifetime of the disks [1.2 M hours is almost 137 years ...]
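Just to show the arithmetic behind that (the 24/7 assumption is mine, but that's what MTBF hours amount to):

```python
# Convert an MTBF rating in hours to years of continuous (24/7) operation.
HOURS_PER_YEAR = 24 * 365  # 8760, ignoring leap years

for mtbf_hours in (1_200_000, 2_500_000):
    years = mtbf_hours / HOURS_PER_YEAR
    print(f"{mtbf_hours:,} h MTBF ~= {years:.0f} years of 24/7 use")
```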
For the rest, it's the engineering requirements that differ between server use and RAID array use.
I know the MTBF makes it sound like one is more reliable than the other, but most failures follow not a Gaussian distribution but a "bathtub" distribution: drives either fail early in their life, or they fail when something wears out. In a RAID array they all wear out at more or less the same rate, so keeping them too long is VERY dangerous for your availability (or your data, for the fools running without backups), because you risk double failures.
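A rough way to see why same-aged drives are dangerous: simulate a small array where the wear-out times are clustered (same batch, same workload, powered on the same day) versus spread out, and count how often a second disk dies within a hypothetical 3-day rebuild window of the first. All the numbers here (4 disks, 5-year mean wear-out, the spreads, the rebuild window) are made up for illustration, not measured:

```python
import random

def double_failure_rate(spread_years, trials=100_000, disks=4,
                        mean_years=5.0, rebuild_days=3):
    """Fraction of trials where a 2nd disk wears out while the 1st is still rebuilding."""
    rebuild_years = rebuild_days / 365
    hits = 0
    for _ in range(trials):
        # Wear-out time (in years of service) for each disk in the array.
        ages = sorted(random.gauss(mean_years, spread_years) for _ in range(disks))
        # Does the second-earliest failure land inside the rebuild window of the first?
        if ages[1] - ages[0] < rebuild_years:
            hits += 1
    return hits / trials

print("same batch, same wear (tight spread):", double_failure_rate(spread_years=0.05))
print("mixed ages/batches (wide spread):    ", double_failure_rate(spread_years=1.0))
```

With the tight spread the second failure lands inside the rebuild window far more often than with the wide spread; the exact percentages are meaningless, the clustering effect is the point.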
MTBF is a prediction, at best estimated by an effort to age the drives artificially [obviously none were ever tested for 137 years; if they were, I'm calling shotgun on the next time machine trip]. Mostly, however, it's simply an engineering goal or even merely a marketing number. If it's an order of magnitude higher than what you need, don't worry about it.
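If you want to turn the marketing number into something you can reason about, the usual (idealized) reading is a constant failure rate during the flat part of the bathtub, which converts MTBF into an annualized failure rate. The exponential model is my assumption here, not something the datasheet promises:

```python
import math

HOURS_PER_YEAR = 24 * 365  # 8760

def annualized_failure_rate(mtbf_hours):
    """Chance a drive fails within one year of 24/7 use, assuming a constant failure rate."""
    return 1 - math.exp(-HOURS_PER_YEAR / mtbf_hours)

for mtbf in (1_200_000, 2_500_000):
    print(f"MTBF {mtbf:>9,} h -> AFR ~ {annualized_failure_rate(mtbf):.2%} per drive per year")
```

Both come out well under 1% per drive per year, which is why the difference between the two ratings isn't what should drive the purchase.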
NAS drives are designed to live their entire life packed close together in a frame that transmits vibration from one disk to another, and to power on and spin up while other drives are doing the same at the same time, so they are more resilient in a RAID array (at least in theory). All (decent) disks are tolerant of other things drawing a lot of power when they try to spin up, all of them tolerate being permanently powered on, and all of them are somewhat resistant to vibration, but for NAS drives this is all a primary design requirement.
Server disks are typically sturdy drives, but withstanding RAID usage isn't their main design requirement.
HTH