
mward333

macrumors 6502a
Original poster
Jan 24, 2004
575
34
I am planning to remove the 5 Toshiba hard drives from the Pegasus J2i and R4i, and to use these 6 hard drives instead:
Any difficulties that I should anticipate before doing so?
 
The R4i might be better off with NAS rated disks than server (enterprise) rated ones. They're close together in there ...
Something like this:
https://www.amazon.com/dp/B07SLPTK17/
I appreciate your advice about this, @s66 ! Can you please explain further? I've spent a lot of time reading/thinking about this, and I would like to better understand your insights, please. In the R4i, are you worried about heat? vibration? I'm just curious! The drives I mentioned have 2.5M-hr MTBF, and the ones you mentioned have only 1.2M hours MTBF. I would sincerely appreciate your insights and opinions about this issue. Thank you in advance for your help.
 
1.2 or 2.5 M hours is not really all that relevant: it's many, many more years of continued use than the useful lifetime of the disks [1.2 M hours is almost 137 years ...]

For the rest: it's the engineering requirements that are different for server use and raid array use.

I know the MTBF makes it sound like one is more reliable than the other, but most failures follow not a Gaussian distribution but a "bathtub" distribution: either they fail early in their life, or they fail when something wears out (and in a RAID array they all wear out at more or less the same rate, which is VERY dangerous for your availability (or your data [for the fools running without backups]) if you keep them too long, as you risk double failures).

MTBF is a prediction, at best guessed by an effort to age the drives artificially [obviously none were ever tested for 137 years - if they did: calling shotgun on the next time machine trip] mostly however it's simply an engineering goal or even merely a marketing number. If it's an order of magnitude higher than what you need: don't worry about it.
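To put those numbers in perspective: under the constant-failure-rate model that MTBF implies (which, per the bathtub curve above, real drives don't actually follow), you can convert MTBF into an annualized failure rate. A back-of-envelope sketch:

```python
import math

def annualized_failure_rate(mtbf_hours: float) -> float:
    """Probability a drive fails within one year (~8,766 hours),
    assuming a constant failure rate (exponential model)."""
    return 1 - math.exp(-8766 / mtbf_hours)

for mtbf in (1.2e6, 2.5e6):
    print(f"MTBF {mtbf/1e6:.1f}M h -> AFR {annualized_failure_rate(mtbf):.2%}")
# MTBF 1.2M h -> AFR 0.73%
# MTBF 2.5M h -> AFR 0.35%
```

Either way you're under 1% per year on paper, which is why the difference between 1.2M and 2.5M hours matters far less than having backups.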

NAS drives are designed to live their entire life close together in a frame that will transmit vibration from one disk to the other, and to power on and spin up while other drives are doing the same at the same time, so they are more resilient in a RAID array (at least in theory). Now all (decent) disks are tolerant of other things drawing much power when they try to spin up, all of them are tolerant of being permanently on, and all of them are somewhat resistant to vibration, but NAS ones have all of this as their main requirement.

Server disks typically are sturdy drives, but withstanding RAID usage isn't their main requirement.

HTH
 
@AidenShaw : Which drives do you recommend? I have *always* loved your posts on Macrumors (I've been a long-time reader on here), and I highly respect your opinion!

@s66 : When I look at the Seagate page, there are 4 categories of hard drives:
and the Exos Enterprise drives are listed with the NAS drives:
Everything I see in the specs / description leads me to believe that the Exos Enterprise drives are more durable than the IronWolf or IronWolf Pro.
For example: the # of bays is "unlimited" for Exos Enterprise drives, but limited to 24 bays for IronWolf Pro. Of course, I am asking here in case I am wrong!
 
IMO you will be just fine. For the R4i I would advise you set up at least RAID-5 for safety. On the J2i you need a backup strategy to protect its 32TB of data.

I have the 16TB Seagate Exos and it's running fine without issues... but I strongly suggest, whatever you do, keep backups in mind, as all HDDs can fail at any time: within hours, or at some later point in time.
 
The # of Bays is "unlimited" for Exos Enterprise drives, and is limited to 24 bays for IronWolf Pro
Exactly. "Enterprise" > "NAS".

Check https://www.seagate.com/files/www-content/datasheets/pdfs/exos-x16-DS2011-1-1904US-en_US.pdf

[attachment: exos2.jpg]
 
and the Exos Enterprise drives are listed with the NAS drives:
Everything I see in the specs / description leads me to believe that the Exos Enterprise drives are more durable than the IronWolf or IronWolf Pro.
It has honestly been a while since I bought spinning drives [I'm retired], but indeed it seems these Exos are a bit more than the IronWolf ones. [Back when I bought and used loads of spinning disks, you really did not want what they marketed for servers in your RAID arrays - done that and got bitten]
 
It has honestly been a while since I bought spinning drives [I'm retired], but indeed it seems these Exos are a bit more than the IronWolf ones. [Back when I bought and used loads of spinning disks, you really did not want what they marketed for servers in your RAID arrays - done that and got bitten]
The main issue with NAS/array drives vs. system drives is that arrays get some really nasty vibration issues. The Exos drives are specially designed to handle the vibration issues.

Perhaps in the distant past there were enterprise drives without vibration management - but those are from days of legend. All of my servers come with embedded HW RAID controllers and 8 to 24 drive bays.
I have the 16TB Seagate Exos and it's running fine without issues.... but strongly suggest whatever you do, have backups in mind as all HDDs can fail at any time; within hrs or at some later point in time.
And in addition to backups, any important data is on RAID-6 (or RAID-60) arrays with hot spares. The best backups are those that you never use. And, I've never had to restore a full disk volume from backups (since 2001). I've had to restore sometimes when the wetware typed "rm -rf" on the wrong path.... ;)
 
You have all been extremely helpful to me! Thank you very much!
OK, I'll probably go for these drives. Yes, I recognize the importance of RAID schemes,
and we have some massive (offsite) storage mechanisms at our university, which I can use for backups.
Thanks again for these insights!
 
1.2 or 2.5 M hours is not really all that relevant: it's many, many more years of continued use than the useful lifetime of the disks [1.2 M hours is almost 137 years ...]

For the rest: it's the engineering requirements that are different for server use and raid array use.

I know the MTBF makes it sound like one is more reliable than the other, but most failures follow not a Gaussian distribution but a "bathtub" distribution: either they fail early in their life, or they fail when something wears out (and in a RAID array they all wear out at more or less the same rate, which is VERY dangerous for your availability (or your data [for the fools running without backups]) if you keep them too long, as you risk double failures).

MTBF is a prediction, at best guessed by an effort to age the drives artificially [obviously none were ever tested for 137 years - if they did: calling shotgun on the next time machine trip] mostly however it's simply an engineering goal or even merely a marketing number. If it's an order of magnitude higher than what you need: don't worry about it.

NAS drives are designed to live their entire life close together in a frame that will transmit vibration from one disk to the other, and to power on and spin up while other drives are doing the same at the same time, so they are more resilient in a RAID array (at least in theory). Now all (decent) disks are tolerant of other things drawing much power when they try to spin up, all of them are tolerant of being permanently on, and all of them are somewhat resistant to vibration, but NAS ones have all of this as their main requirement.

Server disks typically are sturdy drives, but withstanding RAID usage isn't their main requirement.

HTH

Disagree. I've had two IronWolf NAS drives die (in a NAS), and the Exos drives outlasted them. My NAS is just an 8-bay Synology. This is across 6 years of usage. I only buy Exos now. As always, your mileage may vary.
 
If I were to buy a Seagate drive, I would definitely go Exos. The best deal I see currently is from Provantage: https://www.provantage.com/seagate-st16000nm001g~7SEGE1K1.htm

Strangely, not only is the Exos X16 much better specced than the IronWolf / IronWolf Pro, but they're also almost universally *less expensive*.

A couple of other features I look at on these huge spinning disks are Self-Encrypting Drive (SED) / Instant Secure Erase (ISE) support, and 4K-native sectors vs. 512-byte emulation. Although there is a trade-off between fail-safe and fail-secure with SED/ISE, I really like being able to wipe a drive quickly before repurposing it. And for Pete's sake, how long do we need to keep emulating 512B sectors? 4K native is more efficient and should be supported by pretty much everything at this point. Unfortunately, ISE 4Kn units often cost a lot more because they're less popular than the lowest-common-denominator models.

Generally I tend to prefer HGST drives (now integrated into WD) over Seagate, but until the 18TB Ultrastar DC HC550 becomes more widely available, Seagate is definitely winning on both overall capacity and price/TB.
 
And for Pete's sake, how long do we need to keep emulating 512B sectors? 4K native is more efficient and should be supported by pretty much everything at this point.
If your software reads/writes multiples of 4KiB, your 512e/4Kn drive does no emulation - the physical sectors on the disk are always 4KiB.

Only when the OS reads/writes 512B sectors, or misaligned larger transfers, does the disk firmware emulate the smaller sectors.
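As a rough sketch of that rule: a transfer only avoids the firmware's read-modify-write path if both its starting offset and its length are multiples of 4 KiB. (The function name and examples here are mine, just to illustrate the arithmetic.)

```python
def needs_emulation(offset_bytes: int, length_bytes: int, physical: int = 4096) -> bool:
    """True if a 512e drive's firmware must emulate smaller sectors:
    the transfer starts or ends off a physical-sector boundary."""
    return offset_bytes % physical != 0 or length_bytes % physical != 0

# A 4 KiB-aligned 64 KiB write: handled natively, no emulation.
print(needs_emulation(1_048_576, 65_536))   # False
# A legacy 512 B write at LBA 63 (old MBR alignment): firmware must emulate.
print(needs_emulation(63 * 512, 512))       # True
```

This is why modern partitioning tools start partitions at 1 MiB: that offset is a multiple of every plausible physical sector size.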

 
If your software reads/writes multiples of 4KiB, your 512e/4Kn drive does no emulation - the physical sectors on the disk are always 4KiB.

Only when the OS reads/writes 512B sectors, or misaligned larger transfers, does the disk firmware emulate the smaller sectors.

OK, I'm sure you're right about this, and I should just accept that having the emulation layer available in the firmware is a win all around. I just shudder at the thought of letting Windows XP and its misaligned writes anywhere near a new 16TB HDD.
 
OK guys, I bought 6 of these 16 TB Exos drives. They are great. I installed OpenZFS (https://openzfsonosx.org/) on them and I was as happy as a clam... but then I realized that they are formatted with 512-byte sectors, and it would be preferable to have 4096-byte sectors.

So I checked out the Seagate "SeaChest" tool, which is only available for Windows and Linux. I couldn't get my Mac Pro to boot into Linux easily (after lots of trying). I removed all 6 of the drives, borrowed a USB hard drive enclosure from a colleague, and wrestled with SeaChest all afternoon (using a Windows laptop), only to discover that SeaChest will only perform the sector-size reformatting if the drive is internal to the Windows computer. Windows sees the USB enclosure's bridge vendor rather than the Seagate drive itself, which blocks SeaChest. I even called Seagate, and they confirmed that this is the case. Argh!

So I'm pretty stuck on trying to change the sector size from 512 to 4096. One of my colleagues thinks that I will get slightly better performance in OpenZFS if I'm able to ensure that the sector size is 4096.
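For what it's worth, OpenZFS can be told to use 4 KiB allocation units at pool-creation time regardless of what sector size the drive reports, via the `ashift` property, so the alignment benefit may be achievable without reformatting the drives at all. A sketch (the pool name and device names are placeholders; on macOS the devices would be `/dev/diskN`):

```shell
# ashift=12 => 2^12 = 4096-byte allocation units, even on a 512e drive.
# "tank" and the disk names below are hypothetical; substitute your own.
sudo zpool create -o ashift=12 tank raidz2 disk2 disk3 disk4 disk5 disk6 disk7

# Verify the pool was created with the expected ashift:
zdb -C tank | grep ashift    # should report ashift: 12
```

Note that `ashift` is fixed per vdev at creation time, so it has to be set before data goes on the pool.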
 
One of my colleagues thinks that I will get slightly better performance in OpenZFS if I'm able to ensure that the sector size is 4096.

Without detailed, accurate measurements of how your actual application(s) use those drives, there's no way to tell whether 0.5K or 4K sectors will be faster. There are far too many variables: caches, load, size of the writes, size of the reads, sequential vs. random access, OS, 3rd-party tools, RAID levels, ... they all play a role.

If you've already spent a day on it, that last drop of performance might not be worth it. Any SSD solution is going to be way faster anyway. So I rather fear you've already invested more time chasing a marginal improvement than that improvement could ever pay back in a reasonable timeframe.
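If you do want to measure rather than guess, a tool like fio can compare block sizes directly on the actual hardware. A sketch with placeholder file paths and sizes (O_DIRECT support varies by platform, so `--direct=1` may need to be dropped on macOS):

```shell
# Random writes at 512 B vs 4 KiB; --direct=1 bypasses the page cache
# so the drive's own sector handling is what gets exercised.
fio --name=bs512 --filename=/tank/fio.test --rw=randwrite --bs=512 \
    --size=1g --direct=1 --runtime=60 --time_based
fio --name=bs4k  --filename=/tank/fio.test --rw=randwrite --bs=4k \
    --size=1g --direct=1 --runtime=60 --time_based
```

Comparing the reported IOPS and latency between the two runs tells you what the emulation path actually costs on your workload, which is the only number that matters.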
 
Without detailed, accurate measurements of how your actual application(s) use those drives, there's no way to tell if 0.5K or 4K sectors will be faster.
Also note that these drives always have 4KiB sectors - but their firmware can emulate 512 byte sectors if it gets 512 byte operations.

If your disk formatting is aligned for 4KiB sectors, and your software issues commands for multiples of 4KiB - there's no emulation and no performance loss.
 
Well, I couldn't help it! Today I borrowed a PC from a colleague, and I swapped the drives (one at a time) into it and out of it, and changed the sector size from 512 to 4096 on each one (using the SeaChest software from Seagate). Now I put all of the drives back into the Mac Pro, created the zfs pool for them again, and I'm all set! All in all, this probably cost me two more hours of time today, but I will rest well, knowing that this is finished.
 
Well, I couldn't help it! Today I borrowed a PC from a colleague, and I swapped the drives (one at a time) into it and out of it, and changed the sector size from 512 to 4096 on each one (using the SeaChest software from Seagate). Now I put all of the drives back into the Mac Pro, created the zfs pool for them again, and I'm all set! All in all, this probably cost me two more hours of time today, but I will rest well, knowing that this is finished.
And, considering that it takes much of a day to simply do a read scan on a 16TB drive - you know that it didn't change anything on the platters. You changed a bit (probably literally one bit) of meta-data in the NVRAM on the drive to disable the 512e emulation. It was a drive with 4KiB sectors before, and a drive with 4KiB sectors after.
 
@AidenShaw thank you very much for this information. I'm going to share this with our sysadmin.
I am learning a great deal from you! (I realize that I perhaps wasted a great deal of time on this.)
 
You guys helped me so much in preparing these drives for their Pegasus enclosures.
Now I'm having difficulties that neither I nor my sysadmins can figure out; all of us have used ZFS for a long time on Linux servers, but I'm really struggling with OpenZFS on these drives in the Pegasus enclosures on the Mac Pro. Just FYI, I posted a thread about this here; I am definitely open to suggestions.

 