Some noobie questions on NAS

Discussion in 'Mac Accessories' started by el-John-o, Jan 26, 2013.

  1. el-John-o, Jan 26, 2013
    Last edited: Jan 26, 2013

    el-John-o macrumors 65816

    Joined:
    Nov 29, 2010
    Location:
    Missouri
    #1
    Hey all,

    So someone gave me an old server they were using in their office, and I am using it as a home server. It's an older single-core Xeon machine with a 4-port RAID controller, plus 4 more SATA 1.5Gb/s connections on the motherboard. I had several drives I'd been using via USB into my Time Capsule, or just tossed aside and unused, and this machine has enough room for me to stick all of them in it!

    So my question is about speed, specifically getting the most out of gigabit ethernet. I'm not working with 4K video or anything like that, but I am working with a lot of DSLR images, so being able to saturate GbE would sure be nice.

    Now, maybe my math is all wrong or there is a major concept I'm not understanding, but here's the issue. Internally, most of my drives run around 100MB/s (I'm running Ubuntu Server, so from the OS X terminal I SSH into the machine and run hdparm). Currently I have a RAID 1 array of two of the original 80GB drives included in the machine, which is for the OS. Then I have a 2TB, a 1TB, and a 500GB drive mounted inside. The fastest is the 2TB drive; I just ran hdparm and got back a buffered disk read figure of about 114 MB/s.
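
    For reference, what I'm running over SSH is roughly this (/dev/sdb is just an example device name; substitute your own):

        # simple sequential-read benchmark on one drive
        sudo hdparm -t /dev/sdb    # "Timing buffered disk reads" - the ~114 MB/s figure
        sudo hdparm -T /dev/sdb    # cached reads - exercises RAM/cache, not the platters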

    Now, via GbE to either my desktop or my MacBook Pro, I get about 57 MB/s moving large files, on any drive or the RAID array. (The router is an Apple Time Capsule, which feeds a GbE switch at my desk serving both the desktop and the laptop; the router is on the other side of the room, directly attached to the server, printer, and modem.) My math says 57 MBytes per second is about 456 MBits per second. That's less than half the theoretical speed of GbE (and close to the theoretical speed of 5GHz Wireless N, though I know you'll never reach that!). I also know that, in theory, I should be able to hit 125 MBytes per second over GbE, a smidge faster than my drives' internal speed. (It's all SATA 1.5Gb/s inside that server, but since it's a GbE NAS I wouldn't benefit from faster internal speeds anyway, nor are these 7200RPM drives likely to saturate SATA II or SATA III.)
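
    To show my working, here's the conversion as a quick shell sanity check:

        # bytes-to-bits sanity check (any POSIX shell)
        echo "$((57 * 8)) Mbit/s"     # observed: 57 MB/s is 456 Mbit/s on the wire
        echo "$((1000 / 8)) MB/s"     # theoretical GbE ceiling: 125 MB/s, before overhead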

    So here are my questions:

    1) Am I misunderstanding hdparm's readings? Are the drives actually running slower than that?

    or;

    2) Is there something wrong with my math? Am I misunderstanding how these things convert in terms of bits and bytes?

    or

    3) Do I have a performance issue somewhere, such that I can't see more than around 450-500Mbit/s of throughput? Is there anything I can do to fix that?

    Thanks!

    EDIT:

    Over Wireless N I'm getting a pathetic 14MB/s, if that matters. I'm not worried about wireless performance though; whenever I need speed I'm also attached to my 27" LED Cinema Display, and GbE.

    Also, to eliminate parts from the equation, I plugged the MBP directly into the Time Capsule and got the same speeds as I got through the switch at my desk. My PC gets the same speeds as well (it's also behind the switch at the desk; the switch connects to the Time Capsule via a 50ft Cat6 cable).

    EDIT 2:

    It may also be worth mentioning that I get the same speeds from the Time Capsule's internal 2TB drive (right around 57MByte/s).
     
  2. drsox macrumors 65816

    Joined:
    Apr 29, 2011
    Location:
    Xhystos
    #2
    The throughput from a RAID array depends on the controller as well as the drives. Is your controller real hardware RAID, or software RAID running on a dedicated card? Some controllers that pretend to be hardware are actually running in software. If so, then it's the CPU/memory and the OS that are the limiting factors, as well as the slowest drive. Try benchmarking the oldest/slowest drive - that's the best you will get (theoretically).

    I have a fast Core2Duo RAID NAS with 5x 5900rpm drives. With dedicated hardware RAID and Gbit NICs I get 90+MBps (720+Mbps) read and write rates.

    When I did some data-rate benchmarking between two 2011 Win7 PCs with SSDs running through a gigabit switch, I achieved 110-120MBps (880-960Mbps). So part of your problem will be the old 80GB drives that you used for the OS. When I ran the same tests between a PC and a 2012 Mac Mini with an SSD I saw similar rates.

    If you want to run data-rate benchmarks, then you might want to get a copy of IOMeter - it runs fine in Ubuntu, but not in OS X. See: http://www.iometer.org/doc/matrix.html.
     
  3. el-John-o, Jan 26, 2013
    Last edited: Jan 26, 2013

    el-John-o thread starter macrumors 65816

    Joined:
    Nov 29, 2010
    Location:
    Missouri
    #3

    I wouldn't know how to tell the difference between a true hardware RAID controller and a software-pretending-to-be-hardware RAID controller. But presently, only the OS is on the RAID controller. The other drives are attached directly to the motherboard, not in a RAID at all. In the future, I'd like to buy some new drives and set up a RAID1 or RAID10 configuration. Presently, DSLR images get copied to both the 2TB and the 1TB drive, as I haven't yet hit 1TB worth of images, and the 500GB drive is just used for miscellaneous storage (my iTunes library, for one, but I have iTunes Match so redundancy isn't an issue there). Obviously, a much better data redundancy solution would be a RAID1 configuration, but that's not what I've got right now. The 2TB Time Capsule drive is used for Time Machine backups.

    So, with that in mind, considering the storage drives are on a separate 'controller', is my slow/old RAID still slowing me down? Furthermore, why is it that internally (via SSH) I'm able to benchmark around 110MB/s, but externally over GbE only half that? If the hard drives, the OS, or even the CPU were the issue (it's a single-core 3.2GHz Xeon, which should be plenty fast for a NAS, shouldn't it?), wouldn't that show up in the in-OS benchmarks too?

    Total newbie to this stuff, so I'm trying to learn as I go. If performance were paramount, I'd go out and buy new equipment. But still, I figure there's no reason not to have the equipment I currently own running the best it can, right?

    So basically what I'm asking is: why would I get 110-ish within Ubuntu, and 57 to the client computers? Would the OS/slowest drive/CPU still be the cause there?

    By the way, is there any way to benchmark the throughput of a NIC without being bottlenecked by the drives? It's a gigabit NIC, obviously, but is it possible it's just a poorly performing NIC? This is an IBM eServer from, I think, 2004, so it's quite possible the NIC isn't performing up to snuff.

    Edit: Also, IOMeter APPEARS to be a GUI-dependent tool, unless I'm looking at it wrong. I'm running Ubuntu Server, which has no GUI. Is IOMeter something I can run purely from the terminal?

    Edit2: For the record, the RAID array is running at about 57MB/s - right about the speed I hit over GbE to the faster drives. Is that my bottleneck? If so, just to satisfy my curiosity, could you explain why? The drive the OS is on is separate and on a separate controller.

    If that's the case though, it's an easy fix. I can ditch those dusty old drives and just put Ubuntu server on the 500GB drive or something.

    By the way, is there any advantage in a GbE NAS environment to going faster than SATA 1.5Gb/s? Just curious. My thought is that with a 1Gbps network connection, 3Gbps or 6Gbps is still just 1Gbps as far as the clients are concerned.
     
  4. drsox, Jan 26, 2013
    Last edited: Jan 26, 2013

    drsox macrumors 65816

    Joined:
    Apr 29, 2011
    Location:
    Xhystos
    #4
    1. To benchmark using IOMeter you need to run it on a client device. It doesn't run in OS X, so if you have a Win PC or an Ubuntu PC, run it on that. Once you have it running, you can get it to test any drive config that you can put a "target" on. So you can test one of the old 80GB drives (take it out of RAID and just test it), then the whole RAID config as a single entity, then the other drives.

    This will show you whether the RAID is limiting or the old(er) drives.

    2. NIC benchmarking requires you to put a test file in RAM and then run it to/from RAM in another machine. You can create a RAM drive in Ubuntu (I think so), and you used to be able to create one in WinXP. Otherwise you need SSDs to eliminate drive latency etc.
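
    In Ubuntu something like this should do it (mount point and size are just examples):

        # make a RAM disk so the drives are out of the loop entirely
        sudo mkdir -p /mnt/ramdisk
        sudo mount -t tmpfs -o size=1g tmpfs /mnt/ramdisk
        # drop a big test file in RAM, then time copying it over the network
        dd if=/dev/zero of=/mnt/ramdisk/test.bin bs=1M count=512

    Alternatively, a tool like iperf (iperf -s on the server, iperf -c <server> on a client) measures the raw TCP path with no disks involved at all.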

    There are also some parameters in the NIC that might need changing. I forget what they are for now, but I'll look at the Intel Pro/1000 that I still use; you might find the equivalent in the IBM. 2004 might still be OK, but if you aren't using a PCIe NIC then it will already be slow. Maybe your server has a PCI-X bus (very long connector).

    UPDATE: The NIC parameter I used to mess with is called "Flow Control". Depending on what the other end of the cable is directly connected to (switch, PC, NAS etc.), you might need to adjust the settings ON BOTH ENDS.
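
    On the Ubuntu end you can check and change it with ethtool (eth0 is the usual interface name; yours may differ):

        # show the current flow-control (pause frame) settings
        sudo ethtool -a eth0
        # enable RX/TX flow control; do the equivalent on the switch if it's managed
        sudo ethtool -A eth0 rx on tx on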

    UPDATE UPDATE: Have you enabled Jumbo Frames? If you have, disable it everywhere. In my experience it's no use at all and can cause all manner of throughput issues.
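
    An easy way to check is the interface MTU - 1500 is standard frames, 9000-ish means jumbo is on (eth0 assumed again):

        # check the current MTU
        ip link show eth0 | grep mtu
        # drop back to standard 1500-byte frames if it has been raised
        sudo ip link set eth0 mtu 1500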

    3. SATA2 vs SATA1 matters for the RAID controller but has pretty much no impact on NIC transmit/receive rates.

    4. If your RAID and your NIC are hitting the same lowish rates, then the RAID is likely to be the limiter. You said that your "data" drives are unRAIDed but the older 80GB drives ARE RAIDed. Why have you RAIDed the OS - capacity? If they are only 80GB drives then they will be slow.

    Do you have the means to do the following ?

    Put your fastest single drive as the OS drive.
    Install only your second fastest drive as the sole data drive.
    Then do some tests.

    IMO you might get a BIG improvement if you just bought a small SSD and used that as your OS drive. I've been doing that for years - OS=SSD; Data=NAS.
     
  5. el-John-o, Jan 26, 2013
    Last edited: Jan 26, 2013

    el-John-o thread starter macrumors 65816

    Joined:
    Nov 29, 2010
    Location:
    Missouri
    #5
    Gotcha. I had been using Blackmagic Disk Speed Test for Mac OS X, selecting the network drives as targets. It appears to give the same numbers as IOMeter in Windows, but I'll use IOMeter in the future since you recommended it.

    No jumbo frames. I'll mess with flow control. It's an onboard NIC though - is that the issue then? This board does have PCI-X; it's what the RAID card is on. I may look for a new NIC, though I don't know if I want to spend any money on it. It works 'good enough' for now, and I've been toying with the idea of building a new server, or buying a purpose-built NAS unit. I'm running Ubuntu Server right now, but it's way more than I need; a simple Atom, i3, or AMD Sempron machine running FreeNAS is all I need. I do have Subversion installed so that I can access files remotely and securely using a Subversion client, but I've used that exactly 0 times, and with a 4Mbit upload speed at home it's as useless as a screen door on a submarine!

    Right, I understand that it won't affect NIC speeds, but I was wondering if it offered any speed advantage inside the NAS. My thinking is, it doesn't matter how fast the drives are if I can only read them at 1Gbps. Or am I not thinking about it correctly?


    I RAIDed the OS... 'cuz'. No REAL reason at all. When I got the server it had 4 hot-swap 80GB drives and a RAID controller, with enough room for other drives connected to the motherboard. A single 80GB drive is plenty of storage for the OS, though. I was thinking about buying a second WD Green 2TB drive, getting rid of two of the 80GB drives, and running two RAIDs: maybe a RAID0 boot drive for a bump in speed, a RAID1 of the 2TB drives as a 'data' drive, and the 1TB still out on its own as a spillover, non-redundant data drive (again, stuff like the iTunes library).

    Can you explain to me (just so I understand) why the speed of the boot drive matters? I'm under the (possibly mistaken) impression that the OS drive just needs to load the OS, and that when I access the other drives I'm accessing them independently. That's wrong, right? So how does a faster boot drive make the data drives faster? (I'm not at all disagreeing, just wanting to know the 'why' is all.)

    Anyway, I can do those things. I'll burn a LiveCD of DSL Linux, which runs entirely in RAM, and run tests on the RAID controller and on the motherboard ports, to see if either is the bottleneck and how the performance stacks up - something simple like the raw dd read sketched below.
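
    Run against one drive on the RAID card and then one on the motherboard (device names will depend on how the LiveCD enumerates them):

        # raw sequential read, dumped to /dev/null so no second disk skews the numbers
        sudo dd if=/dev/sda of=/dev/null bs=1M count=2048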

    The reason I ask about the performance of SATA1 as a NAS controller is that when/if I build a new server, I like the idea of having a hardware RAID controller. Mine is PCI-X, and I HAVE seen a few motherboards that still have PCI-X on them, so I could hang on to the RAID controller. I don't know that I'd want to BUY a new RAID controller though.


    UPDATE:

    Under DSL Linux I was able to get around 70 megs per second down over GbE. That's faster than before, and probably also the 'speed limit' for this older technology, but it's a usable speed. It means I can fill a couple of 8GB SD cards in my DSLR (I don't like big cards; smaller cards mean less lost in a failure!) and transfer them to the NAS in just a few minutes. That's the goal anyway, on this budget. Maybe I'll win the lottery, convert everything to 100GbE, get a Thunderbolt breakout box to connect the MBP to 10GbE, and build a fully redundant SAN using nothing but dual-CPU server motherboards and PCIe x16 SSDs! LOL

    I went ahead and pulled all of those old 80GB drives and stuck them on a shelf. I'm now running the 1TB and 2TB drives as data, and booting off of a thumb drive. I'm using FreeNAS, though, which is DESIGNED to boot off of a thumb drive; from what I read, it works differently and optimizes around the fastest drive. Currently I'm waiting on all of my files from the 2TB drive to copy over (the 1TB is just a faux-mirror) so that I can initialize the two drives and format them with FreeNAS's file system for the best performance.

    From everything I read, FreeNAS should optimize performance despite having a slower boot drive. However, it can still benefit from an SSD used as part of the file system (not for data or as the boot drive; still reading and learning more about that), so I will strongly consider that as well. I think in the short term I'm going to get a second 2TB drive and create a RAID1 mirror with it, plus a second Hitachi 1TB drive, so I'll have 2TB and 1TB redundant network volumes, and a small SSD. Then, maybe later on, build a new machine to house the drives, or buy a purpose-built NAS device.
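
    From what I gather, what the web UI does under the hood when I set up that mirror is roughly this (pool and device names here are my guesses, not my actual config):

        # mirror two drives into a ZFS pool (FreeBSD-style device names assumed)
        zpool create tank mirror ada1 ada2
        zpool status tank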

    I have to say I'm really liking FreeNAS. I'm fairly comfortable with the terminal, but FreeNAS has a built-in web interface and it's really nice! It's also got all of the features I imagine needing in a NAS. Works well!

    UPDATE 2:

    Well, I think I'm pretty well satisfied with what I've got here. A little tweaking and I'm running around 80MB/s, so right around 640Mbps. Faster would be better! But I don't think I'm going to get much more out of consumer-grade equipment. I also suspect the Apple Time Capsule may be a bit of a bottleneck. My next 'test' is going to be moving my GbE switch, connecting my MBP (since it's portable!) and the server to the switch, and seeing if my speeds improve. If they do, I may rearrange my network connections so that the MBP, desktop PC, and server are all on the same gigabit switch, which is then connected to the Time Capsule.
     
  6. drsox macrumors 65816

    Joined:
    Apr 29, 2011
    Location:
    Xhystos
    #6
    Good news. It seems you have fixed the main problem.
    I thought the old 80GB drives were the problem. 80GB is a long time ago for drives; they will have high latency as well as low data-transfer rates.

    I never used FreeNAS, but I did build a NAS from Ubuntu Server. It worked fine, but I switched back to a ReadyNAS.

    Good luck with your tweaking.
     
  7. el-John-o thread starter macrumors 65816

    Joined:
    Nov 29, 2010
    Location:
    Missouri
    #7
    Ironically, one of the 80GB drives has returned!

    The USB ports on this thing are USB 1.1, so FreeNAS was running very slowly off the thumb drive, as you might imagine. So I re-installed FreeNAS on one of the 80GB drives. FreeNAS handles drives a little differently, from what I gather; for one, the boot drive is unusable for NAS storage - it's completely separate.

    Anyway, drive speeds are the same, but directory loads are quicker now and so is interfacing with the OS. Couldn't be happier! I'd still love to saturate that gigabit connection, but I don't think it'll be possible with the equipment I've got.

    Curiously, I get the same speeds to my NAS as I do to my Time Capsule. So my first thought was that the Time Capsule was the bottleneck. But connected directly to the server via the switch (no Time Capsule), speeds are once again the SAME as they were before. SAME on the desktop. So I'm thinking about trying some different cables and things. Right now there is a 50ft Cat5e cable running from the Time Capsule to my desk, and on my desk is a 4-port gigabit switch for the MBP and the desktop, so I'm wondering if that 50ft cable is a bottleneck.

    Or I may just quit while I'm ahead and accept this 'good enough' performance. It's what, like, 70 or 80% of the speed of the drive? That's pretty decent for a home network with free/cheap equipment!
     
  8. drsox macrumors 65816

    Joined:
    Apr 29, 2011
    Location:
    Xhystos
    #8
    Yes, great!

    When I had to do some serious troubleshooting, I made sure I tested ALL the links in the chain. You might also get a network link tester to see how good your cables are. Not expensive - 20 or so.

    Get a good-quality Cat6 cable (it doesn't have to be long) and use it to test against your existing cable. 50ft isn't a problem, but if your cable is iffy, replace it with good Cat6.

    I guess you don't have any wall plugs in your network or any pre-installed cabling. These can also be an issue (I installed my own with punch-down frames).

    I still think that an SSD will help, but it might be marginal. 80MBps is OK for most things.
     
  9. el-John-o thread starter macrumors 65816

    Joined:
    Nov 29, 2010
    Location:
    Missouri
    #9
    Yeah, FreeNAS recommends an SSD. You don't put the OS on it; instead it gets absorbed into the ZFS volume as a cache, which from what I gather gives much the same benefit as having the OS on an SSD in a more traditional OS environment.
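
    From what I've read, that amounts to adding the SSD to the pool as a read cache (L2ARC) - something like this, with made-up pool/device names:

        # attach an SSD to an existing pool as an L2ARC read cache
        zpool add tank cache ada4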

    No, no in-wall ethernet. I don't own the home I live in right now (just here for a little while during grad school), so I just have the 50ft cable running along the baseboard trim. It's all in the same room (home office), but the cable modem/printer/Time Capsule are all on the opposite side of the room!

    As far as the SSD goes, I think I'll end up building a machine in the next few months: a newer CPU, USB 3.0 with a USB 3.0 thumb drive for FreeNAS (like I said, they recommend booting the OS from a USB thumb drive, not a hard disk), a single SSD, and a set of four 2TB drives in a mirrored RAIDZ (so a 4TB redundant volume). I'd love to get a NAS setup that'll saturate GbE!
     
  10. drsox macrumors 65816

    Joined:
    Apr 29, 2011
    Location:
    Xhystos
    #10
    Sounds like a plan.

    Just a few comments - you may well know these :
    1. RAID is not a substitute for a backup - it's only a form of resilience against drive failures. If your mobo etc fails then you are stuffed. (Excepting a duplicate NAS or a helpful colleague).
    2. A 4-drive RAIDZ is a form of RAID5, and you will get 6TB of usable space. If one drive fails, the data is preserved and accessible but is not redundant again until you replace the defective drive. If you want resilience against 2 failed drives, use RAIDZ2, but then you lose more storage space.
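
    For illustration, the two layouts look roughly like this at the zpool level (pool and device names are placeholders; pick one or the other):

        # RAIDZ  (single parity): 4x 2TB -> ~6TB usable, survives 1 drive failure
        zpool create tank raidz  da1 da2 da3 da4
        # RAIDZ2 (double parity): 4x 2TB -> ~4TB usable, survives 2 drive failures
        zpool create tank raidz2 da1 da2 da3 da4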

    See this for more RAID info :

    http://en.wikipedia.org/wiki/RAID#RAID_1
    http://en.wikipedia.org/wiki/Non-standard_RAID_levels#RAID-Z
     
  11. el-John-o thread starter macrumors 65816

    Joined:
    Nov 29, 2010
    Location:
    Missouri
    #11
    Well, I tend to go with the most redundancy I can for the amount of data I have. For example, right now I have (essentially) a RAID1 volume, despite it costing me 2TB of capacity (it's a 2TB and a 1TB disk in a RAID1 volume, which yields 1TB), because I don't need more than 1TB right now. So I imagine I'll run RAIDZ2 and add drives, or, if push comes to shove, as I understand it, it can be rebuilt into a RAIDZ (RAID5 equivalent) later for more storage.

    Also, I do an off-site backup. I'm a big believer in backup and data redundancy. The main reason for wanting RAID is so that I don't have to stop working. My off-site backup is there in case of catastrophe: fire, or total and complete equipment loss. But in the more likely event of a single drive failure, I don't want to wait hours and hours (on top of having to source another drive) to have my data and my workflow back. My reasoning for a redundant array is that if I lose a drive, I can keep working even if I don't have another drive handy, or have to wait for a drive to be RMA'd (and I'm not forced to buy another drive to keep working even if the one that failed is under warranty). That's another reason why I'll likely go with RAIDZ2. Basically, I want to be able to lose a drive and keep going, replace the drive at a convenient time, and not be kept from my files or forced to access them via the internet (30Mbit) instead of my LAN (theoretical 1000Mbit).
     
  12. drsox macrumors 65816

    Joined:
    Apr 29, 2011
    Location:
    Xhystos
    #12
    Good idea. Glad to see someone who knows what they are doing.

    Me, I have a slightly different approach. By a circuitous route I have ended up with 3 identical NAS units. Two are in daily use, with the 3rd as a backup that is turned on once a week for mirroring. I only use single-drive redundancy. One NAS has 5x 2TB drives, one has 2x 2TB drives, and the backup has 4x 2TB and 2x 3TB drives.

    So I am protected against a single drive failure and PSU/Mobo failure (just move the RAID sets to another unit). I have 2 drives ready for swap out and the NAS will rebuild automatically. I also have a 1TB USB drive in the bank.

    Most of the stuff is Video (6TB) with 400GB of Music and 250GB of actual data.

    Last comment: are you using a UPS? I have all 3 NAS units on UPS - 2 on the same one. I only expect this to allow a graceful shutdown, not continued operation.
     
  13. el-John-o thread starter macrumors 65816

    Joined:
    Nov 29, 2010
    Location:
    Missouri
    #13
    Not yet; it's on my shopping list though (this stuff is expensive!). I had one for years, but it failed. I was going to replace the SLA battery, but then I misplaced the unit in a move (luckily that and a few knickknacks are all I lost, nothing important).

    I have disabled the motherboard's hard disk cache and similar features (which did drop my speed a bit, but not a lot); I was just thinking of that today! I figure this will offer some better protection in the event of sudden power loss.

    You've given me an idea though. I may build a new NAS (or buy a purpose-built one) and use this old server as a mirror backup. I may even hook it up at my office and use it as an off-site mirror, syncing over secure FTP or similar somehow! hehe.
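
    If I ever do, I imagine something like rsync over SSH would handle the mirroring (host and paths here are invented, just to sketch the idea):

        # one-way nightly mirror to the office box over SSH (run from cron, say)
        rsync -avz --delete /mnt/tank/data/ user@office-server:/mnt/backup/data/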
     
  14. drsox macrumors 65816

    Joined:
    Apr 29, 2011
    Location:
    Xhystos
    #14
    What fun!
     
  15. el-John-o thread starter macrumors 65816

    Joined:
    Nov 29, 2010
    Location:
    Missouri
    #15
    Well DANG! I did it!

    I stumbled upon something that said Unix (NFS) shares are much faster than Samba shares, especially from FreeBSD (which is what FreeNAS is based on). So I disabled the Samba share and configured an NFS share. I did NO tweaking and am nailing 115-120MB/s read speeds. Directories open pretty quickly too (though I've been on an SSD for a while now, so I don't think I'll ever be satisfied with directory read speed).
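
    For anyone who finds this later: the manual equivalent of what the FreeNAS GUI set up is roughly an exports line on the server plus an NFS mount on the Mac (addresses and paths here are examples, not my exact config):

        # server side, /etc/exports (FreeBSD syntax): export the data set to the LAN
        /mnt/tank/data -network 192.168.1.0 -mask 255.255.255.0

        # Mac side: mount the export
        sudo mkdir -p /Volumes/nas
        sudo mount -t nfs 192.168.1.10:/mnt/tank/data /Volumes/nas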

    Wowza! All that tweaking I did to push 80 megs, and a vanilla NFS share is nearly saturating gigabit ethernet... it also happens to be running at basically internal speeds. I couldn't be happier! My 13" laptop (and my desktop) have an array of redundant drives operating at internal speeds! Woot!

    Who knew a Microsoft protocol would be bloated with unnecessary features and tweaks and operate at an unnecessarily slow speed compared to the simple Unix alternative? hehe.

    Oh, and uh, here's the kicker: I'm hitting 30MB/s over Wireless N, which means Wireless N is now usable... It tells you a lot about Wi-Fi, though. I have a source proven to operate at 1000Mbit/s, connected to a 450Mbit/s wireless router that is about 12 feet away in the same room with no obstructions, and I'm getting around 240Mbit/s out of that 450Mbit/s connection...

    I'm looking forward to gigabit Wi-Fi, but it should be criminal how inflated the marketing of these wireless standards is. I GUESS it's a theoretical max, but in my experience they come nowhere near it...
     
  16. drsox macrumors 65816

    Joined:
    Apr 29, 2011
    Location:
    Xhystos
    #16
    Well done you.

    I have found that SMB is much poorer than AFP for Mac to/from NAS, so I'll have a go at enabling NFS to see what that does. Raw data rates aren't normally so important to me, as my MBA is not that powerful. When I want to do some heavy lifting, I fire up my i5 box and drive it through VNC. That's LAN-connected.
     
