MacRumors Forums > Apple Hardware > Mac Peripherals
Old Apr 27, 2012, 12:32 PM   #51
murphychris
macrumors 6502a
 
Join Date: Mar 2012
Quote:
Originally Posted by funkyc View Post
I've done a bit of research into NAS and am willing to invest in a good 4 or 5 bay enclosure. I'm also open to the idea of DAS using either a Mac Mini (have always wanted to buy one as a Media center attached to my tv) or using my MBP (I'm planning on buying a new 15" when they get refreshed) as a server...

What's the best solution out there? Thanks for all the ideas so far - keep them coming! They're all v.useful but I'm kinda just getting more confused lol
I don't think you'll find much agreement on what's the best solution.

If you aren't into tinkering, I'd look at QNAP and ReadyNAS for commercial products with support.

If you tinker, you could get something like this and drop either FreeNAS or Nexenta on it.

Either way, you've got an R&D project on your hands, because there are so many ways to do what you want. Actual implementation is a separate project, but the R&D project should model the implementation.

It's important to understand that RAID 1, 5, 6 or Z are not backup strategies. They're fault tolerance strategies, to allow disk failure while data is still available. You'd still need a way to do data replication either to another NAS or DAS or off-site. With some customization you can choose to replicate only certain things locally, and other things both locally and off-site. And this can happen automatically.
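To make the "some things local, some things local plus off-site" idea concrete, a replication policy really just boils down to a rule table like this (an illustrative Python sketch; the folder names and target names are all made up):

```python
# Toy replication policy: route each top-level folder to zero or more
# replication targets. All names here are hypothetical.
POLICY = {
    "Photos":    ["local_nas", "offsite"],  # irreplaceable: replicate twice
    "Documents": ["local_nas", "offsite"],
    "Movies":    ["local_nas"],             # re-rippable: local copy only
    "Scratch":   [],                        # don't replicate at all
}

def targets_for(folder):
    """Where a folder should be replicated; default to a local copy."""
    return POLICY.get(folder, ["local_nas"])
```

A cron job or the NAS's own scheduler then walks the table and runs the actual copies, which is how "this can happen automatically" works in practice.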

Other than familiarity, the Mac Mini doesn't bring much to the table. It has a bigger footprint than an all-in-one enclosure. And since there's no eSATA, you have to sort out which connector to use and the consequences of that choice: USB is slow, Thunderbolt is still kinda expensive, and FW800 can be saturated by a single disk. The common FW800 bridge chipsets also don't pass ATA commands through to the disks, so you'd lose the ability to use SMART monitoring to maybe get some advance notice of disk failure.
Old Apr 30, 2012, 05:48 AM   #52
funkyc
macrumors regular
 
Join Date: Sep 2008
Thanks for the info murphychris. I had originally wanted to get something simple like a Drobo, but after reading lots of not very positive reviews, I've decided to go along the lines of a Synology or QNAP.

Now to choose which one will be easiest to set up and also provide me with the abilities I'm after. I think for future-proofing I may go for a 4 or 5 disk setup - I believe there's a 5-disk one (the DS1512+) but I think it may be way overkill for me. I like the hot-swappable drives and the ability to expand storage if necessary one day. However, I'm thinking that by the time I have THAT much data stored on the NAS, there'll be newer, faster, better ones, so I may just save a little bit and get the DS412+, which has basically the same features but is not expandable.

Do the upgraded CPU and extra RAM make much difference in a NAS setup? Or should I save even more money and get one of the older models that are significantly cheaper?
Old Apr 30, 2012, 06:19 AM   #53
DavoteK
macrumors regular
 
Join Date: Jan 2012
Reading what the thread starter was after was like reading what I required prior to getting a Synology DS411J.

4 Bay, up to 12TB of storage, multiple options.

Takes a little bit of time getting set up, but oh so worth it. Can't imagine not having one now.

Only using 2 bays currently. Setting up a time machine backup to it was like "is that it?".

While it can seem daunting, the end result is well worth it, I'd advise another crack at it. Everything you've said you wanted to do so far can easily be done by a NAS and be accessible for multiple users.

Best purchase I've made for my network (aside from the Macs themselves obviously )
Old Apr 30, 2012, 07:40 AM   #54
radiogoober
Banned
 
Join Date: Jun 2011
DavoteK, how do you back up your Synology? I.e., if the Synology fails, where is your data safe?
Old Apr 30, 2012, 01:04 PM   #55
murphychris
macrumors 6502a
 
Join Date: Mar 2012
Quote:
Originally Posted by funkyc View Post
have decided to go along the lines of a synology or qnap.
Check how upgrades are performed - both firmware upgrades and growing the storage. It's best to learn how these processes work in advance, since growth is a key element in your decision. I'm not specifically familiar with how upgrades and array growth work on Synology or QNAP, but with some products (offhand I can't remember which) I have read of people finding that to grow an array they had to blow it away and start from scratch. Although rarer, I've also seen this for firmware upgrades.

Quote:
Do the upgraded cpu and more ram make much difference in a NAS setup?
No. These are not CPU-bound processes at all. RAM will make some difference for caching performance. You're probably fine with 2GB of RAM and whatever CPU these come with. The OS will take up maybe 500MB at most, including the web server.

Quote:
OR should I save even more money and get one of the older models that are significantly cheaper?
As long as they support big drives, I'd be willing to consider an older model. You don't want to get stuck with something with an unusual BIOS that gets fussy when it sees disks over 2.2TB. Most BIOSs out there are pretty stupid and don't care about such things, allowing at least non-boot drives to use the GPT partition scheme, and thus use all available space on the disk.
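To put a number on where that 2.2TB ceiling comes from (this is the classic MBR limit, not anything Synology- or QNAP-specific):

```python
# MBR partition tables store sector counts in 32 bits, and most drives
# use 512-byte sectors, hence the magic number:
MBR_MAX_BYTES = 2**32 * 512
assert MBR_MAX_BYTES == 2_199_023_255_552   # ~2.2 TB (decimal)
# GPT uses 64-bit sector addresses, so the ceiling effectively vanishes.
```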
Old May 12, 2012, 05:52 PM   #56
marc.garcia
macrumors regular
 
Join Date: Jul 2010
Quote:
Originally Posted by flynz4 View Post
Personally, I prefer DAS for application libraries. In general, such libraries are not meant to be shared... and if another user on your LAN mucks with them, they could make your own database unstable.

Does your NAS allow DAS modes by connecting via Firewire or eSATA? Personally, I am switching to Thunderbolt.

/Jim
I've read contradictory information about this in several places. Even in this very thread, some say it is no problem, but I think I agree with you. iPhoto was not meant to access its libraries on a network share... and I think it all gets worse if the host file system is Linux-based (i.e. ext3 or ext4).
Old May 12, 2012, 06:14 PM   #57
marc.garcia
macrumors regular
 
Join Date: Jul 2010
Quote:
Originally Posted by flynz4 View Post
Backing them up was always either impossible, or unstable.
Hi again - my QNAP NAS seems to be able to deal with offsite cloud services like Amazon S3, ElephantDrive, or even CrashPlan.
Quote:
I currently have 4 perfectly good NAS boxes that are all powered off. I doubt that I will ever use them again.
Damn, I should have tried to buy one off you before I settled on mine
Old May 13, 2012, 01:11 AM   #58
flynz4
macrumors 68040
 
Join Date: Aug 2009
Location: Portland, OR
Let me clarify.

I think that NAS boxes are a great solution if you want to share data across several computers. They are also the best solution if your computer is a laptop, and you want access to your full data when at home or office... and a subset (that fits on your laptop drive) when traveling. This is because your laptop remains mobile.

For me... I use an iMac when at home... and a MBA when mobile. I would MUCH prefer DAS, because the lack of mobility does not affect my iMac.


/Jim
Old May 13, 2012, 01:25 AM   #59
murphychris
macrumors 6502a
 
Join Date: Mar 2012
Quote:
Originally Posted by marc.garcia View Post
I've read contradictory information about this in several places. Even in this very thread, some say it is no problem, but I think I agree with you. iPhoto was not meant to access its libraries on a network share... and I think it all gets worse if the host file system is Linux-based (i.e. ext3 or ext4).
OK, well, I've tried it and it works. The thing is, the library databases (the itdb and xml files) are always local. But you can point iTunes at an iTunes Music folder on a NAS that contains the actual music. I've done it with NFS. I haven't tried it with AFP, but with automounted AFP it should be fine (otherwise you have to remember to manually mount the share before launching iTunes).

A gotcha might be what happens if you go back and forth between wired and wireless. I haven't tested that yet.

And the file system doesn't matter: I just copied 50GB of music from JHFS+ to ext4 and back to JHFS+, ran diff -qr, and it came up with no differences.
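If you want something stronger than diff -qr for that kind of round-trip check, hashing both trees works too. A rough sketch (the mount points in the comment are hypothetical):

```python
import hashlib, os

def tree_digest(root):
    """One digest covering every file under root (relative path + bytes),
    walked in a deterministic order."""
    h = hashlib.sha256()
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames.sort()                      # make the walk order stable
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            h.update(os.path.relpath(path, root).encode())
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
    return h.hexdigest()

# e.g. (hypothetical mount points):
# assert tree_digest("/Volumes/Music") == tree_digest("/mnt/nas/Music")
```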

As for simultaneous sharing of the music files themselves (mp3, aac), this is also not much of a problem, because most of the time they're just being read. Two computers can certainly read the same file at the same time with no difficulty, and usually two users won't be accessing exactly the same music file anyway. But nothing says you have to share the same folder; everyone in the family can have their own logical volume, with their own stuff.

Quote:
Originally Posted by flynz4 View Post
For me... I use a iMac when at home... and a MBA when mobile. For me, I would MUCH prefer DAS because the lack of mobility does not affect my iMac.
DAS is easier so long as your storage requirements don't exceed one disk. A NAS box is essentially DAS that it makes available over a network, so you have all of the issues of DAS plus network issues.

But if you care about data integrity and need more than a couple TB of storage, aggregating disks into a single pool of storage, rather than a set of named disk icons, makes some management aspects better.
Old May 13, 2012, 04:36 AM   #60
flynz4
macrumors 68040
 
Join Date: Aug 2009
Location: Portland, OR
Quote:
Originally Posted by murphychris View Post

DAS is easier so long as your storage requirements don't exceed one disk. A NAS box is essentially DAS that it makes available over a network, so you have all of the issues of DAS plus network issues.

But if you care about data integrity and need more than a couple TB of storage, aggregating disks into a single pool of storage, rather than a set of named disk icons, makes some management aspects better.
I think that you are confused. DAS vs NAS has nothing to do with how many drives you have aggregated... or if you are using RAID or other techniques to provide data protection. DAS vs NAS only refers to how it is connected to your machine, or network.

For example: my 8TB Pegasus array is a DAS box... I can run it in RAID 0, RAID 1, RAID 5, RAID 10... or I can have subsets of the available drives show up as different logical drives. It is DAS because it is directly attached to my computer, and even though it is external, it is still a local disk. It is also blazing fast - currently (by far) the fastest way to access my Aperture 3 library.

Depending upon which computer I am using, my only choices right now are to keep my data on an SSD, an internal HDD, my Pegasus RAID array, or some NAS boxes that I own. My Aperture 3 library is too large to fit on any SSD that I own, so that leaves me with my internal HDD, my NAS, or my Pegasus RAID. Keeping my A3 library on a NAS renders it to be essentially unusable. The data bandwidth is fine, but the latency is horrid. It would be faster if I used iSCSI or other advanced network protocols, but in my experience, it is problematic and unstable.

Performance wise, the Pegasus smokes my internal HDD. Hence... for me, the best place to keep my A3 library is on a Pegasus Thunderbolt DAS. It makes my computer feel like a totally different machine.

Personally, I am much more bullish on SSDs (over the long haul) vs any HDD solutions... but it will take several years before I can count on enough SSD capacity to match my storage needs. Having said that, there is no way in hell I would ever buy another computer without an SSD. Those days passed by about 2-3 years ago.

/Jim

Last edited by flynz4; May 13, 2012 at 07:28 AM.
Old May 13, 2012, 01:41 PM   #61
murphychris
macrumors 6502a
 
Join Date: Mar 2012
Quote:
Originally Posted by flynz4 View Post
I think that you are confused. DAS vs NAS has nothing to do with how many drives you have aggregated... or if you are using RAID or other techniques to provide data protection. DAS vs NAS only refers to how it is connected to your machine, or network.
You are correct that neither DAS nor NAS by itself hinges on those things. But in context, absolutely, there are features available to a NAS that you cannot get with DAS due to operating system limitations.

Conventional RAID does not provide data protection; it provides data availability. It is not a backup strategy. It does nothing to ensure data integrity beyond what the drive firmware's ECC provides. RAID (other than, e.g., ZFS or btrfs) and JHFS+ defer entirely to the drive firmware for ECC and have no means of determining whether the data returned by the drive is actually correct.
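A toy example of why parity alone isn't integrity checking (single-parity XOR, RAID 5 style; this is a deliberate simplification of what real controllers do):

```python
# Toy single-parity stripe (RAID 5 style): parity can rebuild a block you
# KNOW is gone, but a silently-corrupted block can't be identified.

def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Known failure of disk 2: XOR the survivors with parity to rebuild it.
assert xor_blocks([data[0], data[2], parity]) == data[1]

# Silent corruption: the stripe no longer XORs to zero, so we know
# SOMETHING is wrong -- but not which of the four blocks to trust.
# (And normal reads never even run this check; parity is only consulted
# on rebuilds.)
corrupted = [data[0], b"BBXB", data[2]]
assert xor_blocks(corrupted + [parity]) != bytes(4)
```

ZFS and btrfs sidestep this by keeping a separate checksum for each block, so they can tell which copy is the bad one and heal from the good one.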

Quote:
For example: My 8TB Pegasus array is a DAS box... I can run it in RAID 0, RAID1, RAID 5, RAID 10... or I can have subsets of the available drives show up as different logical drives. It is a DAS, because it is directly attached to my computer and even tough it is external, it still is a local disk. It is also blazing fast. It is currently (by far) the fastest way to access my Aperture 3 library.
I'm curious how you're implementing logical volume management on Mac OS. I'm not seeing this as an explicit feature of the Pegasus RAID Thunderbolt array.

So what I'd be curious about with this array, is if it has either a RAID 6 option, or a JBOD option. Due to the RAID 5 write hole, I personally wouldn't rely on it without a very aggressive backup strategy. With JBOD, you could eventually manage the drives with ZFS when Ten's Complement ships their multi-disk (pooling and RAIDZ) product.

Quote:
Keeping my A3 library on a NAS renders it to be essentially unusable. The data bandwidth is fine, but the latency is horrid. It would be faster if I used iSCSI or other advanced network protocols, but in my experience, it is problematic and unstable.
Putting the Raw files on a NAS is workable, as the latency should not be a problem. However, the database itself, including the JPEG previews, should be on faster storage. This is possible with Lightroom; I'd like to think Aperture can separate the two as well.

It's been demonstrated that for storing Raws, even a single disk on FW800 makes little difference versus an SSD when it comes to Lightroom, as the dependency for rendering is CPU and RAM, not bandwidth. And the dependency for working with images quickly (sorting/editing) is CPU, RAM, and small-file performance, due to database and preview access. So GigE should also be comparable - but all of this depends on myriad other factors.


Quote:
Having said that, there is no way in hell I would ever buy another computer without an SSD.
The OS, apps, and any cache or database files are all bound by small random IO, and thus benefit mostly from an SSD's higher IOPS capacity rather than its bandwidth. With hard drives you're still IOPS-limited even if you put them in a striped RAID to get good bandwidth. So the SSD is best for "hot" files and small, randomly accessed files.

It'd be nice to have a more modern file system, though. ZFS's ZIL and L2ARC would allow us to combine an SSD and HDD in a single computer: hot files would automatically be located on the SSD and cold files on the HDD, but they'd appear as a single volume. Kinda nice...
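The placement decision itself is easy to sketch. Something like this toy rule (purely illustrative; real ZFS L2ARC works automatically, per block, by access pattern, not per file like this):

```python
import os, time

HOT_WINDOW_DAYS = 30  # assumption: touched within a month = "hot"

def tier_for(path, now=None):
    """Toy hot/cold placement by last-access time. A real SSD cache layer
    makes this decision per block, continuously, with no manual policy."""
    now = time.time() if now is None else now
    age_days = (now - os.stat(path).st_atime) / 86400
    return "ssd" if age_days <= HOT_WINDOW_DAYS else "hdd"
```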
Old May 13, 2012, 05:14 PM   #62
flynz4
macrumors 68040
 
Join Date: Aug 2009
Location: Portland, OR
Quote:
Originally Posted by murphychris View Post
Conventional RAID does not provide data protection, it provides data availability. It is not a backup strategy. It does nothing to ensure data integrity beyond that of what the drive firmware ECC provides.
Correct. It is not backup. I personally use a dual backup strategy: backed up locally via Time Machine, and backed up to the cloud via CrashPlan+. Both offer version control... with CP+ offering unlimited versioning, even for deleted files. I previously used Mozy, but at the time they only offered 30-day retention of deleted files, which I found unacceptable.

Although I am not a fan of any backup strategy that involves human interaction... I do have a pair of 1.5 TB portable HDDs that I use to keep A3 vaults and rotate between home/office. Whenever I make large changes to my A3 database, I will update the vault on the drive at home, and switch it with my office HDD next time I go to the office. That way my A3 library has a 3rd "semi-recent" backup in case of a total catastrophe.

Quote:
Originally Posted by murphychris View Post
I'm curious how you're implementing logical volume management on Mac OS. I'm not seeing this as an explicit feature of the Pegasus RAID Thunderbolt array.
It just appears as another drive on the computer... similar to plugging in a FW800 drive. I can keep my A3 library (or whatever I desire) on the Pegasus and just use it. It acts just like any other permanently attached drive.

Quote:
Originally Posted by murphychris View Post
So what I'd be curious about with this array, is if it has either a RAID 6 option, or a JBOD option. Due to the RAID 5 write hole, I personally wouldn't rely on it without a very aggressive backup strategy. With JBOD, you could eventually manage the drives with ZFS when Ten's Complement ships their multi-disk (pooling and RAIDZ) product.
From the Pegasus manual: RAID level support: RAID 0, 1, 1E, 5, 6, and 10

Personally, I am switching to RAID 10 which will give me 4TB of storage with my R4.

Regarding JBOD: You first have the physical drives that you manage... and then a logical drive management panel which lets you combine various physical drives into logical drives. From there, you place various RAID levels and policies on the logical drives. I believe that if you were to map the logical drives onto physical drives 1:1, then you would have a JBOD.

Quote:
Originally Posted by murphychris View Post
Putting the Raw files on a NAS is workable, as the latency should not be a problem. However, the database itself including the JPEG previews should be on faster storage. This is possible with Lightroom, I'd like to think it's possible with Aperture to separate the two.

It's been demonstrated that even FW800 single disk makes little difference over SSD for the storage of Raws when it comes to Lightroom, as the dependency for rendering is CPU and RAM, not bandwidth. And the dependency for working with images quickly (sorting/editing) is CPU, RAM, and small file performance due to database and preview access. So GigE should also be comparable, but all of this depends on myriad other factors.
Aperture gives many options for managing your library... including keeping your masters wherever you choose. Personally, I prefer managed masters, because it becomes easier to migrate your entire library as a single package. If you use referenced masters, then you own full control over backup and management of your masters. Although I have a good backup strategy, I also like to use A3 vaults, which only operate on managed masters. Hence, I like to keep my A3 database and masters together.

I have NOT found that GbE and FW800 perform similarly, despite the fact that the bandwidth is similar. FW800 (and TB) seems to be much better integrated into the file system. As you state, large files are less of a problem, but they are still too slow for my taste.

With my A3 library on my Pegasus... I can pull up tens of thousands of pics on my screen, and it keeps up no matter how quickly I scroll through the thumbnails. I can scroll so fast they're a blur, and the Pegasus keeps up. With a NAS... I am quickly looking at empty frames, waiting for the NAS to populate the images.

Quote:
Originally Posted by murphychris View Post
The OS and apps and any cache, database files, all are small random IO bound and thus benefit mostly from SSD's higher IOPS capacity rather than the bandwidth. Whereas you're still IOPS limited with drives, even if you put them in a striped RAID to get good bandwidth. So the SSD is best for "hot" files, and small randomly accessed files.

It'd be nice to have a more modern file system though. ZFS's ZIL and L2ARC would allow us to combine an SSD and HDD in a single computer, and it would automatically hot file locate on the SSD, and cold file locate on the HDD but they'd appear as a single volume. Kinda nice...
Correct. The vast majority of client disk I/O accesses are small random reads. This is where SSDs really shine... and the technology is still in its infancy. I personally think this is one of the hottest areas in the field of computer technology. I have been using SSDs exclusively since 2008, except for one of my iMacs (2009) which was purchased before SSDs were offered CTO. I'll be replacing that machine with a '12 iMac once they are released, and I will CTO with the largest SSD available. I have no desire to open my iMac. I'd rather replace it every year than muck around inside.

I would be delighted if Apple offered smart disk caching where the SSD/HDD appeared as a single volume, with the cold data automatically moved to the HDD when space was required in the SSD.

/Jim
Old May 13, 2012, 10:09 PM   #63
murphychris
macrumors 6502a
 
Join Date: Mar 2012
Quote:
Originally Posted by flynz4 View Post
Correct. It is not backup. I personally use dual backup strategy. Locally backed up via Time Machine, and Cloud backed up via Crashplan+.
This is almost certainly easier than a NAS, especially if one were to consider either of the free NAS systems that use ZFS (FreeNAS and NexentaStor), because you're kinda on your own to actually build the system, install the software, and figure out a strategy.

I don't know how CrashPlan stores data - whether there's any checksumming to make sure what you download is the same as what you uploaded, should recovery be needed. That is something the ZFS-based NASs do, and they also do remote replication. FreeNAS has a GUI (web browser) for managing this; I'm not sure NexentaStor does. Periodic (scheduled) snapshots act as version control.
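The retention side of scheduled snapshots boils down to logic like this (an illustrative sketch, not actual FreeNAS or Synology code; the keep-7-dailies/4-weeklies numbers are arbitrary):

```python
from datetime import date, timedelta

def snapshots_to_keep(snapshot_dates, keep_daily=7, keep_weekly=4):
    """Keep the newest `keep_daily` snapshots, plus the newest snapshot
    of each ISO week for `keep_weekly` weeks. Everything else is pruned.
    (The 7/4 defaults are arbitrary; pick your own retention.)"""
    snaps = sorted(set(snapshot_dates), reverse=True)   # newest first
    keep = set(snaps[:keep_daily])
    weeks_seen = set()
    for d in snaps:
        week = d.isocalendar()[:2]                      # (ISO year, week)
        if week not in weeks_seen:
            weeks_seen.add(week)
            keep.add(d)
        if len(weeks_seen) >= keep_weekly:
            break
    return sorted(keep, reverse=True)
```

Each kept snapshot is a point-in-time version of the whole filesystem, which is what makes this act like version control.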

Quote:
It just appears as another drive on the computer... similar to plugging in a FW800 drive. I can keep my A3 library (or whatever I desire) on the Pegasus and just use it. It acts just like any other permanently attached drive.
OK it does this internally, as a feature of the array's controller. When I think of logical volume management, I think of Linux LVM. Core Storage is the same idea, although the tools presently are limited.

Quote:
From the Pegasus manual: RAID level support: RAID 0, 1, 1E, 5, 6, and 10
I was looking under the Specifications tab, which doesn't list 1E or 6, but under the Models tab I now see RAID 6 as well. So with dual parity there's no ambiguity potential like with RAID 5, but it's still not the same thing as checksummed data. Unless there's a reason to suspect the data (disk read errors), the RAID software (or controller) doesn't use parity to confirm the data is valid. Parity is used to rebuild data should a disk drop out of the array.

Quote:
Regarding JBOD: You first have the physical drives that you manage... and then a logical drive management panel which lets you combine various physical drives into logical drives. From there, you place various RAID levels and policies on the logical drives. I believe that if you were to map the logical drives onto physical drives 1:1, then you would have a JBOD.
I guess it depends on how it's implemented. I know the way Drobo does things - they explicitly say on their web site that ZFS isn't supported. Since ZFS has volume management and RAID capabilities integrated, an underlying logical volume manager usually isn't used. Same for btrfs (although I know it works fine on top of LVM).

Quote:
I have NOT found that GbE and FW800 perform similarly, despite the fact that the BW is similar. FW800 (and TB) seems to be much better integrated into the file system. As you state, for large files are less of a problem, but they are still too slow for my taste.
There are all sorts of reasons why this might be. A faulty FW cable is unlikely to work at all, whereas a faulty ethernet cable might just get you poor performance. Most people don't tune their networks at all. With two MacbookPros, Cat5e and a basic linksys router running dd-wrt firmware, I get ~88MB/s which saturates the disk in the MBP with the slower disk. Cat6 cables do make a difference with GigE. So does a better router/switch.
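For reference, the arithmetic on what GigE can actually deliver:

```python
# GigE line rate, and the realistic TCP ceiling after Ethernet/IP/TCP
# framing (1500-byte MTU with TCP timestamps: 1448 payload bytes per
# 1538 bytes on the wire, counting preamble and inter-frame gap):
line_rate = 1_000_000_000 / 8 / 1_000_000      # 125.0 MB/s raw
tcp_best = line_rate * 1448 / 1538             # ~117.7 MB/s best case
# So seeing ~88 MB/s means the bottleneck is the laptop's disk,
# not the network.
```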

Quote:
With my A3 library on my Pegasus... I can pull up 10's of thousands of pics on my screen, and they keep up no matter how quickly I scroll through the thumbnails. I can scroll through so they are a blur, and the Pegasus keeps up. With a NAS... I am quickly looking at empty frames waiting for the NAS to populate the images.
It's not exactly an apples-to-apples comparison - 10Gbps Thunderbolt vs 1Gbps ethernet - so this experience is to be expected. How does it compare to 10GigE? People who work on HD video collaboratively go this route, because for such workflows it's actually quite a bit faster to share the storage than to push/pull files to local fast arrays.

Quote:
I would be delighted if Apple offered smart disk caching where the SSD/HDD appeared as a single volume, with the cold data automatically moved to the HDD when space was required in the SSD.
This is a bit frustrating because my understanding is we're using Intel motherboards, some of which support Intel Smart Response. While proprietary, it can do this at a hardware level. It would be nice if we could have access to this.

While I prefer open storage, I'd probably accept an Intel proprietary solution because if my motherboard dies, I'm certainly getting another Mac, to regain access to the data. For RAID, I shy away from proprietary solutions, where if/when the hardware dies, I've lost access to the entire array unless I buy a product using the same RAID implementation and possibly even down to the firmware version. Yes there are backups but it takes a while to rebuild from backup.

The trend in storage management is data replication and self-healing, rather than depending on backups, just because restoring from backup takes so long.
Old May 13, 2012, 10:59 PM   #64
radiogoober
Banned
 
Join Date: Jun 2011
I'm so frustrated and annoyed with random backup schemes and random hardware that sucks (my multiple USB 2.0 drives attached to an Airport Extreme are miserably slow, and I don't trust them any more than I trust a camel with a yeast infection).

I'm about to buy the following:

- Synology RackStation RS812 (4 bay, rack mount - can be expanded with other rack mount 4 bay units)
- 4 x WesternDigital or Seagate 3TB drives
- 2 x USB 3TB drives

The 4 drives in the Synology unit would be RAID 1, and the two USB drives would backup this mirrored RAID. Further, I'd just pay whatever the monthly fee to Amazon S3 for "remote backup" that Synology inherently supports. (Well, I'd probably do CrashPlan+, as it seems there's a way to hack the synology to run a crashplan+ client.)

So for the whole package, with both local and remote backup... it'd be a big expense - I just calculated it at around $1700-$1800, plus almost $100/month paid to Amazon (I already spend hundreds a month there!). But it'd be convenient and relatively reliable. You could do Time Machine backups to the Synology unit too, as well as back up whatever junk is on your HD.

.... Flynz, what do you think? (rum & coke)
Old May 14, 2012, 12:25 AM   #65
murphychris
macrumors 6502a
 
Join Date: Mar 2012
Quote:
Originally Posted by radiogoober View Post
- Synology RackStation RS812 (4 bay, rack mount - can be expanded with other rack mount 4 bay units)
- 4 x WesternDigital or Seagate 3TB drives
- 2 x USB 3TB drives
Concerns I have.

The RS812 specs say 512MB memory. That's pretty minimal, and will limit its caching and thus performance. It's also not ECC memory.

I can't tell what the base OS is for DiskStation Manager, so I can't tell if it's open storage. Could you plug the disks into other hardware and extract data? It's not unreasonable to want to recover all data since the last backup if possible.

Quote:
The 4 drives in the Synology unit would be RAID 1, and the two USB drives would backup this mirrored RAID.
I'm guessing you mean the 4 drives would be RAID 10, for a total of 6TB of storage. How are you going to use two USB drives to back up 6TB of data? Whether the two USB disks are striped in RAID 0 or managed by LVM to appear as a single volume, the small problem is the lack of any warning if either disk is about to die. The overwhelming majority of USB bridge chipsets do not pass through ATA commands, which renders SMART monitoring impossible for those disks. Not a great idea in my opinion - I would want whatever potential warning I can get. And USB 2.0 is kinda slow for backing up a full 4-6TB of data.
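For anyone following along, the usable-capacity arithmetic for the common RAID levels (simplified: identical drives, ignoring filesystem and formatting overhead):

```python
def usable_tb(raid_level, n_drives, drive_tb):
    """Usable capacity for common RAID levels (simplified: identical
    drives, no filesystem/formatting overhead)."""
    if raid_level == 0:
        return n_drives * drive_tb              # stripe: no redundancy
    if raid_level == 1:
        return drive_tb                         # all drives mirror one
    if raid_level == 5:
        return (n_drives - 1) * drive_tb        # one drive of parity
    if raid_level == 6:
        return (n_drives - 2) * drive_tb        # two drives of parity
    if raid_level == 10:
        return n_drives * drive_tb // 2         # striped mirrors
    raise ValueError("unsupported RAID level")

# 4 x 3TB in RAID 10 -> 6TB usable; the same drives in RAID 5 -> 9TB
```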

Quote:
and almost $100/month paid to amazon
If you're considering a 6TB NAS, I'm thinking you're around 3TB now? That's $93 for the first TB and $83 per additional TB, based on reduced-redundancy pricing - and that does not include PUT, COPY, POST, or LIST requests. If you're thinking of storing large database files, that's one thing; Time Machine backups are quite another. A single full Time Machine backup of one computer is $7.50... but incremental backups, per hour, for each computer? That adds up fast. It probably makes more sense to keep nothing valuable on the workstations, treating their data as an accepted and inconsequential loss.
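Using those rates, the monthly storage bill works out like this (a sketch based on the 2012 reduced-redundancy prices quoted above; request fees excluded):

```python
def s3_monthly_cost(stored_tb, first_tb=93.0, addl_tb=83.0):
    """Monthly S3 storage bill at the tiered rates quoted above
    (2012 reduced-redundancy pricing; PUT/COPY/POST/LIST fees excluded)."""
    if stored_tb <= 0:
        return 0.0
    if stored_tb <= 1:
        return first_tb * stored_tb
    return first_tb + (stored_tb - 1) * addl_tb

# e.g. ~3TB of data -> 93 + 2 * 83 = $259/month, before request fees
```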

What availability requirement do you have for the data? When I think of S3, I think of data availability, not just off-site backup. What if the NAS itself goes belly-up? What's the turnaround time on regaining data availability?
Old May 14, 2012, 12:36 AM   #66
LNYMRKO
macrumors regular
 
Join Date: Nov 2010
I'm not sure where all the Drobo "slow" remarks and rep come from; mine has been nothing but bliss! I have the 4-bay second-gen Drobo with 8TB of space, connected to my MBP via FireWire, and it streams HD content all day, every day, just fine to my Apple TV.
Old May 14, 2012, 12:55 AM   #67
radiogoober
Banned
 
Join Date: Jun 2011
Dang, you shot a lot of really big holes in that plan.

You're totally right. And I did mess up the S3 calculation... $300/month is insane; it'd be cheaper to buy new 3TB drives and store them in a safe deposit box, lol. (kidding)

... For backing up the Synology to the 2x USB drives, you can set it up to back up, for instance, just ~/Movies to one USB drive and everything else to the other, so at least the core data is there.

But you're right. I don't trust a NAS unit to not die. If the 4 bay was set up in RAID 10, there's still a chance the unit could fail when rebuilding the raid, losing everything.

Truth be told, I have about 1TB of data that is dear to me, though of course that will slowly increase over time. I currently have the WD My Book Thunderbolt Duo with its two 3TB disks in JBOD, so every night I have Carbon Copy Cloner copy the first disk over to the second, giving me an immediate backup of my data. (I'm not worried about revisions, etc., but I do have that via Time Machine -> Airport -> USB 2.0 external.) So if I used a 2-bay Synology with 2x3TB disks in RAID 1, I could use about 500 gigs of it for the Time Machine backup and the rest to back up my important data, and that still leaves about 1.5TB of available space. I know RAID isn't backup, but it's a relatively fast NAS, it can be kept in the opposite part of the house, and I still have a primary backup before the NAS.

Does that sound ridiculous?
Old May 14, 2012, 02:45 AM   #68
flynz4
macrumors 68040
 
Join Date: Aug 2009
Location: Portland, OR
Quote:
Originally Posted by murphychris View Post
I don't know how Crashplan stores data, if there's any checksumming to make sure what you download is the same as what you uploaded, should recovery be needed. That is something the ZFS based NASs do, and they also do remote replication. FreeNAS has a GUI (web browser) for managing this, I'm not sure NexentaStor does. Periodic (scheduled) snapshots act as version control.
With Crashplan+, all of the data is encrypted on your own machine using 448-bit Blowfish encryption. I am very comfortable from the security perspective.

I don't have a clue how the data is stored on their servers, and how it might be protected against data corruption... but I know that I have unlimited versioning of my data... so if I end up with data corruption at any point in time... I can pull up previous versions of my data.

One of the reasons that I am a big advocate of dual independent backups... is that having two sets of backups, made by two independent backup programs, stored in two different locations, minimizes the chances of catastrophic failure due to a programmatic error in a single system.

It is interesting that you bring up data replication. That is the way many new internet data centers are being set up... but it is more difficult for consumers. I am looking at picking up a Mac Mini Lion Server within the next year and using that as a "backup" computer, as well as a media server for the household. I am starting to think about ways to do a "one-way sync" of key data (ex: my A3 library) from my main machine (iMac) to the Mini, strictly as a data replication backup of my library and other key data. I know there are several good sync programs out there, but I have not started looking at them yet. I would NOT want a two-way sync. I do not want anything else touching my main libraries. However, I would like to automatically sync those libraries to another machine. Heck... I might even sync my A3 library to my wife's iMac... to keep her out of my library.

/Jim
Old May 14, 2012, 02:57 AM   #69
flynz4
macrumors 68040
 
Join Date: Aug 2009
Location: Portland, OR
Quote:
Originally Posted by radiogoober View Post
I'm so frustrated and annoyed with random backup schemes and random hardware that sucks (my multiple USB 2.0 drives attached to an AirPort Extreme just suck and are miserably slow, and I don't trust them any more than I trust a camel with a yeast infection).

I'm about to buy the following:

- Synology RackStation RS812 (4 bay, rack mount - can be expanded with other rack mount 4 bay units)
- 4 x WesternDigital or Seagate 3TB drives
- 2 x USB 3TB drives

The 4 drives in the Synology unit would be RAID 1, and the two USB drives would backup this mirrored RAID. Further, I'd just pay whatever the monthly fee to Amazon S3 for "remote backup" that Synology inherently supports. (Well, I'd probably do CrashPlan+, as it seems there's a way to hack the synology to run a crashplan+ client.)

So for the whole package, with both local and remote backup... it'd be a big expense - I just calculated it around $1700-$1800, and almost $100/month paid to Amazon (I already spend hundreds a month there!).... But it'd be convenient and relatively reliable. You could do Time Machine backups to the Synology unit too, as well as back up whatever junk is on your HD.

.... Flynz, what do you think? (rum & coke)
My primary data set (Documents, Photos, Music, Home Videos, etc) is currently 1.4 TB as measured by my actual backup to Crashplan+. Since that still fits within my iMac's 2TB HDD... I just do direct backups using TM and Crashplan+. It is simple, redundant, automatic off-site storage for disaster recovery etc.

CP+ is only $3/month for unlimited data (consumer only... not business). If you have more than one machine (I have six being backed up)... you can get a CP+ family plan for $6/month. At those inexpensive prices, I think it is foolish not to use the service even if you have other backup strategies. Personally, I have been really happy with them.

When I get my new iMac (after the '12 model is released)... I will load all of my data on it (about 1.4 TB as mentioned previously)... and I'll pay the $120 or whatever they charge for a seeded backup. The data gets encrypted on my machine and put onto one of their HDDs they send me. I FedEx it back to them, they load all of my encrypted data on their servers, and all 1.4 TB is backed up. CP+ does use data de-duplication, but they do that on a "machine" level... not an account level. Hence, even though my computer and my wife's will have pretty much the same data... CP+ will actually store both sets of data independently and not de-dupe across the account. When I asked them why, they claimed it was for security reasons. This is probably more important for enterprise (vs consumer) data... but it seemed like a fair response.

Personally... if you can get all your data to fit on a single machine... I think that simplifies backup considerably. If you use direct-attached storage (USB, FW800, TB, etc)... then the backup remains simple. You can create backup sets on your computer and let it back up the data using the very inexpensive methods available to client computers.

If your needs are beyond what I described... then the job typically becomes harder, and more expensive.

/Jim
Old May 14, 2012, 03:24 AM   #70
AppleDApp
Thread Starter
macrumors 68020
 
 
Join Date: Jun 2011
What makes people so trusting of services like Crashplan? Why do you allow your data to be on someone else's servers?

Also are there any other opinions as to whether to get a NAS or a DAS for my data storage?
__________________
If you are a MacRumors newbie, chances are I will disregard your post.
Old May 14, 2012, 03:45 AM   #71
flynz4
macrumors 68040
 
Join Date: Aug 2009
Location: Portland, OR
Quote:
Originally Posted by AppleDApp View Post
What makes people so trusting of services like Crashplan? Why do you allow your data to be on someone else's servers?

Also are there any other opinions as to whether to get a NAS or a DAS for my data storage?
With an app like Crashplan+... your data is encrypted on your own computer, and if you set it up right... using a key that Crashplan does not have access to.

It is encrypted using 448-bit Blowfish encryption... which means that the grandchildren of your unborn grandchildren will not be able to crack it in their lifetimes.

The fact is... you trust a lot of your data to other companies that can actually see and use your unencrypted data. For example, the average person has plenty of data in their own email accounts to easily perform identity theft... yet they do not worry that their email provider will allow their identity to be stolen.

By contrast... CP+ has no access to your unencrypted data. It is about as safe as you can get.

The paranoia about cloud backup is akin to the fear people used to have about keeping money in banks... except in the case of cloud backup... it is orders of magnitude safer.

/Jim
Old May 14, 2012, 06:24 AM   #72
SOLLERBOY
macrumors 6502a
 
Join Date: Aug 2008
Location: Manchester,UK
Quote:
Originally Posted by flynz4 View Post
My primary data set (Documents, Photos, Music, Home Videos, etc) is currently 1.4 TB as measured by my actual backup to Crashplan+. Since that still fits within my iMac's 2TB HDD... I just do direct backups using TM and Crashplan+. It is simple, redundant, automatic off-site storage for disaster recovery etc.

CP+ is only $3/month for unlimited data (consumer only... not business). If you have more than one machine (I have six being backed up)... you can get a CP+ family plan for $6/month. At those inexpensive prices, I think it is foolish not to use the service even if you have other backup strategies. Personally, I have been really happy with them.

When I get my new iMac (after the '12 model is released)... I will load all of my data on it (about 1.4 TB as mentioned previously)... and I'll pay the $120 or whatever they charge for a seeded backup. The data gets encrypted on my machine and put onto one of their HDDs they send me. I FedEx it back to them, they load all of my encrypted data on their servers, and all 1.4 TB is backed up. CP+ does use data de-duplication, but they do that on a "machine" level... not an account level. Hence, even though my computer and my wife's will have pretty much the same data... CP+ will actually store both sets of data independently and not de-dupe across the account. When I asked them why, they claimed it was for security reasons. This is probably more important for enterprise (vs consumer) data... but it seemed like a fair response.

Personally... if you can get all your data to fit on a single machine... I think that simplifies backup considerably. If you use direct-attached storage (USB, FW800, TB, etc)... then the backup remains simple. You can create backup sets on your computer and let it back up the data using the very inexpensive methods available to client computers.

If your needs are beyond what I described... then the job typically becomes harder, and more expensive.

/Jim

I recently started using CP+. I'm presuming you need to re-upload data if you change machines?
__________________
27" iMac i7 3.4 3TBF 680MX Very Late 2012
17" MacBook Pro 2.2 i7 8GB 64GB Early 2011
16GB iPhone 6+ Space Grey 32GB iPad Air
Old May 14, 2012, 12:47 PM   #73
flynz4
macrumors 68040
 
Join Date: Aug 2009
Location: Portland, OR
Quote:
Originally Posted by SOLLERBOY View Post
I recently started using cp+. I'm presuming you need to re upload data if you change machine?
If you *CHANGE* your machine... you simply attach your new machine to your current backup. All previously uploaded data remains. They document how to do this on their site.

If you *ADD* a machine (for example to a family account)... then the data needs to be re-uploaded assuming the original machine will still be backed up.

So for example, when I buy my new '12 iMac after it is released, I will copy my 1.4 TB of data from my current iMac to the new one... but my wife will continue using the older iMac as is. I will have to re-upload the data since it will be a different computer. I will choose the "seed" program that allows this to be done via a portable HDD, which makes the upload process complete in a few days.

CP+ does not mix data from different machines... so for example, if my wife and I each add the same songs to our respective iTunes library, then each of us will back up the same data... even though we are on the same family account.

/Jim
Old May 14, 2012, 05:39 PM   #74
murphychris
macrumors 6502a
 
Join Date: Mar 2012
Quote:
Originally Posted by radiogoober View Post
Dang, you shot a lot of really big holes in that plan.
I like to break things before they break and are a PITA. Not that my points are necessarily right.

Quote:
$300/month is insane, it'd be cheaper to buy new 3TB drives and store them in safe deposit box, lol. (kidding)
It may work; I haven't done much research on the aging/reliability of old and unused drives. The likelihood is you pull this drive out of storage a few times a year at best. Might be OK, but I'd choose drives that have had a few months of "break-in".

Quote:
For backing up the synology to the 2xUSB drives, you can set it up so that it would backup, for instance, just ~/Movies to one usb drive, and everything else to the other drive, so at least the core data is there.
From a business perspective: if the backups are worth doing, it's worth being warned as soon as the disks are about to go wonky. So while SMART's advance warning is hit or miss (maybe around 60% of the time you get a warning), it's better than nothing.

From a home-user perspective: how many weeks could this thing sit there not backing up before I'd notice? Again, if it's worth doing, you want the system to message you if any attached drive is not accessible.
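That kind of silent failure is easy to script a check for: look for any recently modified file on the backup volume and complain if there isn't one. The path and the 7-day threshold below are hypothetical:

```shell
# Warn if nothing on the backup volume has been modified in the last 7 days.
# /Volumes/Backup is a placeholder path -- substitute your real mount point.
BACKUP="/Volumes/Backup"
if [ -z "$(find "$BACKUP" -type f -mtime -7 -print -quit 2>/dev/null)" ]; then
    echo "WARNING: no file under $BACKUP modified in the last 7 days"
fi
```

`-mtime -7` matches files modified within 7 days; `-print -quit` stops at the first match, so the scan is cheap. The same check also fires if the volume isn't mounted at all, which covers the "drive not accessible" case.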

Quote:
Does that sound ridiculous?
No, but the strategy differs depending on context. Is the data business, or personal? What is the tolerance for storage non-availability? I.e., existing data is safe but not available, and by extension you can't add to or modify it. The answer affects the strategy.

Home users might (or might not) tolerate days to a week of storage downtime, but a business will not. A business might tolerate loss of data older than 5-7 years (or as tax and other regulations require), but a home user will not tolerate their photo album vanishing, at all, ever.

Quote:
Originally Posted by flynz4 View Post
One of the reasons that I am big advocate of dual independent backups... is that having two sets of backups, made by two independent backup programs, stored in two different locations, minimizes the chances of catastrophic failure due to a programatic error in a single system.
It increases complexity, and it increases the chances you'll encounter a problem you have to work through: determine whether it's really a problem, and if so, what the workaround is. It probably reduces the likelihood of total data loss, however. It's a trade-off, so the strategy should emphasize secondary storage that's low in complexity, even if low in features, so long as it doesn't compromise base requirements.

Quote:
It is interesting that you bring up data replication. That is the way many new internet data centers are being set up... but it is more difficult for consumers.
It's true, but this is a very active area of development for enterprise. Red Hat bought Gluster just last year and made the code open source. It's insanely cool. If the approach in this example thread on the QNAP forums were pushed by any NAS vendor using open storage, it would be a killer feature. Instead of getting one monolithic NAS, get 2-3 smaller products and turn them into replicated storage bricks, including one off-site with asynchronous replication. And GlusterFS integrates with Amazon S3 already.

http://forum.qnap.com/viewtopic.php?p=240381

If Crashplan used, or is using, GlusterFS, and could tie it into their product to make seamless, configuration-free off-site replication happen with such hardware, that might be bliss.
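As a rough idea of what those "storage bricks" look like from the GlusterFS CLI (hostnames, paths, and the volume name below are made up, and this assumes glusterd is already running on each box):

```shell
# Sketch: two boxes become one replicated volume; a third gets async copies.
gluster peer probe nas2.example.com

# Create a 2-way replicated volume from one brick on each box.
gluster volume create backupvol replica 2 \
    nas1.example.com:/export/brick1 nas2.example.com:/export/brick1
gluster volume start backupvol

# Geo-replication pushes the volume asynchronously to an off-site node.
gluster volume geo-replication backupvol offsite.example.com:/export/geo start
```

Clients then mount `backupvol` as one filesystem; every write lands on both bricks synchronously, and trickles off-site in the background.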

Quote:
I am looking at picking up a Mac Mini Lion Server within the next year and using that as a "backup" computer, as well as a media server for the household.
I just don't see the advantage of anything Apple has to offer in this area. The ease-of-use advantage is marginal compared to the various NAS products out there, and in some ways it's not as easy to use.
Old May 14, 2012, 06:30 PM   #75
rhett7660
macrumors G3
 
 
Join Date: Jan 2008
Location: Sunny, Southern California
Quote:
Originally Posted by LNYMRKO View Post
I'm not sure where all the Drobo 'slow' remarks and rep comes from, mine has been nothing but bliss! I have the Drobo 4-bay second gen with 8TB of space on it, connected to my MBP via FireWire, streams HD content all day everyday fine to my Apple TV
I think the big speed gripe you hear from people regarding the Drobo comes from slow reads/writes over the network, not via eSATA/FW/USB, etc.
__________________
"It's quite an experience to hold the hand of someone as they move from living to dead."
"Times are looking grim these days, holding on to everything, it's hard to draw the line"