I have the same thoughts as you. I've been waiting for this, but I'd like some reviews. Plus the price could come down a little in a few months.
> I wouldn't ever buy a device like this, but I'm always interested to look at what sort of real-world throughput you can expect.
I agree, I wish I could find out which CPU it uses.
Unfortunately, as with most of these proprietary machines, they don't seem to publish any figures.
> One of the things that does annoy me a little bit is that there is no USB port. I would have loved to see one so that I could have plugged a UPS into the machine. I had 2 power blips last night during a huge thunderstorm and the UPS saved my unRAID server from shutting down. As far as I am concerned no NAS should be without a UPS!
I don't think you can power 5 disks + the rest of the hardware off of a USB port ;-)
> The price is a little steep for my liking but it is like buying an Apple machine (you are paying a little extra for the looks). When I built my NAS I put it in a closet and don't care how it looks.
That's fine as long as your closet has adequate ventilation. Without it, IMO, you will be asking for the device to fail early.
> I don't think you can power 5 disks + the rest of the hardware off of a USB port ;-)
While I agree that there might not be software available to run the UPS properly and initiate a shutdown, it would at least buy some time to shut the device down before it crashes.
Even if it had a USB port so that it could communicate with a UPS, there's no guarantee that appropriate software would be available for the platform.
> While I agree that there might not be software available to run the UPS properly and initiate a shutdown, it would at least buy some time to shut the device down before it crashes.
No one is arguing with you, and that's assuming that someone is actually there to shut down the array.
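To make the "buy some time" idea concrete, here is a minimal watchdog sketch in Python. It assumes a NUT (Network UPS Tools) setup where `upsc` reports the UPS status; the UPS name, poll interval, grace period, and shutdown command are placeholders for illustration, not anything this NAS actually ships with.

```python
#!/usr/bin/env python3
"""Rough sketch of a UPS watchdog, assuming a NUT (Network UPS Tools)
daemon is running and a UPS named 'myups' is defined in ups.conf.
The UPS name, poll interval, and grace period are placeholders."""

import subprocess
import time

UPS_NAME = "myups"        # hypothetical name from ups.conf
POLL_SECONDS = 30
GRACE_SECONDS = 300       # stay up this long on battery before shutting down

def on_battery() -> bool:
    # 'upsc <ups> ups.status' prints e.g. "OL" (online) or "OB" (on battery)
    out = subprocess.run(["upsc", UPS_NAME, "ups.status"],
                         capture_output=True, text=True)
    return "OB" in out.stdout

def main() -> None:
    battery_since = None
    while True:
        if on_battery():
            battery_since = battery_since or time.time()
            if time.time() - battery_since > GRACE_SECONDS:
                # orderly shutdown before the batteries run flat (needs root)
                subprocess.run(["shutdown", "-h", "now"])
                return
        else:
            battery_since = None
        time.sleep(POLL_SECONDS)

if __name__ == "__main__":
    main()
```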
> I would be shocked, though, if it were not running a journalled/logging file system, which dramatically lessens the need for an orderly shutdown. All that needs to happen is that the transactions in the journal are applied and the drive will be up to date.
I'd be a bit wary of relying on journalling to keep the data of 5 disks on a proprietary RAID safe.
I certainly don't want to actually have to run an fsck against a multi-terabyte volume.
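A toy sketch of why journal replay beats a full fsck: intents are logged before they are applied, so recovery only replays the log rather than scanning the whole volume. This is purely illustrative Python, not how any real filesystem is implemented.

```python
"""Toy illustration of journal replay: pending transactions are written to a
log before being applied, so after a crash you replay the log instead of
scanning the whole volume (the fsck-style full check). Purely illustrative."""

import json
import os

JOURNAL = "journal.log"

def write_with_journal(state: dict, key: str, value: str) -> None:
    # 1. Record the intent durably before touching the "real" data.
    with open(JOURNAL, "a") as j:
        j.write(json.dumps({"key": key, "value": value}) + "\n")
        j.flush()
        os.fsync(j.fileno())
    # 2. Apply the change (a crash between 1 and 2 is recoverable).
    state[key] = value

def replay_journal(state: dict) -> None:
    # After an unclean shutdown, re-apply every logged transaction.
    if not os.path.exists(JOURNAL):
        return
    with open(JOURNAL) as j:
        for line in j:
            entry = json.loads(line)
            state[entry["key"]] = entry["value"]

# Example: replaying touches only the few logged entries,
# not every block on a multi-terabyte volume.
state = {}
write_with_journal(state, "file_a", "v1")
recovered = {}
replay_journal(recovered)
print(recovered)   # {'file_a': 'v1'}
```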
> I'd be a bit wary of relying on journalling to keep the data of 5 disks on a proprietary RAID safe.
I don't trust software RAID that much, and proprietary versions are really dangerous. Then there's the cost of Drobos. Just too much $$$ for what you get. You can do much better on your own (better results for less money).
> Seems like a neat little box, but the price is way too high IMHO. A much better value is the Intel SS-4200 4-bay server, which you can find for $135-$160 on sale at sites like Newegg or eBay. You can run the stock EMC software on it, unRAID, FreeNAS, or Microsoft Windows Home Server. It has 2 eSATA ports with port multiplier support too, so you can add more drives externally.
Sure, if you're technically inclined that's a great solution. Me? I run a Solaris machine with ZFS for my big storage needs. But it has care and feeding that I take care of.
> Single point of failure: external power adaptor = no power redundancy, leading to disk errors in case of an outage. Should have been catered for at that price.
While not entirely immune from issues, journalled file systems do much better without a graceful shutdown than traditional file systems. And with only about 40-50 watts of draw you can get a heck of a lot of run time out of a relatively inexpensive 500-750 VA UPS.
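For a rough sense of the arithmetic, here is a back-of-the-envelope runtime estimate in Python. The battery energy and efficiency figures are assumptions for a typical small 500-750 VA consumer unit, not vendor specs.

```python
# Back-of-the-envelope UPS runtime estimate. The battery energy figures are
# assumptions for a typical small consumer unit, not vendor specs.

def runtime_minutes(battery_wh: float, load_w: float, efficiency: float = 0.85) -> float:
    """Very rough runtime: usable battery energy divided by the load."""
    return battery_wh * efficiency / load_w * 60

for battery_wh in (50, 80):          # assumed ~50-80 Wh for a 500-750 VA unit
    for load_w in (40, 50):          # the NAS draw quoted above
        print(f"{battery_wh} Wh battery, {load_w} W load: "
              f"~{runtime_minutes(battery_wh, load_w):.0f} min")
```

Even with the pessimistic figures, that works out to the better part of an hour of runtime, which is plenty of time for an orderly shutdown.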
> Also, a single Ethernet port only? Come on, guys!
> Not an industrial-strength product.
It is still an entry-level NAS. Neither of your requirements is entry level.
> That said, if I do go with a software implementation, I'd prefer to build one that's known (behaviorally speaking), such as ZFS in conjunction with a UPS to cover lost writes (and there's no write hole associated with parity-based arrays - one of the most desirable aspects of using ZFS).
Another big plus for ZFS is that it is block-level based, not file-level like other filesystems, including HFS+.
> Another big plus for ZFS is that it is block-level based, not file-level like other filesystems, including HFS+.
That's what makes me nervous about most software solutions, such as unRAID, which pastrychef uses and likes. I'd have to test it out, and so far I've not had the opportunity (or the inclination, if I have to pay for it myself - no existing system to dedicate to testing such things).
Especially when we're talking about a backup device, this becomes very handy because only changed blocks of a file have to be replaced, not the complete file.
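A minimal sketch of that block-level idea: hash fixed-size blocks and copy only the blocks whose hashes differ. The block size here is an arbitrary illustrative choice, not anything ZFS- or HFS+-specific.

```python
# Minimal sketch of block-level change detection: hash fixed-size blocks and
# copy only the blocks whose hashes differ. Block size is an arbitrary choice
# for illustration, not anything ZFS- or HFS+-specific.

import hashlib

BLOCK_SIZE = 128 * 1024  # 128 KiB, arbitrary

def block_hashes(data: bytes) -> list[str]:
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

def changed_blocks(old: bytes, new: bytes) -> list[int]:
    old_h, new_h = block_hashes(old), block_hashes(new)
    length = max(len(old_h), len(new_h))
    return [i for i in range(length)
            if i >= len(old_h) or i >= len(new_h) or old_h[i] != new_h[i]]

# A 10 MiB file with one byte modified only needs that one block re-copied.
old = bytes(10 * 1024 * 1024)
new = bytearray(old)
new[5 * BLOCK_SIZE] = 1
print(changed_blocks(old, bytes(new)))   # [5]
```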
> I'm currently testing ZFS and the only issue I see with it is that parity is based on copying. So in case that you loose a drive or two, the data is not lost, but if you replace the data, the old data isn't safe either. A solution for that is to copy the data again on the new resilvered array. Not the best if you ask me.
It has its limitations, no doubt. Personally, I prefer to use hardware when possible, as even with parity-based arrays you can solve the issues with a proper UPS (and a card battery as well, though these may get skipped) and a backup in the event that you do lose power (i.e. during a large write that exceeds the cache, so the card's battery can't contain it, assuming it's run with one).
That's a big plus of a hardware RAID. Rebuilding the array really copies the existing data onto the new drives; ZFS just includes them in the array but does not copy any data, and that's why resilvering generally takes only seconds.
> I'm currently testing ZFS and the only issue I see with it is that parity is based on copying. So in case that you loose a drive or two, the data is not lost, but if you replace the data, the old data isn't safe either.
What you've just typed above makes no sense to me.
> A solution for that is to copy the data again on the new resilvered array. Not the best if you ask me.
What do you think "resilvering" does? It restores the integrity of the array. If it's a mirror, it rebuilds the mirror based on the allocated data blocks. If it's a raidz (of whatever level of parity), it rebuilds the required data blocks within the context of the array.
> That's a big plus of a hardware RAID. Rebuilding the array really copies the existing data onto the new drives; ZFS just includes them in the array but does not copy any data, and that's why resilvering generally takes only seconds.
In a traditional hardware RAID, a rebuild reconstructs the entire structure from block 0 to the last block of the partition and/or LUN. So if you have a 10TB array it has to rebuild all 10TB. OTOH, ZFS only rebuilds allocated data blocks. So if you have an array that is 10TB but only 1TB is currently allocated, it only needs to rebuild the 1TB worth of used data blocks. This is why resilvering is often faster. With ZFS, resilver time is a function of the number of allocated data blocks, not necessarily the size of the LUN.
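As a toy comparison of the two rebuild strategies, using the 10TB/1TB figures above and an assumed 150 MB/s of sequential throughput (an illustrative number, not a measurement):

```python
# Toy comparison of rebuild times: a traditional RAID rebuild walks the whole
# device, while a ZFS resilver only walks allocated data. The 150 MB/s figure
# is an assumed sequential throughput, not a measurement.

def hours(bytes_to_rebuild: float, throughput_mb_s: float = 150.0) -> float:
    return bytes_to_rebuild / (throughput_mb_s * 1e6) / 3600

TB = 1e12
array_size = 10 * TB      # size of the LUN
allocated  = 1 * TB       # data actually written

print(f"full-device rebuild: ~{hours(array_size):.1f} h")     # ~18.5 h
print(f"allocated-only resilver: ~{hours(allocated):.1f} h")  # ~1.9 h
```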
> What you've just typed above makes no sense to me.
Of course, the number of drives you can lose depends on the kind of setup you're running.
If you build a raidz (single parity) and you lose (not loose) 2 drives, you lose data. If you build a raidz2, it's 3 drives before data loss. With raidz3, it's 4 drives. Obviously the cost per GB of storage goes up with each additional parity drive added. But if you really need that level of redundancy then the cost becomes irrelevant.
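A quick sketch of that capacity and cost trade-off. The drive count, size, and price are assumed example figures, and real usable space also loses a bit to metadata overhead.

```python
# Rough capacity/cost math for raidz levels. Drive size and price are assumed
# example figures; real usable space also loses some overhead to metadata.

DRIVES = 8
DRIVE_TB = 2.0
DRIVE_COST = 120.0   # assumed price per drive

for name, parity in (("raidz", 1), ("raidz2", 2), ("raidz3", 3)):
    usable_tb = (DRIVES - parity) * DRIVE_TB
    cost_per_tb = DRIVES * DRIVE_COST / usable_tb
    print(f"{name}: survives {parity} failed drive(s), "
          f"~{usable_tb:.0f} TB usable, ~${cost_per_tb:.0f}/TB")
```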
I hope I can. When it rebuilds, it does in fact restore the "old" data.
Perhaps you could clarify your statement.
It doesn't. ZFS uses asynchronous writes, not synchronous writes as is done in hardware implementations. So only a portion is written (to the new disk), not the entire array. That's why it's faster.
I tested ZFS first with the current FreeNAS build as a VM before I installed it on a physical machine. I configured it with 8 virtual drives and set up a raidz2. Then I copied about 100GB of data onto that pool (mounted via AFP). Then I disconnected drives randomly. First drive out, everything's fine. Second drive out, 6 left, which is supposed to still work, and it indeed does.
Alright, then I replaced the two drives; ZFS automatically resilvered the array (it took a second or so, far too little time to have copied any data at all) and it was labeled online again.
Then I popped out another drive and that did the job. Array offline.
No "restoring" process in the classical sense, is it?
What I'd expect is that resilvering actually makes the old data available again on all of the remaining disks, but apparently it does not.
I'm not sure if FreeNAS screwed it up, but I hope that you can clear that up!
Haven't tested the resilvering process on a physical machine yet.