If you're going for redundancy, avoid RAID5.
At 4+TB, disks are becoming large enough that you begin running two risks: hitting an unrecoverable read error (URE) mid-rebuild, and rebuild times long enough to stress the surviving drives into a second failure.
And the larger the disks get, the worse the second problem becomes, even with pro-grade drives.
You're better off going with a RAID10 setup.
Better performance, and more resilient to drive loss (you can, conceivably, lose two drives from the array and still operate).
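To put a number on that "conceivably": a second failure in RAID10 only kills the array if it hits the mirror partner of the drive that already died. A minimal sketch of the odds, assuming simple two-way mirrors and uniformly random, independent failures (real failures are often correlated, so treat this as a best case):

```python
# Rough odds that a RAID10 array survives a second random drive failure.
# Assumes two-way mirrors and uniformly random, independent failures.

def raid10_second_failure_survival(n_drives: int) -> float:
    """After one drive dies, the array survives a second failure unless
    that failure lands on the dead drive's mirror partner."""
    if n_drives < 4 or n_drives % 2:
        raise ValueError("RAID10 needs an even number of drives, >= 4")
    # n_drives - 1 drives remain; exactly 1 of them is the critical partner.
    return (n_drives - 2) / (n_drives - 1)

for n in (4, 6, 8, 12):
    print(f"{n} drives: {raid10_second_failure_survival(n):.0%} "
          "chance the second loss is survivable")
```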
Also, if you're doing something like FreeNAS/NAS4Free and ZFS? You're going to need more RAM.
ZFS requires 1GB per 1TB of usable disk space, plus you'll want some spare memory capacity for regular functions as well. Plan for 16GB.
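That sizing works out like this (a trivial sketch; the 1GB-per-1TB figure is the usual rule of thumb rather than a hard ZFS requirement, and the 8GB base is my assumption for the OS and regular functions):

```python
# Back-of-the-envelope ZFS RAM sizing from the 1GB-per-1TB guideline.

BASE_GB = 8           # assumed headroom for the OS and regular functions
GB_PER_TB_USABLE = 1  # common ZFS rule of thumb, not a hard requirement

def recommended_ram_gb(usable_tb: float) -> float:
    return BASE_GB + usable_tb * GB_PER_TB_USABLE

print(recommended_ram_gb(8))  # 8TB usable -> 16.0GB, hence "plan for 16GB"
```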
I DEFINITELY recommend going ECC. The BIG problem with a filesystem like ZFS is bit rot. So you want to do what you can to minimize errors at ANY point in the pipeline.
Unfortunately, DDR4 ECC is *NOT* cheap or easy to source.
Also, I do NOT recommend the Z270 platform.
It's a nice enthusiast platform. But trying to get BSD up and running on it properly (with all the "new" equipment loaded in) will be like pulling teeth through your urethra... with a backhoe.
Take a look at THESE server-grade boards. They're not anything "special". But they'll deliver what you need at a sensible price point.
These are also micro-ATX boards. So you won't be stuck building in some huge, overkill server case if you don't want to.
It's also expandable. They have 8 RAID-enabled SATA ports, and some of the more expensive ones ALSO have 8 RAID-enabled SAS ports. So you can play with multiple tiers of provisioning with something like these bad boys.
Thanks a bunch for the detailed response. That's a wealth of information.
I am leaning towards ECC. For that I am looking at
Thanks again for your reply.
I'll reconsider the amount of RAM. Maybe my mentality is not quite server oriented; I just think that putting in more than 8GB is too much for what I want the box to do: store and serve files for a few devices around the house.
For some reason, none of your links worked for me, so I have no idea about the motherboard and RAM modules that you wanted me to look at. I live in Canada, so availability is not the same anyway. I chose the ASUS motherboard because it is reasonably priced and available. The board seems to be server grade; it has the C236 chipset and supports Xeon processors.
Fricking add pass-thrus...
These bad boys: https://tinyurl.com/SeagateCheetah15K
I think the commentary on RAID 5 is correct; with BIG drives, rebuild time for a gigantic array is extensive, and is more prone to completely borking since you're taxing the other disks. I do think that it's overstated though; I've rebuilt RAID 6 arrays (same as RAID 5, but instead of one parity drive there are two) with 12 disks a number of times, and though it's not fast it does succeed. I'm a big fan of RAID 10 though, especially if you can sacrifice the disk space; the speed boost is real nice, especially when you're talking about writes.
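For the disk space you're sacrificing, the standard capacity formulas tell the story. A quick sketch (12 x 4TB is just an example configuration, and RAID10 assumes two-way mirrors):

```python
# Usable capacity and guaranteed fault tolerance for a 12 x 4TB array,
# using the standard formulas for each RAID level.

N, SIZE_TB = 12, 4

levels = {
    # level: (usable TB, failures ALWAYS survivable)
    "RAID5":  ((N - 1) * SIZE_TB, 1),
    "RAID6":  ((N - 2) * SIZE_TB, 2),
    "RAID10": ((N // 2) * SIZE_TB, 1),  # often survives more, if the
}                                       # losses hit different mirror pairs

for level, (usable, tolerance) in levels.items():
    print(f"{level}: {usable}TB usable, survives any {tolerance} failure(s)")
```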
The problem isn't RAID5 itself. It's the size of the disks vs the overall number of disks in the array.
With a 12 disk RAID, you're spreading out your data MUCH further than you are with a 3 drive RAID. And with double-parity, you have more recovery chances.
With a 4TB drive, the URE rate is something like 1 in 10^14 bits read. This gives you an almost guaranteed error per ~12.5TB read: three complete read cycles of a single 4TB drive, plus a bit.
Since your RAID is still only 8TB, your chances of tripping a URE aren't terrible. And the more drives and parity stripes you add, the better your chances of not hitting a truly unrecoverable error.
But with 6, 8, and even the newer 10+TB drives, the URE rate is STILL only about 1 in 10^14 (some better drives are rated 1 in 10^15, giving you a bigger margin of safety).
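To put numbers on that spec, a minimal sketch, assuming UREs occur at exactly the quoted rate as independent events (vendors only promise "less than", so reality is usually a bit better):

```python
# Expected unrecoverable read errors for reading a given amount of data,
# at a spec-sheet rate of 1 error per 10^14 bits read.

URE_RATE = 1e-14  # errors per bit read (1e-15 for the better drives)

def expected_ures(terabytes_read: float, rate: float = URE_RATE) -> float:
    bits = terabytes_read * 1e12 * 8  # TB -> bytes -> bits
    return bits * rate

print(expected_ures(12.5))  # ~1.0, the "almost guaranteed" 12.5TB point
print(expected_ures(8.0))   # ~0.64 expected errors per full read of the 8TB array
```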
Why are these figures not improving along with the rest of the tech? Because the areal density of the drives keeps going up.
So, essentially, it's the same mechanism; they're just cramming more bits into the same space.