1. Pretty much. RAID 5 on an embedded controller like that will be OK on reads, and pretty crappy on writes, especially with those disks. However, it's definitely doable (I have a similar setup on my HTPC, though with older 2TB drives), and as long as you're OK with things not being lightning fast it should be pretty good. You could get a cheap RAID controller if you want some advanced functionality (LSI or HPE, for example), though all the benefit you'd see on something like this would be a snazzier RAID interface and better overall performance.
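To put a rough number on the RAID 5 write hit: each small random write on RAID 5 costs four back-end I/Os (read old data, read old parity, write new data, write new parity), versus two on RAID 10. A back-of-the-envelope sketch; the 4-disk count and 75 IOPS per drive are assumed values for a consumer 7200 RPM setup, not figures from this thread:

```python
# Rough small-random-write throughput estimate for RAID 5 vs RAID 10.
# Assumed values (hypothetical): 4 disks, ~75 IOPS each (7200 RPM consumer drive).
disks = 4
iops_per_disk = 75

raw_iops = disks * iops_per_disk

# RAID 5 small write = read data + read parity + write data + write parity = 4 I/Os.
raid5_write_penalty = 4
raid5_write_iops = raw_iops / raid5_write_penalty

# RAID 10 small write = write to both halves of a mirror = 2 I/Os.
raid10_write_penalty = 2
raid10_write_iops = raw_iops / raid10_write_penalty

print(raid5_write_iops)   # 75.0
print(raid10_write_iops)  # 150.0
```

Same spindles, half the effective write IOPS on RAID 5 - which is why those embedded controllers feel so sluggish on writes.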
2. Yeah, looks like NAS4Free is a FreeBSD-based distribution geared towards NAS. You could also look at FreeNAS.
Unless you're dead-set on the Z270 you could probably get away with something much cheaper. I'm sure the 7100 isn't pricey, but for what this is going to be, assuming it's just a server, you could easily get away with an Intel 6-series and an older board, or just one that's not as feature-rich as a TUF.
Thanks for the reply! I really just need to store files so that I can access them on my PC, etc. around the house. RAID 5 is to give me some measure of security in case of disk failure. I'm also looking at off-site backup, e.g. S3, or an external HDD docking station (and put the backup drive(s) in the safety deposit box). Tons to learn.
The motherboard was chosen based on price and the promise of 24/7 reliability. I'm actually thinking of getting a board that supports ECC DRAM, but haven't decided yet. The Core i3 is not too expensive and it is current enough that I may use it for something else down the line.
If you're going for redundancy, avoid RAID5.
At 4+TB, disks are becoming large enough that you begin running the risk of two problems: rebuilds that take a very long time, and hitting an unrecoverable read error (URE) partway through a rebuild.
And the larger the disks get, the worse the second problem becomes. Even with pro-grade drives.
You're better off going with a RAID10 setup.
Better performance, and more resilient to drive loss (you can, conceivably, lose two drives from the array and still operate).
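To make the "conceivably lose two drives" claim concrete, here's a quick enumeration for a hypothetical 4-disk RAID 10 laid out as two mirrored pairs. The array only dies if both failed drives happen to be the same mirror pair:

```python
from itertools import combinations

# Hypothetical 4-disk RAID 10: disks 0+1 mirror each other, 2+3 mirror each other.
mirror_pairs = [{0, 1}, {2, 3}]
disks = [0, 1, 2, 3]

survivable = 0
total = 0
for failed in combinations(disks, 2):
    total += 1
    # The array is lost only if an entire mirror pair fails together.
    if set(failed) not in mirror_pairs:
        survivable += 1

print(f"{survivable}/{total} two-drive failures are survivable")  # 4/6
```

So two-thirds of the possible double failures leave a 4-disk RAID 10 running, whereas any second failure during a RAID 5 rebuild kills the array.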
Also, if you're doing something like FreeNAS/NAS4Free and ZFS? You're going to need more RAM.
A common rule of thumb for ZFS is 1GB of RAM per 1TB of usable disk space, plus you'll want some spare memory capacity for regular functions as well. Plan for 16GB.
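That 1GB-per-1TB figure is a community rule of thumb rather than a hard ZFS requirement, but it's easy to sanity-check for a given build. A minimal sketch, assuming a hypothetical 8GB baseline for the OS and other services:

```python
def recommended_ram_gb(usable_tb, base_gb=8):
    """Rule-of-thumb RAM sizing for a ZFS NAS:
    ~1 GB per TB of usable space, on top of an assumed
    baseline (8 GB here) for the OS and other services."""
    return base_gb + usable_tb

# e.g. 3x4TB in RAID-Z1 gives roughly 8 TB usable:
print(recommended_ram_gb(8))  # 16
```

Which is how you land on the 16GB figure for an 8TB-usable pool.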
I DEFINITELY recommend going ECC. The BIG problem with a filesystem like ZFS is bit rot. So you want to do what you can to minimize errors at ANY point in the pipeline.
Unfortunately, DDR4 ECC is *NOT* cheap or easy to source.
Also, I do NOT recommend the Z270 platform.
It's a nice, enthusiast platform. But trying to get BSD up and running on it properly (with all the "new" equipment loaded in) will be like pulling teeth through your urethra...with a backhoe.
Take a look at THESE server-grade boards. They're not anything "special". But they'll deliver what you need at a sensible price point.
These are also micro-ATX boards. So you won't be stuck building in some huge, overkill server case if you don't want to.
It's also expandable. They have 8 RAID-enabled SATA ports, and some of the more expensive ones have 8 RAID-enabled SAS ports as well, in case you want to play with multiple tiers of provisioning for something like these bad boys.
Thanks a bunch for the detailed response. That's a wealth of information.
I am leaning towards ECC. For that I am looking at
Thanks again for your reply.
I'll reconsider the amount of RAM. Maybe my mentality is not quite server-oriented; I just think that putting in more than 8GB is too much for what I want the box to do: store and serve files for a few devices around the house.
For some reason, none of your links worked for me, so I have no idea about the motherboard and RAM modules that you wanted me to look at. I live in Canada, so availability is not the same anyway. I chose the ASUS motherboard because it is reasonably priced and available. It seems to be a server-grade board, which has the C236 chipset and supports Xeon processors.
Fricking add pass-thrus...
These bad boys: https://tinyurl.com/SeagateCheetah15K
I think the commentary on RAID 5 is correct; with BIG drives, rebuild time for a gigantic array is extensive, and is more prone to completely borking since you're taxing the other disks. I do think that it's overstated though; I've rebuilt RAID 6 arrays (same as RAID 5, but instead of one parity drive there are two) with 12 disks a number of times, and though it's not fast it does succeed. I'm a big fan of RAID 10 though, especially if you can sacrifice the disk space; the speed boost is real nice, especially when you're talking about writes.
The problem isn't the RAID5 itself. It's the size of the disks themselves vs. the overall number of disks in the RAID.
With a 12 disk RAID, you're spreading out your data MUCH further than you are with a 3 drive RAID. And with double-parity, you have more recovery chances.
With a 4TB drive, the URE rate is something like 1 in 10^14 bits read. This gives you an almost guaranteed error per ~12.5TB read, i.e. a bit over 3 complete read cycles of the drive.
Since your RAID is still only 8TB, your chances of tripping a URE aren't terrible. And the more drives and parity stripes you add, the better your chances of recovering from a read error rather than hitting a truly unrecoverable one.
But, with 6, 8 and even the newer 10+TB drives, the URE rate is STILL only about 1 in 10^14 (some better drives are rated 1 in 10^15, giving you a bigger margin of safety).
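Working through that math: 1 in 10^14 means one expected error per 10^14 bits (~12.5TB) read. For a full read of an 8TB array, which is roughly what a RAID 5 rebuild has to do, the odds of getting through with zero UREs are only slightly better than a coin flip. A quick sketch of the calculation, using a Poisson approximation for the zero-error probability:

```python
import math

URE_RATE = 1e-14     # consumer-class drives: ~1 error per 1e14 bits read
BITS_PER_TB = 8e12   # 1 TB = 8 * 10^12 bits (decimal TB)

# One expected error per this many TB read:
tb_per_expected_error = 1 / (URE_RATE * BITS_PER_TB)
print(round(tb_per_expected_error, 1))  # 12.5

# Probability of reading 8 TB (e.g. rebuilding a 3x4TB RAID 5)
# with zero UREs, via the Poisson approximation exp(-expected_errors):
bits_read = 8 * BITS_PER_TB
p_clean = math.exp(-URE_RATE * bits_read)
print(round(p_clean, 3))  # 0.527
```

Roughly a 47% chance of at least one URE over a full-array read at the 10^14 rating; at 10^15 the same read is about 94% likely to come back clean, which is why those better-rated drives matter.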
Why are these figures not improving with the improvements in tech? Because the areal density of the drives keeps going up.
So, essentially, they're the same mechanism, they're just cramming more into the same space.