The Commport

Hosted by [PC][Ch]amsalot (CHAMSALOT)

Free support for first-time computer users as well as the experts!



Building new home server with RAID 5    Building/Modding/Overclocking

Started 12/4/17 by Joe PC User (JoePCUser); 392 views.

Hi all,

I'm planning on building a new home file server with RAID 5. At this point, I'm thinking of using NAS4Free, but may go for Windows 10. I'm not an expert when it comes to servers, so easy setup and management is preferred. My current server is running Windows Home Server 2003.

The components planned for the new server at this point:

  • Motherboard: ASUS TUF Z270 Mark2
  • CPU: Intel CORE i3-7100
  • Memory: 8GB
  • HDD: WD Red 4TB x 4 for data
  • SSD: 240 GB for the OS and other non-data stuff
  • PSU: EVGA B3 450W
  • OS: NAS4Free

I have 2 questions:

  1. The motherboard supports RAID 5. Does that mean I don't need anything else? I still remember from the good old days that RAID cards were required to set up RAID.
  2. Am I correct to understand that NAS4Free is a complete OS, i.e. it is not installed after, say, having installed Windows 10? 

Thanks a bunch for your help.


From: xSomeguyx


1. Pretty much.  RAID 5 on an embedded controller like that will be OK on reads, and pretty crappy on writes, especially with those disks.  However, it's definitely doable (I have a similar setup on my HTPC, though with older 2TB drives), and as long as you're OK with things not being lightning fast it should be pretty good.  You could get a cheap RAID controller if you want some advanced functionality (LSI or HPE, for example); the benefits you'd see on something like this would be a snazzier RAID interface and better overall performance.
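To put a rough number on that write penalty: a partial-stripe write on RAID 5 costs four disk I/Os (read old data, read old parity, write both back), versus two for a mirrored write on RAID 10. A back-of-envelope sketch in Python — the per-disk IOPS figure is an assumption for illustration, not from this thread:

```python
# Back-of-envelope random-write throughput for a small array.
# RAID 5 pays a 4x write penalty (read data, read parity, write both);
# RAID 10 pays 2x (every write goes to both halves of a mirror).

def effective_write_iops(disk_iops, n_disks, write_penalty):
    """Aggregate random-write IOPS the array can sustain."""
    return disk_iops * n_disks / write_penalty

DISK_IOPS = 75  # assumed figure for a 7200 rpm NAS drive

print(effective_write_iops(DISK_IOPS, 4, write_penalty=4))  # RAID 5  -> 75.0
print(effective_write_iops(DISK_IOPS, 4, write_penalty=2))  # RAID 10 -> 150.0
```

So a 4-drive RAID 5 on random writes delivers roughly the IOPS of a single disk, which is why it feels "pretty crappy on writes" behind a basic embedded controller.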

2. Yeah, looks like NAS4Free is a FreeBSD-based OS geared towards NAS.  You could also look at FreeNAS.

Unless you're dead-set on the Z270 you could probably get away with something much cheaper.  I'm sure the 7100 isn't pricey, but for what this is going to be, assuming it's just a server, you could easily get away with an Intel 6-series CPU and an older board, or just one that's not as feature-rich as a TUF.

Thanks for the reply! I really just need to store files so that I can access them on my PC, etc. around the house. RAID 5 is to give me some measure of security in case of disk failure. I'm also looking at off-site backup, e.g. S3, or an external HDD docking station (and putting the backup drive(s) in the safety deposit box). Tons to learn.

The motherboard was chosen based on price and the promise of 24/7 reliability. I'm actually thinking of getting a board that supports ECC DRAM, but haven't decided yet. The Core i3 is not too expensive, and it is current enough that I may use it for something else down the line.

Thanks again.




If you're going for redundancy, avoid RAID5.
At 4+TB, disks are becoming large enough that you begin running the risk of

  • Additional drive failures due to the loads placed on remaining drives.
  • Unrecoverable read errors (UREs).

And the larger the disks get, the worse the second problem becomes.  Even with pro-grade drives.

You're better off going with a RAID10 setup.
Better performance, and more resilient to drive loss (you can, conceivably, lose two drives from the array and still operate).
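For the 4 x 4TB build under discussion, the capacity trade-off between the two layouts works out like this (a minimal sketch, not from the thread):

```python
# Usable capacity and worst-case drive-loss tolerance for the two layouts.

def raid5_usable_tb(n_disks, size_tb):
    # One disk's worth of capacity is consumed by distributed parity.
    return (n_disks - 1) * size_tb

def raid10_usable_tb(n_disks, size_tb):
    # Half the disks mirror the other half.
    return (n_disks // 2) * size_tb

print(raid5_usable_tb(4, 4))   # 12 TB, survives any single drive failure
print(raid10_usable_tb(4, 4))  # 8 TB, survives one failure for sure, and
                               # two if they land in different mirror pairs
```

So going RAID 10 here gives up 4TB of usable space in exchange for the better write performance and the chance of surviving a second failure.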

Also, if you're doing something like FreeNAS/NAS4Free and ZFS?  You're going to need more RAM.
The usual rule of thumb for ZFS is 1GB of RAM per 1TB of usable disk space, plus you'll want some spare memory capacity for regular functions as well.  Plan for 16GB.

I DEFINITELY recommend going ECC.  The BIG problem a filesystem like ZFS guards against is bit rot, so you want to do what you can to minimize errors at ANY point in the pipeline.

Unfortunately, DDR4 ECC is *NOT* cheap or easy to source.

Also, I do NOT recommend the Z270 platform.
It's a nice enthusiast platform.  But trying to get BSD up and running on it properly (with all the "new" equipment loaded in) will be like pulling teeth through your urethra...with a backhoe.

Take a look at THESE server-grade boards.  They're not anything "special".  But they'll deliver what you need at a sensible price point.

These are also micro-ATX boards.  So you won't be stuck building in some huge, overkill server case if you don't want to.
It's also expandable.  They have 8 RAID-enabled SATA ports, and some of the more expensive ones ALSO have 8 RAID-enabled SAS ports.  So if you want to play with multiple tiers of provisioning, these bad boys will do it.

  • Edited December 6, 2017 2:32 am  by  THECOM

Thanks a bunch for the detailed response. That's a wealth of information.

I am leaning towards ECC. For that I am looking at Asus P10S-V/4L motherboard (for price and availability).

Your comment on the amount of RAM required gave me pause. 16GB of ECC RAM for 8TB of storage space with a 4 x 4TB RAID array (according to an online RAID capacity calculator) is a lot of $$. I'll have to shell out even more $$ if I want to expand the array's capacity. I'm thinking of using Windows 10 instead then.

That leads to another question: I've been reading on how to set up RAID, but I'm a bit confused. It seems that if there is hardware RAID support (the ASUS mobo above has it) then there is nothing that needs to be done from the OS, specifically Windows 10, which offers software RAID. I read that Windows 10 recognizes RAID controllers (installing drivers as required), in which case nothing further needs to be done (i.e. it is not necessary to set up RAID through Disk Management). So, putting all these together, is it correct to say that I can use Windows 10 as the OS with a RAID 10 setup as long as I enable that in the motherboard's RAID controller ROM?

If I can use Windows 10 with hardware RAID, can I install the OS on a separate SSD and leave the RAID array for data only?

Many thanks for your advice and help.




Registered ECC DDR4 isn't THAT terribly expensive.

8GB Modules

16GB Modules

This is why I recommend the server-grade boards instead of the workstation-grade stuff.

And YES, with hardware RAID, you can:

  1. Install to the SSD
  2. Set up the RAID in the BIOS and then make it available to the OS.

Thanks again for your reply.

I'll reconsider the amount of RAM. Maybe my mentality is not quite server-oriented; I just think that putting in more than 8GB is too much for what I want the box to do: store and serve files for a few devices around the house.

For some reason, none of your links worked for me, so I have no idea about the motherboard and RAM modules that you wanted me to look at. I live in Canada, so availability is not the same anyway. I chose the ASUS P10S-V/4L motherboard because it is reasonably priced and available. The board seems to be a server-grade board, which has the C236 chipset and supports Xeon processors.





Fricking ad pass-thrus...

These bad boys:
There, that should cover the 4 links I gave you, in order.

I'd give you pointers to SAS SSDs, but their pricing is TOTALLY out of whack.
  • Edited December 6, 2017 6:59 pm  by  THECOM

From: xSomeguyx


I think the commentary on RAID 5 is correct; with BIG drives, rebuild time for a gigantic array is extensive, and it's more prone to completely borking since you're taxing the other disks.  I do think that it's overstated, though; I've rebuilt RAID 6 arrays (same as RAID 5, but with two parity blocks per stripe instead of one) with 12 disks a number of times, and though it's not fast, it does succeed.  I'm a big fan of RAID 10, though, especially if you can sacrifice the disk space; the speed boost is real nice, especially when you're talking about writes.

The problem isn't the RAID 5 itself.  It's the size of the disks themselves vs the overall number of disks in the RAID.
With a 12-disk RAID, you're spreading out your data MUCH further than you are with a 3-drive RAID.  And with double parity, you have more recovery chances.
With a 4TB drive, the URE rate is something like 1 in 10^14 bits.  That gives you an almost guaranteed error after about 12.5TB read (3 plus a bit full passes of a 4TB drive).

Since your RAID is still only 8TB, your chances of tripping a URE aren't terrible.  The more drives you add, and the more parity stripes you add, the better your chances of not hitting a truly unrecoverable error.

But with 6, 8 and even the newer 10+TB drives, the URE rate is STILL only about 1 in 10^14 (some better drives are rated 1 in 10^15, giving you a bigger margin of safety).
Why are these figures not improving along with the tech?  Because the areal density of the drives keeps going up.
So, essentially, they're the same mechanism; they're just cramming more into the same space.
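The URE arithmetic above, spelled out in Python (a sketch of the standard back-of-envelope calculation; the spec figure of one error per 10^14 bits is the consumer-drive rating discussed in the thread):

```python
import math

URE_BITS = 1e14  # consumer spec: one unrecoverable read error per 1e14 bits

def tb_per_expected_error(bits_per_error):
    """Terabytes you can read, on average, before hitting one URE."""
    return bits_per_error / 8 / 1e12   # bits -> bytes -> terabytes

def p_clean_read(tb_read, bits_per_error=URE_BITS):
    """Chance of reading tb_read TB without a single URE (Poisson approx.)."""
    bits = tb_read * 1e12 * 8
    return math.exp(-bits / bits_per_error)

print(tb_per_expected_error(URE_BITS))  # 12.5 TB, i.e. "3 plus a bit"
                                        # full passes of one 4 TB drive
print(round(p_clean_read(8.0), 2))      # odds of one clean pass over the 8 TB array
```

Note the odds of reading the whole 8TB array once without a URE are only around 50/50 at the 10^14 rating, which is the core of the argument against single-parity RAID 5 on big disks.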

  • Edited December 8, 2017 6:50 pm  by  THECOM