I had an opportunity this quarter to build up a new machine to do our FreeBSD builds at work, and wanted to see how far I could take ZFS on high-end OEM hardware.
After evaluating HP and Dell gear, I settled on the Dell R720xd as my platform. Primarily, this was due to the lack of *real* JBOD support on the HP line of SAS controllers. The Dell H310 has a “SYSPD” mode in mfi(4) that exposes the raw disks to the OS and bypasses the RAID capabilities entirely. I went ahead and modified the FreeBSD mfiutil(8) tool to allow run-time configuration into this mode: http://svnweb.freebsd.org/base?view=revision&revision=254906
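Something along these lines flips the drives into SYSPD mode at run time (a sketch; the exact subcommand syntax may vary by FreeBSD version, so check mfiutil(8) on your system):

    # Sketch, assumed subcommand syntax: put each of the 12 physical drives
    # into SYSPD (JBOD) mode so it shows up to the OS raw as /dev/mfisyspdN.
    for d in 0 1 2 3 4 5 6 7 8 9 10 11; do
        mfiutil syspd $d
    done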
I ended up with 64 GB of RAM and two CPUs: Intel(R) Xeon(R) CPU E5-2630 0 @ 2.30GHz (2300.05-MHz K8-class CPU). I stacked in 12x 3 TB SAS drives (really just SATA drives with SAS firmware, but hey, they cost WAY MORE).
I set up the zpool with 2x raidz1 vdevs this go-around. There was some debate between me and my colleagues over whether I should have gone with a single raidz2 vdev instead. That layout theoretically handles failures better, since its 2 parity disks can cover any two drive failures in the pool, but I went with 2x vdevs of 1 parity drive each because of how much write activity building 7 different FreeBSD distributions simultaneously generates: ZFS stripes writes across vdevs, so two vdevs roughly double the pool's write throughput.
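The pool creation boils down to roughly the following sketch (glossing over partitioning and boot code; the p3 partitions are the ones you'll see in the status output below):

    # Two 6-disk raidz1 vdevs in one pool. ZFS stripes writes across the
    # vdevs, so the parallel build load spreads over both of them.
    zpool create zroot \
        raidz1 mfisyspd0p3 mfisyspd1p3 mfisyspd2p3 \
               mfisyspd3p3 mfisyspd4p3 mfisyspd5p3 \
        raidz1 mfisyspd6p3 mfisyspd7p3 mfisyspd8p3 \
               mfisyspd9p3 mfisyspd10p3 mfisyspd11p3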
I ended up with a zpool that looks like this:
  pool: zroot
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Wed Aug 21 15:55:09 2013
config:

        NAME              STATE     READ WRITE CKSUM
        zroot             ONLINE       0     0     0
          raidz1-0        ONLINE       0     0     0
            mfisyspd1p3   ONLINE       0     0     0
            mfisyspd2p3   ONLINE       0     0     0
            mfisyspd3p3   ONLINE       0     0     0
            mfisyspd4p3   ONLINE       0     0     0
            mfisyspd5p3   ONLINE       0     0     0
            mfisyspd0p3   ONLINE       0     0     0
          raidz1-1        ONLINE       0     0     0
            mfisyspd6p3   ONLINE       0     0     0
            mfisyspd7p3   ONLINE       0     0     0
            mfisyspd8p3   ONLINE       0     0     0
            mfisyspd9p3   ONLINE       0     0     0
            mfisyspd10p3  ONLINE       0     0     0
            mfisyspd11p3  ONLINE       0     0     0
Sexy.
Performance-wise, this machine now spits out our production images in about 95 minutes, as opposed to the 255 minutes from before. It's a complete dead lift of hardware, though: new CPUs, new disks, more RAM, a different filesystem, etc. I'm pretty happy with it, but of course it's Cadillac prices, so your mileage will vary.
3 responses to “Cadillac ZFS #FreeBSD”
If you really want to crank up the throughput, consider running a test using 6x mirror vdevs. 🙂 Then you’ll really see what the hardware is capable of. 😀
Just be warned … once you do mirror, you’ll never do raidz again. At least not where performance is concerned. For bulk storage, raidz2 works “good enough”, especially if you can get it to use 4 or more vdevs (we use 24- and 45-bay chassis for bulk storage).
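For the curious, the 6x mirror layout suggested above would be built roughly like this (a sketch, using the device names from the post and a placeholder pool name 'tank'):

    # Six 2-way mirrors: only 18 TB usable from 12x 3 TB drives, but reads
    # and writes stripe across all six vdevs, so random I/O scales far better.
    zpool create tank \
        mirror mfisyspd0p3 mfisyspd1p3 \
        mirror mfisyspd2p3 mfisyspd3p3 \
        mirror mfisyspd4p3 mfisyspd5p3 \
        mirror mfisyspd6p3 mfisyspd7p3 \
        mirror mfisyspd8p3 mfisyspd9p3 \
        mirror mfisyspd10p3 mfisyspd11p3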
Technically, the ‘optimal’ setup (other than 6x mirrors) for 12 drives would be 2x RAID-Z2. In RAID-Z you generally want the number of non-parity drives to be a power of 2, so that the default 128 KiB record splits evenly across the data drives (e.g., a 6-drive RAID-Z2 vdev has 4 data drives, and 128 KiB / 4 = 32 KiB per drive).
Of course this would have left you with only 24 TB usable (8 data drives x 3 TB), compared to the 30 TB (10 data drives x 3 TB) from the 2x Z1 you built or the 1x Z2 you were considering.
Most of my ZFS boxes have 96 or 144 GB of RAM, but are otherwise very similar.
Yes, but did you *pay* as much for your machines? 🙂