Quite a few cool posts on home labs lately have triggered this post about vSphere homelabs in 2014 – designs and evolution. The first one I spotted was on Erik Bussink’s blog yesterday, which talks about the shift in homelab designs. Erik is right when he says that the requirements for a homelab are exploding. And he has just published a new post on his homelab upgrade for 2014 – check it out.
As VSAN does not support the AHCI SATA standard, the only way to go is via external IO cards, or to get a motherboard with a supported controller (as is the case with some of the Supermicro server boards). But if you don’t usually design your homelab with server hardware, you might want to check my post My VSAN journey – part 3, where you’ll see how to search the VMware HCL page for a compatible IO storage controller.
Note that you don’t have to ruin yourself buying IO cards – I picked up used ones on eBay for $79 each, an IO controller which is on the VSAN HCL. It might not be the most performant one, but it has been on the list of VSAN HCL IO cards since the early beta. Check out Wade Holmes’s post on the vSphere Blog.
As for my lab, for now I have tried to upgrade instead of replace. I think it will be more difficult in the future, as two of my Nehalem i7 hosts built in 2011 max out at 24 GB of RAM, and you can be pretty sure that 24 GB of RAM per host will not give you much to play with as RAM requirements keep growing. I can still scale out, right… and put in another VSAN node if I need more RAM and compute in the future.
The second point Erik raises in his article is network driver support. With vSphere 5.5 many of the drivers are no longer included, as VMware is moving to the new native driver architecture. So the same punishment: you must search the HCL for supported NICs to use as add-on PCI or PCI-e cards inside your whitebox. That’s what I’m doing as well – I got gigabit NICs which are on the HCL. The trick of installing a VIB for built-in Realtek NICs has worked until now, but what about vSphere 6.0?
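For reference, the Realtek VIB trick usually boils down to lowering the host’s acceptance level and installing a community driver package from the ESXi shell – a minimal sketch, where the VIB file name is just a placeholder (grab the actual one matching your Realtek chip):

```shell
# Allow community-supported VIBs on the host (run from ESXi shell or SSH)
esxcli software acceptance set --level=CommunitySupported

# Install the driver VIB previously copied to the host
# (the file name below is hypothetical - use the real one for your NIC)
esxcli software vib install -v /tmp/net-r8168-driver.vib

# A reboot is needed before the NIC shows up
reboot
```

Keep in mind that such community drivers are unsupported by VMware, which is exactly why an HCL-listed add-on NIC is the safer bet going forward.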
Now I want to push my thoughts further on which future design to pick. You might or might not agree, but you must agree on one (painful) point: the homelab is getting pricey…
First I had a single host, then two, three, and now I’m running four whiteboxes. Not everything is up and running yet. When all the parts I ordered show up, I’ll be able to run a two-cluster design: the first cluster with a single host as a management server (hosting vCenter, a DC, vCOPs, a backup server, etc.) and a second cluster with three hosts for VSAN.
Like this I can “mess” with VSAN and break/reinstall whatever I like. The goal is finally to have my VSAN back-end traffic (with speedy 10 + 10 Gig connections) hooked to my Topspin 120 InfiniBand switch. (Note that I also ordered quieter fans on eBay – the device is just “bloody” noisy.) For now I have only tested 10 Gig speed between two nodes with a direct connection, as you can see in this video…
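Once the vmkernel ports sit on the InfiniBand uplinks, tagging them for VSAN back-end traffic is a one-liner per host – a quick sketch, assuming the VSAN-facing interface is vmk2 (adjust to your own setup):

```shell
# Tag the vmkernel interface carrying VSAN back-end traffic (vSphere 5.5 syntax)
esxcli vsan network ipv4 add -i vmk2

# Verify which vmkernel interfaces are tagged for VSAN
esxcli vsan network list
```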
A quick note concerning the compatibility of the InfiniBand PCI-E 4X DDR dual-port host channel adapter cards: the cards showed up out of the box with the latest vSphere 5.5 U1 release. No tweaks were necessary.
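If you want to double-check that the HCAs were really picked up, the ESXi shell can confirm it – a quick sketch (device names and the grep pattern will obviously differ depending on your card’s vendor):

```shell
# List all NICs the host detected, including the IB-based 10G uplinks
esxcli network nic list

# Look for the adapter at the PCI level (adjust the pattern to your vendor)
lspci | grep -i infini
```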
The environment is (still) hooked to an older Drobo Elite iSCSI SAN device via a small gigabit switch on a separate gigabit network, mostly for backup purposes and also for testing the PernixData solution. I’ll also be getting a larger gigabit switch, as the SG300-10 I have been using until now is great, but it lacks ports… :-). So I think I’ll just go for its bigger brother, the SG300-28, which is fanless (the PoE model SG300-28P is not).
So, is the future scale up or scale out?
Erik Bussink and Frank Denneman are both, in their posts, going for builds based on the Supermicro X9SRH-7TF motherboard. It’s a nice server motherboard with socket 2011 and the possibility to scale up to 128 GB of RAM (8 × 16 GB). Also, the built-in LSI 2308 IO controller is on the VSAN HCL, which makes the choice almost a no-brainer, but be ready to pay some cash for it, as the board itself is priced at about €482. Frank has listed the node (without spinning storage) at about €1800 (with 64 GB of RAM). Oh, by the way, if IPMI is something that would change your mind, the board has it.
In my series of articles with the fancy title My VSAN journey, I try to re-think the design of a low-cost VSAN cluster with (if possible) consumer parts. The vast majority of home enthusiasts might not be able to invest €1800–€1900 per node… and many folks already have some kind of whitebox in their lab anyway, so why not just transform those for VSAN?
Scale out? My criteria were low power and no noise when I originally went for a consumer-based Haswell build, which has enough power with the quad-core i7-4770S, which does have VT-d, 8 MB of cache and hyper-threading (see Intel’s ARK site). Now I’ll add the IO card and another SSD, which will bring the cost from the original $700 to something like $1000–$1100 per node (note that for now I’m still using my old Nehalem i7 boxes). Yes, the 32 GB RAM barrier on Haswell builds is there, no powerful Xeon CPU and no IPMI. Can you live without them?
I know that no hardware is future-proof, but until now I have stayed with consumer-based whiteboxes and hardware, as server hardware is pricier. I haven’t had an SSD failure even though the lab runs 7/7 and I do all sorts of things with it. Is scale out the way to go? If you’re on a budget, yes. You can build a $1000 node quite easily without the wife going through the ceiling…
Scale up? It means a bigger investment in RAM, CPU and motherboard. As you could see, the entry ticket is roughly €1900 per node. The minimum of 3 nodes for a homelab VSAN design might keep some from going this way, even if it’s relatively future-proof. But who knows – 12 months from now the prices might be cut in half.
One could still go and buy a consumer board based on socket 2011 with a Core i7-4820K (4 cores) or Core i7-4930K (6 cores), and then 64 GB of RAM per node. I was thinking at one point that this might be the way to go… but for now I’m trying to use what I’ve got. We’ll see when vSphere 6.0 is released…
I can’t give a straight answer; there are (as usual) many parameters and many factors influencing homelab whitebox builds. The choice of a supported NIC and IO storage card is definitely something to look at much more closely now than was the case two years ago. Low-power builds are still my priority, so the starting point is definitely the CPU, which plays a big role in overall consumption if the box runs 7/7. I know that the best solution for a really low-power VSAN lab would be to go the nested way… :-)
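For completeness, spinning up nested ESXi hosts for such a low-power VSAN lab mainly comes down to exposing hardware-assisted virtualization to the guest – a minimal sketch of the relevant .vmx settings, assuming vSphere 5.x on the physical host:

```
# Expose hardware-assisted virtualization (VT-x/EPT) to the nested ESXi VM
vhv.enable = "TRUE"

# Identify the guest as ESXi 5.x
guestOS = "vmkernel5"
```

The nested route trades raw performance for near-zero extra power draw, which is why it keeps coming up for VSAN testing.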