vSphere Homelabs in 2014 – scale up or scale out?

By Vladan SEGET | Last Updated: February 17, 2017


Quite a few cool posts on home labs lately have triggered this post concerning vSphere Homelabs in 2014 – designs and evolution. The first one I spotted, yesterday on Erik Bussink's blog, talks about a shift in homelab designs. Erik is right when he says that the requirements for a homelab are exploding. And he has just posted a new article on his homelab upgrade for 2014 – check it out.

As VSAN does not support the AHCI SATA standard, the only way to go is via an external IO card, or a motherboard with a supported controller (as is the case with some Supermicro server boards). But if you don't usually build your homelab with server hardware, you might want to check my post My VSAN journey – part 3, where you'll see how to search the VMware HCL page for a compatible IO storage controller.

Note that you don't have to ruin yourself buying an IO card, as I picked up used ones on eBay for $79 each – an IO controller which is on the VSAN HCL. It might not be the most performant one, but it has been on the list of VSAN HCL IO cards since the early beta. Check out Wade Holmes's blog post on the vSphere Blog.

As for now, concerning my lab, I tried to upgrade instead of replace. I think it'll be more difficult in the future, as two of my Nehalem i7 hosts built in 2011 max out at 24 GB of RAM, and you can be pretty sure that 24 GB of RAM per host will not give you much to play with as RAM requirements keep growing. I can still scale out, right… and put in another VSAN node if, in the future, I need more RAM and compute.

The second point Erik makes in his article is network driver support. With vSphere 5.5, many of the drivers are no longer included, as VMware is moving to the new native driver architecture. So it's the same story: you must search the HCL for supported NICs to use as add-on PCI or PCI-e NIC cards inside your whitebox. That's what I'm doing as well – I got gigabit NICs which are on the HCL. The trick of installing a VIB for built-in Realtek NICs has worked until now, but how about vSphere 6.0?
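Side note, not from the original article: before searching the HCL it helps to have a quick inventory of which storage controllers and NICs your whiteboxes actually present to ESXi, and which drivers were loaded for them. A minimal pyVmomi sketch along these lines prints that per host – the vCenter address and credentials below are placeholders for your own lab, not anything from this one:

```python
# Minimal pyVmomi sketch: list each host's storage adapters and physical
# NICs with the drivers ESXi loaded for them, so you know exactly which
# devices/drivers to look up on the VMware HCL.
# Hostname and credentials are placeholders for your own lab.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # homelab: self-signed certs
si = SmartConnect(host="vcenter.lab.local",     # hypothetical vCenter address
                  user="administrator@vsphere.local",
                  pwd="changeme",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        print("==", host.name)
        # IO / storage controllers -- check these against the VSAN HCL
        for hba in host.config.storageDevice.hostBusAdapter:
            print("  HBA", hba.device, "driver:", hba.driver, "model:", hba.model)
        # Physical NICs -- check the driver against the vSphere HCL
        for pnic in host.config.network.pnic:
            print("  NIC", pnic.device, "driver:", pnic.driver)
finally:
    Disconnect(si)
```

The same information is also available per host from the ESXi shell via esxcli storage core adapter list and esxcli network nic list; the script only saves you from logging into every box.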

Now I want to stress my thoughts further on which future design to pick. You might or might not agree, but you must agree on one (painful) point: the homelab is getting pricey…

First I had a single host, then two, then three, and now I'm running four whiteboxes. Not everything is up and running yet. When all the parts I ordered show up, I'll be able to run a two-cluster design: the first cluster with a single host as a management server (hosting vCenter, DC, vCOPs, backup server, etc…) and the second cluster with three hosts for VSAN.

This way I can “mess” with VSAN and break/reinstall whatever I like. The goal is to finally have my VSAN back-end traffic (with speedy 10 + 10 Gig connections) hooked to my Topspin 120 InfiniBand switch. (Note that I also ordered quieter fans on eBay – the device is just “bloody” noisy.) For now I have only tested 10 Gig speed between two nodes with a direct connection, as you can see in this video…

A quick note concerning the compatibility of the InfiniBand PCI-E 4X DDR dual-port host channel adapter cards: they were recognized out of the box with the latest vSphere 5.5 U1 release. No tweaks were necessary.

Infiniband PCI-E 4X DDR Dual Port Storage Host Channel Adapter

The environment is (still) hooked to an older Drobo Elite iSCSI SAN device via a small gigabit switch on a separate gigabit network, mostly for backup purposes and also for testing the PernixData solution. I'll also be getting a larger gigabit switch, as the SG300-10 I have been using until now is great but is lacking ports… -:). So I think I'll just go for its bigger brother, the SG300-28, which is fanless (the SG-300P is not fanless).

So, is the future scale up or scale out?

Erik Bussink and Frank Denneman are both going, in their posts, for Supermicro builds based on the SuperMicro X9SRH-7TF. It's a nice server motherboard with socket 2011 and the possibility to scale up to 128 GB of RAM (8×16 GB). The built-in LSI 2308 IO controller is also on the VSAN HCL, which makes the choice almost a no-brainer, but be ready to pay some cash for it, as the board itself is priced at about €482. Frank has listed the node (without spinning storage) at about €1800, with 64 GB of RAM. Oh, and by the way, if IPMI is something that would change your mind, the board has it.

In my series of articles with the fancy title My VSAN journey, I try to rethink the design of a low-cost VSAN cluster with (if possible) consumer parts. The vast majority of home enthusiasts might not be able to invest between €1800 and €1900 per node… and many folks have some kind of whitebox in their lab anyway, so why not just transform those for VSAN?

DELL PERC H310 – 8-port 6Gb/s PCIe x8 SAS/SATA RAID Controller

Scale out? My criteria were low power and no noise when I originally went for a consumer-based Haswell build, which has enough power with a quad-core i7-4770S that has VT-d, 8 MB of cache and hyper-threading (see Intel's ARK site). Now I'll add the IO card and another SSD, which will bring the cost from the original $700 to something like $1000-$1100 per node (note that for now I'm still using my old Nehalem i7 boxes). Yes, the 32 GB RAM barrier on Haswell builds is there, no powerful Xeon CPU and no IPMI. Can you live without them?

I know that no hardware is future-proof, but until now I have stayed with consumer-based whiteboxes, as server hardware is pricier. I haven't had an SSD failure even though the lab runs 7/7 and I do all sorts of things with it. Is scale out the way to go? If you're on a budget, yes. You can build a $1000 node quite easily without the wife going through the ceiling…

Scale up? That means a bigger investment in RAM, CPU and motherboard. As you could see, the entry ticket is roughly €1900 per node. The minimum of 3 nodes for a VSAN homelab design might deter some from going this way, even if it's relatively future-proof. But who knows, 12 months from now the prices might be cut in half.

One could still buy a consumer socket 2011 board with a Core i7-4820K (4 cores) or a Core i7-4930K (6 cores), then with 64 GB of RAM per node. I was thinking at one point that this might be the way to go… but for now I'm trying to use what I've got. We'll see when vSphere 6.0 is released…
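To put the rough per-node figures quoted above side by side – back-of-the-envelope only, early-2014 street prices, and note the scale-up node carries twice the RAM – a trivial calculation:

```python
# Back-of-the-envelope cost comparison using the rough per-node figures
# quoted in this post (estimates only; currencies kept as quoted).
VSAN_MIN_NODES = 3

scale_out_low, scale_out_high = 1000, 1100   # USD: Haswell whitebox + IO card + SSD (32 GB RAM)
scale_up_node = 1900                         # EUR: X9SRH-7TF build with 64 GB RAM

print("Scale out, 3 nodes: $%d-$%d" % (VSAN_MIN_NODES * scale_out_low,
                                       VSAN_MIN_NODES * scale_out_high))
print("Scale up,  3 nodes: ~EUR %d" % (VSAN_MIN_NODES * scale_up_node))
```

Roughly $3000-$3300 versus about €5700 for the minimum three-node VSAN cluster – which is the whole scale-out argument in one line, as long as 32 GB per host is enough for you.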

Thoughts

I can't give a straight answer; there are (as usual) many parameters and many factors influencing homelab whitebox builds. The choice of a supported NIC and IO storage card definitely deserves a much closer look now than was the case two years ago. Low-power builds are still my priority, so the starting point is definitely the CPU, which plays a big role in overall power consumption if the box is up 7/7. I know that the best solution for a really low-power VSAN lab would be to go the nested way… -:)

Links:

  • Erik Bussink
  • Frank Denneman
  • Wade Holmes (IO cards for VSAN)

Filed Under: Server Virtualization | Tagged With: vSphere Homelabs in 2014

About Vladan SEGET

This website is maintained by Vladan SEGET. Vladan is an independent consultant, professional blogger, vExpert x16, Veeam Vanguard x9, and VCAP-DCA/DCD. The ESX Virtualization site started as a simple bookmarking site, but quickly found a large following of readers and subscribers.

Connect on: Facebook. Feel free to network via Twitter @vladan.

Comments

  1. Nick Morgowicz says

    March 29, 2014 at 9:44 pm

    Hi Vladan,

    You know, an option for storage and high bandwidth could also be 4Gb Fibre Channel. As businesses move up to 16Gb Fibre Channel, the 4Gb hardware has gotten dirt cheap.

    To give you an idea of how much my purchase last month cost, I got a Cisco MDS-9134 Fibre Channel switch, with 24 ports licensed for active connections, for $250 off eBay. That price included 24x 4Gb Cisco SFP modules, but even if it hadn't, you can get the SFPs for $4-12 each.

    I will soon be writing up instructions on how to build and configure everything using OmniOS (or any recent Solaris) + the napp-it software as a Fibre Channel target, but you need to use QLogic HBAs to be able to easily change the driver's functionality from initiator to target. These cards were $15 for single port and $25-35 for dual port; the models are QLE-2460 for single port and QLE-2462 for dual.

    The last thing you need is LC-LC cable, and the cheapest is the 62.5 micron stuff, which can still handle the full 4Gb for up to 50m. If you want to spend more on 50 micron cable, you can get higher bandwidth over longer distances, but it was ~$10-15 for 1m-3m lengths and $20 for 10m.

    Other than that, you need to know how to create a Cisco Fibre Channel fabric, but as I learned, it's not very difficult. Then you use ZFS to create zvols and the COMSTAR iSCSI/FC components of Solaris to tie it all together. All disks show up as “Sun Fibre Channel Disk”, and setting up ALUA + round robin across multiple ports is as simple as zoning an additional HBA to the target.

    I'm really impressed with FC, especially knowing that no matter how I build my homelab out in the future, I can go tall or wide and still have plenty of port density on my fabric!

  2. Vladan SEGET says

    March 30, 2014 at 7:12 am

    Looks like another nice (cheap) alternative for a high-speed storage network for homelabs. InfiniBand is even easier, as the new vSphere 5.5 U1 has the IB drivers baked in and the cards show up right away in the UI – no driver install necessary.

    Whenever you want to share your guide with the public, and if you don't have a blog, contact me for a guest post! I think our readers will be delighted -:).

    • Nick Morgowicz says

      March 30, 2014 at 12:50 pm

      Sorry, I didn't make it clear about the driver part. Much like IB, FC has a lot of built-in, in-the-box support from OS vendors too. When you install the cards, whether you are using Microsoft Server 2012 R2, ESXi 5.5 or 5.5 U1, or even OmniOS, the QLogic QLE2460 or 2462 drivers that the vendor provides will have your card functioning and recognized without making any changes at all.

      What you have to change is on the Solaris side, where you change the driver file attached to your HBA from qlc (QLogic card?) to qlt (QLogic target). FC works similarly to iSCSI, in that you have one target system serving your storage out, and all receiving systems are initiators.

      I believe even when I went to download newer drivers to install in ESXi, VMware had put the latest build of the driver in the 5.5 U1 image. If you want additional monitoring, there is a CIM software pack for the QLogic card that you can install, but I don't know if it's not working right in 5.5 U1 or if I didn't do something right, because I don't see anything more in my hardware provider section after installing that CIM VIB.

      The QLogic (or any vendor's) FC HBA will show up under the storage adapters setting.

      I'll probably take you up on your offer for a place to publish the instructions, but it'll probably be a week or two before I'm ready. I've created a high-level outline already, but between our US Tax Day coming up and my being on-call at work, the next week and a half will be a little hectic.

      Thank you again for the support!

      • Jason Shiplett says

        April 2, 2014 at 11:36 pm

        For those of us who switched over to VSAN for our storage, IB is still the best option for price/performance.

        Even in a situation where you weren’t using distributed storage, I’d argue IB makes a better value prop with affordable 10Gb vs. 4Gb FC.

  3. Tyson Then says

    March 30, 2014 at 12:23 pm

    You don't even need a Fibre Channel switch. I initially had a 16-port-activated Brocade 200e, but it was too noisy at 1 RU. You can do a direct connection between host and SAN (like a crossover). My whitebox Nexenta CE all-flash SAN has two QLogic 2464 cards (8 ports total). My 3 hosts have a QLogic 2464 HBA each: two hosts have 3 paths and one host has 2 paths. I get some really good speeds.

    The spare port on each host is connected to my Tier 2 storage, which is just a 4-drive RAIDZ with SSD caching on another Nexenta CE SAN with a QLogic 2464.
