ESXi Lab

Lab 2015

The 3 lab hosts are based on the Supermicro X10SRH-CLN4F board, which can go up to 512 GB of RAM… (currently only 64 GB of DDR4 each, because of the high prices of DDR4).

Energy Efficient ESXi Home Lab:

  1. Efficient Home Server – Start with an Efficient Power Supply
  2. ESXi Home lab – Get a quality and comfortable case
  3. Supermicro Single CPU Board for ESXi Homelab – X10SRH-CLN4F
  4. Supermicro Single CPU Board for ESXi Homelab – X10SRH-CLN4F – Part 2
  5. Supermicro Single CPU Board for ESXi Home lab – Upgrading LSI 3008 HBA on the X10SRH-CLN4F
  6. Building Energy Efficient ESXi Home Lab – parts list
  7. Homelab – Airflow Solar System
Supermicro X10SRH-CLN4F board

Storage Controller

The LSI Logic Fusion-MPT 12Gb/s SAS 3008 PCI-Express controller is the SAS HBA built onto the board.

LSI 3008

The lab…

Here is the look from the inside. The whole lab “sits” on a garage rack bought at the local hardware store; at the bottom there is a 10Gb switch and on the shelf there are 3 hosts (four actually, as the fourth host is a whitebox – my former Haswell ESXi host). I’m running vSphere 6.0, but the storage configuration isn’t finished yet, as I’m using a partner’s hardware (enterprise-class SATA drives – OCZ SABER 1000) during a test period. The plan is to run either VSAN 6.0 (All-Flash) or Atlantis USX.

Lab ESX Virtualization

Each of the 3 hosts has only its two 10GbE ports plugged in, plus the dedicated IPMI port used by Supermicro motherboards.

The Network

Concerning the network, I took the plunge, broke the bank and went for a 10GbE switch from Cisco – the SG500XG-8F8T. The same switch that Erik Bussink uses in his lab.

Cisco SG500XG-8F8T 10GbE

For the 10GbE network cards I opted for the low-power Intel Ethernet Controller X710-DA2 cards with SFP+ connectors, so each host has two 10Gb ports.
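
If you want to double-check that the X710-DA2 ports are recognized and linked at 10Gb on each host, a quick look from the ESXi shell is enough. A minimal sketch (the vmnic name below is just an example – it may differ on your hosts):

  # List all physical NICs with their driver, link state and speed
  esxcli network nic list

  # Show the details of one of the 10GbE ports (replace vmnic4 with your own uplink)
  esxcli network nic get -n vmnic4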

vSphere 6 is running for the moment in an All-Flash VSAN 6.0 configuration…

The storage in the VSAN All-Flash config (will change; a quick CLI sketch follows the list):

  • Capacity Tier – 9x 256 GB OCZ SABER 1000 [enterprise-grade SATA]
  • Capacity Tier – 3x 480 GB Crucial M500 [consumer-grade SATA]
  • Cache Tier – 3x 120 GB Intel 530 [consumer-grade SATA SSD] – for now…
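
In an All-Flash VSAN 6.0 configuration, the flash devices meant for the capacity tier have to be tagged as capacity flash, otherwise ESXi only offers them as cache devices. Here is a minimal sketch of how this can be done from the ESXi shell – the naa.* identifier is a placeholder, use the device IDs of your own SSDs:

  # Check which local disks ESXi considers eligible for VSAN
  vdq -q

  # Tag an SSD as a capacity-tier device for All-Flash VSAN (device ID is a placeholder)
  esxcli vsan storage tag add -d naa.xxxxxxxxxxxxxxxx -t capacityFlash

  # Verify the tag was applied (look for the IsCapacityFlash field)
  vdq -q -d naa.xxxxxxxxxxxxxxxx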

VMware vSphere 6 with VSAN All-Flash

The Solar Airflow system is an autonomous solar-powered extractor…

Solar Extracting System


Lab 2014:

This was my lab in 2014.

  • Haswell i7 whitebox with 32 GB of RAM.
  • 2 physical ESX(i) hosts with 24 GB of RAM each, plus the Haswell whitebox with 32 GB of RAM, participating in a VSAN cluster.
  • 1 whitebox with 16 GB of RAM running vCenter, a DC, a backup server VM and some other VMs, in a management cluster.
  • 1 switch (Cisco SG300-28) – an L3 switch running VLANs with inter-VLAN routing.
  • Plus one Drobo Elite iSCSI SAN with SATA disks. Slow, but good for storing the backups of my critical VMs.

The network is now handled by the SG300-28 Cisco Small Business switch. It’s the bigger brother of the SG300-10, whose configuration you can follow here – My homelab – The Network design with Cisco SG 300 – a Layer 3 switch for €199 (with a complete schema).

Both models, the SG300-10 and SG300-28, are fanless, so there is no noise at all… :-)

The SG300-28 configuration can be done entirely through the GUI, but when you switch the unit from L2 to L3 mode you lose all your settings, which is why it’s best to do this as the very first step. The CLI is also a possible way to do the config, but you must know the commands…

VSAN in the Homelab – the whole series:

I had a little trouble with a memory channel per bank going inactive when using the Dell PERC H310 controller cards with my two Nehalem boxes – Fixed! – but I’m glad it’s been “solved”… Those cards are on the VMware HCL for VSAN. They are quite cheap (look on eBay), but not the most performant ones.

Here is what the lab looks like – physically. Note that the room also holds a washing machine and a dishwasher, so it’s probably the noisiest room in the house anyway. So even if the lab itself isn’t too noisy, you can still hear the spinning disks and some fans, but it’s not a noisy system. At the top you can see an Infiniband switch, whose role is to carry the VSAN back-end (storage) network and the vMotion network. vMotion over 10Gb looks pretty much the same as over a 1Gb network, but is much faster…


Haswell i7 whitebox with 32 GB of RAM. It uses the energy-efficient i7-4770S model from Intel, which has VT-d, VT-x and vPro, and at the same time only a 65 W TDP. The 4 cores, with 8 MB of cache and Hyper-Threading, give me enough power and flexibility to play with.

Used Parts:

  • Intel i7-4770s
  • G.Skill F3-10666CL9Q-32GBXL (4x 8 GB)
  • ASRock H87m PRO4
  • Antec NEO ECO 620C 620W
  • Zalman ZM-T1U3

The other two of my ESXi hosts are also whiteboxes running on non-server hardware. Each is basically a PC with a lot of RAM (24 GB). Both boxes run the latest VMware ESXi hypervisor, which is installed onto a 2 GB USB flash stick in each.

Both boxes run on the Intel i7 Nehalem CPU architecture. The CPUs are not the same, since the first box is the one I started with a few years ago, and I added one more box later. One of those boxes has an Intel i7 920 and the other an i7 960.

Here is the list of parts that are present in those 2 ESXi boxes:

  • Kingston DataTraveler G3 – 2 GB – USB sticks used to install the VMware ESXi hypervisor
  • Antec HCG – 400W – power supplies
  • Intel PRO/1000 GT Desktop (OEM) – NICs
  • Asus Sabertooth X58 – motherboard
  • Intel Core i7 950 (and i7 920)
  • G.Skill Kit Extreme3 3 x 4 GB PC10600 Ripjaws CAS 9 – DDR3 RAM kit

The 2 physical boxes with the Antec power supplies make almost no noise. I must say that I took special care to pick this particular Antec model, since the fan in those models is 135 mm in diameter and the efficiency is 82%.

The silence of those power supplies is just… awesome! Also, I must say that the original Intel CPU fans were replaced with quieter models. I think everyone will confirm that those original Intel fans are quite noisy.

I also dug into the BIOS of both systems to find an option where one can set a “quiet mode” for the CPU fan. Once activated, the RPM went down… and the noise level too.

Storage Adapters

When I started my VSAN journey I went for cheap storage controller cards from eBay – the Dell PERC H310. But those are lower-end adapters. At the beginning those adapters were on the VMware HCL, but VMware later decided to take them off due to performance problems during rebuild and resync operations on VSAN-enabled clusters.

How-to flash Dell Perc H310 with IT Firmware


In fact, the queue depth of those adapters is only 25 when used with the original firmware. That’s why I flashed those Dell H310 cards with an IT firmware (from Dell) – How-to Flash Dell Perc H310 with IT Firmware To Change Queue Depth from 25 to 600.
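
If you are curious what queue depth ESXi actually reports for the controller before and after the flash, a couple of standard commands are enough. A small sketch (the vmhba numbering will differ per host):

  # Identify which vmhba belongs to the PERC H310 / LSI SAS controller
  esxcli storage core adapter list

  # Per-device maximum queue depth as seen by the host
  esxcli storage core device list | grep -i "Queue Depth"

  # The adapter queue depth (AQLEN column) is visible in esxtop – press 'd' for the disk adapter view
  esxtop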

Storage network

The storage network uses a Cisco Topspin 120 (I think other references call this switch the Cisco 7000 model too), which has 24 ports.

  • Infiniband switch (Topspin 120 from eBay – you can find one for roughly $200 + shipping) providing me with 10Gb for VSAN traffic and 10Gb for vMotion traffic (see the verification sketch after this list).
  • The Mellanox HCA cards provide 2x 10Gb network speed and are also from eBay (HP 452372-001 Infiniband PCI-E 4X DDR Dual Port Storage Host Channel Adapter HCA – €26 each).
  • Infiniband cables – MOLEX 74526-1003 0.5 mm Pitch LaneLink 10GBase-CX4 Cable Assembly – roughly €29 a piece.
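
To make sure the vMotion and VSAN traffic really flows over the Infiniband-backed vmkernel ports rather than the 1Gb management network, the hosts can be checked from the ESXi shell. A small sketch (the vmk interface names are whatever you created, not fixed values):

  # vmkernel interfaces with their port groups and IP addresses
  esxcli network ip interface list
  esxcli network ip interface ipv4 get

  # Which vmkernel interface is carrying the VSAN traffic
  esxcli vsan network list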

VSAN 5.5 cluster

Now that VSAN is out, I have configured my 3 hosts with VSAN. Each of the 3 hosts has one spinning disk and one SSD. I might add more internal storage later.
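
With one SSD and one spinning disk per host, each host contributes exactly one disk group to the cluster. A quick sketch of how to verify cluster membership and the disks claimed by VSAN on any of the hosts:

  # Is this host part of a VSAN cluster, and is the cluster healthy?
  esxcli vsan cluster get

  # Which local disks (the SSD and the HDD) were claimed into the disk group
  esxcli vsan storage list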

VSAN Cluster Homelab

The Power consumption

The lab draws less than 500W according to my power meter. This is quite good considering that I’m running 4 physical hosts with two switches and one iSCSI SAN device. The two Nehalem CPUs are the most power hungry, as the Haswell low-power CPU has a TDP of only 65W.

I now have enough resources to run the VMs I want and to test solutions which I find interesting or emerging. As my two Nehalem boxes max out at 24 GB of RAM, I think that this will be the first resource I might run out of. No problem – with VSAN I can just add another host to my VSAN cluster…

During the config process, if you for example forget to check the box on the vmkernel port group for VSAN traffic, the VSAN assistant won’t let you activate the VSAN cluster and you’ll get a nice warning: “Misconfiguration detected”. You can see it in my post on testing VSAN on nested ESXi hypervisors.
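
That check box in the Web Client simply tags a vmkernel interface for VSAN traffic, and the same thing can be done from the ESXi shell, which is handy when you are staring at the “Misconfiguration detected” warning. A minimal sketch – vmk2 is a placeholder for whichever vmkernel port you created for VSAN:

  # Enable VSAN traffic on an existing vmkernel interface (vmk2 is a placeholder)
  esxcli vsan network ipv4 add -i vmk2

  # Confirm the interface is now listed for VSAN traffic
  esxcli vsan network list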

@erikbussink asked me if I had a video showing how noisy the Infiniband switch was after I replaced the original fans with Noctua ones. The switch has finally received new Noctua fans (Noctua NF-A4x10), which have 3 wires (fans with only 2 wires won’t do – the switch will shut itself down after 2 minutes…). For my particular situation I had to change the order of those wires. The write-up is in the My VSAN Journey – All Done article.

I’ve recorded a short video showing the noise (or should I rather say silence) of the lab. When I first bought the Infiniband switch, it was noisy… oh man. Impossible to stay in the house. Noisier than a washing machine.

The video also shows the individual components. The latest minibox is my “management” cluster. It has only 16 GB of RAM, and I run the vCenter, DC, DHCP and DNS services there, as well as my backup servers.