vSphere Lab


Recently I was able to evolve my lab a little bit by adding some more hardware. The lab is now composed of:

  • 2 physical ESXi hosts with 24 GB of RAM each, plus 1 Haswell whitebox with 32 GB of RAM, all participating in the VSAN cluster.
  • 1 whitebox with 16 GB of RAM running vCenter, a DC, a backup server VM and some other VMs, in a management cluster.
  • 1 Cisco SG300-28 – an L3 switch running VLANs with inter-VLAN routing.
  • Plus one Drobo Elite iSCSI SAN with SATA disks. Slow, but good for storing the backups of my critical VMs.

VSAN in the Homelab – the whole series:

I had a little trouble with a memory channel per bank going inactive when the Dell PERC H310 controller cards were used with my two Nehalem boxes, but I'm glad it's been "solved" – see Memory Channel per Bank getting non active when using PERC H310 Controller card – Fixed! Those cards are on the VMware HCL for VSAN. They are quite cheap (look on eBay), but not the most performant ones.

Here is what the lab looks like physically. Note that the room also holds a washing machine and a dishwasher, so it's probably the noisiest room of the house anyway. Even if the lab itself isn't too noisy, you can still hear the spinning disks and some fans, but it's not a loud system. At the top you can see an InfiniBand switch, whose role is to carry the VSAN back-end storage traffic and the vMotion network. vMotion over 10Gb looks pretty much the same as over a 1Gb network, just much faster…


Let’s start with the hosts setup.

Recently I added a new Haswell i7 whitebox with 32 GB of RAM. It's the energy-efficient i7-4770S model from Intel, which has VT-d, VT-x and vPro while having only a 65 W TDP. The 4 cores with 8 MB of cache and hyper-threading give me enough power and flexibility to play with.

Used Parts:

  • Intel i7-4770s
  • G.Skill F3-10666CL9Q-32GBXL (4 x 8 GB)
  • ASRock H87m PRO4
  • Antec NEO ECO 620C 620W
  • Zalman ZM-T1U3

The other two ESXi hosts are also whiteboxes running on non-server hardware. Each is basically a PC with a lot of RAM (24 GB). Both boxes run the latest VMware ESXi hypervisor, installed onto a 2 GB USB flash drive each.

Both boxes run on the Intel i7 Nehalem CPU architecture. The CPUs are not the same, since the first box is the one I started with a few years ago and I added the second box later. One of those boxes has an Intel i7 920 and the other an i7 960.

Here is the list of parts present in those 2 ESXi boxes:

  • Kingston DataTraveler G3 – 2 GB – USB sticks used to install the VMware ESXi hypervisors
  • Antec HCG – 400W – power supplies
  • Intel PRO/1000 GT Desktop (OEM) – NICs
  • Asus Sabertooth X58 – motherboard
  • Intel Core i7 950 (and i7 920)
  • G.Skill Kit Extreme3 3 x 4 GB PC10600 Ripjaws CAS 9 – DDR3 RAM kit

The 2 physical boxes with the Antec power supplies make almost no noise. I must say that I took special care to pick this particular Antec model, since the fan in those units is 135 mm in diameter and the efficiency is 82%.

The silence of those power supplies is just… awesome! Also, I must say that the original Intel CPU fans were replaced by quieter models. I think everyone will confirm that those stock Intel fans are quite noisy.

I also dug into the BIOS of both systems to find the option where one can set a "quiet mode" for the CPU fan. Once activated, the RPM went down… and the noise level with it.

Storage Adapters.

When I started my VSAN journey I went for cheap storage controller cards on eBay – the Dell PERC H310. But those are lower-end adapters. At the beginning those adapters were on the VMware HCL, but VMware later decided to pull them due to performance problems during rebuild and resync operations on VSAN-enabled clusters.

How-to flash Dell Perc H310 with IT Firmware

 

In fact, the queue depth of those adapters is only 25 when used with the original firmware. That's why I flashed those Dell H310 cards with an IT firmware – How-to Flash Dell Perc H310 with IT Firmware To Change Queue Depth from 25 to 600.
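If you want to verify the result of the flash, a quick check from the ESXi shell looks roughly like this (a minimal sketch; the vmhba number and the mpt2sas driver name depend on your host and ESXi build, so treat them as assumptions):

    # List the storage adapters and note which vmhba belongs to the H310
    # (after the IT flash it typically shows up with the LSI mpt2sas driver)
    esxcli storage core adapter list

    # Run esxtop, press "d" for the disk adapter view and check the AQLEN
    # column for that vmhba - it should be far above the original 25
    esxtop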

My Network setup.

Update: The network is now handled by the SG300-28 Cisco Small Business switch. It's the bigger brother of the SG300-10, whose configuration you can follow here – My homelab – The Network design with Cisco SG 300 – a Layer 3 switch for €199 (with a complete schema). Both models are fan-less, so no noise at all… :-)

The SG300-10 ships as a Layer 2 switch, but if you go to the CLI (command line) you can change the mode to L3 and benefit from inter-VLAN routing functionality. And here is a real story – My switch adventures.

The SG300-28 configuration can be done entirely through the GUI, but when you switch from L2 to L3 you lose all your settings. That's probably why it's best to do this as the very first step.
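For reference, the mode change itself is a single command over a console session. This is just a sketch from memory, so double-check the Sx300 documentation for your firmware level before running it, because the switch wipes its startup configuration and reboots:

    ! From the console CLI (privileged EXEC mode) on the SG300:
    ! the switch asks for confirmation, erases the startup configuration
    ! and comes back up in router (L3) mode
    set system mode router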

A Cisco L3 capable switch with 10 Gigabit ports

Storage network

The storage network uses a Cisco Topspin 120 (I think other references call this switch the Cisco 7000 model too), which has 24 ports.

  • InfiniBand switch (Topspin 120 from eBay – you can find one of those for roughly $200 + shipping) providing me with 10Gb for VSAN traffic and 10Gb for vMotion traffic (see the vmkernel sketch just after this list).
  • The Mellanox HCA cards provide 2 x 10Gb network speed and are also from eBay (HP 452372-001 InfiniBand PCI-E 4X DDR Dual Port Storage Host Channel Adapter HCA – €26 each).
  • InfiniBand cables – MOLEX 74526-1003 0.5 mm Pitch LaneLink 10GBase-CX4 Cable Assembly – roughly €29 a piece.
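Once the HCAs are visible to ESXi, each host needs a vmkernel interface per traffic type on the InfiniBand-backed port groups. The sketch below assumes hypothetical port group names (vMotion-IB, VSAN-IB) and example IP addresses, so adapt them to your own setup:

    # vMotion vmkernel interface on the 10Gb back-end (port group name and IP are examples)
    esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vMotion-IB
    esxcli network ip interface ipv4 set -i vmk1 -t static -I 10.10.1.11 -N 255.255.255.0
    esxcli network ip interface tag add -i vmk1 -t VMotion

    # Second vmkernel interface for the VSAN traffic (bound to VSAN in the next section)
    esxcli network ip interface add --interface-name=vmk2 --portgroup-name=VSAN-IB
    esxcli network ip interface ipv4 set -i vmk2 -t static -I 10.10.2.11 -N 255.255.255.0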

VSAN cluster

Now that VSAN is out, I have configured my 3 hosts with VSAN. Each of the 3 hosts has one spinning disk and one SSD. I might add more internal storage later.
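The cluster itself is enabled from the vSphere Web Client, but once it's up you can verify each host's membership and claimed disks from the ESXi shell. A couple of read-only checks (nothing assumed here beyond a working VSAN host):

    # Shows whether the host is enabled for VSAN and the sub-cluster membership
    esxcli vsan cluster get

    # Lists the local SSD and spinning disk claimed by VSAN on this host
    esxcli vsan storage list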


The Power consumption

The lab draws less than 500 W according to my power meter. This is quite good considering that I'm running 4 physical hosts, two switches and one iSCSI SAN device. The two Nehalem CPUs are the most power hungry, as the Haswell low-power CPU has a TDP of only 65 W.

I now have enough resources to run the VMs I want and to test solutions which I find interesting or emerging. As my two Nehalem boxes max out at 24 GB of RAM, I think that RAM will be the first resource I run out of. No problem: with VSAN I can just add another host to my VSAN cluster…

During the configuration process, if you for example forget to check the VSAN traffic box on the vmkernel port group, the VSAN assistant won't let you activate the VSAN cluster and you'll get a nice "Misconfiguration detected" warning. You can see it in my post on testing VSAN on nested ESXi hypervisors.
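A quick way to spot which host is missing that setting is to check the VSAN network binding from the shell; the vmk2 name below is just the example from the sketch earlier:

    # Lists the vmkernel interfaces carrying VSAN traffic - an empty list means the box was not checked
    esxcli vsan network list

    # The same setting can be applied from the shell instead of the Web Client checkbox
    esxcli vsan network ipv4 add -i vmk2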

Update:

@erikbussink asked me if I had a video showing how noisy the InfiniBand switch was after I replaced the original fans with Noctua ones. The switch has finally received new Noctua fans (Noctua NF-A4x10), which have 3 wires (a fan with only 2 wires will spin, but the switch will shut itself down after 2 minutes…). For my particular situation I had to change the order of those wires. The write-up is in the My VSAN Journey – All Done article.

I've recorded a short video showing the noise (or I should rather say the silence) of the lab. When I first bought the InfiniBand switch, it was noisy… oh man. Impossible to stay in the house. Noisier than a washing machine.

The video also shows the individual components. The latest minibox is my "management" cluster. It has only 16 GB of RAM and I run the vCenter, DC, DHCP and DNS services there, as well as my backup servers.