ESXi Lab

Lab 2015

The 3 lab hosts are based on the X10SRH-CLN4F board, which can go up to 512 GB of RAM… (currently only 64 GB of DDR4, because of the high prices of DDR4)

Energy Efficient ESXi Home Lab:

  1. Efficient Home Server – Start with an Efficient Power Supply
  2. ESXi Home lab – Get a quality and comfortable case
  3. Supermicro Single CPU Board for ESXi Homelab – X10SRH-CLN4F
  4. Supermicro Single CPU Board for ESXi Homelab – X10SRH-CLN4F – Part 2
  5. Supermicro Single CPU Board for ESXi Home lab – Upgrading LSI 3008 HBA on the X10SRH-CLN4F
  6. Building Energy Efficient ESXi Home Lab – parts list
  7. Homelab – Airflow Solar System
Supermicro X10SRH-CLN4F board


Storage Controller

The LSI Logic Fusion-MPT 12 Gb/s SAS 3008 PCI-Express controller. It’s the SAS controller built into the motherboard.

LSI 3008

The built-in LSI 3008 SAS controller present on the motherboard is on the VMware HCL.

LSI 3008 is on the VMware VSAN HCL

The lab…

Here is the look from the inside. The whole lab “sits” on a garage rack bought at the local hardware store: at the bottom there is a 10Gb switch, and on the shelf there are 3 hosts (four actually, as the fourth host is a whitebox – a former Haswell ESXi host). I’m running vSphere 6.0, but the storage configuration isn’t finished yet, as I’m using a partner’s hardware (enterprise-class SATA drives – OCZ’s SABER 1000) during a test period. The plan is to run either VSAN 6.0 (All-Flash) or Atlantis USX.

Lab ESX Virtualization - Vladan.fr

Each of the 3 hosts has only the two 10GbE ports plugged in, plus the IPMI port used by Supermicro’s motherboards.

The Network

As for the network, I took the plunge, broke the bank, and went for a 10GbE switch from Cisco – the SG500XG-8F8T. It’s the same switch that Erik Bussink uses in his lab.

Cisco SG500XG-8F8T 10GbE

For the 10GbE network cards I opted for the low-power Intel Ethernet Controller X710-DA2 cards with SFP+ connectors, so each host has two 10Gb ports.
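To verify that both 10GbE ports are detected by ESXi, you can list the physical NICs from the host’s shell. A quick diagnostic sketch, assuming SSH or the ESXi Shell is enabled; the vmnic numbers are host-specific and will differ in your output:

```shell
# List all physical NICs ESXi has detected,
# with driver, link state and negotiated speed.
esxcli network nic list

# Show details for one of the X710 ports
# (vmnic4 is an assumption -- adjust to your own output).
esxcli network nic get -n vmnic4
```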

Buyers Guide:

Power-efficient ESXi lab host 2015, based on the Haswell-EP architecture (all items available at Amazon US):

  • Supermicro ATX DDR4 LGA 2011 X10SRH-CLN4F-O (with LSI 3008) – SAS, SATA, 4 NICs
  • Supermicro ATX DDR4 LGA 2011 X10SRH-CF-O (with LSI 3008) – SAS, SATA, 2 NICs
  • Supermicro ATX DDR4 LGA 2011 X10SRI-F-O (no LSI 3008 on this board…) – SATA, 2 NICs
  • Intel Xeon E5-2630L v3 – 8 cores, 1.8 GHz (55 W TDP) – 20 MB cache
  • Intel Xeon E5-2630 v3 – 8 cores, 2.4 GHz (85 W TDP) – 20 MB cache
  • Intel Xeon E5-2620 v3 – 6 cores, 2.4 GHz (85 W TDP) – 15 MB cache
  • Intel Xeon E5-2609 v3 – 6 cores, 1.9 GHz (85 W TDP) – 15 MB cache
  • Intel Xeon E5-2603 v3 – 6 cores, 1.6 GHz (85 W TDP) – 15 MB cache
  • Intel Xeon E5-1650 v3 – 6 cores, 3.5 GHz (140 W TDP) – 15 MB cache
  • Supermicro Certified MEM-DR432L-SL01-LR21 Samsung DDR4-2133 4Rx4 LP ECC LRDIMM – 32 GB
  • Samsung DDR4-2133 16GB/2Gx72 ECC/REG CL15 Server Memory (M393A2G40DB0-CPB0) – 16 GB
  • Supermicro Certified MEM-DR416L-CL01-ER21 Micron DDR4-2133 2Rx4 ECC REG RoHS – 16 GB
  • Hynix DDR4-2133 16GB/2Gx72 ECC/REG CL13 Hynix Chip Server Memory – 16 GB
  • Samsung DDR4-2133 8GB/1Gx72 ECC/REG CL15 Server Memory (M393A1G40DB0-CPB) – 8 GB
  • Supermicro Certified MEM-DR480L-CL01-ER21 Micron DDR4-2133 1Rx4 ECC REG RoHS – 8 GB
  • Supermicro Certified MEM-DR480L-HL01-ER21 Hynix DDR4-2133 1Rx4 ECC REG RoHS – 8 GB
  • FSP Group AURUM 92+ Series PT-450M modular power supply, 80 PLUS Platinum – 450 W
  • FSP Group AURUM 92+ Series PT-550M modular power supply, 80 PLUS Platinum – 550 W
  • FSP Group AURUM 92+ Series PT-650M modular power supply, 80 PLUS Platinum – 650 W
  • Noctua NH-U12DXi4 CPU cooler for Intel Xeon LGA2011, 1356 and 1366 platforms (supports the Narrow ILM socket)
  • Noctua NH-U9DXi4 CPU cooler for Intel Xeon LGA2011, 1356 and 1366 platforms (supports Narrow ILM and Square ILM)
  • Fractal Design Define R4 case, Black Pearl (FD-CA-DEF-R4-BL)
  • Fractal Design Define R5 case, Black Pearl, ATX Mid Tower

 

vSphere 6 is running for the moment in an All-Flash VSAN 6.0 configuration…

The storage in the VSAN All-Flash config (will change):

  • Capacity tier – 9x 256 GB OCZ SABER 1000 [enterprise-grade SATA]
  • Capacity tier – 3x 480 GB Crucial M500 [consumer-grade SATA]
  • Cache tier – 3x 120 GB Intel 530 [consumer-grade SATA SSD] – for now…
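In an all-flash VSAN 6.0 configuration, the capacity-tier SSDs have to be tagged as capacity devices before VSAN will claim them for the capacity tier. A minimal sketch from the ESXi shell; the naa. identifier is a placeholder for the device IDs on your own host:

```shell
# List the disks so you can grab the device identifiers.
esxcli storage core device list | grep -i "Display Name"

# Tag one SSD as a capacity-tier device for all-flash VSAN
# (naa.xxxxxxxxxxxxxxxx is a placeholder -- repeat per capacity SSD).
esxcli vsan storage tag add -d naa.xxxxxxxxxxxxxxxx -t capacityFlash
```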

VMware vSphere 6 with VSAN All-Flash

 

The Solar Airflow system is an autonomous solar-powered extractor…

Solar Extracting System

……………………………………………………………………………………………………………………………………

Lab 2014:

This was my lab in 2014.

  • Haswell i7 whitebox with 32 GB of RAM.
  • 2 physical ESX(i) hosts with 24 GB of RAM each, plus 1 Haswell whitebox with 32 GB of RAM, participating in the VSAN cluster.
  • 1 whitebox with 16 GB of RAM running vCenter, a DC, a backup server VM and some other VMs, in a management cluster.
  • 1 switch (Cisco SG300-28) – an L3 switch running VLANs with inter-VLAN routing.
  • Plus one Drobo Elite iSCSI SAN with SATA disks. Slow, but good for storing the backups of my critical VMs.

The network is now handled by the SG300-28 Cisco Small Business switch. It’s the bigger brother of the SG300-10, whose configuration you can follow here – My homelab – The Network design with Cisco SG 300 – a Layer 3 switch for €199 (with a complete schema).

Both models, SG300-10 and SG300-28, are fanless, so there is no noise at all. :-)

The SG300-28 can be configured entirely through the GUI, but when you switch it from L2 to L3 mode you lose all your settings, so it’s probably best to do this as a first step. The CLI is also a possible way to do the config, but you must know the commands…
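For reference, switching an Sx300-series switch into L3 (router) mode from the CLI looks roughly like this. A sketch only – the mode change erases the startup configuration and reboots the switch, so back up your config first:

```shell
# From privileged EXEC mode on the switch console/SSH session:
# show the current system mode (switch = L2, router = L3).
show system mode

# Switch to L3 mode -- WARNING: this erases the startup
# configuration and reboots the switch.
set system mode router
```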

VSAN in the Homelab – the whole series:

I had a little trouble with a memory channel per bank becoming inactive when using the Dell PERC H310 controller cards with my two Nehalem boxes, but I’m glad it’s been “solved”… Those cards are on the VMware HCL for VSAN. They are quite cheap (search eBay), but not the most performant.

Here is what the lab looks like – physically. Note that the room also houses a washing machine and a dishwasher, so it’s most likely the noisiest room in the house anyway. Even if the lab itself isn’t too loud, you can still hear the spinning disks and some fans, but it’s not a noisy system. At the top you can see an Infiniband switch, whose role is to carry the VSAN back-end storage network and the vMotion network. vMotion over 10Gb looks pretty much the same as over a 1Gb network – just much faster…

Lab vladan.fr

Haswell i7 whitebox with 32 GB of RAM. It’s the energy-efficient i7-4770S model from Intel, which has VT-d, VT-x and vPro, and at the same time only a 65 W TDP. The 4 cores, with 8 MB of cache and hyper-threading, give me enough power and flexibility to play with.

Used Parts:

  • Intel i7-4770s
  • G.Skill F3-10666CL9Q-32GBXL (4x 8 GB)
  • ASRock H87m PRO4
  • Antec NEO ECO 620C 620W
  • Zalman ZM-T1U3

The other two of my ESXi hosts are also whiteboxes running on non-server hardware. Each is basically a PC with lots of RAM (24 GB). Both boxes run the latest VMware ESXi hypervisor, installed onto 2 GB USB flash drives.

Both boxes run on the Intel i7 Nehalem CPU architecture. The CPUs are not the same, since the first box is the one I started with a few years ago, and I added one more box later. One of those boxes has an Intel i7 920 and the other an i7 960.

Here is the list of parts present in those 2 ESXi boxes:

  • Kingston DataTraveler G3 – 2 GB – USB sticks to install the VMware ESXi hypervisor
  • Antec HCG – 400W – power supplies
  • Intel PRO/1000 GT Desktop (OEM) – NICs
  • Asus Sabertooth X58 – motherboard
  • Intel Core i7 950 (and i7 920)
  • G.Skill Kit Extreme3 3x 4 GB PC10600 Ripjaws CAS 9 – DDR3 RAM kit

The 2 physical boxes with the Antec power supplies make almost no noise. I must say I paid special attention when picking this particular Antec model, since the fan in those models is 135 mm and the efficiency is 82%.

The silence of those power supplies is just… awesome! I must also say that the original Intel CPU fans were replaced with more silent models. I think everyone will confirm that those original Intel fans are quite noisy.

I also dug into the BIOS of both systems to find an option to set a “quiet mode” for the CPU fan. Once activated, the RPM went down… and the noise level too.

Storage Adapters

When I started my VSAN journey I went for cheap storage controller cards on eBay – the Dell PERC H310. But those are lower-end adapters. At the beginning those adapters were on the VMware HCL, but VMware later decided to pull them due to performance problems during rebuild and resync operations on VSAN-enabled clusters.

How-to flash Dell Perc H310 with IT Firmware

 

In fact, the queue depth of those adapters is only 25 with the original firmware. That’s why I flashed the Dell H310 cards with IT firmware (from Dell) – How-to Flash Dell Perc H310 with IT Firmware To Change Queue Depth from 25 to 600.
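You can check the queue depth ESXi actually sees for a device before and after the flash. A diagnostic sketch from the ESXi shell; the naa. identifier is a placeholder for your own device ID:

```shell
# Show device details and filter for the queue depth field
# ("Device Max Queue Depth" -- 25 on the stock H310 firmware).
esxcli storage core device list -d naa.xxxxxxxxxxxxxxxx | grep -i "Queue Depth"
```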

Storage network

The storage network uses a Cisco Topspin 120 (I think others reference this switch as the Cisco 7000 model too), which has 24 ports.

  • Infiniband switch (Topspin 120 from eBay – you can find one of those for roughly $200 + shipping) providing me with 10Gb for VSAN traffic and 10Gb for VMotion traffic.
  • The Mellanox HCA cards do provide 2x10Gb network speed and are also from eBay (HP 452372-001 Infiniband PCI-E 4X DDR Dual Port Storage Host Channel Adapter HCA  – €26 each).
  • Infiniband Cables – MOLEX 74526-1003 0.5 mm Pitch LaneLink 10Gbase-CX4 Cable Assembly – roughly €29 a piece.

VSAN 5.5 cluster

Now that VSAN is out, I configured my 3 hosts with it. Each of the 3 hosts has one spinning disk and one SSD. I might add more internal storage later.
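Each host’s disk group (one SSD for cache plus one spinning disk for capacity) can also be claimed manually from the ESXi shell if automatic mode doesn’t pick the disks up. A sketch only – both naa. identifiers are placeholders for your own device IDs:

```shell
# Create a VSAN disk group: -s is the SSD (cache tier),
# -d is the magnetic disk (capacity tier).
esxcli vsan storage add -s naa.ssssssssssssssss -d naa.dddddddddddddddd

# Verify the disk group was created.
esxcli vsan storage list
```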

VSAN Cluster Homelab Vladan.fr

The Power consumption

The lab draws less than 500W according to my power meter. This is quite good considering that I’m running 4 physical hosts, two switches and one iSCSI SAN device. The two Nehalem CPUs are the most power hungry, as the Haswell low-power CPU has a TDP of only 65W.

I now have enough resources to run the VMs I want and test solutions I find interesting or emerging. As my two Nehalem boxes max out at 24 GB of RAM, I think this will be the first resource I run out of. No problem – with VSAN I can just add another host to my VSAN cluster…

During the configuration process, if you forget, for example, to check the VSAN traffic box on the VMkernel port group, the VSAN assistant won’t let you activate the VSAN cluster and you’ll get a nice warning: “Misconfiguration detected”. You can see it in my post on testing VSAN on nested ESXi hypervisors.
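The same VSAN traffic tagging can also be done from the command line. A sketch assuming the VSAN VMkernel interface is vmk1 – use whichever VMkernel port you created for VSAN:

```shell
# List which VMkernel interfaces are currently tagged for VSAN traffic.
esxcli vsan network list

# Tag vmk1 for VSAN traffic (vmk1 is an assumption --
# substitute your own VSAN VMkernel port).
esxcli vsan network ipv4 add -i vmk1
```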

@erikbussink asked me if I had a video showing how noisy the Infiniband switch was after I replaced the original fans with Noctua ones. The switch has finally received its new Noctua fans (Noctua NF-A4x10), which have 3 wires (any fan with only 2 wires will spin, but the switch will shut itself down after 2 minutes…). For my particular situation I had to change the order of those wires. The write-up is in the My VSAN Journey – All done article.

I’ve recorded a short video showing the noise (or rather the silence) of the lab. When I first bought the Infiniband switch, it was noisy… oh man. Impossible to stay in the house – noisier than a washing machine.

The video also shows the individual components. The latest mini box is my “management” cluster. It has only 16 GB of RAM, and I run the vCenter, DC, DHCP and DNS services there, as well as my backup servers.
