Last year I had only one physical host running VMware Workstation and some nested ESXi hosts. This setup was great, since I could use the box for something else than a vSphere lab, but I could not test all the features that vSphere offers. The limitations were these:
- I could not run 64-bit nested VMs.
- I could not run VMs with Fault Tolerance.
Also, nested VMs are not as fast as VMs running in a normal virtual environment. So this year I was able to make the lab evolve a little bit and add some more hardware. My lab is now composed of:
- 2 physical ESX(i) hosts with 18 GB of RAM each.
- 1 physical shared storage device (a NAS box running an Intel Atom dual-core CPU).
- 1 switch running VLANs, with routing between those VLANs.
- Plus one Drobo Elite box with SATA disks. Slow, but good for storing backups of my critical VMs.
Let’s start with the host setup.
Both of my ESXi hosts are whiteboxes built on non-server hardware. Each is basically a PC with lots of RAM (18 GB). Both boxes run the VMware ESXi hypervisor, installed on a 2 GB USB flash drive in each host. You might ask: why 2 GB when ESXi doesn’t even need that much space? Actually, when I was building the lab, there were no USB sticks with less capacity available on the market anymore.
Both boxes run on the Intel Nehalem CPU architecture. The CPUs are not the same, since the first box is the one I already had last year and I just added one more box, but the CPU I bought last year was no longer available as new. I could probably have found one second hand, but I did not bother. So one of the boxes has an Intel Core i7 920 and the other a Core i7 950 (since the 920 was no longer available).
In case you’re looking to build a whitebox, you can check an unofficial HCL for compatible hardware (motherboards, NICs, CPUs, etc.) on this website: http://www.vm-help.com/
Here is the list of parts that make up those 2 ESXi boxes:
- Kingston DataTraveler G3 – 2 GB – USB stick to install the VMware ESXi hypervisor
- Antec HCG – 400W – Power Supplies
- Intel PRO/1000 GT Desktop (OEM) – NICs
- Asus Sabertooth X58 – Motherboard
- Intel Core i7 950 – CPU
- G.Skill Kit Extreme3 3 x 4 GB PC10600 Ripjaws CAS 9 – DDR3 RAM kit
- Thermaltake V3 Black – case
- And the box I built last year – with the Asus P6T SE motherboard and Intel Core i7 920 CPU
You can see below that the box is almost empty. Update: the setup has changed a bit, since both boxes now have at least 240 GB of SSD storage. So I not only have physical shared storage, but local storage as well. Soon vVolumes?…
The 2 physical boxes with the Antec power supplies make almost no noise. I must say I paid special attention when picking this particular Antec model, since the fan in those units is 135 mm and the efficiency is 82%. The maximum power of those power supplies is 400 W, which is more than enough, since neither system has a CD/DVD drive or even a graphics card….
The silence of those power supplies is just… awesome! The whiteboxes are right next to me. Also, I must say that the original Intel CPU fans were replaced by quieter models. I think everyone will confirm that those stock Intel fans are quite noisy.
I also dug into the BIOS of both systems to find the option where one can set a “quiet mode” for the CPU fan. Once activated, the RPM went down… and the noise level too. I might lose some performance there, but that’s not a problem. So that’s the whitebox part of my lab. Now let’s move on to the other parts.
My Network setup.
What’s in my home network, and how did I design the network for my VMware vSphere lab? Recently I bought a Cisco switch which is a Layer 3 switch. It ships as a Layer 2 switch, but if you go to the CLI (command line) you can change the mode to L3 and benefit from inter-VLAN routing functionality (a small CLI sketch follows the two articles below). The setup was a bit tricky; you can read quite detailed articles about how I configured the switch and how I got those VLANs working. There are 2 detailed articles – a real story – which show you my homelab networking experience in detail:
- My switch adventures
- My homelab – The Network design with Cisco SG 300 – a Layer 3 switch for €199. (with a complete schema)
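Just to give a rough idea of what that mode change involves (the full, tested procedure is in the two articles above), here is a sketch of the SG 300 CLI commands – the VLAN IDs and IP addresses below are example values, not necessarily my exact configuration:

    ! From the switch console or SSH session, in privileged EXEC mode.
    ! WARNING: changing the system mode erases the startup configuration and reboots the switch.
    set system mode router

    ! After the reboot, create the VLANs and give each one a routed interface.
    configure
    vlan database
    vlan 10
    vlan 20
    exit
    interface vlan 10
    ip address 192.168.10.254 255.255.255.0
    exit
    interface vlan 20
    ip address 192.168.20.254 255.255.255.0
    exit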
My switch: Cisco SG 300-10
My final (it’s still evolving…) network layout, as you can see it through the VMware vSphere Client:
The final network design with 3 physical NICs.
You could use only two, or even one NIC, of course, but the Intel Desktop NIC models aren’t that expensive, and since the whiteboxes have plenty of free PCI and PCI Express slots, I populated 3 NICs in each box.
As for the VLANs, I created 7 or 8 VLANs to start with, but you can have up to 25 VLAN interfaces with assigned IP addresses configured on the Cisco SG 300.
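On the ESXi side, each of those VLANs simply maps to a port group tagged with the matching VLAN ID. As an illustration only (the vSwitch name, port group name and VLAN ID below are example values), the same thing you normally do through the vSphere Client can also be done from the ESXi command line:

    # Create a port group on an existing standard vSwitch (run on the ESXi host).
    esxcli network vswitch standard portgroup add --portgroup-name=VM_VLAN10 --vswitch-name=vSwitch0

    # Tag the port group with VLAN ID 10 so its traffic matches the VLAN trunked from the Cisco switch.
    esxcli network vswitch standard portgroup set --portgroup-name=VM_VLAN10 --vlan-id=10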
The switch is fanless, so there is no noise at all… :-)
There is a GUI on this model, since it’s aimed at the SMB market. You can configure the whole switch with your favourite browser (except for the initial Layer 3 configuration).
The GUI is nice and clean, and looks more “professional” compared to the Linksys SRW2008 (from the article My switch adventures).
That’s my network part in brief. Now I’ll show you how I built my homemade NAS box and which parts I used for it…
The shared storage.
Update: I have since updated my lab with one mixed box running ESXi with a Nexenta VM:
- Building NAS Box With SoftNAS
- Homelab Nexenta on ESXi – as a VM
- Fix 3 Warning Messages when deploying ESXi hosts in a lab
In order to benefit from vMotion, and therefore DRS/DPM or FT (Fault Tolerance), you need shared storage in your lab. The shared storage is accessible from all of your ESXi hosts, and the VMs’ files live there – the VMDK, VMX and other files are physically stored on it.
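As an example, presenting an NFS export from the NAS as a datastore to each host can be done through the vSphere Client or from the command line – the IP address, export path and datastore name below are placeholders, not my real values:

    # Mount an NFS export from the NAS as a datastore (run on each ESXi host).
    esxcli storage nfs add --host=192.168.20.10 --share=/mnt/vol1/nfs --volume-name=NAS_NFS01

    # Check that the datastore is mounted.
    esxcli storage nfs list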
My NAS device is a homemade NAS box powered by an Intel Atom dual-core CPU. I have written a series of articles explaining the different parts and how I built it. Since such a NAS device can be installed with several systems (not all at once, of course), I have tested a few of them, like Openfiler, Nexenta and FreeNAS.
I don’t know if I’ll keep the current FreeNAS 8, but the interface is really cool….
I installed FreeNAS on a USB flash drive to save one SATA port. Even if I could have afforded to buy a commercial NAS box from QNAP or Synology, I didn’t do it on purpose. I wanted the real-life experience of building it myself, and in the end my NAS box not only has a RAID 5 volume with 4 SATA drives, but the 2 SATA slots which stayed free were populated with my 2 SSD drives (with capacities of 64 GB and 128 GB). So I have created 2 more datastores on my shared storage where I can put quite a few VMs that need more IOPS. :-)
So in the end my solution is more flexible and cheaper, since I’m using 6 SATA slots. The commercial solutions from QNAP or Synology with 6 SATA slots are out of reach for a simple individual, since they’re over €1000… way too expensive. My NAS box (without SATA drives) cost me a bit over €300.
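And for block storage, connecting an ESXi host to an iSCSI target on the NAS looks roughly like this from the command line (the adapter name and target address are example values only; the full FreeNAS 8 iSCSI walkthrough is linked in the series below):

    # Enable the software iSCSI initiator on the ESXi host.
    esxcli iscsi software set --enabled=true

    # Point the initiator at the NAS target (dynamic discovery). vmhba33 is a typical
    # software iSCSI adapter name - check yours with: esxcli iscsi adapter list
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.20.10:3260

    # Rescan so the new LUN appears and can be formatted as a VMFS datastore.
    esxcli storage core adapter rescan --adapter=vmhba33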
- How to build a low cost NAS for VMware Lab - introduction
- How to build low cost shared storage for vSphere lab - assembling the parts
- VMware Home Lab: building NAS at home to keep the costs down - installing FreeNAS
- Performance tests with FreeNAS 7.2 in my homelab
- Installation Openfiler 2.99 and configuring NFS share
- Installing FreeNAS 8 and taking it for a spin
- My homelab – The Network design with Cisco SG 300 - a Layer 3 switch for €199.
- vSphere Lab (This post)
- Video of my VMware vSphere HomeLAB
- How to configure FreeNAS 8 for iSCSI and connect to ESX(i)