I was able to evolve my lab a little bit and add some more hardware. The current setup is composed of:
- 2 physical ESX(i) hosts with 24 GB of RAM each.
- 1 Haswell whitebox with 32 GB of RAM.
- Another NAS whitebox – running Nexenta – for my shared storage.
- 1 switch (Cisco SG300-10) running VLANs, with routing between those VLANs.
- Plus one Drobo Elite iSCSI SAN with SATA disks. Slow, but good for storing the backups of my critical VMs.
Let’s start with the hosts setup.
Both of my ESXi hosts are whiteboxes running on non-server hardware. Each is basically a PC with a lot of RAM (24 GB). Both boxes run the latest VMware ESXi hypervisor, installed onto a 2 GB USB flash drive each.
Both boxes run on the Intel i7 Nehalem CPU architecture. The CPUs are not the same, since the first box is the one I already had last year, and I just added one more. The CPU I bought last year was no longer available new. I could probably have found one second hand, but I did not bother. So one of those boxes has an Intel i7 920 and the other an i7 960 (since the 920 was no longer available).
Update: Recently I added a new Haswell i7 whitebox, with 32 Gigs of RAM.
Here is the list of parts present in those 2 ESXi boxes:
- Kingston DataTraveler G3 – 2 GB – USB stick to install the VMware ESXi hypervisor
- Antec HCG – 400W – Power Supplies
- Intel PRO/1000 GT Desktop (OEM) – NICs
- Asus Sabertooth X58 – Motherboard
- Intel Core i7 950 – CPU
- G.Skill Kit Extreme3 3 x 4 GB PC10600 Ripjaws CAS 9 – DDR3 RAM kit
- Case Thermaltake V3 black
- And the box I built last year – with the Asus P6T SE and Intel i7 920 CPU
The 2 physical boxes with the Antec power supplies make almost no noise. I must say that I paid special attention to picking this particular Antec model, since the fan in those units is 135 mm wide and the efficiency is 82%. The maximum power of those power supplies is 400W, which is more than enough, since neither system has a CD/DVD drive or even a graphics card.
The silence of those power supplies is just awesome! The whiteboxes sit right next to me. I must also say that the original Intel CPU fans were replaced with quieter models. I think everyone will confirm that those stock Intel fans are quite noisy.
I also dug into the BIOS of both systems to find the option where one can set a "quiet mode" for the CPU fan. Once activated, the RPM went down... and the noise level too. I might have lost some performance there, but that's not a problem. So that's the whitebox part of my lab. Now let's move on to the other parts.
My Network setup.
What’s in my home network, and how did I design the network for my VMware vSphere lab? Recently I bought a Cisco switch which is a Layer 3 switch. It ships as a Layer 2 switch, but if you go to the CLI (command line) you can change the mode to L3 and benefit from inter-VLAN routing functionality. The setup was a bit tricky; you can read quite detailed articles about how I configured the switch and how I got those VLANs working. There are 2 detailed articles – a real story – which show you my homelab networking experience in detail:
- My switch adventures
- My homelab – The Network design with Cisco SG 300 – a Layer 3 switch for €199. (with a complete schema)
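To give you an idea of the L2-to-L3 switch, here is a rough sketch of the SG300 CLI commands involved. The VLAN ID and IP address below are illustrative examples, not my actual config, and the exact syntax may vary with the firmware version – the detailed walkthrough is in the articles above:

```
! Switch the SG300 from Layer 2 to Layer 3 mode.
! Warning: this reboots the switch and wipes the running configuration,
! so do it before anything else.
set system mode router

! After the reboot, create a VLAN and give its interface an IP address
! (example values)
configure
vlan database
vlan 10
exit
interface vlan 10
ip address 192.168.10.1 255.255.255.0
exit
```

Once each VLAN interface has an IP address, the switch routes between the VLANs itself, so the ESXi port groups in different VLANs can reach each other without an external router.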
My switch: Cisco SG 300-10
My final (it’s still evolving..) network layout as you can see it through the VMware vSphere Client:
The final network design with 3 physical NICs.
You could use only two NICs, or even one, of course, but the Intel Desktop NIC models aren’t that expensive, and since the whiteboxes have plenty of PCI and PCI Express slots, I installed 3 NICs in each box.
As for the VLANs, I created 7-8 VLANs to start with, but you can have up to 25 VLAN interfaces with assigned IP addresses configured on the Cisco SG300.
There is a GUI on this model, since it’s aimed at the SMB market. You can configure the whole switch with your favourite browser (except for the initial Layer 3 configuration). The switch is fanless, so no noise at all. :-)
The GUI is nice and clean, and looks more “professional” compared to the Linksys SRW2008 (from the article My switch adventures).
That’s my network part in brief. Now I’ll show you how I built my homemade NAS box and which parts I used for it.
In order to benefit from vMotion, and thus DRS/DPM or FT (Fault Tolerance), you need shared storage in your lab. The shared storage is accessible from all of your ESXi hosts, and the VMs’ files live there. The VMDK, VMX and other files are physically stored there.
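Once the NAS exports a share, attaching it to each host is a one-liner. As a sketch, mounting an NFS datastore from the ESXi command line looks roughly like this – the hostname, export path and datastore name are example values, not my actual setup:

```
# Mount an NFS export from the NAS as a datastore on this ESXi host
# (repeat on every host so the datastore is truly shared)
esxcli storage nfs add --host nas.lab.local --share /mnt/vol1/vmstore --volume-name nfs-datastore1

# List the mounted NFS datastores to verify
esxcli storage nfs list
```

The key point is that every host mounts the same export under the same datastore name; vMotion then only has to transfer the VM’s memory, since the disk files never move.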
- How to build a low cost NAS for VMware Lab - introduction
- How to build low cost shared storage for vSphere lab - assembling the parts
- VMware Home Lab: building NAS at home to keep the costs down - installing FreeNAS
- Performance tests with FreeNAS 7.2 in my homelab
- Installation Openfiler 2.99 and configuring NFS share
- Installing FreeNAS 8 and taking it for a spin
- My homelab – The Network design with Cisco SG 300 - a Layer 3 switch for €199.
- Video of my VMware vSphere HomeLAB
- How to configure FreeNAS 8 for iSCSI and connect to ESX(i)
- Homelab Nexenta on ESXi - as a VM
- Haswell ESXi Whitebox