It's difficult to ignore Intel's latest CPU release, codenamed Haswell. The K-series parts lack support for the transactional memory extensions and VT-d device virtualization included with standard Haswell CPUs, so obviously I turned to something other than the K-series. If you are into overclocking, then the 4770K would probably be your choice.
For my home lab builds I always pick consumer parts. Price and WAF are usually the first criteria; the second is energy consumption. That's why I picked an S-series CPU, which is the energy-efficient model, and a cheap Haswell-based board for only €90. Both the board and the CPU support VT-d.
There are many homelab options, with many different needs, acceptance levels, and other factors one can consider. Depending on those needs, you'd probably do a completely different build. But I'm showing here what I've done, how it works for me, what the configuration is, and the approximate power consumption.
Update: Check my new ESXi Home Lab host – Building Power Efficient ESXi Server – Start with an Efficient Power Supply
The hardware components
I'm going "green" and picked the energy-efficient i7-4770S model from Intel, which has VT-d, VT-x and vPro, yet only a 65 W TDP. The 4 cores with 8 MB of cache and hyper-threading give me enough power and flexibility to play with. The only "limiting factor" in the end is the memory, as I would prefer to have 64 GB (with prices being so low at the moment).
The board I picked is an mATX board from ASRock; in addition to two PCIe slots it also has two standard PCI slots, which seem to be disappearing from many boards. Otherwise there is nothing fancy there, and I'm not an overclocking fan... What I needed was VT-x to be able to install ESXi, and VT-d to do passthrough and play with it. I found both requirements met by this cheap ASRock board.
I picked up a quad memory pack, the G.Skill F3-10666CL9Q-32GBXL (€220). As said, 32 GB is the maximum that the CPU (and the board) can support. But even with "only" 32 GB, this box will have more RAM than the two other i7 (Nehalem) boxes I'm using in my lab, which max out at 24 GB each.
Update: As a storage "rig" I'm using Nexenta, a build I did a few months ago. It's a mini-ITX box that I revamped with an Intel i5-3470S and 16 GB of RAM. The box runs NexentaStor Community Edition (with some flash storage). So no more "hybrid" approach with Nexenta as a VM.
Shop for vSphere licenses at VMware Store:
- vSphere Essentials Plus – vMotion, HA… 3 Hosts, vCenter
- vSphere Essentials – 3 Hosts, vCenter
- vSphere Standard – Per Physical CPU license
Today's build uses a micro-ATX sized motherboard, so I also went with a smaller case from Zalman. It's an ordinary, budget-friendly case for approximately €30. I quite liked the internal storage layout (I'm not planning to fill it with spinning magnetic disks): there is room for two 3.5″ disks and one 2.5″ SSD (or HDD). The whole system is powered by an Antec NEO ECO 620C 620W, which was one of the PSUs certified for Haswell. Yes, one must pay attention to get a Haswell-compatible PSU, since the CPU draws almost no power at idle.
Here is a quote from an article at TechReport.com, where the folks got detailed information concerning PSU Haswell compatibility. This is a reply from PSU manufacturer Corsair:
According to Intel’s presentation at IDF, the new Haswell processors enter a sleep state called C7 that can drop processor power usage as low as 0.05A. Even if the sleeping CPU is the only load on the +12V rail, most power supplies can handle a load this low. The potential problem comes up when there is still a substantial load on the power supply’s non-primary rails (the +3.3V and +5V). If the load on these non-primary rails is above a certain threshold (which varies by PSU), the +12V can go out of spec (voltages greater than +12.6V). If the +12V is out of spec when the motherboard comes out of the sleep state, the PSU’s protection may prevent the PSU from running and will cause the power supply to “latch off”. This will require the user to cycle the power on their power supply using the power switch on the back of the unit.
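To put that 0.05 A figure in perspective, here is a quick back-of-the-envelope check (my own arithmetic, not part of the Corsair reply): at the C7 floor, the +12V rail is loaded with well under a watt, which is exactly the kind of near-zero load that older PSU regulation circuits were never designed for.

```shell
# Rough sanity check of the C7 sleep-state load on the +12V rail:
# 0.05 A at 12 V comes out to just over half a watt.
awk 'BEGIN { printf "%.2f W\n", 12 * 0.05 }'   # prints: 0.60 W
```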
As I write these lines, the onboard gigabit LAN port, an Intel I217-V, isn't recognized by ESXi 5.1 U1 yet, and I haven't found an available driver that I could slipstream into the ISO or install as a VIB. But this isn't really a problem, since I've slotted in 2 PCI Intel 82541PI Gigabit NICs I had as spares, so I can currently run this host with 2 NICs, which is OK. I'm using VLANs on my SG300-10 switch: the first NIC is configured with the management, vMotion, FT and VM networks, each on a separate VLAN. The second NIC is configured for iSCSI traffic.
This might change with the next major VMware release, since the onboard gigabit LAN port is an Intel one, and in my experience most Intel LAN adapters just work out of the box.
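For reference, the VLAN tagging on the first NIC's vSwitch can also be scripted from the ESXi shell instead of clicking through the vSphere Client. This is only a sketch: vSwitch0 is the default vSwitch name, but the port group names and VLAN IDs below are made-up examples, so substitute your own.

```shell
# Create port groups on vSwitch0 and tag each with its own VLAN ID.
# Names and VLAN IDs here are example values, not my actual lab config.
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=Management
esxcli network vswitch standard portgroup set --portgroup-name=Management --vlan-id=10
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=vMotion
esxcli network vswitch standard portgroup set --portgroup-name=vMotion --vlan-id=20
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=FT
esxcli network vswitch standard portgroup set --portgroup-name=FT --vlan-id=30
```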
I'm running this ESXi build from a USB stick. It's a convenient way to boot an ESXi host without the need for locally attached disks. In production environments you'd probably put two local disks in RAID 1 for some redundancy, but for homelabbing I'm comfortable without. No noise next to my desk... :-)
In case you're new to virtualization, or you don't know how to install ESXi on a USB stick when you don't have a CD drive, you can read my post How to create bootable ESXi 5 USB memory stick with VMware Player. I already have shared storage in my lab, so I did not need to put any local disks inside. I might add a hard drive just to install W7 with VMware Workstation, in order to do some testing.
ESXi can leverage the passthrough capability present on the CPU and the motherboard. Here is a screenshot showing the devices which are available for passthrough.
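If you prefer the command line to the screenshot, the PCI devices can also be inspected from the ESXi shell (assuming SSH or the ESXi Shell is enabled on the host):

```shell
# List the host's PCI devices; anything VT-d capable that ESXi can hand
# over to a VM shows up here. Passthrough itself is then enabled in the
# vSphere Client under Configuration > Advanced Settings.
esxcli hardware pci list
```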
Another thing I haven't mentioned is that the CPU fan bundled with Intel CPUs, which is usually crappy and noisy, isn't anymore. That's rather good news. In addition, I've selected "silent mode" in the BIOS to lower the RPM, so there's no need to swap in another fan as I used to do in my previous builds... :-)
The local SATA controller, called Lynx Point, is also recognized by ESXi 5.1, and here is the screenshot. I tested with an SSD I had at hand today, and it got recognized without trouble. There are six SATA III ports on the board. But I'm not using any local disks in this whitebox, as most of my VMs run from shared storage (Nexenta and Drobo).
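To double-check that the controller has been picked up without going through the vSphere Client, the claimed storage adapters can also be listed from the ESXi shell (again assuming shell access is enabled):

```shell
# Show the storage adapters ESXi has claimed; the onboard Lynx Point
# AHCI controller should appear here with its vmhba name.
esxcli storage core adapter list
```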
The whole Haswell box draws only 50-70 W, which is excellent compared to the older i7 Nehalem boxes, which take 120-130 W each. The whole lab consumes around 440-470 W, which is quite good considering there are 4 hosts, 2 switches and 1 NAS box... It's getting big, but is a lab totaling 96 GB of RAM big enough? To run the whole VMware vCloud Suite, Horizon View, Zimbra... plus some Microsoft products, and Veeam to back it all up :-)
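For the curious, those watts are easy to turn into a yearly figure. A rough estimate for the whole lab, taking ~460 W as the average draw and €0.15/kWh as a placeholder electricity rate (plug in your own tariff):

```shell
# Rough annual energy use and cost of the whole lab at ~460 W average.
# The EUR 0.15/kWh rate is an assumption, not my actual tariff.
awk 'BEGIN {
  watts = 460; rate = 0.15
  kwh_year = watts * 24 * 365 / 1000
  printf "%.1f kWh/year, about EUR %.0f/year\n", kwh_year, kwh_year * rate
}'
```

Every watt shaved off the idle draw runs 24/7, which is why the 65 W TDP S-series part pays off in a lab that never powers down.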
- Intel i7-4770S
- G.Skill F3-10666CL9Q-32GBXL (4 x 8 GB)
- ASRock H87M PRO4
- Antec NEO ECO 620C 620W
- Zalman ZM-T1U3
The whole ESXi box costs around €650 and will be used in my lab for learning purposes. As an architecture, Haswell, as you know, doesn't bring much of a compute performance increase (around 10% at most), but it brings very good energy efficiency and quite good built-in graphics.
The i7-4770S has Intel HD 4600 graphics built in, so in case you want to use such a box for something other than an ESXi host, you can. Just slot in an SSD to obtain a killer gaming station, with energy efficiency as a bonus... Or use it as a box with VMware Workstation for "vLab-in-a-box" configurations. You might want to get my e-book, which guides you through it – How-to build a nested vSphere Lab on PC or laptop with limited resources?
I hope that you found this post useful. Feel free to share, comment and subscribe to our RSS feed.
You’ll certainly like this! ESXi 5.5 Free Version – no more hard limit 32GB of RAM
Some older Homelab articles:
- How to build a low cost NAS for VMware Lab – introduction
- How to build low cost shared storage for vSphere lab – assembling the parts
- VMware Home Lab: building NAS at home to keep the costs down – installing FreeNAS
- Performance tests with FreeNAS 7.2 in my homelab
- Installation Openfiler 2.99 and configuring NFS share
- Installing FreeNAS 8 and taking it for a spin
- My homelab – The Network design with Cisco SG 300 – a Layer 3 switch for €199.
- Video of my VMware vSphere HomeLAB
- How to configure FreeNAS 8 for iSCSI and connect to ESX(i)