The beta of VMware VSAN, publicly available through the VMware communities, gives everyone the opportunity to install and test it. But how do you choose the hardware needed to build 3 nodes without breaking the budget? Here are my thoughts on what's available right now and what would allow you to build a VSAN-based cluster for a homelab. These are only my own opinions, not a guide – you do what you want!
As of now, very little hardware is officially supported. I've seen some Dell server systems, but I think this will change quite fast. For example, there are currently only 3 SATA and SAS SSDs on the HCL:
- Intel S3700 SATA (specifications) – approx. price $500 at Amazon
- Samsung SM1625 SAS – approx. price $ ????
- Dell 400GB Solid State Drive SAS Value SLC 6Gbps 2.5in – saw a price $499 at eBay
Well, it seems possible to get a SAS-based SSD for just under $500. That's still very expensive, but it gets you guaranteed performance and a failure rate that won't disappoint.
My thoughts on SSDs?
Even though I personally haven't had an SSD fail in my lab yet (I have some Kingston, OCZ, Samsung and Crucial SSDs), I found a list of failure rates for consumer SSDs. The list is by no means complete and doesn't show individual models either, but it gives you a small overview by vendor:
From hardware.fr, here are the SSD failure rates according to a French e-tailer as of May 2013:
- Samsung 0.05%
- Plextor 0.16%
- Intel 0.37%
- Crucial 1.12%
- Corsair 1.61%
- OCZ 6.64% (!)
Do you want my recommendations for VSAN homelab hardware?
I think it will be quite difficult to give any. For a single, unsupported whitebox built from consumer parts, there is usually someone who tries a config, builds it and tells others in a post what works and what doesn't. One example is my Haswell ESXi build. There is a problem with the internal NIC, but a VIB exists that can be slipstreamed into the installation ISO or installed via the CLI afterwards.
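Installing such a community driver VIB from the ESXi shell is only a couple of commands. A minimal sketch, assuming you've already copied the VIB to the host – the file path below is hypothetical, use the driver you actually downloaded:

```shell
# Allow community-supported VIBs (the default acceptance level rejects them)
esxcli software acceptance set --level=CommunitySupported

# Install the VIB from an absolute path (hypothetical example path),
# then reboot the host so the driver loads
esxcli software vib install -v /tmp/net-driver.vib
reboot
```

Slipstreaming the VIB into the installation ISO instead saves you this step on every fresh install.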
For VSAN we will probably see far fewer such posts, as the price tag alone will limit the number of builds from VMware fans and enthusiasts. It would actually be great to have a site like vm-help.com for VSAN… Don't you think? Or a list of VSAN homelab hardware that just works… But I know that's just impossible.
I think that for home VSAN builds, enterprise-class hardware is pretty much out of the question, since for the price of a single enterprise-class SSD you can have 3 consumer-grade SSDs. So if you are at the point of deciding, or thinking like me about building a small VSAN cluster, think of it as a risk which you may (or may not) want to accept. It's up to you. I assume that most VMware home users are not rich; WAF and the budget are the first two limiting factors. Building a physical VSAN cluster may still be out of budget for many (the majority of?) VMware users, unless hardware prices fall even further in the months to come.
If you're on a budget and already have an ESXi whitebox in your lab, try nested VSAN! Even better (and cheaper), try the free VMware HOL lab called HOL-SDC-1308 – Virtual SAN (VSAN) and Virtual Storage Solutions. You just need an HTML5-compatible browser… :-).
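If you go the nested VSAN route, one common hurdle is that VSAN wants to see an SSD in each node, while a nested ESXi VM only has plain virtual disks. A known workaround is to tag one of the virtual disks as SSD with a SATP claim rule. A hedged sketch – the device identifier below is an example, not from this article; list your own devices first:

```shell
# Find the device name of the virtual disk you want VSAN to treat as SSD
esxcli storage core device list

# Tag the chosen device as SSD (device ID is an example -- use your own)
esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL \
  --device mpx.vmhba1:C0:T1:L0 --option "enable_ssd"

# Reclaim the device so the new claim rule takes effect
esxcli storage core claiming reclaim -d mpx.vmhba1:C0:T1:L0
```

After reclaiming, the disk shows up as SSD and VSAN will accept it for the flash tier of the nested node.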
Otherwise, the VMware VSAN HCL page offers a nice wizard, which will include a feature (not active at the moment) for selecting a complete node of VSAN-compatible hardware: a compatible storage controller (SAS/SATA host bus adapter (HBA) or RAID controller), an SSD, magnetic disks, and at least 1 Gb NICs for VSAN traffic (preferably more, or 10 GbE or InfiniBand).
If you still want VSAN HCL-compatible hardware, check out the HCL page. You can access the VSAN HCL wizard here.
The wizard shows just a few Dell servers for now, but as you can see, there will be a Virtual SAN Ready Node option, which will allow you to choose a full node.
While VSAN licensing and pricing are not known yet, some information has already filtered out suggesting that VSAN will not require Enterprise Plus licensing and that even lower vSphere license tiers will work.
As for myself, I'm waiting for my InfiniBand switch, which will allow me to run the back-end storage at 10 Gb speed. Stay tuned for a follow-up article. The first part is here – 20 Gbs Storage Network in my Lab.
Feel free to subscribe via RSS or follow me on Twitter.
Article originally published on www.vladan.fr – Virtualization, ESXi, vSphere, Hyper-V … how-to, tutorials, videos….