Recently I’ve been toying with the idea of a homelab refresh. My current Dell and Cisco hardware has performed well over the years, and is still very powerful. However, newer technologies such as All-Flash VSAN and Azure Stack demand enterprise-level kit: 10GbE networking and vast amounts of memory.
It was time for a change.
At present I use two Dell PowerEdge T710s stuffed to the gills with RAM and gigabit network cards (ten of them!) across two sites. I also use vSphere Replication together with SRM to provide multi-site redundancy.
Three of the NICs pipe into Cisco 2600 routers to emulate a WAN environment. I also use two Cisco 2950 switches and an ASA 5510 to keep my networking skills fresh.
As homelabs go this is still pretty impressive; however, it is not without its problems. The PowerEdges are very power hungry… every time they start up, the lights dim. They only have one PSU fitted, but 1100W is a lot for any server.
Each server has 144GB of RAM, but I still found this a limitation if I wanted to run full Exchange, View and vRealize Automation solutions concurrently. And that is before the supporting management infrastructure such as domain controllers, RSA Authentication Manager and so on.
Networking was also a constraint. Whilst most of the VMs don’t require huge amounts of bandwidth (and I don’t need to vMotion eight machines at once), certain solutions I wished to evaluate do. VSAN and Atlantis USX are prime examples.
Finally, noise was a factor. Enterprise-level hardware is not only expensive but noisy. Whilst the “secondary datacentre” (at my parents’ house) is located in an attic space, the primary currently sits in the dining room – not the official datacentre it was designed for. As someone put it, it had a low wife acceptance factor.
The King is dead, long live the King
Please note: I cannot take credit for this design. The genesis for this refresh can be traced back to Frank Denneman who first put me on to using this hardware.
I began the search for new hardware, which had to meet the following criteria:
- Powerful – could take 128GB of RAM
- Future proof – 10GbE was a must
- Low noise – it had to have a high WAF
- Low energy requirements – I don’t want the National Grid to have to fire up Dinorwig Power Station every time I power it on
In the end I came across the SuperMicro 5018D-FN4T. These are 19-inch, 1U rackable servers consisting of the following:
- Intel Xeon D 8-core processor
- 8GB RAM, expandable to 128GB
- 2 x 10GbE NICs
- 2 x 1GbE NICs
- 1 x PCIe 3.0 slot
- 1 x M.2 slot
- Remote management
- 200W low noise PSU
- No storage as standard (which suited me fine)
These servers seemed ideal, and on that basis I bought three of them.
However the base model fell way short of what I needed. With a budget in mind I headed over to Amazon and added the following to each box:
- A 128GB SSD for the ESXi installation
- A Samsung 950 Pro NVMe card for the M.2 slot
- A 1TB SSD for capacity
As the current Cisco switches lack gigabit, never mind ten gigabit, I decided to add a Netgear XS708E as well. Six of its ports provide connectivity for the three hosts (two 10GbE NICs apiece), leaving two for trunking into my existing switches.
Conscious that I wanted to run software-defined storage rather than a traditional fibre-channel SAN, I decided to buy some decent disk controllers for the SSDs. Having previous experience with Dell, I purchased three PERC H700 cards off eBay and placed one in each server’s PCIe slot.
To house the servers, two Cisco switches, two routers, a firewall and a PDU, I bought a 12U rack and racked it all up.
The three servers are powered directly from my APC UPS, whilst the remaining hardware is plugged into the PDU. To get the power cables into the rack we had to drill a 38mm hole in the back, which unfortunately chewed up one of my father’s drill bits in the process.
Another item for the bill of materials….
Being a massive fan of VMware vSphere, I decided to take advantage of their generous software allowance for vExperts and installed vSphere 6.0 U2 on each of the 128GB SSDs.
With sequential reads of up to 2,500MB/s and writes of up to 1,500MB/s, it made sense to use the Samsung 950 Pro as the caching tier for VSAN. The remaining 1TB SSD would be the capacity disk.
With that plan I began the installation.
The first issue I encountered was that the Dell PERC H700 cards refused to boot into the config utility to set up the virtual drives. No amount of pressing <CTRL><R> or changing the motherboard settings did the trick – they simply wouldn’t play, despite my trying:
- Upgrading/downgrading the BIOS
- Every possible BIOS combination potentially affecting the PCIe slot
- Flashing the PERC cards with different vendor firmware/SBR/SPD
Finally, suspecting a dodgy batch of PERC cards, I sourced another locally. That failed to work too.
I reached out to SuperMicro support and they did their best to help. They were really great… even writing custom BIOSes for me to try. However in the end I had to admit defeat – for some reason the PERC cards did not want to play with the SuperMicro servers.
Reluctantly I purchased three more disk controllers, this time IBM M1015 cards. Whilst they lack a memory cache and battery backup, they’re still pretty solid cards and perform well.
vSphere 6.0 U2 installed without issue, but did not include the Intel 10GbE drivers. After a quick hunt on Google I found the necessary vib and installed it.
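Installation is straightforward once the vib has been copied to the host – a minimal sketch from the ESXi shell, with the filename standing in for whichever ixgbe driver build you end up downloading:

```
# Copy the driver vib to the host first (e.g. via SCP to /tmp), then install it.
# --no-sig-check may be needed depending on the vib's acceptance level.
esxcli software vib install -v /tmp/net-ixgbe-<version>.vib --no-sig-check

# Reboot so the new driver loads
reboot
```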
As each SSD is configured as a virtual drive on the disk controller, vSphere needs to be told that each disk is a flash device. In previous versions of vSphere this was time-consuming, but in 6.0 U2 it can be done painlessly in the Web Client.
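For comparison, the long-hand route in older releases was a SATP claim rule per device from the ESXi shell – the naa identifier below is just a placeholder:

```
# Tag the device as SSD via a SATP claim rule (the old, manual method)
esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL --device naa.XXXXXXXX --option "enable_ssd"

# Reclaim the device so the rule takes effect
esxcli storage core claiming reclaim -d naa.XXXXXXXX

# Verify – "Is SSD" should now read true
esxcli storage core device list -d naa.XXXXXXXX | grep "Is SSD"
```

Being able to tick a box in the Web Client instead is a welcome improvement.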
I was initially sceptical about claims of how easy it is, but configuring VSAN really is just a matter of three clicks. Unfortunately for me, I’d accidentally configured one of the NVMe cards as a datastore at one point, so VSAN refused to use it. To discover why, I dropped into the shell.
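The handy tool here is vdq, which lists every disk along with its VSAN eligibility and – crucially – the reason a disk has been ruled out. A query along these lines does the job:

```
# Query all disks for VSAN eligibility; look for any marked
# "Ineligible for use by VSAN" and check the accompanying Reason field
vdq -q
```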
This gave me the problem straight away: the NVMe device still had partitions on it from its datastore days. I removed the partitions using partedUtil and then proceeded to configure VSAN.
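The clean-up itself only takes a minute – the device path below is a placeholder for the offending disk:

```
# List the partition table on the offending device
partedUtil getptbl /vmfs/devices/disks/<device>

# Delete the stray partition (partition 1 in this example), then re-run vdq -q to confirm
partedUtil delete /vmfs/devices/disks/<device> 1
```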
All my VMs have been migrated, but the move to vSphere 6.0 has left a couple of issues outstanding.
First order of business is to change my vCenter background with William Lam’s help. The second is to secure my PSCs and vCenter with genuine SSL certificates from my multi-tiered in-house Certificate Authority.
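For the certificate work, the plan is to lean on the certificate-manager utility that ships with vCenter 6.0 – assuming the appliance deployment, it lives at the path below:

```
# Built-in certificate replacement wizard on the vCenter Server Appliance / PSC
/usr/lib/vmware-vmca/bin/certificate-manager
```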
To quote the film, these servers “move like s**t through a goose”. They are really fast and have plenty of room for future expansion. The move to VSAN has been worthwhile, giving both more space and a hefty increase in performance.
A special thanks
I’d like to thank my Dad for his help with the project. Not only did he spend time racking it all with me, but he painstakingly measured, cut, crimped and tested each network cable to perfection. He was an absolute star and if it wasn’t for him it would have been an absolute mess.
I’d also like to thank him for continuing to allow me to store one of the Dell PowerEdge T710s in his attic.
And yes Dad, that is the reason your electricity bill went up £75 a month. And yes, I will most certainly cover the cost of that 😉