Today is a sad day. After just over fifteen years of having a computer lab environment at home, I have decided the time has come to decommission it. The bandwidth and energy requirements have become too much, and I can no longer justify having one in my current role.
For a while now, the name “homelab” hasn’t really done it justice. As one chap pointed out on Twitter a few weeks back – “this guy has a better homelab than most commercial companies”. Unfortunately, that comes at a price, which manifests itself each month in the form of a cripplingly expensive electricity and Internet bill. As summer approaches, the warm weather exacerbates the problem.
However, by far the biggest issue with keeping enterprise-grade computing hardware at home is the noise. It often goes far beyond what is bearable, and when you work from home a couple of days a week it can drive you to distraction.
Something had to change.
The existing platform centres around a 4-node VMware vSAN cluster, of which one host is compute-only. The ESXi hosts are SuperMicro 5018-FN4T servers, each with an 8-core processor and 128GB of RAM. All-flash vSAN storage comes in the form of a 480GB Samsung SSD for cache and two 960GB Samsung SSDs for capacity in each contributing host.
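As a rough back-of-envelope check (not an official vSAN sizing exercise), the usable capacity of that cluster can be sketched as follows. Three hosts contribute storage (the fourth is compute-only), and the default FTT=1 mirroring policy halves the raw figure; slack space and vSAN overheads are ignored here.

```python
# Back-of-envelope usable capacity for the cluster described above.
contributing_hosts = 3       # fourth node is compute-only
capacity_ssds_per_host = 2   # two 960GB capacity drives per host
ssd_gb = 960

raw_gb = contributing_hosts * capacity_ssds_per_host * ssd_gb
usable_gb = raw_gb // 2      # FTT=1 keeps two copies of every object

print(f"raw: {raw_gb} GB, usable (FTT=1): {usable_gb} GB")
```

In practice you would also leave ~25-30% slack for rebuilds, so the real figure is lower still.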
Networking is provided by a single Netgear XS712T 10GbE switch, which all hosts connect to. Obviously this is not ideal from a redundancy point of view.
Tier 2 storage comes in the form of a Synology DS1515+ fitted with five 3TB Seagate disks. This offers up NFS and iSCSI storage for the compute and edge clusters, as well as a place to store ISOs. Additionally I use this device to store my Veeam backups.
The hunt is on…
Once I had made the decision to move the platform, the next question was “where to?”. My first conversation was with my resident code monkey, who has an office in Warrington and was already considering purchasing a rack to house his kit. However, despite being in the centre of a major UK town, connectivity options were frustratingly limited. The best that the local infrastructure could provide (without incurring silly costs) was a 76Mb broadband connection. This just wouldn’t be enough.
However, it did make me realise this would not be a simple move, and that I needed to put more thought into it. The following requirements were identified:
- Power and cooling needed to be a priority
- Internet bandwidth had to be substantial
- At least eight public IP addresses
- Costs had to be fixed, and preferably low
All these requirements pointed to a colo of some sort. However, these don’t come cheap, and location also comes into play. Whilst it would be “cool” to go all Facebook and find a datacentre in Northern Sweden, the reality is I would need to occasionally visit in person – and anywhere north of The Wall would be a pain. Somewhere in the Midlands, close to an airport, would be ideal.
The Internet pipe also had to be considerable, as VMware’s Site Recovery Manager plays a large part in a number of my PoCs, so the bandwidth available for array-based replication had to be fit for purpose. Whilst it’s hard to quantify exactly how much would be required without detailed calculations, suffice it to say: as much as possible.
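For a feel of the numbers involved, a crude sizing sketch is easy enough. The daily change rate below is an assumed figure for illustration only – it is not taken from my environment.

```python
# Rough replication bandwidth sizing. The change rate is an assumed
# illustrative figure, not a measurement from this lab.
changed_gb_per_day = 200           # assumed daily changed data
replication_window_s = 24 * 3600   # replicate continuously over a day

megabits = changed_gb_per_day * 8 * 1000   # GB -> Mb (decimal units)
required_mbps = megabits / replication_window_s

print(f"~{required_mbps:.1f} Mb/s sustained")
```

Even a modest change rate demands a sustained double-digit Mb/s figure, and bursts after large changes are far higher – hence “as much as possible”.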
A number of products in the environment are hosted online, from a fully distributed Microsoft Exchange solution to a load-balanced VMware View implementation. Whilst these Internet-facing services could share public IP addresses, I do prefer to separate where possible. I immediately identified that eight addresses would be needed.
Finally costs had to be fixed, and as low as possible. I didn’t want to spend more than £250 (including VAT) a month.
After spending some time doing research, I came across VeloxServ, a server hosting provider in the West Midlands, UK. They have datacentres in Wolverhampton and London, and offer dedicated server hosting and colocation.
I chose their quarter-rack offering, and specified 3 amps for my power requirements.
Increasing the power brings the total to £222 a month (inc VAT). Whilst not exactly cheap, it falls within my budget.
Uplink to the VeloxServ network comes in the form of a gigabit connection, and a /28 subnet of public IPs is provided (thirteen usable once the provider’s gateway is accounted for). A quick speed test confirmed the uplink performed as advertised.
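The /28 arithmetic is worth spelling out, as it explains where “thirteen usable” comes from. The prefix below is a documentation range, not my actual allocation:

```python
import ipaddress

# A /28 contains 16 addresses. hosts() excludes the network and
# broadcast addresses; the provider's gateway then takes one more.
# 203.0.113.0/28 is a documentation prefix, not the real allocation.
net = ipaddress.ip_network("203.0.113.0/28")
hosts = list(net.hosts())

print(len(hosts))      # host addresses in a /28
print(len(hosts) - 1)  # usable once the gateway is subtracted
```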
That settled it. It was time to move.
The physical migration went smoothly, with no issues. I raised a few support calls to arrange site access and establish reverse DNS, all of which were dealt with quickly.
In a few hours the lab was relocated to its new home in the West Midlands.
The environment is constantly evolving to keep up with the pace of technology. It requires continuous investment in both time and money to prevent it becoming obsolete.
My first task is to increase storage performance in the compute cluster. After reading about Myles’ environment, I have decided to replace the existing Synology DS1515+ with a DS2015xs. This will enable 10GbE connectivity and provide more drive slots for an SSD read/write cache.
My next task is to install another Netgear XS712T 10GbE switch. This will eliminate a single point of failure at the switching level, which, if it failed, would be catastrophic for vSAN.
At the software level, I need to extend my Horizon View environment so I can implement a full Cloud Pod Architecture. I already have a pair of F5 LTMs in HA at each datacentre; now I just need to build the GTMs. I also need to configure the HAProxy servers in the DMZ which load-balance my View Access Points.
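The HAProxy layer could be as simple as TCP passthrough with sticky source balancing, so that TLS still terminates on the Access Points themselves. A minimal sketch – the backend addresses are hypothetical, not my actual DMZ addressing:

```
# Minimal sketch: TCP passthrough to View Access Points.
# Backend addresses are hypothetical placeholders.
frontend view_https
    bind *:443
    mode tcp
    default_backend view_access_points

backend view_access_points
    mode tcp
    balance source        # pin each client to one Access Point
    server ap1 10.0.10.11:443 check
    server ap2 10.0.10.12:443 check
```

Source-IP persistence matters here because a View session must keep hitting the same Access Point for its lifetime.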
Lastly I need to extend my vRealize Automation environment into the cloud – specifically Azure. My current customer is keen to leverage cloud platforms in an upcoming project, and this would provide an excellent opportunity to get ahead of the curve.
Oh, and I still have to move my production environment too… but that’s certainly for another day.