Today is a sad day. After just over fifteen years of having a computer lab environment at home, I have decided the time has come to decommission it. The bandwidth and energy requirements have become too much, and I can no longer justify having one in my current role.
For a while now, the name “homelab” hasn’t really done it justice. As one chap pointed out on Twitter a few weeks back – “this guy has a better homelab than most commercial companies”. Unfortunately, that comes at a price, which manifests itself each month in the form of a cripplingly expensive electricity and Internet bill. As summer approaches, the warm weather exacerbates the problem.
However by far the biggest issue with keeping enterprise-grade computing hardware at home is the noise. It often goes far beyond what is bearable, and when you work from home a couple of days a week it can drive you to distraction.
Something had to change.
Current environment
The existing platform centres around a 4-node VMware vSAN cluster, of which one host is compute-only. The ESXi hosts are SuperMicro 5018D-FN4T servers, each with an 8-core processor and 128GB of RAM. All-flash vSAN storage comes in the form of a 480GB Samsung SSD for cache and two 960GB Samsung SSDs for capacity.
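For anyone wanting to pull the same host details out programmatically, a minimal pyVmomi sketch along these lines will do it. The vCenter address and credentials below are placeholders, not my real ones:

```python
# Minimal pyVmomi sketch: list each cluster and the CPU/RAM of its hosts.
# The vCenter address, credentials and unverified SSL context are
# placeholder/lab-only assumptions.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab only: skip certificate checks
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
for cluster in view.view:
    print(cluster.name)
    for host in cluster.host:
        hw = host.summary.hardware
        print(f"  {host.name}: {hw.numCpuCores} cores, "
              f"{hw.memorySize / 1024**3:.0f} GB RAM")

Disconnect(si)
```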
Networking is provided by a single Netgear XS712T 10GbE switch, which all hosts connect to. Obviously this is not ideal from a redundancy point of view.
Tier 2 storage comes in the form of a Synology DS1515+ fitted with five 3TB Seagate disks. This offers up NFS and iSCSI storage for the compute and edge clusters, as well as a place to store ISOs. Additionally I use this device to store my Veeam backups.
More information can be found here.
The hunt is on…
Once I had made the decision to move the platform, the next question was “where to?”. My first conversation was with my resident code monkey, who has an office in Warrington and was already considering purchasing a rack to house his kit. However despite being in the centre of a major UK town, connectivity options were frustratingly limited. The best that the local infrastructure could provide (without incurring silly costs) was a 76Mb broadband connection. This just wouldn’t be enough.
However it did make me realise this would not be a simple move, and that I needed to put more thought into it. The following requirements were identified:
- Power and cooling needed to be a priority
- Internet bandwidth had to be substantial
- At least eight public IP addresses
- Costs had to be fixed, and preferably low
All these requirements pointed to a colo of some sort. However these don’t come cheap, and location also comes into play. Whilst it would be “cool” to go all Facebook and find a datacentre in Northern Sweden, the reality is I would need to occasionally visit in person – and anywhere north of The Wall would be a pain. Somewhere in the Midlands, with close proximity to an airport, would be ideal.
The Internet pipe also had to be considerable, as VMware’s Site Recovery Manager plays a large part in a number of my PoCs, so the bandwidth available for array-based replication had to be fit for purpose. Whilst it’s hard to quantify exactly how much would be required without detailed calculations, suffice it to say “as much as possible”.
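For what it’s worth, a back-of-the-envelope calculation gives a feel for the sort of sustained bandwidth array-based replication needs. The change rate, compression ratio and headroom figures below are illustrative assumptions, not numbers from my environment:

```python
# Back-of-the-envelope estimate of replication bandwidth.
# All figures are illustrative assumptions, not measurements.
daily_change_gb = 50      # assumed data changed per day across replicated VMs
compression_ratio = 0.6   # assume ~40% reduction on the wire
burst_headroom = 4        # changes are bursty, so size for peaks, not averages

avg_mbps = (daily_change_gb * 1024 * 8 * compression_ratio) / (24 * 60 * 60)
peak_mbps = avg_mbps * burst_headroom

print(f"average ~{avg_mbps:.1f} Mbps, sized for peaks ~{peak_mbps:.0f} Mbps")
```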
A number of products in the environment are hosted online, from a fully distributed Microsoft Exchange solution to a load-balanced VMware View implementation. Whilst these Internet-facing services could share public IP addresses, I do prefer to separate them where possible. I immediately identified that eight addresses would be needed.
Finally costs had to be fixed, and as low as possible. I didn’t want to spend more than £250 (including VAT) a month.
The find
After spending some time doing research, I came across VeloxServ, a server hosting provider in the West Midlands, UK. They have datacentres in Wolverhampton and London, and offer dedicated server hosting and colocation.
I chose their quarter-rack offering and specified 3 amps for my power requirements; increasing the power brings the total to £222 a month (inc VAT). Whilst not exactly cheap, it falls within my budget.
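For anyone wondering what 3 amps actually buys you: on a nominal 230V UK feed that works out to roughly 690W of continuous draw. The quick sums below show the idea; the lab draw figure is a hypothetical example, not a measurement from my kit:

```python
# Rough power-budget check for a 3A colo feed (UK mains, 230V nominal).
# lab_draw_watts is a hypothetical example, not a measured value.
voltage = 230                 # volts, UK single-phase nominal
amps = 3                      # contracted supply
budget_watts = voltage * amps # ~690W continuous ceiling

lab_draw_watts = 500          # hypothetical steady-state draw
headroom = budget_watts - lab_draw_watts
monthly_kwh = lab_draw_watts / 1000 * 24 * 30

print(f"budget {budget_watts}W, headroom {headroom}W, ~{monthly_kwh:.0f} kWh/month")
```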
Uplink to the VeloxServ network comes in the form of a gigabit network connection, and a /28 subnet of public IPs (thirteen usable) is provided. As for the speed, the connection did not disappoint, although we do need to talk about that latency…
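Incidentally, the “thirteen usable” figure is just what’s left of a /28 once the network, broadcast and provider gateway addresses are accounted for. A quick sketch with Python’s ipaddress module shows the arithmetic; the prefix below is a documentation range, not my actual allocation, and the gateway-takes-the-first-host convention is an assumption:

```python
# Why a /28 yields thirteen usable addresses: 16 total, minus the network and
# broadcast addresses, minus one reserved for the provider's gateway.
# 203.0.113.0/28 is a documentation prefix used purely as an example.
import ipaddress

subnet = ipaddress.ip_network("203.0.113.0/28")
hosts = list(subnet.hosts())   # excludes network and broadcast addresses
gateway = hosts[0]             # assume the provider takes the first host address
usable = hosts[1:]

print(f"{subnet.num_addresses} addresses, {len(hosts)} hosts, "
      f"{len(usable)} usable after the gateway ({gateway})")
```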
That settled it. It was time to move.
The physical migration went smoothly with no issues. I raised a few support calls to arrange site access and establish reverse DNS, all of which were dealt with quickly.
In a few hours the lab was relocated to its new home in the West Midlands.
The future
The environment is constantly evolving to keep up with the pace of technology. It requires continuous investment in both time and money to prevent it becoming obsolete.
My first task is to increase my storage performance in the compute cluster. After reading about Myles’ environment, I have decided to replace the existing Synology DS1515+ with a DS2015xs. This will enable 10GbE connectivity and provide more drive slots for an SSD read/write cache.
My next task is to install another Netgear XS712T 10GbE switch. This will eliminate a single point of failure at the switching level, which, if it failed, would be catastrophic for vSAN.
At the software level, I need to extend my Horizon View environment so I can implement a full Cloud Pod Architecture. I already have a pair of F5 LTMs in HA at each datacentre; now I just need to build the GTMs. I also need to configure the HAProxy servers in the DMZ which are load-balancing my View Access Points.
Lastly I need to extend my vRealize Automation environment into the cloud – specifically Azure. My current customer is keen to leverage cloud platforms in an upcoming project, and this would provide an excellent opportunity to get ahead of the curve.
Oh, and I still have to move my production environment too… but that’s certainly for another day.
I’m surprised at your move. I run two Supermicro 5028D-TN4T servers (the small tower versions of yours), a custom-built Supermicro X10SL7-F for my SAN (with eight SSDs and one HDD), a Cisco SG300-28 switch, a CyberPower UPS and a Netgate pfSense firewall, and all of this lives in my lounge, where I watch movies on a projector. It’s quiet enough considering the area it’s in: not silent, but far from noisy (and noise drives me mad). I seem to use between 180 and 220W of power, averaging about 190W when idle, and my electricity bill went up about £25pm once the lab was built, turned on and went live. I pay about £25 for 100Mb down/10Mb up fibre, but I have a dynamic IP (business fibre is silly expensive here). Since I was already paying for the fibre broadband, I only count the extra electricity, so the lab costs me about £25 a month to run. For noise, you just need to select the hardware carefully.
If I could only lower the fan RPMs in the Supermicro 5028D-TN4T servers this setup would be near perfect!
So I have four servers in total, but it’s also the associated infrastructure: storage, switches, firewalls etc. It all adds up.
But as you pointed out, it’s not just about cost – the noise comes into it as well. Plus I couldn’t live with just a single dynamic IP 😦
Have you looked at Noctua fans? I know a lot of people who swap out the stock fans for those…
I thought the Noctua fans caused them to continually rev up and down, which, from my experience, is rather annoying?
Pingback: Moving Production to Amazon Web Services | virtualhobbit
I took a different approach in my home lab.
I run my entire VMware lab inside VMware Workstation using nested virtualization, on a custom-built workstation with dual 8-core E5s, 64GB of RAM (looking to get some more), and NVMe RAID 0. It has served me very well for learning and testing purposes, and the performance is beyond my expectations. It has saved me a lot of money on switches and servers.
For my home semi-production environment (thanks to the electricity and ISP monopoly I don’t have redundancy on either), I decided to go without a SAN. I run Joyent Triton on two Dell R720s with all-SSD local-attached storage, and one Supermicro with HDDs as a backup server. The performance is bare-metal speed, and they live happily on 1Gb switches. As for noise, they are quiet most of the time, and my fridge, dishwasher and dryer are much louder than them.
Thanks for this! I’m considering a similar move myself and appreciate your time in writing this up!