Wednesday Tidbit: Fix Container Service Extension 4.1 Permissions Issue

A few weeks ago I upgraded Cloud Director CSE from 4.0.3 to 4.1. However, when I tried to deploy a new Kubernetes 1.25 cluster, I received the following error:

For this particular tenant, I had cloned the OrgAdmin role to a new one (OrgAdmin w/ Kubernetes) and then added the permissions from the Cluster Author role.

However, this 4.0.3-era combination was missing a permission that 4.1 requires, hence the error above. After comparing my amalgamated role with the Cluster Author role, I spotted it was missing this:

After adding this permission to my “OrgAdmin w/ Kubernetes” role, I was able to create CSE clusters again.
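If you want to find this sort of delta programmatically rather than eyeballing two roles in the UI, the CloudAPI makes it reasonably easy. The sketch below uses Python and requests against the standard /cloudapi/1.0.0/roles endpoints; the host, bearer token, API version, and page size are placeholder assumptions for my environment, so adjust to taste.

```python
import os
import requests

# Placeholder environment: VCD_HOST like "vcd.example.com", VCD_TOKEN a
# bearer token obtained from an earlier session login
HOST = os.environ["VCD_HOST"]
HEADERS = {
    "Accept": "application/json;version=37.0",
    "Authorization": f"Bearer {os.environ['VCD_TOKEN']}",
}

def role_rights(role_name: str) -> set:
    """Return the set of right names attached to the named role."""
    roles = requests.get(
        f"https://{HOST}/cloudapi/1.0.0/roles",
        headers=HEADERS,
        params={"filter": f"name=={role_name}"},
    ).json()["values"]
    rights = requests.get(
        f"https://{HOST}/cloudapi/1.0.0/roles/{roles[0]['id']}/rights",
        headers=HEADERS,
        params={"pageSize": 128},
    ).json()["values"]
    return {right["name"] for right in rights}

# Rights present in Cluster Author but absent from my amalgamated role
missing = role_rights("Cluster Author") - role_rights("OrgAdmin w/ Kubernetes")
print("\n".join(sorted(missing)))
```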

And… we’re back!

Mindmaps are useful

This past year has unfortunately not been a productive one for the blog. Daily work, a lab in a perpetual state of disrepair, and family commitments have conspired to keep me from producing content.

Well, today I’m back, and with a plethora of ideas and projects to share!

Over the coming weeks, I plan a major overhaul of the lab, and of my on-premises tenant that hosts my private workloads.

One of these projects will be to move my blog from hosted WordPress to Kubernetes on-prem; the second part of that migration will be a move to Hugo.

So… lots to come. Stay tuned!

Wednesday Tidbit: Enable Basic Auth in NSX-ALB (Avi) to enable Tanzu Kubernetes Grid

For a while now I’ve been a big fan of VMware’s Tanzu Kubernetes Grid Integrated, formerly Pivotal’s Kubernetes Service. However, whilst it was great in the beginning, newer technologies such as Cluster API have overtaken the likes of BOSH.

Whilst it could be frustratingly difficult to set up, VMware made serious efforts to simplify this with solutions such as the Management Console. However, after recently struggling with this and trying to create NSX-T Principal Identity certificates during setup, I decided it was time to walk away and go “all in” with TKG.

Tanzu Kubernetes Grid

After the initial fiddly bits (in my case, getting Docker to work nicely on Ubuntu), firing up the management cluster using the UI was trivially easy.

I first opted for kube-vip as my control plane endpoint provider. After a successful installation, I decided I wanted to use NSX Advanced Load Balancer (formerly known as Avi Vantage) as my load balancer.

I re-ran the installation, entered the NSX ALB credentials, verified it could see my cloud and networks, and continued on.

However, no matter how many times I tried the installation, it would always fail without even creating the control plane VMs, and the logs offered little clarity as to the cause.

TL;DR

After poring over the initial documentation at https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.5/vmware-tanzu-kubernetes-grid-15/GUID-mgmt-clusters-install-nsx-adv-lb.html, it appears VMware have missed a critical step.

To enable the TKG installer to install and configure an endpoint provider in NSX ALB, Basic Authentication needs to be enabled.

To do this, in ALB click on the Administration tab, then expand Settings, and click Access Settings. Click the pencil icon on the right and then check the box for Allow Basic Authentication:

Makes all the difference
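If you prefer to script the change (or bake it into your controller bootstrapping), the same flag lives on the systemconfiguration object in the Avi API. Here is a rough sketch using the avisdk Python package; the controller address, credentials, and API version are placeholders, and I'd treat this as a starting point rather than a tested recipe.

```python
from avi.sdk.avi_api import ApiSession

# Authenticate with a session login (basic auth is, of course, still off at this point)
api = ApiSession.get_session(
    "alb.example.com", "admin", "changeme", api_version="20.1.4"
)

# Fetch the current system configuration, flip the flag, and write it back
cfg = api.get("systemconfiguration").json()
cfg["portal_configuration"]["allow_basic_authentication"] = True
api.put("systemconfiguration", data=cfg)
```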

Frustratingly, I only found this out while reading Cormac Hogan’s blog, so kudos to him.

Automate SSL Certificate Issuing and Renewal with HashiCorp Vault Agent – Part 1: Configure Vault

One of the biggest challenges operations teams face is SSL certificate issuing and renewal. Often this is because different applications, such as vendor appliances, have a complicated renewal process; at other times it is simply because organisations miss the renewal date. If large enterprises like Microsoft sometimes fail at this, what hope do the rest of us have?
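The full walkthrough is in the post itself, but as a taste of where Part 1 lands, here is a minimal sketch of standing up Vault's PKI secrets engine with the hvac Python client. The Vault address, token, mount path, TTLs, role name, and example.com domain are all placeholder assumptions.

```python
import hvac

client = hvac.Client(url="https://vault.example.com:8200", token="s.placeholder")

# Enable the PKI secrets engine and allow long-lived leases for the root CA
client.sys.enable_secrets_engine(backend_type="pki", path="pki")
client.sys.tune_mount_configuration(path="pki", max_lease_ttl="87600h")

# Generate an internal root CA and publish the issuing/CRL URLs
client.secrets.pki.generate_root(
    type="internal", common_name="example.com", extra_params={"ttl": "87600h"}
)
client.secrets.pki.set_urls(
    {
        "issuing_certificates": "https://vault.example.com:8200/v1/pki/ca",
        "crl_distribution_points": "https://vault.example.com:8200/v1/pki/crl",
    }
)

# A role the Vault Agent can later use to issue short-lived certificates
client.secrets.pki.create_or_update_role(
    "web-certs",
    extra_params={
        "allowed_domains": "example.com",
        "allow_subdomains": True,
        "max_ttl": "720h",
    },
)
```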

Wednesday Tidbit: Ensure you add Perl to your Linux Templates for vRealize Automation 8.x

Recently I deployed a new vRealize Automation environment into HobbitCloud using vRealize Suite Lifecycle Manager. The Day 2 configuration was done using Ansible, which configured items such as Cloud Zones, Projects, and Image Mappings.

As with all my desktop and server images, I had automated the build of my CentOS 8 image using HashiCorp Packer and GitLab by simply committing the latest ISO to my repository.

My Cloud Templates were also stored in source control (in my on-premises GitLab environment), so once Ansible had configured the integration, they started to sync. However, when I came to deploy a CentOS-based workload to one of my compute workload domains, it failed as it was unable to get an IP address.

TL;DR

For open-vm-tools to be able to get an IP address from a vRealize Automation network profile, you need to ensure Perl is installed in the base image.
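Since I build these images with Packer, I'd rather catch a missing dependency at build time than at deploy time. Below is a hypothetical validation step you could run as a final provisioner (assuming Python 3 is present in the image); checking for the vmtoolsd binary is my stand-in for "open-vm-tools is installed".

```python
import shutil
import sys

# Guest customisation needs Perl; vmtoolsd ships with open-vm-tools
required = ["perl", "vmtoolsd"]
missing = [tool for tool in required if shutil.which(tool) is None]

if missing:
    sys.exit(f"image validation failed, missing: {', '.join(missing)}")
print("image validation passed: perl and open-vm-tools present")
```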