I’ve been running VMware’s Tanzu Kubernetes Grid in HobbitCloud for quite a while. It’s an easy way for me to consume Kubernetes, which I use for demonstrating containerised workload connectivity between clouds. I also deploy container workloads directly from vRealize Automation to TKGm clusters.
Recently I was asked to stand up a simple three-node Nginx application, load-balanced by the NSX Advanced Load-Balancer and secured using SSL. For the latter, I decided to use the tried and tested HashiCorp Vault.
Whilst this is quite a simple task, there are several steps one must take to configure the Advanced Load-Balancer and Vault.
This post assumes you have the following installed and configured:
- TKG management cluster
- TKG workload cluster
- HashiCorp Vault
- Vault PKI secrets engine
In the following scenario, Vault is a three-node HA cluster using Raft, installed outside of Kubernetes. For this reason, we will not be using Kubernetes authentication.
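Because Vault sits outside the cluster, cert-manager will authenticate with an AppRole instead (configured later in this post). If you haven’t already enabled the AppRole auth method in Vault, a quick check and enablement, assuming the default mount path of approle, looks like this:
vault auth list
vault auth enable approle   # only required if approle does not appear in the list above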
The first task is to deploy cert-manager, which will secure our Nginx ingress with a certificate issued by Vault.
cert-manager
First, get a list of clusters using:
kubectl config get-contexts
Select your workload cluster:
kubectl config use-context <insert workload cluster name here>
Install cert-manager:
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cert-manager.yaml
The command above uses 1.8.0, which is the latest version available at the time of writing. You can check this at https://github.com/cert-manager/cert-manager.
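If you want to confirm the installation before moving on, check that the pods in the cert-manager namespace (created by the manifest above) are all Running:
kubectl get pods -n cert-manager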
That’s it for installing cert-manager. The configuration comes in the form of the issuer, and for that we use Vault.
HashiCorp Vault
To enable cert-manager to communicate with Vault we need to create an issuer. To prevent the need to create a separate one per namespace, we will create a ClusterIssuer.
First, create a policy in Vault (substitute your PKI mount point accordingly):
path "pki/sign/mdb-lab-dot-com" { | |
capabilities = ["create", "update"] | |
} |
Apply the policy:
cat acl_sa_cert-manager.hcl | vault policy write acl_sa_cert-manager -
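Optionally, read the policy back to confirm it was written as expected:
vault policy read acl_sa_cert-manager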
Create an AppRole. I have called mine sa_cert-manager:
vault write auth/approle/role/sa_cert-manager \
  token_ttl=0m \
  token_num_uses=0 \
  secret_id_num_uses=0 \
  token_no_default_policy=false \
  policies="acl_sa_cert-manager"
Finally, get the role ID for the sa_cert-manager service account from Vault and generate a secret ID:
vault read auth/approle/role/sa_cert-manager/role-id
vault write -f auth/approle/role/sa_cert-manager/secret-id
Take the secretId generated in the previous step and Base64-encode it. You can use a website such as https://www.base64decode.org, or on Linux run the following (the -n flag stops echo appending a newline, which would otherwise end up in the encoded value):
echo -n "<insert secretId here>" | base64
Create a file named vault-auth.yaml and insert the Base64-encoded secretId on line 8:
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: cert-manager-vault-approle
  namespace: cert-manager
data:
  secretId: "<insert base64 secret here>"
The key thing here is to ensure the secret is created in the cert-manager namespace.
Create the secret:
kubectl apply -f vault-auth.yaml
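Optionally, confirm the secret exists in the cert-manager namespace before moving on:
kubectl get secret cert-manager-vault-approle -n cert-manager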
Now that we have the secret, let’s create the issuer. Create a file called vault-issuer.yaml and insert the Vault server address (line 8) and AppRole roleId (line 13):
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: vault-cluster-issuer
spec:
  vault:
    path: pki/sign/<insert PKI issuer here>
    server: https://<insert Vault server here>:8200
    caBundle: <base64 encoded caBundle PEM file>
    auth:
      appRole:
        path: approle
        roleId: "<insert roleId here>"
        secretRef:
          name: cert-manager-vault-approle
          key: secretId
Don’t forget the port on line 8. I had tremendous amounts of fun trying to work out why the issuer wasn’t working when that was missing.
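If you are wondering where the caBundle value (line 9) comes from, it is simply the Base64-encoded PEM of the CA chain that signed Vault’s TLS certificate. As a rough example, assuming that chain is saved locally as vault-ca.pem:
base64 -w0 vault-ca.pem   # -w0 disables line wrapping so the output is a single string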
Now create the issuer:
kubectl apply -f vault-issuer.yaml
Verify cert-manager can successfully communicate with Vault:
kubectl get clusterissuer vault-cluster-issuer -o wide
If you see “True” under READY and “Vault Verified” under STATUS then communication is successful.
Issue the Certificate
Create a new namespace using:
kubectl create ns nginx-test
Create a file called cert-nginx-test.yaml and enter the following (substitute accordingly):
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: nginx-test-tkg-mdb-lab-com
  namespace: nginx-test
spec:
  secretName: nginx-test-tkg-mdb-lab-com
  issuerRef:
    name: vault-cluster-issuer
    kind: ClusterIssuer
  commonName: nginx-test.tkg.mdb-lab.com
  dnsNames:
    - nginx-test.tkg.mdb-lab.com
Issue the certificate:
kubectl apply -f cert-nginx-test.yaml
Verify the certificate has been issued correctly:
kubectl get certificate -n nginx-test nginx-test-tkg-mdb-lab-com
If the status is “True” under READY, then the certificate was issued successfully.
You can drill down further using:
kubectl describe certificate -n nginx-test nginx-test-tkg-mdb-lab-com
Here you will see that a private key was written out to a secret, a certificate request was made, and the certificate was successfully issued.
The secret containing the private key can be viewed using:
kubectl describe secret -n nginx-test nginx-test-tkg-mdb-lab-com
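If you want to inspect the issued certificate itself rather than just the secret, you can pull the tls.crt key out of the secret and decode it with openssl, for example:
kubectl get secret -n nginx-test nginx-test-tkg-mdb-lab-com -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -subject -issuer -dates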
If the certificate request was not successful, first find the certificate request name:
kubectl get certificateRequest -n nginx-test
Now describe it to understand what prevented it from being issued (substitute accordingly):
kubectl describe certificateRequest -n nginx-test nginx-test-tkg-mdb-lab-com-pdq999
NSX Advanced Load-Balancer
To enable name resolution for our load-balanced Kubernetes application we need to create a virtual service for DNS in the Advanced Load-Balancer.
Log in to the ALB console and click the Applications tab, followed by VS VIPs. Create a VIP on the network you wish to use as the frontend for your load-balancers and allocate an IP address.
I have dedicated an entire VLAN (VLAN60) and subnet (172.17.60.0/24) to mine.
Click Virtual Services. Create a new service (using the Advanced Setup) and use the following (substitute accordingly):
| Setting | Value |
| --- | --- |
| Name | ALB_ALB_DNS |
| VS VIP | <insert previously created VIP here> |
| Application Profile | System-DNS |
| Services | 53 |
Click Save. Please note a pool for this service is not needed.
Click the Administration tab followed by Settings and then DNS Service. Select the service you created in the previous step.
The final step is to create a delegation in your DNS that points to the DNS Service. In my environment I want the ALB to be authoritative for the tkg.mdb-lab.com domain, so I have created a record named alb_dns_dns for the VS VIP in the mdb-lab.com domain, and the delegation for tkg points to that record.
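Once the application and its DNS record exist (they are created later in this post), you can test both halves of this setup with dig: querying the DNS virtual service directly proves the ALB is answering, and querying your normal resolver proves the delegation works end to end. For example, using my lab names:
dig nginx-test.tkg.mdb-lab.com @<insert DNS VS VIP here>
dig nginx-test.tkg.mdb-lab.com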
Avi Kubernetes Operator
Once the ALB DNS Service has been created, the next step is to update the Avi Kubernetes Operator (AKO).
AKO gets its configuration from a ConfigMap in the avi-system namespace. However, it’s not enough to make changes there, as they will be overwritten by kapp-controller.
The solution is to make the changes upstream in the install-ako-for-all AKODeploymentConfig in the management cluster, which will then propagate down.
Whilst still in the workload cluster, get the current config map and examine its output:
kubectl get cm avi-k8s-config -n avi-system -o yaml
Make a note of the defaultIngController setting (line 10):
apiVersion: v1
data:
  apiServerPort: "8080"
  autoFQDN: disabled
  cloudName: HobbitCloud - vSphere - Utrecht
  clusterName: default-tkg-wkl-prod
  cniPlugin: antrea
  controllerIP: nl-utc-p-alb-01.mdb-lab.com
  controllerVersion: 20.1.3
  defaultIngController: "false"
  deleteConfig: "false"
  disableStaticRouteSync: "true"
  fullSyncFrequency: "1800"
  logLevel: INFO
  serviceEngineGroupName: Utrecht
  serviceType: NodePort
  shardVSSize: SMALL
  vipNetworkList: '[{"networkName":"VLAN60","cidr":"172.17.60.0/24"}]'
kind: ConfigMap
metadata:
  annotations:
    kapp.k14s.io/identity: v1;avi-system//ConfigMap/avi-k8s-config;v1
    kapp.k14s.io/original: '{"apiVersion":"v1","data":{"apiServerPort":"8080","autoFQDN":"disabled","cloudName":"HobbitCloud - vSphere - Utrecht","clusterName":"default-tkg-wkl-prod","cniPlugin":"antrea","controllerIP":"nl-utc-p-alb-01.mdb-lab.com","controllerVersion":"20.1.3","defaultIngController":"false","deleteConfig":"false","disableStaticRouteSync":"true","fullSyncFrequency":"1800","logLevel":"INFO","serviceEngineGroupName":"Utrecht","serviceType":"NodePort","shardVSSize":"SMALL","vipNetworkList":"[{\"networkName\":\"VLAN60\",\"cidr\":\"172.17.60.0/24\"}]"},"kind":"ConfigMap","metadata":{"labels":{"kapp.k14s.io/app":"1650977783928597302","kapp.k14s.io/association":"v1.ae838cced3b6caccc5a03bfb3ae65cd7"},"name":"avi-k8s-config","namespace":"avi-system"}}'
    kapp.k14s.io/original-diff-md5: c6e94dc94aed3401b5d0f26ed6c0bff3
  creationTimestamp: "2022-04-26T12:56:35Z"
  labels:
    kapp.k14s.io/app: "1650977783928597302"
    kapp.k14s.io/association: v1.ae838cced3b6caccc5a03bfb3ae65cd7
  name: avi-k8s-config
  namespace: avi-system
  resourceVersion: "5587"
  uid: 076c0505-f417-444b-93f5-bc284144cd8f
Switch to the TKGm management cluster using kubectl config use-context, then create a file called install-ako-for-all.yaml containing the following:
apiVersion: networking.tkg.tanzu.vmware.com/v1alpha1
kind: AKODeploymentConfig
metadata:
  name: install-ako-for-all
spec:
  extraConfigs:
    ingress:
      defaultIngressController: true
      disableIngressClass: false
Apply it:
kubectl apply -f install-ako-for-all.yaml --validate=false
You’ll receive a warning, but it can safely be ignored. Confirm the changes are reflected:
kubectl get akodeploymentconfig install-ako-for-all -o yaml
Switch back to the workload cluster and redeploy the AKO pod:
kubectl delete po -n avi-system ako-0
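The pod is managed by a StatefulSet, so it will be recreated automatically; watch for ako-0 to return to a Running state:
kubectl get po -n avi-system -w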
Confirm the changes have been made:
kubectl get cm -n avi-system avi-k8s-config -o yaml
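If you only want the single value rather than the whole ConfigMap, a jsonpath query such as this should now return "true":
kubectl get cm avi-k8s-config -n avi-system -o jsonpath='{.data.defaultIngController}'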
Deployment File
The last step is to create our deployment, service, and ingress.
Create a file called nginx-test.yaml and use the following (substitute accordingly):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-nginx
  labels:
    app: nginx
  namespace: nginx-test
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: svc-nginx
  labels:
    app: nginx
  namespace: nginx-test
spec:
  type: LoadBalancer
  ports:
  - name: https
    port: 443
    targetPort: 80
    protocol: TCP
  selector:
    app: nginx
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginx
  labels:
    app: nginx
  namespace: nginx-test
spec:
  tls:
  - hosts:
    - nginx-test.tkg.mdb-lab.com
    secretName: nginx-test-tkg-mdb-lab-com
  rules:
  - host: nginx-test.tkg.mdb-lab.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: svc-nginx
            port:
              number: 443
Apply the manifest:
kubectl apply -f nginx-test.yaml
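Before heading over to the load-balancer, you can confirm everything is up from the Kubernetes side:
kubectl rollout status deployment/deployment-nginx -n nginx-test
kubectl get svc,ingress -n nginx-test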
Monitor the deployment using the Avi logs:
kubectl logs -n avi-system pod/ako-0
If it is successful, you will see several services created in the NSX Advanced Load-Balancer. We’re interested in the address “nginx-test.tkg.mdb-lab.com”, which should respond to a ping if your DNS delegation is configured correctly.
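You can also test from the command line before opening a browser. Assuming the Vault CA chain is saved locally as vault-ca.pem (as in the caBundle example earlier), curl should connect cleanly and show the certificate details; use -k instead if the CA is not trusted on your machine:
curl -v --cacert vault-ca.pem https://nginx-test.tkg.mdb-lab.com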
In a web browser the address should answer on HTTPS and show the Nginx default page.
Success!