In part 1 of this short series we used vRO to create a bearer token for connecting to Microsoft Azure. Now we will create some new components and a master workflow for provisioning our AKS Cluster, to enable our developers to request public cloud Kubernetes straight from vRealize Automation.
Other posts in this series:
Carrying on
Copy the following and save to a JSON file:
{
  "location": "",
  "tags": {
    "tier": "production",
    "archv2": ""
  },
  "properties": {
    "kubernetesVersion": "",
    "dnsPrefix": "",
    "agentPoolProfiles": [
      {
        "name": "nodepool1",
        "count": 3,
        "vmSize": "Standard_DS1_v2",
        "osType": "Linux"
      }
    ],
    "linuxProfile": {
      "adminUsername": "",
      "ssh": {
        "publicKeys": [
          {
            "keyData": ""
          }
        ]
      }
    },
    "servicePrincipalProfile": {
      "clientId": "",
      "secret": ""
    },
    "addonProfiles": {},
    "enableRBAC": false
  }
}
In vRO, click on the Resources tab and create a new folder. Import the above JSON file.
Create a new workflow called Provision Azure Kubernetes Service Cluster. Create the following input parameters, all of type String:
- inClusterName
- inLocation
- inAdminUserName
- inKeyData
Create the following attributes, all of type String, except for attRestHost (Any) and attJsonFile (Resource Element):
- attApiVersion
- attBody
- attClientId
- attClientSecret
- attJsonFile
- attResourceGroup
- attResponse
- attRestHost
- attSubscriptionId
- attToken
- attUrl
Enter the client ID and client secret as done in part 1. Obtain your Azure subscription ID in the same manner you did for the tenant ID and enter that.
Set attApiVersion to 2018-03-31 and attResourceGroup to a suitable resource group in Azure. Mine is called Default_RG.
Set attUrl to management.azure.com. For attJsonFile, use the field chooser to select the JSON file you previously imported.
We now have all our inputs and attributes.
Building it out
Drag a workflow onto the canvas. When prompted for the name, choose Request Azure Token. Map outToken to attToken:
Drag an action onto the canvas and select createRestHost. Map attUrl to inUrl and actionResult to attRestHost:
Drag a scriptable task onto the canvas and rename it Define Body. Map all input parameters, along with attClientId, attClientSecret and attJsonFile. From the bottom right-hand corner, drag to map attBody as an output:
Click on the Scripting tab and paste in the following code:
// Store the content of the resource element
var config = attJsonFile.getContentAsMimeAttachment();
var content = config.content;

// Convert to a JSON object
var jsonObj = JSON.parse(content);

// Substitute data
jsonObj["location"] = inLocation;
jsonObj["properties"]["dnsPrefix"] = inClusterName;
jsonObj["properties"]["linuxProfile"]["adminUsername"] = inAdminUserName;
jsonObj["properties"]["linuxProfile"]["ssh"]["publicKeys"][0] = { keyData: inKeyData };
jsonObj["properties"]["servicePrincipalProfile"]["clientId"] = attClientId;
jsonObj["properties"]["servicePrincipalProfile"]["secret"] = attClientSecret;

// Convert the JSON object back to a string
attBody = JSON.stringify(jsonObj, null, 2);
Drag the final component, another scriptable task, onto the canvas. Call it Submit request, and map inClusterName along with the following attributes:
- attApiVersion
- attBody
- attResourceGroup
- attResponse
- attRestHost
- attSubscriptionId
- attToken
As this is the last component, there is nothing to output.

Nearly there…
Paste the following for the script:
// Compute the full request URL
var requestUrlString = "/subscriptions/" + attSubscriptionId +
    "/resourceGroups/" + attResourceGroup +
    "/providers/Microsoft.ContainerService/managedClusters/" + inClusterName +
    "?api-version=" + attApiVersion;

var request = attRestHost.createRequest("PUT", requestUrlString, attBody);

// Set the content type and authorization header
request.contentType = "application/json";
request.setHeader("Authorization", "Bearer " + attToken);

// Execute the request and get the response
var response = request.execute();

var statusCode = response.statusCode;
System.debug("statusCode = " + statusCode);

if (statusCode != 201) {
    System.error("Error: " + attBody);
    throw new Error("Failed to provision cluster: " + statusCode);
}

// Store the response
attResponse = response.contentAsString;
System.log("Response is: " + attResponse);
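The request path above follows the standard Azure Resource Manager resource URI pattern. As a quick standalone sanity check (plain JavaScript, runnable outside vRO, with hypothetical example values), the same string can be built and inspected like this:

```javascript
// Build the ARM resource path for a managed cluster, as the
// Submit request task does. Plain JavaScript; runnable outside vRO.
function buildClusterPath(subscriptionId, resourceGroup, clusterName, apiVersion) {
    return "/subscriptions/" + subscriptionId +
        "/resourceGroups/" + resourceGroup +
        "/providers/Microsoft.ContainerService/managedClusters/" + clusterName +
        "?api-version=" + apiVersion;
}

// Hypothetical example values
var path = buildClusterPath("00000000-0000-0000-0000-000000000000",
    "Default_RG", "aks01", "2018-03-31");
console.log(path);
```

The same path is used for the PUT that creates or updates the cluster; a DELETE against it would tear the cluster down again.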
Validate the workflow, fixing any mistakes if necessary.
Click on the Presentation tab and select the inLocation parameter. Click the turquoise triangle on the right to add a property. From the list, choose Predefined answers. Finally, enter values for the locations near you where you can deploy AKS. For example, I have chosen uksouth and ukwest, although at the time of writing UK West is not supported. This is also a good time to make the input labels more presentable.
Run the workflow, entering the necessary details. The final field requires an SSH public key:
Click Submit and verify the cluster creates successfully.
vRealize Automation
Now that we know the workflow works as intended, we can integrate it into vRA.
In the Design tab, select XaaS Blueprints. Create a new blueprint, find the Provision Azure Kubernetes Service Cluster workflow, and click Next >. Continue clicking Next, then click Finish.
Publish the blueprint. On the Administration tab, select an icon and category for it:

Let’s deploy!!
Click Request and watch your Azure Kubernetes Cluster come to life!

Success!
Final thoughts
Whilst this solution demonstrates how to deploy Azure AKS, it is not yet enterprise-ready.
For example, error checking would need to be added to ensure any issues with the token or deployment can be easily identified. It would also be handy to be able to choose the number of nodes to deploy, which would be easy to do.
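Making the node count selectable really is straightforward: add an extra input parameter (called inNodeCount here purely for illustration) and patch the agent pool profile in the Define Body task. A minimal sketch of that substitution in plain JavaScript:

```javascript
// Patch the node count into the parsed template, as the Define Body
// task could do with an extra inNodeCount input (hypothetical name).
function setNodeCount(jsonObj, nodeCount) {
    jsonObj["properties"]["agentPoolProfiles"][0]["count"] = nodeCount;
    return jsonObj;
}

// Minimal example mirroring the template's structure
var template = { properties: { agentPoolProfiles: [{ name: "nodepool1", count: 3 }] } };
setNodeCount(template, 5);
console.log(template.properties.agentPoolProfiles[0].count); // 5
```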
It would also be necessary to find a way to provide the kubectl file to the requester after provisioning. This could possibly be done as a day 2 operation.
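One option for that day 2 operation: Azure's REST API exposes a listClusterAdminCredential action on the managed cluster (a POST to the cluster's resource path plus /listClusterAdminCredential, in later API versions), which returns the kubeconfig base64-encoded. Assuming that response shape, decoding it is trivial; a plain-JavaScript sketch with a hypothetical response body:

```javascript
// Decode a kubeconfig from a listClusterAdminCredential-style response.
// The response shape and field names here are assumptions based on the
// Azure REST API documentation, not taken from the workflow above.
function extractKubeconfig(responseBody) {
    var parsed = JSON.parse(responseBody);
    var encoded = parsed.kubeconfigs[0].value;  // base64-encoded YAML
    return Buffer.from(encoded, "base64").toString("utf8");
}

// Hypothetical example response
var body = JSON.stringify({
    kubeconfigs: [
        { name: "clusterAdmin", value: Buffer.from("apiVersion: v1").toString("base64") }
    ]
});
var kubeconfig = extractKubeconfig(body);
console.log(kubeconfig);
```

The decoded YAML could then be emailed to the requester or attached to the deployment as a custom property.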
Lastly, this solution has no mechanism for decommissioning the cluster in Azure. To prevent unnecessary costs, that would certainly need to be added.