Automating VDI Template Creation with VMware Code Stream and HashiCorp Packer – Part 3: Automating the Solution

In part 2 we automated the installation of our VMware Horizon agents, including App Volumes and Dynamic Environment Manager. We also patched our template and applied optimization settings using the VMware OSOT fling. In this final part, we automate the entire solution from end to end using VMware's CI/CD tool, Code Stream. At the end, with one click, we will generate our template ready for use.

Other posts in this series:

  1. Building Windows
  2. Installing the VDI Agents
  3. Automating the Solution

The Process

If we refer back to our original logical design:

Here we see that we need to enlist the help of a few extra components: a Git repo, VMware Code Stream and vRealize Automation Cloud.

vRealize Automation Cloud

Formerly known as Cloud Automation Services, vRealize Automation (vRA) Cloud is VMware’s SaaS-based automation solution. It gives you the ability to automate your workloads across private, public and hybrid clouds.

In this scenario, we will use it to stand up a virtual machine in our private cloud (stage 1). This VM will then download Packer, clone our Git repo, and then run Packer (stage 2). Finally, we will shut down the VM and delete it (stage 3).

The solution should look something like this:

Please note: the rest of this post assumes you already have a vRA Cloud account set up and configured, and that you have deployed a CAS proxy to your private cloud. I also assume you have defined OS flavours and VM sizes.

Getting Started

Create a template in your on-premises environment for vRA Cloud to use. I will be using a Linux-based one in this example (CentOS), but any OS will do. I will not be using cloud-init, but regular vSphere customization.

Create a CAS blueprint consisting of your template and the target network. Under normal circumstances, you would make heavy use of tagging to make selections about network and storage placement. To make things easy for this post, I will specify the details directly in the blueprint:

Ready to go

Version the blueprint, and then deploy it to your environment. When it deploys, verify it has been placed on the correct cluster, network and storage.

Get Packer

Inside the newly deployed VM, use the following commands to download HashiCorp’s Packer:

wget https://releases.hashicorp.com/packer/1.4.2/packer_1.4.2_linux_amd64.zip
unzip packer_1.4.2_linux_amd64.zip
chmod +x packer && mv packer /usr/local/bin/packer
wget https://github.com/jetbrains-infra/packer-builder-vsphere/releases/download/v2.3/packer-builder-vsphere-iso.linux
chmod +x packer-builder-vsphere-iso.linux

Confirm the files downloaded without issue.
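The wget steps above don’t validate anything by themselves. As a sketch (the `check_download` helper is introduced here for illustration, it is not part of the original flow), a small function can confirm each file is present and non-empty before continuing:

```shell
# Sketch: confirm a download is present and non-empty before continuing.
# check_download is a hypothetical helper, not part of the original steps.
check_download() {
  if [ -s "$1" ]; then
    echo "ok: $1"
  else
    echo "missing or empty: $1" >&2
    return 1
  fi
}

# After the wget steps above, you could run:
#   check_download packer_1.4.2_linux_amd64.zip
#   check_download packer-builder-vsphere-iso.linux
```

If either check fails, re-run the corresponding wget before moving on.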

Clone Git repo

The contents of our windows-10 folder should look like the following:

Store these files in a Git repo that the deployed VM will be able to access. In my example, I have stored these in a private repo in GitHub (called Packer) and generated an API token to be able to access it. Therefore, to clone the repo from the command line, use the following (substituting your own token and username):

yum install -y git
git clone https://d369fb19bb212a0fe6d9ad2a8002b06e423fb2b2@github.com/your_username/packer/
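The API token is embedded directly in the clone URL. As a sketch with placeholder values (substitute your own token, username and repo name), the URL shape is:

```shell
# Sketch of the token-in-URL clone syntax used above.
# token, user and repo are placeholders; substitute your own values.
token="YOUR_API_TOKEN"
user="your_username"
repo="packer"
clone_url="https://${token}@github.com/${user}/${repo}"
echo "${clone_url}"
# git clone "${clone_url}"
```

In a pipeline, the token could be held in a Code Stream SECRET variable (like the rootPW example later in this post) rather than hard-coded in the script.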

Run Packer

Now we have Packer and the vSphere builder, and have cloned the repo. Run Packer with the following to verify our VDI template is created as expected:

cd packer/windows-10
mv ../../packer-builder-vsphere-iso.linux .
packer build -force -var-file variables.json windows-10.json
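The -var-file flag feeds Packer the values declared as user variables in windows-10.json (defined earlier in this series). If you were creating a variables.json from scratch, it would look something like the following sketch; the key names and values here are illustrative placeholders and must match whatever windows-10.json actually declares:

```shell
# Illustrative only: write a variables.json with placeholder keys and values.
# Real key names must match the user variables declared in windows-10.json.
cat > variables.json <<'EOF'
{
  "vcenter_server": "vcenter.example.com",
  "vcenter_username": "administrator@vsphere.local",
  "vcenter_password": "CHANGE_ME",
  "datacenter": "DC01",
  "cluster": "Cluster01",
  "datastore": "Datastore01",
  "network": "VM Network"
}
EOF
# Quick syntax check before handing the file to packer build:
python3 -m json.tool variables.json > /dev/null && echo "variables.json is valid JSON"
```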

If all has gone well, our newly created VM should exist in vCenter, along with our deployed blueprint from vRA Cloud. Once verified, delete both.

Code Stream

Log in to Code Stream, expand Configure and click Variables.

Click New Variable, and select the appropriate project. Give it a name of rootPW and change the type to SECRET.

In the Value field, enter the root password of the template you defined in your CAS blueprint, then click Save. We will use this later.

SSH Agent

Click Configure and then Endpoints. Click New Endpoint:

Select the appropriate project, and set the type to Agent. Give the agent a name (I simply use “SSH Agent”) and select the cloud proxy relevant to your private cloud.

Click Create.

Build

Create a new pipeline, then select Blank Canvas:

Give the pipeline a name and select the project it will be associated with, then click Create:

Let's get it on!

Click Stage, and then on the right-hand side, rename the stage to something meaningful, like Build.

Click Sequential Task, and give it a name such as Deploy CentOS machine. The variable bindings later in this pipeline refer to the task by this exact name, so keep it consistent.

For the type, select Blueprint. Leave Execute task set to Always.

Give a name to your deployment and check the radio button for Cloud Assembly Blueprints. Select your previously-defined blueprint and the associated version.

Click Save.

Configure

Click to add a new stage, and name it accordingly, such as Configure.

Click Sequential Task to add the first of three SSH tasks. Name it Get Packer, set the type to SSH, and select your previously defined SSH Agent endpoint.

For the host, enter:

${Build.Deploy CentOS machine.output.deploymentDetails.resources.Cloud_vSphere_Machine_1.address}

The above variable refers to the Build stage, the Deploy CentOS machine task, and the vSphere Machine 1 as a deployment artefact.

Enter the template’s root/Administrator username, and then check Password. In the Password field, type:

${var.rootPW}

In the Script field, enter the following:

wget https://releases.hashicorp.com/packer/1.4.2/packer_1.4.2_linux_amd64.zip
unzip packer_1.4.2_linux_amd64.zip
chmod +x packer && mv packer /usr/local/bin/packer
wget https://github.com/jetbrains-infra/packer-builder-vsphere/releases/download/v2.3/packer-builder-vsphere-iso.linux
chmod +x packer-builder-vsphere-iso.linux

Click Save.

Click Parallel Task and repeat the above steps. Set the name to Clone Git repo, the type to SSH, and select your SSH Agent endpoint again.

The host, username and password are the same as above. For the script, this time use (substitute accordingly):

yum install -y git
git clone https://d369fb19bb212a0fe6d9ad2a8002b06e423fb2b2@github.com/your_username/packer/

Click Save, and then Sequential Task. Enter a name of Run Packer, and fill in all the details as before. For the script, use:

cd packer/windows-10
mv ../../packer-builder-vsphere-iso.linux .
packer build -force -var-file variables.json windows-10.json

Click Save.

Cleanup

Click to add a new stage and name it Cleanup.

Create a new sequential task called Delete Blueprint and set the type to Blueprint. Set the action to Delete, and enter the following as the Deployment Name:

${Build.Deploy CentOS machine.output.deploymentName}

Click Save.

If everything has been created correctly, your pipeline should look like this:

Looking lovely!

Click Close.

Testing It Out

In Code Stream, click Pipelines:

Click Run, enter an execution comment if necessary, and click Run again. The pipeline will then begin:

If you view the execution in more detail, you can monitor its progress:

After a little while the pipeline will complete:

Success!

If you’ve enabled mail notifications, you’ll get a nice mail in your inbox telling you it’s all done!

Happy pipelining!
