Using Continuous Deployment to Provision VDI Desktops

Back in May, I wrote about using GitLab to automate my server builds using HashiCorp Packer. Whilst it was trivially easy to update that work to accommodate desktop builds for our VDI users, I still needed a solution to automate the entire workflow: building the image and updating my VMware Horizon desktop pool. In this post, I will document how to do just that, using continuous deployment methodologies traditionally found in software development.

The brief is simple:

  • Fully automate the provisioning of VDI desktops for the Pilot pool
  • The solution should be triggered when changes are committed
  • It should also run on a schedule
  • Solution must be stored in Source Control
  • All sensitive data (infrastructure naming, etc.) must be redacted
  • Credentials must be retrieved from secure storage

The aim is to ensure users in the Horizon Pilot pool have access to the latest components in their desktop image, whether it be the operating system, patches, or VMware Tools. These pilot users are technically savvy, and accept that being part of a test group means that reliability will occasionally suffer.

For this solution we will use HashiCorp Packer to build our image. We will store the Packer configuration files in GitHub (so they can be shared publicly) but mirror the repository into the internal GitLab installation.

To prevent sensitive data from being exposed in a public repository, GitLab CI/CD will define environment variables which Packer will use when executing the build. We will use HashiCorp Vault to store secure machine credentials, and both GitLab and Packer will communicate with Vault to retrieve these.

Finally, we will install VMware PowerCLI, which we will then use to recompose our desktop pool.

The workflow will look like this: a commit (or a schedule) triggers the GitLab pipeline, which downloads Packer, retrieves credentials from Vault, builds the image, installs VMware PowerCLI, and finally recomposes the Horizon pool.

Packer

To use Packer we will need a handful of files. These will be the manifest file, the Autounattend.xml file, the Windows Update provisioner, and numerous scripts for configuring additional agents. Most of these scripts are covered in Automating VDI Template Creation with VMware Code Stream and HashiCorp Packer – Part 1: Building Windows, so I won’t go over them again.

The directory structure looks roughly like this (an approximate layout, reconstructed from the paths referenced in this post):
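
packer-desktops/
├── .gitlab-ci.yml
└── windows-10/
    ├── windows-10-pilot.json
    └── setup/
        ├── Autounattend.xml
        └── (Windows Update provisioner and agent configuration scripts)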

Our first task is to configure our Packer manifest. Before we get into the main builder section of the file, we need to define our variables. This will be done in a separate block at the top of the manifest:


{
  "variables": {
    "vcenter": "{{env `packer_vcenter`}}",
    "vcenterUser": "{{env `packer_vcenterUser`}}",
    "vcenterPass": "{{env `packer_vcenterPass`}}",
    "vmName": "{{env `packer_vmName`}}",
    "cluster": "{{env `packer_cluster`}}",
    "datastore": "{{env `packer_datastore`}}",
    "datastore_iso": "{{env `packer_datastoreISO`}}",
    "network": "{{env `packer_network`}}",
    "winrmUser": "{{vault `creds/users/misc/administrator` `Username`}}",
    "winrmPass": "{{vault `creds/users/misc/administrator` `Password`}}"
  }


This block doesn’t contain any sensitive data: most values merely refer to environment variables, which we will configure later, while the two WinRM credentials are pulled straight from Vault using Packer’s built-in vault function.
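
If you want to test the manifest locally before wiring it into GitLab, the same variables can be exported in a PowerShell session first. All of the values below are placeholders:

# Placeholder values for a local test run
$env:packer_vcenter = "vcsa.example.local"
$env:packer_vcenterUser = "administrator@vsphere.local"
$env:packer_vcenterPass = "<password>"
$env:packer_vmName = "W10-Pilot"
$env:packer_cluster = "Cluster-01"
$env:packer_datastore = "Datastore-01"
$env:packer_datastoreISO = "Datastore-ISO"
$env:packer_network = "VM Network"
# Packer's vault function also needs to know where Vault is
$env:VAULT_ADDR = "https://vault.example.local:8200"
$env:VAULT_TOKEN = "<token>"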

Next, we configure the builder:


"builders": [
{
"type": "vsphere-iso",
"vcenter_server": "{{user `vcenter`}}",
"username": "{{user `vcenterUser`}}",
"password": "{{user `vcenterPass`}}",
"insecure_connection": true,
"vm_name": "{{user `vmName`}}",
"vm_version": 15,
"guest_os_type": "windows9_64Guest",
"boot_order": "disk,cdrom",
"create_snapshot": true,
"convert_to_template": false,
"cluster": "{{user `cluster`}}",
"CPUs": 2,
"RAM": 8192,
"RAM_reserve_all": true,
"datastore": "{{user `datastore`}}",
"disk_controller_type": "pvscsi",
"storage": [
{
"disk_size": 51200,
"disk_thin_provisioned": true
}
],
"iso_paths": [
"[{{user `datastore_iso`}}] en-gb_windows_10_business_editions_version_2004_updated_may_2020_x64_dvd_783c55e0.iso",
"[{{user `datastore_iso`}}] VMware-tools-windows-11.1.0-16036546.iso"
],
"floppy_files": [
"{{template_dir}}/setup/"
],
"remove_cdrom": true,
"network_adapters": [
{
"network": "{{user `network`}}",
"network_card": "vmxnet3"
}
],
"communicator": "winrm",
"winrm_username": "{{user `winrmUser`}}",
"winrm_password": "{{user `winrmPass`}}"
}
]


In the builder, the only two values of note are the Windows 10 and VMware Tools ISO file names. Altering these at a later date and committing the change to the repo will trigger the pipeline.

The final section is the provisioners. I won’t list them here, but the entire manifest can be found at https://github.com/virtualhobbit/packer-desktops/blob/master/windows-10/windows-10-pilot.json.

Vault

For this use case, we have created a KV store in our Vault installation, and a secret for holding the build image username and password.

The path to the secret is creds/users/misc/administrator.

We have also created an AppRole for authentication and made a note of the token, which will be needed later.
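
Creating the secret looks something like this. This is a sketch assuming a KV version 1 engine mounted at creds, which matches the unversioned API path the pipeline calls later:

vault secrets enable -path=creds kv
vault kv put creds/users/misc/administrator Username=Administrator Password=<password>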

Windows

To install Windows, an Autounattend.xml file is needed. To facilitate the password being retrieved from Vault and injected, two placeholders are inserted:


<UserAccounts>
  <AdministratorPassword>
    <Value>{{password}}</Value>
    <PlainText>true</PlainText>
  </AdministratorPassword>
</UserAccounts>
<AutoLogon>
  <Enabled>true</Enabled>
  <Username>Administrator</Username>
  <Password>
    <Value>{{password}}</Value>
    <PlainText>true</PlainText>
  </Password>
  <LogonCount>1</LogonCount>
</AutoLogon>

The full file I’m using can be found at https://github.com/virtualhobbit/packer-desktops/blob/master/windows-10/setup/Autounattend.xml.

GitLab

We’re using GitLab EE in HobbitCloud to provide repositories and Continuous Integration/Continuous Deployment functionality.

For this to work, we need a runtime engine, which comes in the form of the GitLab Runner. We have decided to use Kubernetes for this, and have installed the runner in one of our Tanzu Kubernetes Grid Integrated (formerly PKS) clusters.

If you would like to do the same, check out Deploying GitLab Runner to VMware Enterprise PKS.

The process that will be triggered (either through a Git commit or from a schedule) is called a pipeline. This lists the specific steps that will be taken to achieve our objective, and will be broken down into the following stages:

  • Get
  • Retrieve
  • Build
  • Install
  • Recompose

The pipeline is stored at the root of the repository in a YAML file called .gitlab-ci.yml. Note the leading “.”, which marks it as a hidden file. Once GitLab detects this file it will initiate the pipeline.

The beginning of the pipeline file defines the stages and disables Git’s SSL certificate verification, which is useful when internal services use certificates Git doesn’t trust.


stages:
  - get
  - retrieve
  - build
  - install
  - recompose

before_script:
  - git config --global http.sslVerify false


The code for the stages is as follows:

Get


get_packer:
  stage: get
  tags:
    - windows
  artifacts:
    paths:
      - packer.exe
  script:
    - Write-Host "Fetching packer"
    - $packerFile = "packer_" + $packerVersion + "_windows_amd64.zip"
    - Invoke-WebRequest -Uri ($packerURL + "/" + $packerVersion + "/" + $packerFile) -OutFile $packerFile
    - Expand-Archive $packerFile -DestinationPath .


In the Get stage, we construct the Packer download filename from the packerVersion variable and fetch the archive from HashiCorp. The file is then unzipped, and packer.exe is marked as an artefact so it can be passed on to later stages.
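
As an illustration, if packerURL were set to https://releases.hashicorp.com/packer and packerVersion to 1.6.0 (both hypothetical values for this post), the request would resolve to:

https://releases.hashicorp.com/packer/1.6.0/packer_1.6.0_windows_amd64.zip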

Please note: we have tagged our jobs as “windows” to ensure they execute on runners in our Windows-based Kubernetes clusters.

Retrieve


retrieve_vault_password:
  stage: retrieve
  tags:
    - windows
  artifacts:
    paths:
      - windows-10/setup/Autounattend.xml
  script:
    - Write-Host "Retrieving Administrator password from Vault"
    - $result = Invoke-RestMethod -Headers @{"X-Vault-Token" = ${env:VAULT_TOKEN}} -Method Get -Uri ${env:VAULT_ADDR}/v1/creds/users/misc/administrator
    - $pass = $result.data.Password
    - Write-Host "Updating Autounattend.xml file with Administrator password"
    - (Get-Content $xmlFile -Raw) -replace '{{password}}',$pass | Set-Content $xmlFile


In the Retrieve stage, the code uses the two environment variables (for the address and token) to connect to Vault and retrieve the local Administrator credentials. These are injected into the XML file, which is then marked as an artefact.
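
For reference, a KV version 1 read returns JSON shaped roughly like this (abridged), which is why the script can pick the password out of $result.data.Password:

{
  "data": {
    "Username": "Administrator",
    "Password": "<password>"
  }
}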

Build


deploy_windows-10:
  stage: build
  tags:
    - windows
  script:
    - Write-Host "Deploying Windows 10"
    - Set-Location windows-10
    - ../packer.exe build -force windows-10-pilot.json


This stage uses Packer to build the image. The -force flag tells Packer to remove any existing virtual machine of the same name left over from a previous run.
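
If you are editing the manifest, it can be worth validating it locally before committing, so syntax errors fail fast rather than partway through a pipeline run. From the windows-10 directory, assuming packer.exe sits at the repository root as in the Get stage and the environment variables from earlier are set:

..\packer.exe validate windows-10-pilot.json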

Install


install_powercli:
  stage: install
  tags:
    - windows
  script:
    - Write-Host "Installing NuGet"
    - Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force
    - Write-Host "Set the repo installation policy"
    - Set-PSRepository PSGallery -InstallationPolicy Trusted
    - Write-Host "Installing PowerCLI"
    - Install-Module -Name VMware.PowerCLI -Scope CurrentUser -Confirm:$false
    - sleep 60
    - Set-PowerCLIConfiguration -Scope User -ParticipateInCEIP $false -InvalidCertificateAction Ignore -Confirm:$false
    - Write-Host "Importing the helper module"
    - Import-Module VMware.VimAutomation.HorizonView
    - if (Test-Path $env:Temp\PowerCLI-Example-Scripts){Remove-Item $env:Temp\PowerCLI-Example-Scripts -Recurse -Force -Confirm:$false}
    - git clone $exampleScriptURL $env:Temp\PowerCLI-Example-Scripts
    - $modulePath = [Environment]::GetEnvironmentVariable('PSModulePath').split(";")[0]
    - if (Test-Path $modulePath\PowerCLI-Example-Scripts){Remove-Item $modulePath\PowerCLI-Example-Scripts -Recurse -Force -Confirm:$false}
    - Copy-Item -Recurse $env:Temp\PowerCLI-Example-Scripts $modulePath


In the Install stage, VMware PowerCLI is downloaded and installed, and the Horizon View module is imported.

The PowerCLI Example Scripts repository is also cloned and copied into the first folder on the PSModulePath. This repository contains the VMware.HV.Helper module, which provides the Start-HVPool cmdlet used in the final stage.

Recompose


recompose_pool:
  stage: recompose
  tags:
    - windows
  script:
    - Write-Host "Connecting to Horizon"
    - Connect-HVServer -Server $cs -User $csUser -Password $csPass -Domain $csDomain
    - Write-Host "Refreshing Horizon Pool"
    - Start-HVPool -Pool $poolName -Recompose -LogoffSetting FORCE_LOGOFF -ParentVM "$packer_vmName" -SnapshotVM "Created by Packer" -StopOnFirstError $true
    - Write-Host "Disconnecting from Horizon"
    - Disconnect-HVServer -Server $cs -Confirm:$false


The last stage recomposes the Pilot pool with the new snapshot.
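
If you want to verify the pool after the recompose is triggered, the same helper module can be queried while still connected to the Connection Server. A minimal sketch, assuming the VMware.HV.Helper module from the Install stage provides Get-HVPool:

# Hypothetical post-recompose check: retrieve the pool and inspect its basic info
$pool = Get-HVPool -PoolName $poolName
$pool.Base | Format-List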

Variables

The last task is to define the environment variables GitLab will use during execution:

  • cs: The Horizon Connection Server
  • csDomain: Connection Server user domain
  • csPass: Connection Server user password
  • csUser: Connection Server user name
  • exampleScriptURL: URL for the PowerCLI Example Scripts repository
  • packer_cluster: vCenter cluster
  • packer_datastore: vCenter datastore
  • packer_datastoreISO: vCenter datastore for storing ISOs
  • packer_network: vCenter network
  • packer_vcenter: vCenter server
  • packer_vcenterPass: vCenter user password
  • packer_vcenterUser: vCenter user name
  • packer_vmName: VM name
  • packerURL: URL for HashiCorp Packer
  • packerVersion: Packer version
  • poolName: Horizon pool name
  • VAULT_ADDR: Vault address
  • VAULT_TOKEN: Vault token
  • xmlFile: Autounattend.xml file location (referenced as $xmlFile in the Retrieve stage)

These should be defined in the GitLab project, under Settings, CI/CD, Variables. Sensitive values, such as the passwords and the Vault token, should be masked so they do not appear in job logs.

Finishing Up

We now have a Git repo containing our Packer files, Autounattend.xml, Windows Update provisioner, various scripts and our pipeline file. Once this is committed, the pipeline will begin.

First, it will download Packer. It will then connect to Vault, retrieve the machine credentials and inject them into the XML file. Using this file, Packer will then build the Windows image.

Next, the pipeline will download and install VMware PowerCLI. It will then use this to connect to Horizon and recompose the correct pool.

If you’d like to view the individual files in this repository, you can clone it from https://github.com/virtualhobbit/packer-desktops.
