A long time ago I decided I was done with manual builds, and that my desktop images had to be automated. I had a lot of success with that solution, and wrote about it here.
Recently I decided to automate my server builds too, also using HashiCorp Packer. Whilst I used VMware Code Stream to
orchestrate my VDI template builds, this time I wanted to set up an automated pipeline internally to produce the artefacts.
The first challenge I had was that my Packer templates and scripts are stored in an external version control system, GitHub. This is so my trusted collaborators can contribute to my private repo without having access to my internal systems.
Whilst GitHub is great, I prefer to use GitLab internally for my code development and version control. Not only does it offer the same great functionality as GitHub, but also CI/CD pipelines that allow me to promote my code from development all the way through to production.
GitLab comes in three forms: Community Edition (CE), Enterprise Edition (EE), and GitLab.com. The former two are self-hosted, the latter being their SaaS based offering. Whilst I use EE in HobbitCloud, CE will more than suffice for most individual needs – including the aim of this post.
The aim is to automate our Packer builds on a schedule, so that new templates are created once a month and stored in vCenter. These templates will also be fully patched, to avoid introducing insecure VMs into the environment.
To begin with I created a folder for each template OS. Inside these would be a variables file and the Packer configuration file, both JSON files. I won't explain what each of these files do as this is covered in great detail at Michael Poore's blog over at https://blog.v12n.io/creating-vsphere-vm-templates-with-packer-part-1 (amongst other places).
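For illustration, a variables file might be shaped something like the following. Every key and value here is a placeholder from a hypothetical lab; the exact variables depend entirely on your own Packer template and vSphere environment:

```json
{
  "vcenter_server": "vcenter.lab.local",
  "vcenter_username": "administrator@vsphere.local",
  "vcenter_password": "CHANGE_ME",
  "vcenter_datacenter": "Datacenter",
  "vcenter_cluster": "Cluster",
  "vcenter_datastore": "Datastore1",
  "vm_network": "VM Network",
  "iso_path": "[Datastore1] ISO/CentOS-7-x86_64-Minimal.iso"
}
```

The Packer configuration file then references these with `{{user `variable_name`}}` placeholders, which keeps credentials and environment detail out of the template itself.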
Once I had Packer running just as I wanted I pushed the files to my GitHub repo.
As mentioned above, my scenario is quite specific in that my repo is hosted externally. For my self-hosted GitLab to access it, I first have to create an API key.
To do this, log in to GitHub and click Settings. On the left select Developer Settings, followed by Personal Access Tokens. Create a token with the "repo" scope only and save the generated token somewhere safe.
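Before handing the token to GitLab, you can sanity-check it against the GitHub API. This is just a command sketch; the token value below is obviously a placeholder:

```shell
# List the repositories visible to the token (placeholder token value)
curl -s -H "Authorization: token ghp_XXXXXXXXXXXXXXXX" \
  https://api.github.com/user/repos
```

If the token is valid and has the "repo" scope, you should see your private repositories in the JSON response rather than an authentication error.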
It’s Time to Start Running
GitLab CI/CD pipeline jobs are executed on "Runners". These can be standard machines running either Windows or Linux, or containers orchestrated by Kubernetes. To take advantage of the ephemeral nature of containers, I deploy my Runners on VMware Enterprise PKS, but any Linux-based host such as CentOS will perform equally well.
Runners can be shared or assigned to a specific group or project. Bringing one online is simply a case of installing the software, telling it your GitLab server name, and providing it with a registration token.
If you experience issues with your Runner, check your SSL settings. It is important the Runner trusts your GitLab server, which may involve specifying the certificate file for the CA that issued the one to your server.
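As a sketch, registering a Runner on a Linux host looks something like the following. The hostname, token, and certificate filename are examples from a hypothetical lab, not values you can reuse; GitLab Runner will look for a CA certificate matching your server's hostname under /etc/gitlab-runner/certs/:

```shell
# Trust the internal CA that issued your GitLab server's certificate
# (paths and filenames are illustrative)
sudo mkdir -p /etc/gitlab-runner/certs
sudo cp internal-ca.crt /etc/gitlab-runner/certs/gitlab.lab.local.crt

# Register the Runner against the GitLab server (values are placeholders)
sudo gitlab-runner register \
  --url https://gitlab.lab.local \
  --registration-token TOKEN_FROM_GITLAB \
  --executor shell \
  --description "packer-runner"
```

The registration token is shown under your project or group's CI/CD Runner settings in the GitLab UI.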
A CI/CD pipeline is a collection of commands. In my example, I divide the pipeline into two stages, build and deploy.
The build stage configures git on the Runner to accept insecure connections, then downloads Packer from HashiCorp, unzips it, and declares the binary as an artefact. This enables it to be used in the next stage.
The deploy stage then runs Packer for each template OS.
git config --global http.sslVerify false
echo "Fetching packer"
wget https://releases.hashicorp.com/packer/1.5.5/packer_1.5.5_linux_amd64.zip
unzip packer_1.5.5_linux_amd64.zip
chmod +x packer
echo "Deploying CentOS 7"
cd centos-7
../packer build -force -var-file variables.json centos-7.json
echo "Deploying CentOS 8"
cd ../centos-8
../packer build -force -var-file variables.json centos-8.json
echo "Deploying Windows Server 2016"
cd ../windows-2016
../packer build -force -var-file variables.json windows-2016.json
echo "Deploying Windows Server 2019"
cd ../windows-2019
../packer build -force -var-file variables.json windows-2019.json
Save the file as .gitlab-ci.yml and push it to your Packer repo on GitHub.
Import the Repository
Once you are happy your Packer repo is configured correctly in GitHub, it’s time to import it.
Please note: one important limitation of GitLab CE is that repository mirroring is one-way only – pushing. So if you make changes to your Packer repository in GitHub, you will need to re-import it into GitLab. However, changes you make directly in GitLab can be pushed out to GitHub.
To get around this you can install GitLab EE, which enables two-way repository mirroring.
Create a new project in GitLab and select Import Project:
Click GitHub, and then on the next screen, insert the personal access token created previously, then click Authenticate.
You will now be presented with a list of GitHub repositories. Select your Packer repo and click Import.
For EE users: It may be handy to enable a pull mirror as discussed above. To do this, select your project in GitLab and click Settings, followed by Repository. Expand the Mirroring section:
Enter the Git repository URL, change the mirror direction to Pull, enter your GitHub credentials, and click Mirror Repository. If configured correctly it should look something like this:
Now when you push changes to GitHub, they will be automatically pulled into GitLab.
Running the Pipeline
Your pipeline will run automatically when GitLab detects your .gitlab-ci.yml file; you are not required to run it manually (although you can).
One last task is to set a schedule. To do this, select your project and click CI/CD, then Schedules, followed by New Schedule.
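The schedule itself is defined with a standard cron expression in the New Schedule form. For example, to build at 02:00 on the first day of every month (matching the monthly template refresh described earlier), you would enter something like:

```
0 2 1 * *
```

The five fields are minute, hour, day of month, month, and day of week, so this reads as "minute 0, hour 2, day 1, every month, any weekday".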
Now your server builds will be automated courtesy of GitHub, GitLab and HashiCorp Packer.