Using GitLab CI/CD Pipelines to Automate your HashiCorp Packer Builds

A long time ago I decided I was done with manual builds, and that my desktop images had to be automated. I had a lot of success with that solution and wrote about it here.

Recently I made the decision to automate my server builds too, also using HashiCorp Packer. Whilst I used VMware Code Stream to orchestrate my VDI template builds, this time I wanted to set up an automated pipeline internally to produce the artefacts.

The first challenge I had was that my Packer templates and scripts are stored in an external version control system, GitHub. This is so my trusted collaborators can contribute to my private repo without having access to my internal systems.

Whilst GitHub is great, I prefer to use GitLab internally for my code development and version control. Not only does it offer the same great functionality as GitHub, but also CI/CD pipelines that allow me to promote my code from development all the way through to production.

GitLab comes in three forms: Community Edition (CE), Enterprise Edition (EE), and GitLab.com. The first two are self-hosted; the latter is their SaaS-based offering. Whilst I use EE in HobbitCloud, CE will more than suffice for most individual needs, including the aim of this post.

The aim is to automate our Packer builds on a schedule, so that new templates are created once a month and stored in vCenter. These templates will also be fully patched, to avoid introducing insecure VMs into the environment.

Getting Started

To begin with I created a folder for each template OS. Inside each would be a variables file and the Packer configuration file, both JSON files. I won’t explain what each of these files does, as this is covered in great detail at Michael Poore‘s blog over at https://blog.v12n.io/creating-vsphere-vm-templates-with-packer-part-1 (amongst other places).
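For reference, the layout ends up looking something like this (the repo name is illustrative; the folder and file names are the ones used by the pipeline later in this post):

packer-templates/
├── centos-7/
│   ├── centos-7.json
│   └── variables.json
├── centos-8/
│   ├── centos-8.json
│   └── variables.json
├── windows-2016/
│   ├── windows-2016.json
│   └── variables.json
└── windows-2019/
    ├── windows-2019.json
    └── variables.json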

Once I had Packer running just as I wanted I pushed the files to my GitHub repo.

Gaining Access

As mentioned above, my scenario is quite specific in that my repo is hosted externally. For my self-hosted GitLab to access it, I first have to create a personal access token.

To do this, log in to GitHub and click Settings. On the left select Developer Settings, followed by Personal Access Tokens. Create a token with “repo” permissions only and save the generated token somewhere safe.
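If you want to sanity-check the token before handing it to GitLab, a quick call to the GitHub API will confirm it works (a sketch; substitute your own token for the placeholder):

# Should return your GitHub user profile as JSON if the token is valid
curl -s -H "Authorization: token YOUR_TOKEN" https://api.github.com/user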

It’s Time to Start Running

GitLab CI/CD pipeline jobs are executed on “Runners”. These can be standard machines running Windows or Linux, or containers orchestrated by Kubernetes. To take advantage of the ephemeral nature of containers, I deploy my Runners on VMware Enterprise PKS, but any Linux-based host such as CentOS will perform equally well.

Runners can be shared or assigned to a specific group or project. Bringing one online is simply a case of installing the software, telling it your GitLab server name and providing it a registration token.
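As a sketch, registering a Runner on a Linux host looks something like the following. The URL, token and description are placeholders for your own values, and because the pipeline later in this post uses the alpine image, the Runner needs the docker (or kubernetes) executor:

# Register a Runner non-interactively against your GitLab server
sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.example.com/" \
  --registration-token "YOUR_REGISTRATION_TOKEN" \
  --executor "docker" \
  --docker-image "alpine:latest" \
  --description "packer-runner"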

If you experience issues with your Runner, check your SSL settings. It is important that the Runner trusts your GitLab server, which may involve providing the certificate of the CA that issued your server’s certificate.
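On a Linux host, the quickest fix is usually to place the issuing CA’s certificate where the Runner looks for it. A minimal sketch, assuming a default install and gitlab.example.com as your server’s hostname:

# The Runner automatically reads /etc/gitlab-runner/certs/<hostname>.crt
sudo mkdir -p /etc/gitlab-runner/certs
sudo cp my-ca.crt /etc/gitlab-runner/certs/gitlab.example.com.crt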

Pipeline

A CI/CD pipeline is a collection of jobs, grouped into stages that run in order. In my example, I divide the pipeline into two stages: build and deploy.

The build stage configures git on the Runner to accept insecure connections, then downloads Packer from HashiCorp, unzips it, and registers the binary as an artefact. This enables it to be used in the next stage.

The deploy stage then runs Packer for each template OS.


stages:
  - build
  - deploy

image: alpine

before_script:
  - git config --global http.sslVerify false

# Download Packer once and pass the binary to the deploy jobs as an artefact
get_packer:
  stage: build
  artifacts:
    paths:
      - packer
  script:
    - echo "Fetching packer"
    - wget https://releases.hashicorp.com/packer/1.5.5/packer_1.5.5_linux_amd64.zip
    - unzip packer_1.5.5_linux_amd64.zip
    - chmod +x packer

# One deploy job per template OS; each runs Packer against its own folder
deploy_centos-7:
  stage: deploy
  script:
    - echo "Deploying CentOS 7"
    - cd centos-7
    - ../packer build -force -var-file variables.json centos-7.json

deploy_centos-8:
  stage: deploy
  script:
    - echo "Deploying CentOS 8"
    - cd centos-8
    - ../packer build -force -var-file variables.json centos-8.json

deploy_windows-2016:
  stage: deploy
  script:
    - echo "Deploying Windows Server 2016"
    - cd windows-2016
    - ../packer build -force -var-file variables.json windows-2016.json

deploy_windows-2019:
  stage: deploy
  script:
    - echo "Deploying Windows Server 2019"
    - cd windows-2019
    - ../packer build -force -var-file variables.json windows-2019.json

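As an optional improvement, HashiCorp publishes a SHA256SUMS file alongside each release, so the get_packer job could verify the download before unzipping it. A sketch of the extra script lines, slotted in between the wget and unzip steps above:

    - wget https://releases.hashicorp.com/packer/1.5.5/packer_1.5.5_SHA256SUMS
    - grep packer_1.5.5_linux_amd64.zip packer_1.5.5_SHA256SUMS | sha256sum -c -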

Save the file as .gitlab-ci.yml and push it to your Packer repo on GitHub.
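Assuming the repo is already cloned locally, that push is just the usual routine (the branch name is illustrative):

git add .gitlab-ci.yml
git commit -m "Add GitLab CI/CD pipeline for Packer builds"
git push origin master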

Import the Repository

Once you are happy your Packer repo is configured correctly in GitHub, it’s time to import it.

Please note: one important limitation of GitLab CE is that repository mirroring is push-only. If you make changes to your Packer repository in GitHub, you will need to re-import it into GitLab. However, changes you make directly in GitLab can be pushed out to GitHub.

To get around this you can install GitLab EE, which enables two-way repository mirroring.

Create a new project in GitLab and select Import Project.

Click GitHub, then on the next screen enter the personal access token created previously and click Authenticate.

You will now be presented with a list of GitHub repositories. Select your Packer repo and click Import.

For EE users: it may be handy to enable a pull mirror as discussed above. To do this, select your project in GitLab and click Settings, followed by Repository, then expand the Mirroring section.

Enter the Git repository URL, change the direction to Pull, enter your GitHub password, and click Mirror Repository.

Now when you push changes to GitHub, they will be automatically pulled into GitLab.

Running the Pipeline

Your pipeline will run automatically when GitLab detects your .gitlab-ci.yml file; you are not required to run it manually (although you can).

One last task is to set a schedule. To do this, select your project and click CI/CD, then Schedules, followed by New Schedule.
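Schedules take a standard cron expression in the Interval Pattern field. To match the monthly template goal set out at the start of this post, a pattern like the following would run the pipeline at 02:00 on the first of every month (the time of day is an arbitrary choice):

0 2 1 * *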

Now your server builds will be automated courtesy of GitHub, GitLab and HashiCorp Packer.

Success!

8 thoughts on “Using GitLab CI/CD Pipelines to Automate your HashiCorp Packer Builds”

  1. Do you know a way to expose a port from within the runner container? I am generating my kickstart file dynamically for CentOS 8 with variables (since the floppy method no longer works) and need to make it accessible from Packer’s HTTP server after generation. I’ve been searching but can’t find a solution, and it’s the only thing keeping me from moving my template generation to GitLab.


    • Hi Michael,

      I don’t, no. However, I no longer need a floppy for CentOS 8 and just use the following boot command:

      "boot_command": [
        " text ks=http://your.webserver.com/centos-8.cfg"
      ],

      Give that a try and see if it helps. If not let me know and we’ll try and figure it out.


      • Yeah, after digging in a bit it seems they deliberately don’t allow port mapping on the runner containers (they’ve been closing tickets that suggested it).
        Since we are dynamically generating the kickstart from a template, I’ll just have to modify my scripts to push the kickstart to another accessible location instead of using the built-in Packer web server normally used for that.


  2. Pingback: Using Continuous Deployment to Provision VDI Desktops | virtualhobbit

  3. Pingback: Automated Packer VMware vSphere templates with CI/CD Pipeline Build - Virtualization Howto

  4. Pingback: First look into… HCP Packer

  5. Hi Mark,

    You have mentioned that your Packer templates and scripts are stored in GitHub, and I guess your Packer variables file will have sensitive passwords and credentials. How do you ensure these secrets are not exposed in your GitHub repository and GitLab pipeline as well?

