Packer is the industry-standard tool for managing machine image artifacts; it is also essentially the "only game in town". It supports a variety of platforms and provisioners for sophisticated management of various sources and builds, using HCL2 and a workflow comparable to Terraform's. It enables a straightforward process for managing an entire image hierarchy across platforms, systems, and services.
In this article, we will explore example pipelines for continuous integration of Packer templates and configs, and for automated builds and publication of image artifacts. We will demonstrate how to achieve this with four sufficiently different yet common CI pipeline tools: Jenkins, CircleCI, Concourse, and Travis CI.
In general, a Packer template pipeline adheres to the following design: retrieve the templates and configs from source control, initialize the working directory and plugins, validate and format check the code, and then build and publish the image artifacts.
This is somewhat similar to the Terraform config pipeline, where the final step of infrastructure management applies only to root config modules, and dependency installation is an even rarer requirement.
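For reference, here is a minimal sketch of the kind of HCL2 template such a pipeline would manage. The plugin pin, AMI filter, and playbook path are hypothetical placeholders rather than values from the pipelines below; the required_plugins block is what the initialization step installs, and the provisioner references the provisioning support code stored alongside the template.
packer {
  required_plugins {
    amazon = {
      version = ">= 1.0.0"
      source  = "github.com/hashicorp/amazon"
    }
  }
}

# hypothetical Ubuntu base image build in AWS
source "amazon-ebs" "base" {
  ami_name      = "base-${formatdate("YYYYMMDDhhmm", timestamp())}"
  instance_type = "t3.micro"
  region        = "us-east-1"
  source_ami_filter {
    filters = {
      name                = "ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"
      root-device-type    = "ebs"
      virtualization-type = "hvm"
    }
    most_recent = true
    owners      = ["099720109477"]
  }
  ssh_username = "ubuntu"
}

build {
  sources = ["source.amazon-ebs.base"]

  # provisioning support code stored alongside the template
  provisioner "ansible" {
    playbook_file = "./playbook.yml"
  }
}
Each pipeline below essentially executes packer init, packer validate, packer fmt -check -diff, and packer build against a directory of templates like this one.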
We can construct a basic Jenkins Pipeline for this example using the Jenkins Pipeline library for Packer. The Packer configs and templates are assumed independent of platform and provisioner, and therefore this pipeline does not include credentials retrieval or a custom agent with additional tools; both would be normal in a standard production pipeline. The pipeline is a basic retrieval, initialization, validation, and artifact build. Branching aside, this pipeline executes sequentially, as it offers no opportunities for parallelization.
// using GitHub Branch Source plugin
@Library('github.com/mschuchard/jenkins-devops-libs@2.0.1')_

pipeline {
  agent { docker { image 'hashicorp/packer:1.7.10' } }

  parameters {
    string(name: 'SCM_URL', description: 'The URL (HTTPS or SSH URI) to the source repository containing the Packer templates and configs (should also contain provisioning and validation support code for the artifact).')
  }

  stages {
    stage('Initialize Packer Templates and Configs') {
      steps {
        checkout([
          $class: 'GitSCM',
          userRemoteConfigs: [[url: params.SCM_URL]]
        ])

        script {
          packer.init(
            dir:     '.',
            upgrade: true
          )
        }
      }
    }

    stage('Packer Templates and Configs Validation') {
      steps {
        script {
          // remember template param also targets directories
          packer.validate(template: '.')

          packer.fmt(
            check:    true,
            diff:     true,
            template: '.'
          )
        }
      }
    }

    stage('Build Image Artifacts') {
      steps {
        script {
          packer.build(template: '.')
        }
      }
    }
  }
}
Here we see a standard example for managing Packer artifacts within CircleCI. The assumption here is that the build agent Docker image contains bindings for retrieving secrets, the statically linked Packer executable binary, and Ansible installed via pip. We have basic caching for any installations performed on top of the image during preparation. We also have split jobs: validation for pull requests, and builds for merges. Packer builds are scheduled at the beginning of the month to produce updated artifacts (this assumes the builds are extrinsic to an immutable infrastructure pipeline, or are combined with a downstream trigger).
---
version: 2.1

defaults: &defaults
  working_directory: /tmp/project
  docker:
    - image: myrepo/myrepoimage:mytag
      environment:
        PACKER_LOG: 1
        SECRETS_REGION: 'us-west-1'

default_pre_steps: &default_pre_steps
  pre-steps:
    - attach_workspace:
        at: /tmp/project
    - checkout
    - restore_cache:
        keys:
          - v1-packer-ansible
    - run:
        name: initialize packer
        command: packer init .
    # ...
    - save_cache:
        paths:
          - ~/.cache/pip
          - /usr/local/lib/python3.9/site-packages
          - /usr/local/lib/site-python
          - /usr/local/bin
        key: v1-packer-ansible

jobs:
  validate:
    <<: *defaults
    steps:
      - run:
          name: validate packer templates
          command: packer validate .
      - run:
          name: check packer formatting
          command: packer fmt -diff -check .
  build:
    <<: *defaults
    steps:
      - run:
          name: tasks here to retrieve and utilize secrets for aws
          command:
      - run:
          name: build packer templates
          command: packer build .

workflows:
  version: 2.1
  validate_template:
    jobs:
      - validate:
          <<: *default_pre_steps
          filters:
            branches:
              ignore: main
  build_ami:
    jobs:
      - build:
          <<: *default_pre_steps
          context:
            - AWS_SECRETS
            - OTHER_SECRETS
    triggers:
      - schedule:
          cron: "0 4 1 * *"
          filters:
            branches:
              only: main
This example uses the Concourse Packer resource that I forked and then enhanced and updated for MITODL; the pipeline was likewise constructed for MITODL's artifact management. The resource is delivered inside two Docker images specifically built as Concourse agents: one for validating and another for building. The Packer templates and configs are validated when pushed, and the artifacts are built on a schedule. The validations execute in parallel with one another, as do the builds.
---
resource_types:
- name: packer
  type: docker-image
  source:
    repository: mitodl/concourse-packer-resource
    tag: latest
- name: packer-builder
  type: docker-image
  source:
    repository: mitodl/concourse-packer-resource-builder
    tag: latest

resources:
- icon: github
  name: packer_templates
  source:
    branch: main
    paths:
    - src/bilder/images/
    uri: https://github.com/mitodl/ol-infrastructure
  type: git
- name: artifact-build-schedule
  type: time
  icon: clock-outline
  source:
    start: 4:00 AM
    stop: 5:00 AM
    days: [Sunday]
- name: packer-validate
  type: packer
- name: packer-build
  type: packer-builder

jobs:
- name: packer-validate-workflow
  plan:
  - get: packer_templates
    trigger: true
    params:
      depth: 1
  - in_parallel:
    - put: packer-validate
      params:
        template: packer_templates/src/bilder/images/.
        objective: validate
        vars:
          app_name: consul
        only:
        - amazon-ebs.third-party
    - put: packer-validate
      params:
        template: packer_templates/src/bilder/images/edxapp/edxapp_base.pkr.hcl
        objective: validate
        vars:
          node_type: web
    - put: packer-validate
      params:
        template: packer_templates/src/bilder/images/.
        objective: validate
        vars:
          app_name: vault
        only:
        - amazon-ebs.third-party
- name: packer-build-workflow
  plan:
  - get: artifact-build-schedule
    trigger: true
  - get: packer_templates
    trigger: false
    params:
      depth: 1
  - in_parallel:
    - put: packer-build
      params:
        template: packer_templates/src/bilder/images/.
        objective: build
        vars:
          app_name: consul
        env_vars:
          AWS_REGION: us-east-1
        only:
        - amazon-ebs.third-party
    - put: packer-build
      params:
        template: packer_templates/src/bilder/images/edxapp/edxapp_base.pkr.hcl
        objective: build
        vars:
          node_type: web
        env_vars:
          AWS_REGION: us-east-1
    - put: packer-build
      params:
        template: packer_templates/src/bilder/images/.
        objective: build
        vars:
          app_name: vault
        only:
        - amazon-ebs.third-party
        env_vars:
          AWS_REGION: us-east-1
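The vars and only entries under params above correspond to Packer's -var and -only CLI arguments: the former supplies variable values, and the latter restricts execution to specific named builds. As a hedged sketch (the app_name variable and the second source are hypothetical illustrations, not taken from the MITODL repository), a template supporting such an invocation might look like the following:
# a variable plus multiple named sources, allowing the resource's
# vars and only params to select behavior at build time
variable "app_name" {
  type = string
}

source "amazon-ebs" "third-party" {
  ami_name      = "${var.app_name}-${formatdate("YYYYMMDD", timestamp())}"
  instance_type = "t3.micro"
  region        = "us-east-1"
  source_ami_filter {
    filters = {
      name = "ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"
    }
    most_recent = true
    owners      = ["099720109477"]
  }
  ssh_username = "ubuntu"
}

# a second hypothetical source that the only param would exclude
source "amazon-ebs" "internal" {
  ami_name      = "${var.app_name}-internal-${formatdate("YYYYMMDD", timestamp())}"
  instance_type = "t3.micro"
  region        = "us-east-1"
  source_ami    = "ami-0123456789abcdef0" # placeholder AMI ID
  ssh_username  = "ubuntu"
}

build {
  sources = [
    "source.amazon-ebs.third-party",
    "source.amazon-ebs.internal",
  ]
}
With vars of app_name: consul and only of amazon-ebs.third-party, the roughly equivalent CLI invocation would be packer build -var 'app_name=consul' -only 'amazon-ebs.third-party' ., building just the third-party source.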
Since Travis cannot use user-supplied custom build agents like the previous CI pipeline tools, this example also demonstrates how to provision dependencies. With an Ubuntu 20.04 (Focal) distribution and Python 3.9, we install the statically linked Packer executable binary, and then use pip to install Ansible and Boto3. Note that Travis (as does GitHub Actions, by the way) bakes Packer into its build agent images for some unknown reason; therefore you must either ensure your installed Packer is preferred in the path when it is implicitly executed by other software, or explicitly path to it where possible. The example here establishes path preference to cover the more difficult scenario. We validate and format check the Packer code on pull requests, and build an artifact on merge.
dist: focal
language: python
python: 3.9
os: linux

branches:
  only:
    - master

notifications:
  email: false

git:
  depth: 5

cache: pip

env:
  global:
    - PACKER_LOG=1
    - PACKER_VERSION='1.7.10'
    - ANSIBLE_VERSION='2.11'

install:
  # packer needs an alternate install location that is preferred over other paths since the travis images bake in a version of packer by default
  - mkdir -p /home/travis/bin
  - if [ ! -f /home/travis/bin/packer ]; then curl "https://releases.hashicorp.com/packer/${PACKER_VERSION}/packer_${PACKER_VERSION}_linux_amd64.zip" -o packer.zip && unzip packer.zip -d /home/travis/bin/; fi
  - pip install ansible~=${ANSIBLE_VERSION}.0 boto3

before_script:
  - ansible --version

script:
  - if [ "$TRAVIS_PULL_REQUEST" != "false" ]; then
      packer validate . && packer fmt -diff -check .;
    fi
  - if [ "$TRAVIS_PULL_REQUEST" == "false" ]; then
      packer build .;
    fi
In this article, we explored configurations for four major CI pipeline tools for automated integration, building, and publication with Packer. You should now be able to confidently implement pipelines for your own Packer usage. If your CI pipeline tool differs from the four explained above, it should still be straightforward to transpose the design into a feasible implementation with the information provided here.
If your organization is interested in robust codified management of the server and container images in your infrastructure, and in automated pipelines for integrating, building, and publishing those images, then contact Shadow-Soft.