This article assumes basic familiarity with Docker, Kubernetes, and Jenkins Pipeline, including the declarative DSL. It also does not cover the proper configuration of a Jenkins instance or a Kubernetes cluster; both are assumed to already exist. The Jenkins instance should have the usual prerequisites satisfied, such as the Docker Pipeline plugin and the Docker engine installed, and the Jenkins user added to the docker group. This article covers the actual pipeline code and configuration itself.
We need an application to containerize and deploy to Kubernetes for our pipeline, so we will use a simple Python webapp. The example Dockerfile for this webapp can be viewed below:
# builder stage: build the application sdist and install it to warm the pip cache
FROM python:3.7 AS builder
ARG version
WORKDIR /app
COPY app_files/ .
RUN python setup.py sdist && pip install dist/app-${version}.tar.gz

# final stage: install the packaged application into a slim runtime image
FROM python:3.7-alpine
ARG version
ARG port=8000
EXPOSE ${port}
WORKDIR /app
COPY --from=builder /root/.cache /root/.cache
COPY --from=builder /app/dist/app-${version}.tar.gz .
RUN pip install app-${version}.tar.gz && rm -rf /root/.cache
# the base image is Python 3, so use http.server (SimpleHTTPServer is Python 2 only)
ENTRYPOINT ["python", "-m", "http.server"]
CMD ["8000"]
For pipeline agents, and other situations where the web server should persist beyond container commands, we would change the last two lines to:
RUN python -m http.server ${port}
We begin by setting up the pipeline with the necessary agent, parameters, and environment blocks. We will select agent none for greater flexibility over build agents in the pipeline, and set the agent for each stage as necessary. If you are trying this out in a lab environment, the master label should work fine. Alternatively, you can create one or more dedicated servers for this pipeline and assign them a unique label. Remember that if you are running Jenkins inside of Docker, you will need to bind mount the Docker socket on any applicable build agents.
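As a quick illustration, a per-stage agent that bind mounts the Docker socket might look like the following sketch; the image and label here are hypothetical placeholders, not part of the pipeline we build later.
// sketch only: per-stage agent with the host Docker socket bind mounted
agent {
  docker {
    image 'docker:24-cli'                                // hypothetical client image
    label 'docker'                                       // hypothetical agent label
    args '-v /var/run/docker.sock:/var/run/docker.sock'  // expose the host Docker daemon to the container
  }
}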
The environment block only really needs an entry to assign a value for env.KUBECONFIG, which we will need later to deploy to the Kubernetes cluster. We will also need the Jenkins pipeline library for Helm for the later cluster deployment, and for GoSS for the intermediate image validation. We will assume that we are executing with the sandbox enabled (typical of production systems), and therefore that mschuchard/jenkins-devops-libs has been loaded in the global configuration. We will then load the library for the pipeline at a specified version. We will pause on populating the parameters block for a moment.
@Library('jenkins-devops-libs@v1.4.0')_

pipeline {
  agent none
  parameters {}
  environment { KUBECONFIG = '/path/to/.kube/config' }
}
Alternatively, you can leverage the Credentials Binding plugin for the kubeconfig file (remember that its location will be stored in the environment) to mask the location:
withCredentials([kubeconfigFile(credentialsId: 'mykubeconfig', variable: 'KUBECONFIG')]) {
  sh 'echo $KUBECONFIG' // shell environment variable, not a Groovy pipeline variable
}
If you want to work with the raw kubeconfig content instead, you could use the kubeconfigContent binding for the plugin.
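A minimal sketch of that alternative follows, assuming the binding accepts the same credentialsId and variable arguments as kubeconfigFile above; the credential ID is a placeholder.
// sketch only: bind the raw kubeconfig content and write it to a workspace file
withCredentials([kubeconfigContent(credentialsId: 'mykubeconfig', variable: 'KUBECONFIG_CONTENT')]) {
  // single quotes so the secret is interpolated by the shell, not by Groovy
  sh 'echo "$KUBECONFIG_CONTENT" > kubeconfig'
}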
Now consider the parameters we will likely need to populate values in the pipeline. In all likelihood, you will not know every value that should be parameterized when initially architecting the pipeline, but we can suppose that a few of them will be useful later on. For example, let us go ahead and initialize these parameters:
parameters {
  string(name: 'SCM_URL', description: 'The URL (HTTPS or SSH URI) to the source repository containing the Dockerfile.')
  string(name: 'BRANCH', defaultValue: 'master', description: 'Git SCM branch from the repository to clone/pull.')
  string(name: 'APP', description: 'The application for which to build and push an image.')
  string(name: 'ORG', description: 'The organization for the application; used for the Docker image repository prefix (if left blank, defaults to the Git server organization).')
  string(name: 'VERSION', defaultValue: "${env.BUILD_NUMBER.toInteger() + 1}", description: 'The version of the application for the Docker image tag.')
  string(name: 'REGISTRY_URL', defaultValue: 'registry.hub.docker.com', description: 'The Docker registry server URL (no scheme; https:// is prepended in the pipeline code and required for registries).')
}
In the first stage, we can initialize some variables global to this pipeline that may not make sense as parameters or environment variables. For example, we can sanely define the Docker image name according to a standardized nomenclature with the following code:
// use the git server org for the image repo
if (params.ORG == '') {
  // determine the git org for the image repo name
  repo = params.SCM_URL =~ /\/\/.*\/|:(.*)\/.*\.git/
  // establish the <org>/<app> name for the docker image
  image_name = "${repo[0][1]}/${params.APP}"
  // null the matcher since it is not serializable
  repo = null
}
// use the input parameter for the repo
else {
  // establish the <org>/<app> name for the docker image
  image_name = "${params.ORG}/${params.APP}"
}
Now we have a good image name for containerizing the application, which can be re-used throughout this pipeline.
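To make the matcher concrete, here is a small illustrative check with a hypothetical SSH URI and application name; these values are placeholders and not part of the pipeline itself.
// illustrative only: hypothetical SSH URI and application name
def url = 'git@github.com:myorg/webapp.git'
def matcher = url =~ /\/\/.*\/|:(.*)\/.*\.git/
assert matcher[0][1] == 'myorg'
// with params.APP == 'webapp', image_name would resolve to 'myorg/webapp'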
We still need to retrieve the actual application code with its corresponding Dockerfile. This can easily be accomplished with the checkout step and the GitSCM class provided by Jenkins Pipeline. Please consult the reference documentation for other version control software; we will assume for now that you are using Git. We will also keep the Jenkins job directories separate for each branch to preserve the organization.
checkout([
  $class: 'GitSCM',
  branches: [[name: "*/${params.BRANCH}"]],
  doGenerateSubmoduleConfigurations: false,
  extensions: [[$class: 'RelativeTargetDirectory', relativeTargetDir: params.BRANCH]],
  submoduleCfg: [],
  userRemoteConfigs: [[url: params.SCM_URL]]
])
At this point, we are ready to begin interacting with Docker from within the Jenkins Pipeline. Although the Docker Pipeline plugin provides bindings directly within a Java class, it is customary to interact with it via the Groovy global variables interface. This does add the additional restriction that its method invocations must be placed in a script block within the steps block. Therefore, we can easily build the Docker image and store the returned object with code like the following:
dir(params.BRANCH) {
  script {
    // build docker image and store result to image object
    image = docker.build(image_name)
  }
}
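For orientation, a sketch of how this sits inside a complete declarative stage might look like the following; the stage name and agent label are hypothetical.
// sketch only: the build step inside a full declarative stage
stage('Build Image') {
  agent { label 'docker' } // hypothetical agent label
  steps {
    script {
      dir(params.BRANCH) {
        image = docker.build(image_name)
      }
    }
  }
}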
Note the code above to encapsulate this image build within the specific branch’s directory within the Jenkins job directory. It can also be convenient to specify additional flags and arguments to the Docker build, such as the location of the Dockerfile:
image = docker.build(image_name, '--build-arg port=9000 -f ./dockerfiles/Dockerfile .')
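Since the example Dockerfile above also declares a version build argument, you would likely pass that through as well; the paths here are illustrative.
// illustrative: pass the version build argument along with the port and Dockerfile location
image = docker.build(image_name, "--build-arg version=${params.VERSION} --build-arg port=9000 -f ./dockerfiles/Dockerfile .")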
An often neglected part of the Docker image lifecycle is validating the image. Thankfully, the pipeline makes automated testing a breeze, and we have multiple options for testing. The first notable approach is simple shell commands within a running container based on the image. We will use the built image as the agent for this first example, and we will assume the image was built with --build-arg port=9000.
agent {
  docker {
    image image_name
    args '-p 9000:9000'
  }
}
steps {
  script {
    sh 'curl localhost:9000'
  }
}
Note that you can also use environment variables in the agent block if they are directly accessed and not from the env map. Alternatively, we can execute commands within the container running from the image via the Docker Pipeline methods. For this example, we will also use a Gossfile that will execute a simple validation of the image. We will execute this test with the appropriate library from jenkins-devops-libs.
# Gossfile
---
port:
  tcp:8000:
    listening: true
image.inside('-v /usr/bin/goss:/goss -v /path/to/gossfiles:/gossdir') {
  goss.validate(
    bin: '/goss',
    gossfile: '/gossdir/gossfile.yaml'
  )
}
We can also use the withRun method to expose the running container’s information as a temporary closure parameter within the code block. This allows for some clever capabilities, such as running sidecars:
image.withRun() { container ->
  docker.image('centos:8').inside("--link ${container.id}:app") {
    sh 'ls /'
  }
}
Note that this is just an example of potential functionality; in practice, you would want to use user-defined bridge networks or Docker Compose instead of the deprecated --link flag for this kind of requirement.
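As a rough sketch of that recommendation, a user-defined bridge network could replace the --link flag; the network and container names are hypothetical, and the docker CLI is assumed to be available on the agent.
// sketch only: connect the app container and a test container over a user-defined bridge network
sh 'docker network create ci-net || true'                    // hypothetical network name
image.withRun('--network ci-net --name app') { container ->
  docker.image('centos:8').inside('--network ci-net') {
    sh 'curl app:8000'                                        // reach the webapp by container name
  }
}
sh 'docker network rm ci-net'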
At this point, we are ready to tag the image and push it to a registry. Ensure that the registry credentials are stored securely within the Jenkins configuration. We can easily push the image to a custom registry with code like the following:
// push docker image with latest and version tags
docker.withRegistry("https://${params.REGISTRY_URL}", '<jenkins_reg_creds_id>') {
  image.push()
  image.push(params.VERSION)
}
Note that you can similarly push the image to other custom registries that may have custom bindings to Jenkins Pipeline. For example, code like the following will push the image to an Artifactory Docker Registry:
// initialize artifactory server object by url
artServer = Artifactory.newServer(url: "https://${params.REGISTRY_URL}", credentialsId: '<jenkins_reg_creds_id>')
// initialize docker registry object
artDocker = Artifactory.docker(server: artServer)
// push docker image with latest and version
artDocker.push("${image_name}:latest", params.TARGET_REPO)
image.tag(params.VERSION)
buildInfo = artDocker.push("${image_name}:${params.VERSION}", params.TARGET_REPO) // method will return build info for capturing
Although this section mostly pertains to more feature-filled registries like Artifactory, we can also publish image metadata along with the image. For example, we can capture the repository digest information from the image and then modify the Artifactory buildInfo map to contain that information. We can do this by inspecting the Docker image, parsing the output through a Go template, and then capturing the resulting stdout.
// grab the repository digest for the image
repoDigest = sh(label: 'Obtain Image Repository Digest', returnStdout: true, script: "docker inspect -f '{{index .RepoDigests 0}}' ${image.imageName()}").trim()
// add the digest property to the build info manifest
buildInfo.repoDigest = repoDigest
// push the build info manifest to artifactory
artServer.publishBuildInfo(buildInfo)
We can also scan the image for security vulnerabilities. For example, JFrog XRay will perform this task and has bindings to Jenkins Pipeline. We can scan the image and display the results with code like the following:
// scan image for vulnerabilities
scanConfig = [
  'buildName': buildInfo.name,
  'buildNumber': buildInfo.number
]
xrayResults = xrayScanBuild(scanConfig)
print xrayResults as String
Note that this code will also fail the build if a security vulnerability is discovered, so you can safely avoid continuing with a vulnerable image.
After the image has passed testing and has baked sufficiently in your registry for development and/or QA, you should then create a request to promote the image to a higher lifecycle environment registry. This can be done seamlessly with Artifactory like the following:
promotionConfig = [
  buildName: buildInfo.name,
  buildNumber: buildInfo.number,
  targetRepo: 'docker-prod',
  sourceRepo: 'docker-qa',
  comment: 'Promote image upward when application finishes testing.',
  includeDependencies: false
]

Artifactory.addInteractivePromotion(server: artServer, promotionConfig: promotionConfig, displayName: "${params.APP} version ${params.VERSION} image promotion to higher lifecycle registry.")
For other registries, you can re-push the image to the new registry. This results in mostly the same functionality but without various auditing and metadata features.
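For example, a minimal sketch of re-pushing to a production registry follows, mirroring the earlier withRegistry usage; the registry URL and credentials ID are placeholders.
// sketch only: promote by re-pushing the same image object to a higher lifecycle registry
docker.withRegistry('https://registry.prod.example.com', '<jenkins_prod_reg_creds_id>') {
  image.push(params.VERSION)
}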
We are now ready to deploy the application to a Kubernetes cluster. We will deploy this application using Helm (the Kubernetes package manager). We will also leverage the Jenkins Pipeline library for Helm. We need to ensure Helm is set up on the Jenkins node at a specific version with code like this:
helm.setup('2.14.3')
Now that we have ensured Helm is configured on the Jenkins node for the agent user and Tiller is configured on the Kubernetes cluster, we are ready to package the application and deploy it to Kubernetes.
We can now prepare the Helm chart developed for deploying the application. We first lint the Helm chart to ensure code quality, and then we package the Helm chart so we can utilize it to deploy the application. For a simplified Helm chart with Kubernetes manifests and metadata like the following:
# Chart.yaml
---
apiVersion: v1
appVersion: "1.0"
description: A Helm chart for Kubernetes to deploy a Python Webapp
name: webapp
version: 1.0.0
# values.yaml
---
image:
  tag: stable
  pullPolicy: IfNotPresent
  port: 8000
# deploy.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
  labels:
    name: {{ .Release.Name }}
    app: {{ .Chart.Name }}
    release: {{ .Release.Name }}
spec:
  replicas: 3
  selector:
    matchLabels:
      name: {{ .Release.Name }}
      app: {{ .Chart.Name }}
      release: {{ .Release.Name }}
  template:
    metadata:
      labels:
        name: {{ .Release.Name }}
        app: {{ .Chart.Name }}
        release: {{ .Release.Name }}
    spec:
      containers:
      - name: "{{ .Chart.Name }}-{{ .Release.Name }}"
        image: "webapp:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        ports:
        - containerPort: {{ .Values.image.port }}
We can do this in the pipeline like so:
helm.lint(
  chart: params.APP,
  set: ["image.tag=${params.VERSION}"]
)

helm.packages(
  chart: "charts/${params.APP}",
  update_deps: true,
  version: params.VERSION
)
We are ready to install the application onto the Kubernetes cluster via Helm and then execute tests against the application’s running pods to ensure a valid deployment.
helm.install(
  chart: "${params.APP}-chart",
  name: "${params.APP}-${params.VERSION}"
)

helm.test(
  cleanup: false,
  name: "${params.APP}-${params.VERSION}",
  parallel: true
)
This will install a new version of the application alongside the previous version. This allows for easy blue-green deployment of the application, as the previous version can then be removed with a helm.delete. Alternatively, in lower lifecycle environments, you may want to just immediately helm.upgrade for speed and ease of use.
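A rough sketch of those two options follows; note that the parameter names for helm.delete and helm.upgrade are assumed here to mirror helm.install, and previous_version is a hypothetical variable, so consult the library documentation for the exact signatures.
// assumption: blue-green cleanup of the previous release once the new release passes testing
helm.delete(
  name: "${params.APP}-${previous_version}"
)

// assumption: in-place upgrade for lower lifecycle environments
helm.upgrade(
  chart: "${params.APP}-chart",
  name: params.APP
)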
In the final section of the pipeline, we can perform various cleanups and report on the job results. This includes removing stale Docker artifacts and publishing results to email or chat programs like Slack.
success {
  print 'Job completed successfully.'
  sh "docker rmi -f ${image.id}"
}
failure {
  print 'Job failed.'
  // notify slack and email on failure
  slackSend(
    channel: '<my-slack-channel>',
    color: 'warning',
    message: "Job failed for ${env.JOB_NAME}/${env.BUILD_NUMBER} at ${env.JOB_URL}."
  )
  mail(
    to: '<my-report-email>',
    subject: "Failure: ${env.BUILD_TAG}",
    body: "Job failed for ${env.JOB_NAME}/${env.BUILD_NUMBER} at ${env.JOB_URL}."
  )
}
always {
  // remove stale docker artifacts and prune the system
  print 'Cleaning up the Docker system.'
  sh 'docker system prune -f'
}
Note that a system prune may aggressively clean up the Docker footprint on your system beyond what you want, so use it with care.
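If that is too aggressive for a shared build agent, a more conservative cleanup sketch would prune only dangling images:
always {
  // conservative alternative: remove only dangling images rather than pruning the whole system
  print 'Cleaning up dangling Docker images.'
  sh 'docker image prune -f'
}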
In this article, we explored the Jenkins Pipeline for an end-to-end containerized application lifecycle. This began with cloning the application and its associated Dockerfile and ended with validating the application deployment on a Kubernetes cluster and reporting its status.
There are opportunities to expand the functionality of this pipeline to add even more features to the container lifecycle of the application. Through the proper use of the parameters, this pipeline applies to any application in your organization, and there is no need to develop a separate Jenkinsfile for each application.
If your organization is interested in a pipeline for automatically managing the container lifecycle and Kubernetes deployments of your applications, you can speak with our technical sales team to learn more about the added value we can deliver to your organization with these tools.