<img height="1" width="1" style="display:none;" alt="" src="https://dc.ads.linkedin.com/collect/?pid=58103&amp;fmt=gif">
Skip to content
All posts

Vault Kubernetes: Advanced

The Vault Kubernetes Advanced Workshop will teach you how to deploy, configure, integrate, and use Vault with Kubernetes in advanced ways.

Overview

You will learn how to:

  • Use the Vault Helm chart to deploy Vault on Kubernetes in a variety of ways, including with an external Vault cluster and with auto-unsealing enabled.
  • Configure Vault for Kubernetes, including setting up roles, policies, and secret engines.
  • Integrate Vault with Kubernetes, including using the Vault Agent to inject secrets into pods and using the Vault CLI to manage secrets from within Kubernetes.
  • Use Vault with Kubernetes applications, including using Vault to store secrets for applications and to authenticate and authorize users to access applications.

By the end of the course, you will be able to:

  • Deploy Vault on Kubernetes in a secure and scalable way.
  • Configure Vault for Kubernetes to meet the specific needs of your applications.
  • Integrate Vault with Kubernetes to simplify the management of secrets and to improve the security of your applications.
  • Use Vault with Kubernetes applications to secure your applications and to make them easier to develop and deploy.

The course is intended for system administrators, DevOps engineers, and security engineers who are already familiar with Vault and Kubernetes. The prerequisite is a basic understanding of Vault and Kubernetes.

Section 1: Vault Kubernetes Deployment

Lesson 1 - Vault Kubernetes Deployment: Helm Chart Input Values

In the Intermediate workshop lesson, we viewed the most common and useful Helm chart values. The complete, up-to-date set of possible values and their functionality can be found at https://github.com/hashicorp/vault-helm/blob/main/values.yaml.

A formatted version of the values can be found at https://developer.hashicorp.com/vault/docs/platform/k8s/helm/configuration.
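
If you prefer to inspect the values locally, a quick sketch (assuming the standard HashiCorp Helm repository) is to dump the chart defaults with Helm itself:

# Add the HashiCorp Helm repository (if not already added) and refresh it
helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update

# Print every supported input value with its default (add --version to pin a chart release)
helm show values hashicorp/vault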

Lesson 2 - Vault Kubernetes Deployment: External Vault Cluster

While deploying a Vault server cluster to a Kubernetes cluster is convenient in the lab environment for this workshop, a more secure, production-style deployment would target bare-metal (preferred) or virtualized instances external to the cluster. In this situation, you would need Kubernetes resources that interface with an external Vault cluster. While it is possible to configure this manually for Kubernetes, it is much more convenient to continue to leverage the Vault Helm chart.

Note that for this subsection it is possible to completely implement the described steps instead of simulating them. You can easily start a Vault development server locally with:

vault server -dev

This dev-mode server requires no further setup, and your local Vault CLI will be authenticated to communicate with it. This makes it easy to experiment with Vault or to start a Vault instance for development. Every feature of Vault is available in "dev" mode. The "-dev" flag merely short-circuits a lot of setup to insecure defaults.
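
For example, a quick sanity check (assuming the dev server's default listen address of 127.0.0.1:8200):

export VAULT_ADDR='http://127.0.0.1:8200'

# Confirm the dev server is initialized and unsealed
vault status

# Dev mode pre-mounts a KV version 2 secrets engine at secret/
vault secrets list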

WARNING: Never, ever, ever run a "dev" mode server in production. It is insecure and will lose data on every restart (since it stores data in memory). It is only made for development or experimentation.

However, executing these steps instead of simulating them will redirect Vault consumers to the external Vault server instead of the Vault cluster deployed in Kubernetes. After completing these steps, you would then need to undo the deployment modifications with Helm. There are other options as well, such as continuing to target the external server or switching between the Vault servers as desired.

According to the documentation, we can allegedly configure the Vault Agent injector to target the external Vault server by modifying the current deployment:

helm -n vault-system upgrade vault hashicorp/vault --set 'injector.externalVaultAddr=http://<external vault>:8200'

However, in reality, more parameters are required for this modification:

helm -n vault-system upgrade vault hashicorp/vault --set 'global.enabled=true' --set 'injector.enabled=true' --set 'injector.externalVaultAddr=http://<external vault>:8200' --set 'server.enabled=false'

Now, the Agent injection procedure targets the external Vault cluster, and the Vault server cluster on Kubernetes is uninstalled (if previously installed). This is significantly more streamlined than the manual process, with essentially no tradeoffs, and thus it is the only procedure worth discussing. For any other lab workshop exercises, this external server can be used if desired. When necessary, the Vault address will need to be changed from the Kubernetes DNS name to the external address. Note, in case it is not obvious, that the Kubernetes cluster will require network security modifications (e.g., egress rules from the cluster and corresponding ingress rules in front of the external Vault server) to communicate with it.
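
If you do execute the modification and later want to restore the in-cluster deployment, a sketch of the reverting Helm upgrade (assuming the release otherwise still uses the original chart values) might look like:

helm -n vault-system upgrade vault hashicorp/vault \
  --set 'server.enabled=true' \
  --set 'injector.externalVaultAddr='

# The redeployed server pods may come up sealed and need to be unsealed again
# unless auto-unsealing is configured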

Lesson 3 - Vault Kubernetes Deployment: Auto-Unsealing

Unfortunately, absent a cloud KMS, the options for auto-unsealing on Kubernetes are primarily limited to two choices:

  • Custom unsealing automated through wrapper code
  • Transit secrets engine authenticated and authorized from a second Vault server

The former would require deployment through, e.g., Ansible to parse the returned unseal keys, store them in memory, and then use them to unseal the other nodes. One could also use the Vault bindings in various languages to perform this task. Since we would need to retain the unseal keys, these would need to be stored somewhere secure. The Vault Transit secrets engine allows AES-256-GCM cipher encryption of strings; therefore, it is a strong candidate. However, this would need to be stored in a second Vault server or another secrets management tool and would constitute another “secret zero”.
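
As a rough illustration of what such wrapper code must do, here is a sketch (assuming the jq utility and VAULT_ADDR pointing at the first, uninitialized node; real automation must also protect the keys at rest):

# Initialize the first node and capture the output as JSON
INIT_JSON=$(vault operator init -key-shares=5 -key-threshold=3 -format=json)

# Feed the first three unseal keys back to the node to unseal it
echo "$INIT_JSON" | jq -r '.unseal_keys_b64[0:3][]' | while read -r KEY; do
  vault operator unseal "$KEY"
done

# The unseal keys (and root token) must then be stored securely,
# e.g., encrypted via a Transit key on another Vault server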

This leads us to the second option, which is clearly somewhat similar. If the unseal keys are stored in the Transit secrets engine of the second Vault server, and the first Vault server is authenticated and authorized against it, then we can enable auto-unsealing by adding the following stanza to the first server's Vault configuration file:

seal "transit" {
  address         = "https://<vault addr>:8200" # second Vault
  disable_renewal = "false"
  key_name        = "<transit engine auto-unseal key name>"
  mount_path      = "transit/"
}

This would be only a portion of the configuration file, and it would need to be stored as a ConfigMap in Kubernetes that is accessible to the Vault cluster during and after deployment. The key should also be wrapped and unwrapped during the generation and retrieval process.
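
For context, a sketch of what preparing the second (unsealer) Vault server might involve is shown below; the key, policy, and token parameters here are illustrative, not part of the workshop environment:

# On the second Vault server: enable Transit and create a dedicated auto-unseal key
vault secrets enable transit
vault write -f transit/keys/autounseal

# Authorize encrypt/decrypt against that key only
vault policy write autounseal - <<EOF
path "transit/encrypt/autounseal" {
  capabilities = ["update"]
}
path "transit/decrypt/autounseal" {
  capabilities = ["update"]
}
EOF

# Create the token the first cluster presents (supplied via the seal stanza's
# "token" parameter or the VAULT_TOKEN environment variable)
vault token create -orphan -policy=autounseal -period=24h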

This whole process is so involved and requires so much explanation and context beyond what is covered in a Vault Kubernetes workshop that we only discuss it at a high level here. Fortunately, this is the most complicated procedure for auto-unsealing Vault, so a production Vault cluster external to Kubernetes would not encounter this level of difficulty.

You have now learned more about deploying Vault on Kubernetes. You can customize the deployment very robustly according to your environment requirements, have Kubernetes workloads interact with an external Vault cluster instead of one deployed to Kubernetes, and auto-unseal the Vault cluster with a customized configuration in a ConfigMap and authentication/authorization against a Transit secrets engine in another Vault server.


Section 2: Vault Kubernetes Configuration

Lesson 1 - Vault Kubernetes Configuration: Engine Interaction with Vault API

Interfacing with the Vault REST API enables users to programmatically interact with Vault. There are also various bindings in different languages that provide first-class support for the REST API in their respective languages. However, we will only discuss interfacing with the REST API directly, as covering each language's bindings would become very lengthy and would require some prior knowledge of each language.

Assuming we are executing these interactions with the “curl” shell executable command, then the general form will be:

curl --header "X-Vault-Token: <vault token>" --request <type> --data '{"<parameter>": "<value>", "<parameter>": "<value>"}' http(s)://<vault addr>:<port>/v1/<endpoint type>/<endpoint engine>/<endpoint item>

For example, if we wanted to create the “webapps” bonus Vault role for the Kubernetes authentication engine, we could create a JSON file “payload.json” for the data payload like:

{
  "bound_service_account_names": "webapps",
  "bound_service_account_namespaces": "default",
  "policies": ["webapp-one-read", "webapp-two-read"],
}

We would then execute:

curl --header "X-Vault-Token: <vault token>" --request POST --data @payload.json http://vault.vault-system:8200/v1/auth/kubernetes/role/webapps

Note how the parameters align with the CLI arguments and flags and how the API endpoint aligns with the final “path” argument to the CLI. This is an intentional design and is very helpful to notice because not everything will be documented for the Vault CLI (similar to paths for Vault policies). However, the API is exhaustively documented, and thus, one could translate from API to CLI if desired or necessary.
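
For example, a sketch of the equivalent CLI command for the role creation above, with the parameters mirroring the JSON payload:

vault write auth/kubernetes/role/webapps \
  bound_service_account_names=webapps \
  bound_service_account_namespaces=default \
  policies=webapp-one-read,webapp-two-read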

We could similarly write key value pairs for a KV2 secret with the payload:

{
  "data": {
    "<key>": "<value>",
    "<key>": "<value>"
  }
}

and execute:

curl --header "X-Vault-Token: <vault token>" --request POST --data @payload.json http://vault.vault-system:8200/v1/secret/data/webapps/webapp_one

Note the presence of "data" in the endpoint and recall how it appeared in the policy path for the KV2 secrets read. Also, recall the comment that policy paths typically align with API endpoints. Sure enough, the pattern is demonstrated here to be true. For KV2, "data" will be part of the path for anything associated with the API but not anything associated with the CLI.
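
For comparison, a sketch of the equivalent CLI write, where "data" never appears in the path:

# The CLI inserts the data/ API prefix for KV2 automatically
vault kv put secret/webapps/webapp_one <key>=<value> <key>=<value>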

Notice also that the key value pairs are nested within a “data” key. If we read the secret at version 1 (it has not been updated at this point, but rather only initialized):

curl --header "X-Vault-Token: <vault token>" http://vault.vault-system:8200/v1/secret/data/webapps/webapp_one?version=1

The returned body would appear like this:

{
  "data": {
    "data": {
      "foo": "bar"
    },
    "metadata": {
      ...
    }  
  }
}

The outer “data” key differentiates the actual data from the metadata. The nested “data” key is standard within the KV2 secrets engine as the top-level key containing the key value pairs at a given secret path.
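
The CLI read handles this nesting for you; a sketch of the equivalent commands (the "foo" key matches the response body above):

vault kv get -version=1 secret/webapps/webapp_one

# Or extract a single field without any JSON parsing
vault kv get -version=1 -field=foo secret/webapps/webapp_one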

The complete Vault API documentation can be found at https://developer.hashicorp.com/vault/api-docs

Lesson 2 - Vault Kubernetes Configuration: Vault Policy Operators

In the Intermediate workshop lesson, we discussed writing basic policies enhanced with the “*” (glob) operator, and the standard capabilities. There are two additional capabilities to discuss now:

  • sudo - Allows access to paths that are root-protected. Tokens are not permitted to interact with these paths unless they have the sudo capability (in addition to the other necessary capabilities for performing an operation against that path, such as read or delete). For example, modifying the audit log backends requires a token with sudo privileges.
  • deny - Disallows access. This always takes precedence regardless of any other defined capabilities (including sudo). A combined example follows this list.
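
As an illustration, here is a sketch of a policy combining both capabilities (the policy name and the restricted secrets path are hypothetical, not part of the workshop environment):

vault policy write audit-and-deny - <<EOF
# "sudo" is required for root-protected paths, such as managing audit devices
path "sys/audit/*" {
  capabilities = ["create", "read", "update", "delete", "list", "sudo"]
}

# "deny" always wins, even if another policy grants access to this path
path "secret/data/restricted/*" {
  capabilities = ["deny"]
}
EOF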

In addition, a + can be used to denote any number of characters bounded within a single path segment. It is primarily used for specifying multiple prefixes to paths:

# Permit reading the "myteam" path under any top-level path under secret/
path "secret/+/myteam" {
  capabilities = ["read"]
}

# Permit reading secret/foo/bar/otherteam, secret/bar/foo/otherteam, etc.
path "secret/+/+/otherteam" {
  capabilities = ["read"]
}

One can also template policies with parameters such as "{{identity.entity.id}}", which expands to the ID of the entity (such as a user or role) to which the policy is attached. There are various other expansions under the "identity.entity" namespace that can be rendered in Vault policies. The following example displays a policy where an entity (e.g., user, role) is permitted access to its own secrets path:

path "secret/data//*" {
  capabilities = ["create", "update", "patch", "read", "delete"]
}

path "secret/metadata//*" {
  capabilities = ["list"]
}

Finally, it is also possible to specify allowed, denied, or required parameters for a path. In the KV2 secrets engine, these would equate to specific keys in the path. However, in practice, this is rarely (if ever) used.

Lesson 3 - Vault Kubernetes Configuration: Vault Agent Configuration

Vault Agent is a client daemon that provides the following features:

  • Auto-Auth - Automatically authenticate to Vault and manage the token renewal process for locally-retrieved dynamic secrets.
  • Caching - Allows client-side caching of responses containing newly created tokens and responses containing leased secrets generated off of these newly created tokens. The agent also manages the renewals of the cached tokens and leases.
  • Windows Service - Allows running the Vault Agent as a Windows service.
  • Templating - Allows rendering of user-supplied templates by Vault Agent, using the token generated by the Auto-Auth step.

The most important aspects for this workshop are auto-auth and templating. Their impact on secrets retrieval for Kubernetes applications will be explored more in depth in the Application section of the workshop. However, note that in the Intermediate sections, we were using the defaults for those features. Since the auto-auth would be mostly an academic exercise, as the Helm chart can handle most use cases automatically, we will introduce only the templating here. An example of templating configuration blocks is as follows:

template_config {
  static_secret_render_interval = "10m"
  exit_on_retry_failure = true
}

template {
  source      = "/tmp/agent/template.ctmpl"
  destination = "/tmp/agent/render.txt"
}

template {
  contents     = "{{ with secret \"secret/my-secret\" }}{{ .Data.data.foo }}{{ end }}"
  destination  = "/tmp/agent/render-content.txt"
}

In this example, we observe two files with content rendered by the templating engine. The first example is rendered from a source template file. The second example is rendered from an inline Go template. The functionality of the second example essentially accesses the secret at the path "secret/my-secret" and then parses for the value at the key "foo". The value then populates the content of the file "/tmp/agent/render-content.txt". It is not important to fully understand the functionality here if you are unfamiliar with the Go template renderer (with Sprig functions), as this serves as a first exposure to the concept, and we will revisit this more thoroughly in the Application section of the workshop.
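
Assuming these blocks are part of a complete agent configuration (i.e., alongside an auto_auth stanza) saved as, say, /tmp/agent/agent.hcl, a sketch of running the agent and inspecting the rendered output (file names are illustrative) would be:

vault agent -config=/tmp/agent/agent.hcl

# In another terminal, inspect the rendered files
cat /tmp/agent/render.txt
cat /tmp/agent/render-content.txt   # contains the value of "foo" from secret/my-secret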

You have now learned more about configuring Vault on Kubernetes. You can now interact with the Vault server via its API, and robustly customize fine-tuned Vault policies for authorization.


Section 3: Vault Kubernetes Integration

Lesson 1 - Vault Kubernetes Integration: External Vault Server

In the Advanced Deployment workshop lesson, we learned how to deploy the Vault Agent injector such that it targets an external Vault server. However, we now need to configure the Kubernetes service account for Vault with a Kubernetes secret, similar to how we configured the "webapp-one" service account with a Kubernetes secret in the Introduction workshop lesson. We can use the following manifest for this purpose:

apiVersion: v1
kind: Secret
metadata:
  name: vault-token
  namespace: vault-system
  annotations:
    kubernetes.io/service-account.name: vault
type: kubernetes.io/service-account-token

When configuring the Vault Kubernetes authentication where the Vault cluster is deployed in the Kubernetes cluster, the CA cert and JWT both default to the local Kubernetes settings because the Vault deployment can easily automatically obtain these settings. However, we will need to explicitly specify these for an external Vault server with a corresponding Kubernetes service account for Vault. We already specified the CA cert when we configured the engine in the Introduction section, but we now need to configure the JWT.

Retrieve the base64-encoded JWT with:

kubectl -n vault-system get secret vault-token -o jsonpath='{.data.token}'

Decode the token, e.g., echo <token> | base64 --decode. Then, update the token reviewer JWT for the Kubernetes authentication engine with the base64-decoded token. You can do this either in the UI or by overwriting the config with:

vault write auth/kubernetes/config token_reviewer_jwt="<base 64 decoded token>" kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443" kubernetes_ca_cert="<ca cert text>"
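
Putting it together, a sketch of capturing these values in shell variables before writing the configuration (the CA cert and host lookups shown here are one common approach when configuring from outside the cluster, not the only one):

TOKEN_REVIEW_JWT=$(kubectl -n vault-system get secret vault-token -o jsonpath='{.data.token}' | base64 --decode)
KUBE_CA_CERT=$(kubectl -n vault-system get secret vault-token -o jsonpath='{.data.ca\.crt}' | base64 --decode)
KUBE_HOST=$(kubectl config view --raw --minify --flatten -o jsonpath='{.clusters[].cluster.server}')

vault write auth/kubernetes/config \
  token_reviewer_jwt="$TOKEN_REVIEW_JWT" \
  kubernetes_host="$KUBE_HOST" \
  kubernetes_ca_cert="$KUBE_CA_CERT"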

You have now learned more about integrating Vault with Kubernetes. You can now integrate the Vault service account with an external Vault server for automatic Vault Agent injection as an init container into Kubernetes workloads.


Section 4: Vault Kubernetes Application

Lesson 1 - Vault Kubernetes Application: Agent Injector Annotations

In the Introduction workshop lesson, we implemented the basic Kubernetes pod annotations for Vault Agent injection of a KV2 secret. There are many more possible annotations recognized by the Vault Agent webhook and pod for customization. The full documentation for these annotations can be examined at https://developer.hashicorp.com/vault/docs/platform/k8s/injector/annotations

In the Intermediate workshop lesson, we discussed the CSI driver and plugin. This enabled syncing secrets to Kubernetes, which could then be interfaced with workload containers as environment variables. We discussed the tradeoff between the disadvantage of an additional attack vector for the secrets versus the advantage of in-process environment variable storage instead of a stored file. What if we could have the advantage without the disadvantage?

With these additional annotations, that is almost completely possible. Recall the following annotations for the “webapp-one” pod within the Kubernetes deployment:

spec:
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: 'true'
        vault.hashicorp.com/role: 'webapp-one'
        vault.hashicorp.com/agent-inject-secret-mysecret: 'secret/data/webapps/app_one'

Note that the path to "mysecret" begins with "secret/data" because "secret/" is the default KV2 mount path and "data/" is the KV2 API prefix. You would need to adjust the mount path and prefix if the secrets engine is mounted elsewhere.

We can now use the “vault.hashicorp.com/agent-inject-template” annotation to configure the template the Vault Agent should use for rendering a secret:

spec:
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject-template-mysecret: |
          {{- with secret "secret/data/webapps/app_one" -}}
          export CREDENTIAL="{{ .Data.data.<key> }}"
          {{- end -}}

Now the Vault Agent would hopefully export the secret as a container process environment variable named “CREDENTIAL”, and there will be no additional syncing to a Kubernetes secret. This may be more cumbersome than the alternatives but is certainly more secure. Note the repeated JSON parsing of the key “data” in the Go template within the manifest. Recall the first “data” is for the data key in the body response versus the metadata (reference the Advanced Configuration API lesson), and the second is for the actual nested key “data” in the KV2 secrets engine.

However, this actually renders the content of a file on the container filesystem with this command to export the environment variable. We would still need to instruct the container to export the environment variable in its process using the file contents:

spec:
  serviceAccountName: webapp-one
  containers:
  - name: httpd
    image: httpd
    command: ['sh', '-c']
    args: ['source /vault/secrets/mysecret && <entrypoint script>']

Therefore, the secret content is still rendered into an accessible file on the filesystem, and we still have a potential attack vector for the secret. However, this is still the procedure with the most mitigations for intrusion.
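
To spot-check the rendering (a hypothetical verification, assuming the workload is a Deployment named "webapp-one" with the "httpd" container from the manifest above):

# Confirm the injected Agent rendered the export statement into the shared volume
kubectl exec deploy/webapp-one -c httpd -- cat /vault/secrets/mysecret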

Lesson 2 - Vault Kubernetes Application: Explicit Vault Agent Injector Configuration

In the previous section, we introduced the full gamut of possible Vault Agent injection annotations. These enable the configuration of the Vault Agent relative to Kubernetes. However, we can also leverage the "vault.hashicorp.com/agent-configmap" annotation to customize the Vault Agent process itself within the container. This can be helpful if we want to customize beyond the automated defaults. In the Advanced Configuration workshop lesson, we learned about Vault Agent configuration features and Vault Agent HCL templates. We can now apply this knowledge to an actual implementation.

To provide a soft lead into these concepts, we will begin with a configuration for an explicit auto-authentication with the Kubernetes authentication engine. Note that this is typically unnecessary due to using the Vault Agent injector init container provided with the Vault Helm chart, but it serves as a good pedagogical exercise (and perhaps a historical lesson). First, we will examine a ConfigMap storing the Vault Agent configuration in HCL format:

apiVersion: v1
kind: ConfigMap
metadata:
  name: vault-agent-config
  namespace: default
data:
  vault-agent-config.hcl: |
    # Remove if running as sidecar instead of initContainer
    exit_after_auth = true

    pid_file = "/home/vault/pidfile"

    auto_auth {
        method "kubernetes" {
            mount_path = "auth/kubernetes"
            config = {
                role = "webapp-one"
            }
        }

        sink "file" {
            config = {
                path = "/home/vault/.vault-token"
            }
        }
    }

This will utilize the Kubernetes authentication engine with the role we configured previously to authenticate and store a token at the specified path. Now we would need to patch (do not actually do this as it would overwrite the sane defaults provided by the Helm chart) the Kubernetes “webapp-one” deployment to mount the ConfigMap as data in an additional initContainer for the Vault Agent (note how similar much of this procedure is to the automation provided by the Vault Helm chart):

apiVersion: v1
kind: Pod
metadata:
  name: webapp-one
  namespace: default
spec:
  volumes:
    - configMap:
        items:
          - key: vault-agent-config.hcl
            path: vault-agent-config.hcl
        name: vault-agent-config
      name: agent-config
    - emptyDir: {}
      name: shared-data
  initContainers:
    - args:
        - agent
        - -config=/etc/vault/vault-agent-config.hcl
        - -log-level=debug
      env:
        - name: VAULT_ADDR
          value: http://<vault addr>:8200
      image: vault
      name: vault-agent
      volumeMounts:
        - mountPath: /etc/vault
          name: agent-config

It is worth noting that “/etc/vault” is not a directory that is created as part of a normal Vault installation and configuration but is the customary location to place Vault configuration files for both the agent and the server. Therefore, that is the directory that we will use here for the mount path. Also, once again, we assume this is a patch and that, therefore, the “webapp-one” service account is again already specified in the values within the manifest API schema for the deployment and associated pods.
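
As an aside, the ConfigMap above could alternatively be created directly from a local HCL file rather than from an inline manifest (a sketch, assuming the file is saved locally under the same name):

kubectl -n default create configmap vault-agent-config --from-file=vault-agent-config.hcl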

Here is a comprehensive example of a ConfigMap for a Vault Agent config relevant to our examples thus far. It contains a "vault-agent-config.hcl" for a sidecar injector and a "vault-agent-config-init.hcl" for an initContainer injector:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: vault-agent-config
data:
  vault-agent-config.hcl: |
    "auto_auth" = {
      "method" = {
        "config" = {
          "role" = "webapp-one"
        }
        "type" = "kubernetes"
      }

      "sink" = {
        "config" = {
          "path" = "/home/vault/.token"
        }

        "type" = "file"
      }
    }

    "exit_after_auth" = false
    "pid_file" = "/home/vault/.pid"

    "template" = {
      "contents" = "/usr/bin/fake-command login "
      "destination" = "/vault/secrets/db-creds"
    }

    "vault" = {
      "address" = "https://vault.vault-system.svc.cluster.local:8200"
      "ca_cert" = "/vault/tls/ca.crt"
      "client_cert" = "/vault/tls/client.crt"
      "client_key" = "/vault/tls/client.key"
    }
  vault-agent-config-init.hcl: |
    "auto_auth" = {
      "method" = {
        "config" = {
          "role" = "webapp-one"
        }
        "type" = "kubernetes"
      }

      "sink" = {
        "config" = {
          "path" = "/home/vault/.token"
        }

        "type" = "file"
      }
    }

    "exit_after_auth" = true
    "pid_file" = "/home/vault/.pid"

    "template" = {
      "contents" = "/usr/bin/fake-command login "
      "destination" = "/vault/secrets/db-creds"
    }

    "vault" = {
      "address" = "https://vault.vault-system.svc.cluster.local:8200"
      "ca_cert" = "/vault/tls/ca.crt"
      "client_cert" = "/vault/tls/client.crt"
      "client_key" = "/vault/tls/client.key"
    }
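
To hand this ConfigMap to the injector, the deployment would reference it through the annotation mentioned earlier. A hypothetical patch (again, do not actually apply this against the workshop deployment) might look like:

kubectl -n default patch deployment webapp-one --type merge \
  -p '{"spec":{"template":{"metadata":{"annotations":{"vault.hashicorp.com/agent-configmap":"vault-agent-config"}}}}}'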

You have now learned more about Vault with Kubernetes applications. You can now robustly configure the Vault Agent in Kubernetes with annotations and robustly configure the Vault Agent in the container with configuration files mounted from ConfigMaps.

You are finished learning about everything you likely need to know about Vault with Kubernetes! However, there is still plenty more to learn if you want to perform a deep dive or need to deal with edge cases in your environment. The Vault documentation has even more to offer for those interested: https://developer.hashicorp.com/vault/docs.

Contact us with any questions. We are here to help!