
How to Create a Terraform Provider: 11 Architecture Components

Matt Schuchard, a certified Terraform engineer at Shadow-Soft, explores a recommended architecture for creating a custom Terraform provider.

Terraform is the industry standard for infrastructure provisioning. One of its core strengths is its ability to abstract disparate APIs into a common interface through its DSL. The software plugin that brokers these APIs to the Terraform DSL is a provider. Normally, Go language bindings further broker between the provider and the API itself.

Beginning development of your first Terraform provider can be a daunting task. While only a few best practices around provider architecture are formally documented, providers usually follow certain guidelines for code and repository organization. In this article, we will explore the generally recommended architecture of a custom Terraform provider.

This article assumes a basic familiarity with Terraform and Golang. It also assumes you have completed the official Hashicorp introductory tutorial and guide on writing a custom provider. The primary prerequisites for this article are a recent version of Golang (>= 1.12) and Terraform (>= 0.12.7).

Repository Organization

The official Hashicorp guide on a custom provider contains an example where you have a code layout of:

.
├── main.go
├── provider.go
└── resource_server.go

This organization is quite sufficient for a Hello World style example. However, almost every real-world provider will require many more code files to remain both properly organized and feature complete. A typical Terraform provider repository appears more like the following:

.
├── .gitignore
├── .travis.yml
├── CHANGELOG.md
├── go.mod
├── go.sum
├── LICENSE.md
├── package_name
│   ├── client.go
│   ├── client_test.go
│   ├── data_one.go
│   ├── data_one_test.go
│   ├── provider.go
│   ├── provider_test.go
│   ├── resource_one.go
│   ├── resource_one_test.go
│   ├── resource_two.go
│   └── resource_two_test.go
├── main.go
├── Makefile
└── README.md

This repository organization and code layout is an instructive example of how one would place code and supporting files for a Terraform provider. Flexibility in this organization is certainly possible for code and files that are not covered in this example, so it is absolutely a suggestion and not a mandate.

Bindings

An important distinction for a Terraform provider is that it should leverage the Go language bindings to the API and not contain the actual bindings within its own code. For the rest of this article, we will assume the example Terraform provider interfaces with an API to make a pizza. Note that the author is aware of the actual Terraform community provider that makes a pizza; the one described here is a theoretical provider for a theoretical API.

Assume we have a REST API endpoint at the URL https://api.pizza.com/request/create. This endpoint allows us to request a pizza creation with options such as size and toppings. The greatly simplified and trimmed Go code to interact with this endpoint would look something like:

func MakePizza(opts *pizzaOpts) (map[string][]string, error) {
  // initialize endpoint
  endpoint := "https://api.pizza.com/request/create"

  // initialize http client
  client := &http.Client{}

  // wrap the request body in a bytes.Buffer pointer
  bodyBuffer := bytes.NewBuffer(opts.body)

  // body is always the request body, or nil for reads
  request, err := http.NewRequest(opts.method, endpoint, bodyBuffer)

  // code here to handle errors

  // initiate request for response
  response, err := client.Do(request)

  // code here to handle errors

  // code here to read the response body, convert the response json to a map, and return it
  defer response.Body.Close()

  return responseMap, err
}
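The elided response-handling step above can be sketched with the standard library alone. Here, decodeResponse is a hypothetical helper name, and the JSON is decoded generically into a map[string]interface{} rather than the map[string][]string returned by MakePizza:

```go
package main

import (
  "encoding/json"
  "fmt"
)

// decodeResponse converts a raw JSON response body into a map,
// roughly what the elided section of MakePizza would do
func decodeResponse(body []byte) (map[string]interface{}, error) {
  responseMap := make(map[string]interface{})
  err := json.Unmarshal(body, &responseMap)
  return responseMap, err
}

func main() {
  body := []byte(`{"size": 12, "toppings": ["sausage", "peppers", "olives"]}`)
  responseMap, err := decodeResponse(body)
  if err != nil {
    panic(err)
  }
  fmt.Println(responseMap["size"]) // prints 12
}
```

Note that encoding/json decodes JSON numbers into float64 values when targeting interface{}, so callers must type assert accordingly.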

This code belongs within the Go language bindings and not within the provider itself. The provider would instead import the bindings package and invoke its methods like:

// options passed from terraform schema
opts := &pizzaOpts {
  size:     12,
  toppings: []string{"sausage", "peppers", "olives"},
}

// passed from resource function to binding function
pizza, err := MakePizza(opts)

This is an important code architecture distinction that is recommended by HashiCorp and follows intuitively from code separation best practices.

Go Modules and Terraform SDK

An important part of provider development is leveraging the Terraform Plugin SDK. All current providers are expected to use it, and it greatly facilitates provider development. In fact, it is the primary impetus for the requirement of recent Go and Terraform versions, as described above. It requires Go module support, but you are strongly encouraged to use Go modules for your provider's dependency management anyway.

To ensure you are properly capturing your dependencies, you should have a go.mod with at least your module, the Terraform SDK, and the Go language bindings for the API specified inside:

module github.com/pizzacorp/terraform-provider-pizza

go 1.12

require (
  github.com/hashicorp/terraform-plugin-sdk v1.0.0
  github.com/pizzacorp/pizza-go v1.0.0
)

You can now easily manage your dependencies with the go executable and its associated subcommands, such as mod, list, and get.

Main and Provider Packages

Your code will normally be organized into two packages: one for the main and another for the core provider. The main package will normally be located within your main.go file and be rather succinct:

package main

import (
  "github.com/pizzacorp/terraform-provider-pizza/pizza"
  "github.com/hashicorp/terraform-plugin-sdk/plugin"
)

func main() {
  plugin.Serve(&plugin.ServeOpts {
    ProviderFunc: pizza.Provider,
  })
}

Note that we are specifying the main package and then importing the Terraform Plugin SDK and the actual provider package. The provider package code will be located within a directory (e.g., pizza) in your root-level directory. The rest of the code in the main package is fairly boilerplate.

There will normally not be a pizza.go file within your package code directory. However, remember to specify the package as pizza (replace this in your use case with the specific software the provider targets) at the beginning of each of your code files. There are also a couple of imports that will generally be found in each code file containing the provider, data, and resource functions:

package pizza

import (
  "github.com/hashicorp/terraform-plugin-sdk/helper/schema"
  "github.com/hashicorp/terraform-plugin-sdk/terraform"
)

Client

The code in your client should broker communication between the API endpoints and your provider, resources, and data functions. The client should absolutely leverage the Golang bindings for the API. Hierarchically, you can think of the client as functions invoked from the provider, resource, and data functions. The client then invokes functions from the bindings to interface with the API endpoints. For example, one of the common purposes for the client is to provide authentication for your provider.

In some cases, the client code will actually be part of the bindings and not within the provider. This is true of bindings that are also considered widely used core SDKs. In these situations, software consumers will typically be varied enough that a client function would be conveniently located in the bindings.

One common problem that developers often face is how to efficiently pass around the authentication details for the client. The generally accepted practice for this is to declare a struct in the client like:

type clientOpts struct {
  token          string
  username       string
  environment    string
  enterprise     bool
  endpointSuffix string
  // we need the interface type for the values since they will be highly varied
  configuration  map[string]interface{}
  client         *http.Client
}

This struct can then conveniently be passed around the functions in your package for persistent authentication and configuration without the need for re-declaring or re-defining your client options. This will be especially convenient when we construct the provider functions.
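A minimal, self-contained sketch of this pattern (with a trimmed struct and a hypothetical describeClient stand-in function): the options are constructed once, and every subsequent function receives the same pointer.

```go
package main

import "fmt"

// a trimmed version of the clientOpts struct above
type clientOpts struct {
  username      string
  enterprise    bool
  configuration map[string]interface{}
}

// a stand-in for a client function: it reads the shared options
// without re-declaring or re-defining them
func describeClient(opts *clientOpts) string {
  return fmt.Sprintf("user=%s enterprise=%t size=%v",
    opts.username, opts.enterprise, opts.configuration["size"])
}

func main() {
  // constructed once, e.g. in the provider configure function
  opts := &clientOpts{
    username:      "mario",
    enterprise:    false,
    configuration: map[string]interface{}{"size": 12},
  }

  // each provider, resource, and data function receives the same pointer
  fmt.Println(describeClient(opts)) // prints user=mario enterprise=false size=12
}
```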

Your simplified primary client function could then appear like:

// note the non-error outputs can be a variety of types depending upon context
func APIClient(opts *clientOpts) ([]byte, map[string][]string, error) {
  // initialize client
  client := &pizza.Client{}

  // request a pizza
  request, err := pizza.NewRequest(opts.username, opts.token, opts.configuration, opts.endpointSuffix)

  if err != nil {
    return nil, nil, fmt.Errorf("error constructing request for pizza: %v", err)
  }

  // more code for communicating the pizza request with the api

  return body, headers, err
}

Note that the client function should be exported (as it is in this example).

Provider Functions

The primary provider function will also appear fairly boilerplate. Note that ValidateFunc additionally requires the SDK's helper/validation package and regexp imports, and the username pattern below is illustrative:

// primary provider function
func Provider() terraform.ResourceProvider {
  return &schema.Provider {
    // map terraform dsl provider arguments to schema
    Schema: map[string]*schema.Schema {
      "username": &schema.Schema {
        Type:         schema.TypeString,
        Optional:     true,
        DefaultFunc:  schema.EnvDefaultFunc("PIZZA_USERNAME", nil),
        // validate the argument against an illustrative pattern
        ValidateFunc: validation.StringMatch(regexp.MustCompile("^[a-zA-Z0-9]+$"), "The username value must conform to characters and integers."),
        Description:  "Username for authentication",
      },
      "enterprise": &schema.Schema {
        Type:        schema.TypeBool,
        Optional:    true,
        DefaultFunc: schema.EnvDefaultFunc("PIZZA_ENTERPRISE", false),
        Description: "Enterprise or FOSS Pizza",
      },
      // other arguments omitted for brevity
    },
    // map terraform dsl data to functions
    DataSourcesMap: map[string]*schema.Resource {
      "pizzas": dataPizza(),
    },
    // map terraform dsl resources to functions
    ResourcesMap: map[string]*schema.Resource {
      "pizza":        resourcePizza(),
      "cheesy_bread": resourceCheesyBread(),
    },
    // provider configuration function
    ConfigureFunc: configureProvider,
  }
}

In the above code, we see the formerly undocumented capability of ValidateFunc in the schema specification. There are several other formerly undocumented function keys for the argument schema above that could only be found by searching through generated source code documentation or by diving into large providers such as AWS and Azure. The ValidateFunc key, as one might guess, provides the capability to validate input arguments to the provider block. These keys are now publicly documented in the Terraform Extending documentation and also include DiffSuppressFunc, DefaultFunc, and StateFunc.

The final key of the primary provider function schema specifies a provider configuration function. The purpose of this function is generally to interface between the provider function and the client functions. A typical flow would be a user in the Terraform DSL passing in arguments to the provider block, the primary provider function taking these inputs and passing them to the provider configure function, and then the configure function passing them to the client for initialization. A simple example follows:

// configure provider options
func configureProvider(data *schema.ResourceData) (interface{}, error) {
  // store input options in an opts struct
  opts := &clientOpts {
    username:   data.Get("username").(string),
    enterprise: data.Get("enterprise").(bool),
  }

  // pass options from the terraform DSL to the client
  client, _, err := APIClient(opts)

  // code here to handle errors

  return client, err
}
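To tie this flow together from the user's side, the corresponding provider block in the Terraform DSL would look something like the following (values are illustrative):

```hcl
provider "pizza" {
  username   = "mario"
  enterprise = false
}
```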

Data

A data source is easier to architect and develop than a resource because it primarily involves a read action instead of several actions. Consequently, the amount of code required to develop a data source is normally substantially less than for a resource. The primary data source function generally appears like:

func dataPizza() *schema.Resource {
  return &schema.Resource {
    Read: dataPizzaRead,
    Schema: map[string]*schema.Schema {
      "id": {
        Type:     schema.TypeInt,
        Optional: true,
        Computed: true,
      },
      // this would be an argument to the data block in the terraform DSL
      "size": {
        Type:     schema.TypeInt,
        Required: true,
      },
      // this would be one of the data source's exported attributes
      "toppings": {
        // we do not allow toppings to be repeated, so a set type is preferable
        Type:     schema.TypeSet,
        Optional: true,
        // ensure the members of the set are all strings
        Elem:     &schema.Schema{Type: schema.TypeString},
        // terraform determines this and not the user
        Computed: true,
      },
      "prices": {
        Type:     schema.TypeList,
        Optional: true,
        // ensure the members of the list are all floats
        Elem:     &schema.Schema{Type: schema.TypeFloat},
        Computed: true,
      },
    },
  }
}

As can be inferred from the code, this data source allows us to look up the possible toppings and prices for a pizza of a given size. For data sources, we typically only directly need a read function (out of all possible helper functions). The read function will perform the read action against the API endpoint.

// read function for the data source
func dataPizzaRead(data *schema.ResourceData, meta interface{}) error {
  // construct clientOpts
  opts := &clientOpts {
    configuration:  map[string]interface{}{"size": data.Get("size").(int)},
    endpointSuffix: "/pizzas",
  }

  // receive the response body converted to a map
  _, responseMap, err := APIClient(opts)

  // code here to handle errors

  // typically we only want to return nil or an error since we utilize the Set methods
  // verify the response returned from pizza
  if _, exists := responseMap["toppings"]; exists {
    // set the data source's exported attributes
    data.Set("toppings", responseMap["toppings"])
  } else {
    return fmt.Errorf("toppings not found in response from pizza")
  }
  if _, exists := responseMap["prices"]; exists {
    // set the data source's exported attributes
    data.Set("prices", responseMap["prices"])
    return nil
  }

  return fmt.Errorf("prices not found in response from pizza")
}

Resources

Creating a custom Terraform resource is actually covered rather comprehensively in the official Terraform Provider Tutorial, so we will only need to elucidate on its architecture in relation to the remainder of the provider codebase. Similar to the provider and data functions, we need a primary resource function:

// pizza resource declaration and schema
func resourcePizza() *schema.Resource {
  return &schema.Resource {
    // functions for the various actions
    Create: resourcePizzaCreate,
    Read:   resourcePizzaRead,
    Update: resourcePizzaUpdate,
    Delete: resourcePizzaDelete,
    Exists: resourcePizzaExists,
    // used only in the case that an ID-only refresh is possible, which is not completely true here, but we use this anyway for the sake of brevity
    Importer: &schema.ResourceImporter {
      State: schema.ImportStatePassthrough,
    },
    Schema: map[string]*schema.Schema {
      // resource arguments and their specifications go here
      "size": &schema.Schema {
        Type:        schema.TypeInt,
        // optional rather than required, since a default function is supplied
        Optional:    true,
        // if the argument value is unspecified, grab it from the environment; nil would be the final backup default value here
        DefaultFunc: schema.EnvDefaultFunc("PIZZA_SIZE", nil),
        Description: "The size of the pizza.",
      },
      "toppings": &schema.Schema {
        Type:        schema.TypeSet,
        Optional:    true,
        // ensure the members of the set are all strings
        Elem:        &schema.Schema{Type: schema.TypeString},
        Description: "The toppings that should be on the pizza.",
      },
    },
  }
}

For the sake of brevity, we will only touch on the helper function to perform the pizza creation. Here is a simplified example of how that would appear:

// create a pizza
func resourcePizzaCreate(data *schema.ResourceData, meta interface{}) error {
  // create a struct from the desired pizza arguments
  opts := &pizzaOpts {
    configuration: map[string]interface{}{"size": 12, "toppings": []string{"sausage", "peppers", "olives"}},
  }

  // invoke the bindings to make a pizza according to the specifications
  pizza, err := MakePizza(opts)

  // code to handle errors

  // we need to set the resource id before completely returning from this stack
  data.SetId(pizza["id"][0])

  return resourcePizzaRead(data, meta)
}

It is important to eventually return a read on the resource such that it can be verified as successfully created according to the specifications. The immediate return could be, e.g., an update, but eventually, the stack should return a read. Also, note that partial state mode is not covered here because it has an implementation impact and therefore does not pertain to the architecture.

Acceptance Tests

Acceptance testing will be an important part of your provider's code. These tests normally correlate to your code files and carry a _test suffix. The essential imports needed for these tests include:

import (
    "testing"

    "github.com/hashicorp/terraform-plugin-sdk/helper/resource"
    "github.com/hashicorp/terraform-plugin-sdk/terraform"
)

With these packages supporting your code, you are now ready to write acceptance tests for your data and resources. A simplified example of a test for the pizza resource would appear like:


func TestAccResourcePizza(test *testing.T) {
  resource.Test(test, resource.TestCase{
    Providers: testAccProviders,
    Steps: []resource.TestStep{
      {
        // test resource config (see below)
        Config: testResourcePizzaConfig,
        Check: resource.ComposeTestCheckFunc(
          // validate the resource results in a successful creation
          testResourcePizzaExists("pizza.italian"),
          // validate the resource arguments were successfully passed
          resource.TestCheckResourceAttr("pizza.italian", "size", "14"),
          // set members are checked via count and element attributes
          resource.TestCheckResourceAttr("pizza.italian", "toppings.#", "4"),
          // validate the resource attributes can be successfully set
          resource.TestCheckResourceAttrSet("pizza.italian", "id"),
        ),
      },
    },
  })
}

func testResourcePizzaExists(name string) resource.TestCheckFunc {
  return func(state *terraform.State) error {
    // code here to verify the resource exists in state and remotely
    return nil
  }
}

var testResourcePizzaConfig = `
resource "pizza" "italian" {
  size     = 14
  toppings = ["pesto", "olives", "peppers", "basil"]
}`

You can create more sophisticated test resource configs, including exported attribute arguments, in order to more robustly test your provider, resource, and data functions. Unit testing, while also important, falls within the scope of standard Golang development and therefore will not be covered here.
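For example, a sketch of such a config (using Terraform 0.12 interpolation syntax; the data source and attribute names follow the earlier examples), where the data source's exported attributes feed the resource's arguments:

```hcl
# look up the available toppings for a given size
data "pizzas" "menu" {
  size = 14
}

# feed the data source's exported attributes into the resource arguments
resource "pizza" "italian" {
  size     = data.pizzas.menu.size
  toppings = data.pizzas.menu.toppings
}
```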

Additionally, sweepers will not be covered here, as they do not significantly impact the architecture. We will only note that there is typically a one-to-one correspondence between the code invoking the sweeper functions and the code file containing them, and that the individual sweeper functions reside in the code files for each associated resource test they sweep.
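As a sketch of that convention (assuming the SDK's helper/resource package; the actual sweep logic is elided), a sweeper registered alongside the pizza resource tests would look like:

```go
// in resource_one_test.go: register a sweeper to destroy pizzas
// left behind by failed acceptance tests
func init() {
  resource.AddTestSweepers("pizza", &resource.Sweeper{
    Name: "pizza",
    F: func(region string) error {
      // code here to list and delete dangling test pizzas via the client
      return nil
    },
  })
}
```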

Makefile

Your Terraform provider requires various commands for assorted executables to manage the processes involved in its software lifecycle. Thankfully, our old friend GNU Make is here to automate these process executions (until a native solution exists). You will often see a Makefile in a Terraform provider code repository, and here are some examples of helpful automated targets you can include, courtesy of the AWS provider:

default: build

build: fmtcheck
	go install

gen:
	rm -f aws/internal/keyvaluetags/*_gen.go
	go generate ./...

test: fmtcheck
	go test $(TEST) $(TESTARGS) -timeout=120s -parallel=4

testacc: fmtcheck
	TF_ACC=1 go test $(TEST) -v -count $(TEST_COUNT) -parallel 20 $(TESTARGS) -timeout 120m

fmt:
	@echo "==> Fixing source code with gofmt..."
	gofmt -s -w ./$(PKG_NAME)

# Currently required by tf-deploy compile
fmtcheck:
	@sh -c "'$(CURDIR)/scripts/gofmtcheck.sh'"

As you can see, it is very beneficial to create a Makefile to automate your processes around development, testing, compilation, linking, and other facets of the provider lifecycle.

Continuous Integration

Now that we have ensured there are tests for the code, and that we have a Makefile to automate all of the executable processes around our provider lifecycle, we can wrap these inside a continuous integration suite. For a closed-source project, this will likely be Jenkins. However, for open-source projects hosted on GitHub (as Terraform providers commonly are), this is usually Travis. We can create a Travis YAML config to ensure continuous integration for our provider.

Your Travis matrix should generally include the make targets for at least compiling and linking your provider and for executing its tests. The Travis language support for this will be go. A simple example would look like:

dist: xenial
language: go

matrix:
  include:
    - go: '1.12.x'
      name: 'Code Compile'
      script:
        - make build
    - go: '1.12.x'
      name: 'Code Test'
      script:
        - make test
        - make testacc

install:
  - make tools

Final Thoughts

We have now covered the major aspects of architecting a Terraform provider. This article touches on all of the major files and directories one will normally find in a provider repository, as well as the general content of each file and the content’s relation to the remainder of the codebase. For the sake of brevity, the examples given were simplified and sometimes also trimmed. An actual provider implementation will likely contain code far more sophisticated and complex than what was presented. However, the examples given were chosen to clearly illustrate the architecture.

If your organization has an interest in developing your own Terraform provider, then please reach out to Shadow-Soft's technical sales team for guidance and assistance. We have experience developing Terraform providers for both organizations and software vendors, and we would be glad to work with you to ensure the quality of your providers. Robust providers for your or your customers' infrastructure provisioning will ensure efficient, resilient, and stable operations for years to come.