Ansible and Puppet Orchestration: A Comparison


In August of 2005, Luke Kanies created Puppet because he was tired of remembering how the different package management systems worked, and wanted to unify them behind a single interface. In February of 2012, Michael DeHaan created Ansible as an “extra-simple Python API for doing ‘remote things’ over SSH”. Since then, Puppet has grown naturally into a configuration management tool, and Ansible has grown naturally into an orchestration tool. As automated server builds became the norm, people realized Puppet could work as a software provisioner in a pinch, while Ansible enabled easy brownfield and greenfield implementations with low setup and cleanup costs. Conventional wisdom now holds that Puppet is the stronger choice for configuration management, and Ansible for software provisioning.

In November of 2009, RIPienaar created the Marionette Collective (MCollective) to provide orchestration alongside Puppet, built on message queue middleware. He eventually created Choria as a successor to MCollective, while Puppet instead implemented the Puppet Orchestrator as the official orchestration successor. After some initial speed bumps, Puppet Tasks and Plans were created, reusing parts of the Puppet Orchestrator’s implementation. With all of that history out of the way, we can clearly see that we have two mature orchestration software tools available.

In this article, we will compare orchestration in Ansible and Puppet on a case-by-case basis. By the end of the article, it should hopefully be clear which is a better fit for a given environment.



Ansible famously utilizes the SSH protocol for communication between the controller and managed nodes. Although Ansible is agentless, each managed node still requires a reasonably recent version of Python to be installed. SSH communication can sometimes be inefficient, and the third-party Mitogen plugin exists to improve communication efficiency here and elsewhere.
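As a sketch, enabling Mitogen is typically a two-line change in ansible.cfg (the plugin path below is an assumption and depends on where the Mitogen release is unpacked):

```ini
[defaults]
; path to the unpacked Mitogen release (assumed location)
strategy_plugins = /path/to/mitogen/ansible_mitogen/plugins/strategy
strategy = mitogen_linear
```

With this in place, existing playbooks run unchanged but connections are multiplexed through Mitogen's persistent interpreters.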

Puppet’s orchestration utilizes the pxp-agent (C++11) and the pcp-broker (Clojure). These communicate over the custom Puppet Communications Protocol (PCP), which carries the nested-acronym PCP Execution Protocol (PXP). Puppet’s implementation therefore depends on custom first-party software.


Without optimization or Mitogen, the Puppet orchestration platform outperforms Ansible in terms of speed and efficiency. If Ansible executes with Mitogen and some measure of tuning, the race becomes much closer.

With regard to security, Ansible of course uses standard SSH encryption. Authentication typically relies on public-key pairs (e.g. RSA or Ed25519, commonly known as “SSH keys”), while the session itself is encrypted with symmetric ciphers such as AES-256-GCM. Puppet uses TLS for its communication, which largely leverages the same cipher families as SSH. Although the encryption strength and methods are quite similar between the two, there is a chance that a company’s security standards will not allow SSH to some or all instances. In that situation Ansible’s use becomes more limited in that environment.

In terms of reliability, this is where Ansible really shines. SSH essentially never becomes inoperable on a server (I cannot personally recall an incident where the SSH daemon died). If there are SSH connectivity issues, they are usually general networking issues, and therefore would also affect Puppet’s orchestration communication. On the other hand, Puppet’s pxp-agent is prone to stopping on servers. The pxp-agent also requires an active client subscription to a Puppet server, and that connection is not initiated until the first catalog application after bootstrapping. In general, from customer experience over several years, there is roughly a 10-15% chance that a Puppet orchestration run targeting a large number of servers will fail for at least one managed node in the target set due to an inability to communicate.

A worthy side note: in the rarer case where the managed node runs Windows, Ansible uses the WinRM communication protocol. WinRM is notoriously unwieldy, but it can still be used successfully with Ansible after some initial configuration bootstrapping on the managed nodes. The overhead, however, remains rather high.
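As a sketch, the Ansible side of that bootstrapping usually amounts to a few standard connection variables in the inventory (the group and host names here are hypothetical):

```ini
[windows]
win-node-01.example.com

[windows:vars]
ansible_connection=winrm
ansible_port=5986
ansible_winrm_transport=ntlm
ansible_winrm_server_cert_validation=validate
```

The managed nodes themselves still need the WinRM listener enabled and a certificate configured, which is where most of the overhead lives.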



Ansible’s orchestration organization overlaps with its software provisioning organization. Both are organized into playbooks that control the orchestration, and roles that define the specific tasks independent of the orchestration. The two are therefore functionally unified.

Puppet’s orchestration organization is independent of its configuration management organization. Within a Puppet module, a tasks directory holds blocks of code to execute, along with metadata specifying each task’s information. A module can also contain a plans directory for controlling the orchestration, and potentially also executing more code that defines tasks. These are assumed to share an area of responsibility with the module itself, but they are essentially functionally disjoint from the module and its configuration management.
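As a sketch, these pieces sit inside a module roughly like this (the file names are hypothetical):

```
my_module/
├── manifests/        # configuration management code (declarative)
├── tasks/
│   ├── upgrade.sh    # task implementation (any language with an interpreter)
│   └── upgrade.json  # task metadata: description, inputs, types
└── plans/
    └── patch.pp      # orchestration plan in the Puppet DSL
```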


The convenience of sharing code between different functionalities in Ansible cannot be overstated. Code used for software provisioning can be easily repurposed to orchestrate across various servers and groups. This convenience likely derives from Ansible’s original design as an orchestrator and ad-hoc task executor.

On the other hand, tasks and plans require independent code development in Puppet. Additionally, tasks require metadata specifying attributes such as inputs and their types. Although that type of functionality is intrinsically lacking in Ansible, the point here is that Puppet orchestration code development requires a good bit of extra time and effort on top of the configuration management code and data development. This fact motivates a cost-benefit consideration of the extra time and effort involved.
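For illustration, a task metadata file (a hypothetical tasks/upgrade.json) declaring one typed input might look like:

```json
{
  "description": "Upgrade a package to the latest available version",
  "input_method": "environment",
  "parameters": {
    "name": {
      "description": "The name of the package to upgrade",
      "type": "String[1]"
    }
  }
}
```

The type annotations let Puppet validate inputs before the task ever runs on a target, which is the functionality Ansible lacks out of the box.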

DSL and Syntax


Ansible’s straightforward YAML DSL is rather well known at this point. Learning to write tasks in Ansible requires learning YAML and basic Jinja2, with a dash of Python. Writing a task is generally easy, and reading one is not difficult either. Each task also has a corresponding name value that further explains the task’s functionality.
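A minimal illustrative task (the file names and variable here are hypothetical):

```yaml
- name: template the application configuration
  ansible.builtin.template:
    src: app.conf.j2          # Jinja2 template (hypothetical)
    dest: /etc/app/app.conf
  vars:
    listen_port: 8080         # referenced in the template as {{ listen_port }}
```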

Puppet tasks may be written in any language with a valid interpreter on the targets. Each task has a corresponding supporting metadata file in JSON format specifying the task’s attributes. Puppet plans are written in the Puppet DSL, though language constructs related to catalog compilation are not allowed; plans are evaluated imperatively rather than declaratively. There are therefore multiple language options for tasks, while the intrinsic Puppet DSL is available for plans.
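For example, a task implemented as a plain shell script: Puppet passes each task parameter to the interpreter as a PT_&lt;name&gt; environment variable, so the script body is ordinary shell (the module and parameter names here are hypothetical):

```shell
#!/bin/sh
# my_module/tasks/greet.sh (hypothetical)
# Puppet exposes each task parameter as a PT_<name> environment variable,
# e.g. `bolt task run my_module::greet name=world` sets PT_name=world
echo "Hello, ${PT_name}"
```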


Ansible tasks are written in YAML (sometimes with Jinja2) and interpreted by Python. Puppet tasks can be written and interpreted in any language, with JSON metadata providing supporting functionality; Puppet plans are written in the Puppet DSL and interpreted by Ruby (with some components in C++). Ansible is therefore easier for a team to support, since there are not multiple potential additional languages to learn. Also, the only required interpreter on targets is Python, so managing task dependencies (executables, packages, etc.) on targets requires far less maintenance. Ansible’s YAML-based DSL also presents a much shorter learning curve and ramp-up time.

Functionality and Robustness


Although there may be a couple of showstoppers at this point, the discussion thus far has mostly pertained to cost-benefit. Now we turn to potential use cases, and observe how Ansible and Puppet actually support these scenarios.

Ensure Package at Latest Available Version

This is a pretty straightforward patching scenario. In Ansible, this would be really simple:

- name: ensure packages are at latest available version
  ansible.builtin.package:
    name: "{{ my_packages }}" # assume this is a list of strings
    state: latest

In Puppet, the easiest procedure for this is to use Puppet’s package module with its package task. You can then simply execute the package task with action=upgrade and name=my_package. However, this becomes awkward with multiple packages. We then leverage a Puppet plan:

plan my_module::my_packages(
  TargetSpec    $nodes,
  Array[String] $packages,
) {
  $packages.each |String $my_package| {
    run_task('package', $nodes,
      "ensure ${my_package} is at latest version",
      { 'name'   => $my_package,
        'action' => 'upgrade' }
    )
  }
}
So that is obviously a good bit more effort, but also obviously more robust, since it is actual code.

Execute Ad-Hoc Command

In Ansible we can simply do this with the CLI and the command module:

ansible <hosts> -a "/usr/bin/echo 'hello world'"

Wow, that is easy. In Puppet, we can do this with the Bolt or Puppet task CLI:

bolt command run "/usr/bin/echo 'hello world'" --nodes <hosts>

This one feels like a “draw” in every sense of the word.

Update Oracle Database Parameter

Easy tasks are one thing, but how about one that really strains both orchestration tools? For Ansible, this is possible by installing third-party Oracle Database collections. These contain custom Oracle modules developed in Python for interacting with Oracle databases. You can then use these modules as expected:

- name: Reset the value of open_cursors
  oracle_parameter: # module from a third-party Oracle collection
    hostname: remote-db-server
    service_name: orcl
    username: system
    password: manager
    name: open_cursors
    state: default
    scope: spfile

In Puppet, you would similarly leverage custom integrations with Oracle. The primary difference is that the bindings can be in any language (assume Ruby for easy interfacing with Puppet’s own bindings). The task could then be executed in a plan like:

run_task('my_module::oracle_parameter', $nodes, # hypothetical task name
  'update oracle database parameter',
  { 'os_user' => 'oracle',
    'param'   => 'my_param',
    'value'   => 'my_value',
    'scope'   => 'spfile' }
)

Not too bad either way, besides the extra effort to install the third-party support for Oracle Database.

Patch JBoss/Wildfly

This one is a bit more fun because there is not even third-party support for doing this. You would therefore need to code your own custom Ansible module and Puppet task. You could alternatively attempt this with raw commands in both, but let us assume you have the time to implement according to standards.

Essentially, the functionality we would need to code in Python or [normally] Ruby is to retrieve the JBoss patch from an artifact repository, and then to execute the JBoss CLI to apply the patch. The CLI needs to be executed as the Wildfly user. We therefore perform the normal checks: that the user and artifact exist, that the CLI is executable, and that the artifact can be retrieved to a readable location on the local filesystem. Wildfly then needs to be restarted afterward.

The patching command code snippet would appear like the following for a Puppet task:

# apply the patch
stdout, stderr, status = Open3.capture3("#{params[:cli_path]}/ 'patch apply /tmp/'")

# check on success of patching
unless status.success?
  puts 'JBOSS was not successfully patched.'
  message = stderr.empty? ? stdout : stderr
  raise Puppet::Error, _(message)
end

# remove the patch file

# restart the jboss eap server
stdout, stderr, status = Open3.capture3("#{params[:cli_path]}/ 'shutdown --restart=true'")

The task could then be executed with the CLI like puppet task run my_module::patch_jboss os_user=wildfly patch_source= --nodes. Assuming a similar Pythonic implementation in Ansible corresponding to the Ruby code above, we would have a task like the following:

- name: jboss patch
  patch_jboss: # hypothetical custom module
    user: wildfly

Execute Host Orchestration in Parallel

In Ansible this occurs by default when targeting multiple hosts. However, allowing each host to run to completion without waiting on the other hosts (assuming no inter-node dependencies) requires the free strategy.
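A minimal sketch of a play using the free strategy (the task body is a placeholder):

```yaml
- hosts: all
  strategy: free   # each host advances through the tasks without waiting for the others
  tasks:
    - name: long-running work
      ansible.builtin.command: /usr/bin/echo 'hello world'
```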

In Puppet, we would need explicit code similar to process forking, using Bolt’s background() and wait() plan functions:

# run against each target in the background (the task name is hypothetical)
$futures = get_targets($nodes).map |$target| {
  background() || { run_task('my_module::my_task', $target) }
}
$result = wait($futures)

Inter-Host Communication

Both Ansible and Puppet orchestration adhere to a similar controller architecture pattern. This makes communication between managed client nodes more arduous than in a decentralized, distributed architecture. Assume as a basic example that we want to copy a file from server “alice” to server “bob”. There are basically two design patterns for accomplishing this task.

The first is to copy the file from “alice” to the Puppet server/Ansible controller, and then to copy the file from there to “bob”. This can be accomplished in Ansible like:

- hosts: servers
  tasks:
    - name: fetch the file
      ansible.builtin.fetch:
        src: /path/to/file
        dest: /path/to/storage
        flat: true
      when: inventory_hostname == 'alice'

    - name: copy the file
      ansible.builtin.copy:
        src: /path/to/storage/file
        dest: /path/to/file
      when: inventory_hostname == 'bob'

This is rather awkward and makes assumptions about pre-existing conditions. Puppet falls prey to the same issues in its implementation:

$alice = get_target('alice')
$bob = get_target('bob')
download_file('/path/to/file', '/path/to/storage', $alice, 'fetch the file')
upload_file('/path/to/storage/file', '/path/to/file', $bob, 'copy the file')

The second design pattern is to copy the file directly from “alice” to “bob”. We will assume usage of rsync over the SSH transport for the sake of simplicity. We must therefore also assume that the two servers are capable of communicating with each other from both a networking and a security perspective (which may not be true in all environments).

We use the synchronize module in Ansible (not too bad really):

- hosts: alice
  tasks:
    - name: copy file from alice to bob
      ansible.posix.synchronize:
        src: /path/to/file
        dest: /path/to/file
        mode: pull
      delegate_to: bob

and some interesting coding for Puppet (well then…):

$alice = get_target('alice')
$bob = get_target('bob')

# assemble facts for bob
run_plan('facts', 'targets' => $bob)

# assemble ip of bob
$bob_ip = facts($bob)['networking']['ip']

# custom rsync task (hypothetical task name)
run_task('my_module::rsync', $alice,
  'copy file from alice to bob',
  { 'source'           => '/path/to/file',
    'destination_host' => $bob_ip,
    'destination_path' => '/path/to/file' }
)

# or raw command
run_command("<rsync file from alice to bob>", $alice)

Analysis complete?

We see that Ansible orchestration continues to be considerably easier to implement, but Puppet orchestration continues to be considerably more robust due to the extensive coding options supported.


In this article we examined the current state of, and differences between, orchestration in Ansible and Puppet as they exist today. With this information you should be able to make an informed decision about which orchestration tool is the best fit for you and your environment. Consideration should also be given to whether your environment needs configuration management or software provisioning: if you already need Puppet for configuration management, or Ansible for software provisioning, it can be convenient to leverage the same tool for orchestration. Regardless, orchestration is an important component of your software tooling ecosystem independent of your environment architecture, so adding one or the other to your toolbox will certainly pay dividends.

If your organization is interested in architecting and implementing orchestration solutions for your managed nodes in your environment to improve deployments, updates, and modifications, then contact Shadow-Soft below.
