RHEL7, Docker, Puppet & OpenStack…Oh My!

Unless you’ve been living under a rock, you have certainly heard about Docker, and probably about its underlying technology, Linux Containers, too.  If you attended the Docker.. I mean.. Red Hat Summit in San Francisco, CA two weeks ago, then you certainly heard about the many brilliant projects and integration efforts happening around Docker right this second.  Shadow-Soft is officially a Foundation partner for Docker, and it was great meeting the Docker team in SF.

For a moment, I’d like to look with you through my telescope into the future and describe how you will be thinking about your datacenter in a year or two, and how you will modernize the way you design and deploy applications between now and then.

In brief, Docker is a user-friendly wrapper around the Linux Container (LXC) features of newer Linux kernel releases.  Starting from an image definition pulled from a Registry, Docker creates a Container that runs the image plus any modifications you make.  The image itself is never modified; the changes sit on top as a layer.  Docker executes these containers as ordinary processes on the host, so they are governed by the host’s policies restricting memory, disk, CPU and access control.  In other words, containers can be tiny.  Containers are also portable.  Developers can build a container on top of a blessed image from the Operations and Security teams, and it will just run.  What a powerful concept!
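
If you want to see that layering in action, here’s a quick experiment you can run once Docker is installed (my_blessed_rhel7 stands in for whatever base image you actually use):

# Any change made inside the container lands in a copy-on-write layer
root#:  docker run my_blessed_rhel7 touch /etc/i-was-here
# Show what changed relative to the image; the image itself is untouched
root#:  docker diff $(docker ps -l -q)
# Keep the change by saving the layer as a new image
root#:  docker commit $(docker ps -l -q) images/rhel7-modified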

So, you are probably thinking that’s pretty cool, and that it sounds a lot like how shipping was normalized by the advent of the standard shipping container: uniform dimensions, designed to be stacked on transports, lifted by cranes, and dropped onto railway cars or trucks – an entire distribution infrastructure specialized in handling the containers without regard to their contents.

#
# An example Dockerfile
#
FROM my_blessed_rhel7
MAINTAINER Bob Opsman "buildspecialist@example.org"
# On RHEL7 the Apache package and binary are httpd, not apache2
RUN yum install -y httpd
RUN mkdir -p /var/log/services/httpd
# Environment variables are baked into the image (illustrative here)
ENV APACHE_LOG_DIR /var/log/services/httpd
ENV APACHE_RUN_USER apache
EXPOSE 80
# Run httpd in the foreground so Docker can supervise it as the container process
ENTRYPOINT ["/usr/sbin/httpd"]
CMD ["-D","FOREGROUND"]

If Docker is the solution for the container, what about the infrastructure?  That’s where Puppet and OpenStack come in.  With the inclusion of Docker in RHEL7, all the tools you need to manage containers on an individual host come out of the box when you install Red Hat Enterprise Linux, whether in your enterprise or at your cloud hosting provider.
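
Getting the daemon running on a RHEL7 host is a one-liner per step (a quick sketch; the package and service names assume the stock RHEL7 channels):

root#:  yum install -y docker
root#:  systemctl enable docker && systemctl start docker
root#:  docker version

With the service up, building, pushing and running an image looks like this: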

root#:  docker build -t images/rhel7-apache2 .
root#:  docker push images/rhel7-apache2
root#:  docker run -d -p 80 -v /var/www/my-web-app images/rhel7-apache2
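
One note on that last command: “-p 80” publishes the container’s port 80 on a host port that Docker chooses for you.  You can ask Docker which port it picked:

# Show the host port mapped to the newest container's port 80
root#:  docker port $(docker ps -l -q) 80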

But, if you can do it on the system command line, you can do it with automation, right?  Of course!  Enter Puppet.  Puppet is the crane that lifts the container off the ship and puts it on the RHEL7 container host.  You can fit as many containers on the host as you budget capacity for.  Since containers draw on the host’s physical CPU and memory directly, you can size a container host the same way you would size a system to run all of those services in parallel natively.  You don’t need to allocate fixed memory sizes or fixed disk image sizes as you would with virtual machines.  If you need to, you can enforce maximums, but there is no minimum.  Yes, you read that right – this isn’t virtualization.  If the process isn’t consuming memory, that memory is free to be used elsewhere.
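
If you do want to enforce a maximum, the run command takes limits directly.  A minimal sketch (the values are arbitrary; -m caps memory, and -c sets relative CPU shares, not a reservation):

root#:  docker run -d -m 512m -c 512 -p 80 images/rhel7-apache2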

Take advantage of Gareth Rushgrove’s docker module for Puppet, available on the Puppet Forge:  https://forge.puppetlabs.com/garethr/docker

root#: puppet module install garethr-docker

Now you can provision Docker images and run containers on any node managed by Puppet, just by using the right declarations.

# Declare the docker class with parameters; use "include 'docker'"
# instead only if the defaults are acceptable (declaring both is an error)
class { 'docker':
  version => '0.9',
}

docker::image { 'resources/registry': }

docker::image { 'blessed_rhel7':
  image_tag => 'baseline',
}

docker::run { 'my-web-app':
  image => 'images/rhel7-apache2',
  # no command needed; the image's ENTRYPOINT already starts httpd
  # other optional parameters ...
}
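
To try those declarations on a single host before wiring them into your site manifests, drop them in a file and apply it (the file name and module path here are illustrative):

root#:  puppet apply --modulepath=/etc/puppet/modules my-web-app.pp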

Awesome sauce.  What’s next?  What if I told you that when you deploy elastic cloud architectures with OpenStack Icehouse, you can use Docker natively?  In fact, you can, using the Docker plugin for Heat available on GitHub:  https://github.com/openstack/heat/tree/master/contrib/docker/docker

root#:  git clone https://github.com/openstack/heat.git && cd heat
root#:  pip install -r requirements.txt
root#:  PLUGIN_PATH=$(cd contrib/docker/docker; pwd)
root#:  ln -sf $PLUGIN_PATH /usr/lib/heat/docker
root#:  echo "plugin_dirs=$PLUGIN_PATH" >> /etc/heat/heat.conf
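
Restart the Heat engine and the new resource type should appear (the service name here assumes the RDO/RHEL packaging):

root#:  systemctl restart openstack-heat-engine
root#:  heat resource-type-list | grep DockerInc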

Now, just create a Heat template that uses your Docker image to start containers:

heat_template_version: 2013-05-23
description: Single compute instance running an apache2 Docker container.
resources:
  my_instance:
    type: OS::Nova::Server
    properties:
      key_name: my_openstack_key
      image: rhel7-blessed
      flavor: m1.small
      # quoted so YAML does not treat the cloud-init directive as a comment
      user_data: "#include https://get.docker.io"
  my_docker_container:
    type: DockerInc::Docker::Container
    properties:
      docker_endpoint: { get_attr: [my_instance, first_address] }
      image: rhel7-apache2
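
Save the template as, say, docker-stack.yaml (the file name is up to you) and launch it:

root#:  heat stack-create -f docker-stack.yaml my-docker-stack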

Congratulations, young padawan, you have mastered the basics.  What’s next, you say?  What mysteries hide in the crystal ball?  Consider for a moment what we have:

  • Host-level provisioning of isolated containers
  • Containers running on top of a stable RHEL7 kernel, with or without their own /sbin/init
  • Containers leveraging filesystem copy-on-write layers to capture changes overlaid on a master image
  • Container overlays that are portable and exportable
  • The ability to put apps of our choice in the containers
  • Automated, even elastic, provisioning to deploy and start these containers

Now, there are a few logical next steps:

  • Why not make containers hold user applications, not just system services?  Imagine web browser, word processor or scientific application containers, delivered as a service.
  • Why not package and assemble whole application suites in the container model?  Imagine the individual components of an application delivered as an install-ready set of containers, where each component (a container of its own) can be upgraded independently.
  • Why not create an entire self-service collection of containers and authorized images, complete with setup automation and demand-scaling methods of elastic provisioning, and present them to organizations?  Imagine à la carte infrastructure build-outs in the cloud, composed and ready to run, scaling to serve the needs of the moment – with incremental or subscription billing.

Now we’re getting to the future of IT: a place where multiple versions, configurations and users of applications and services coexist and scale as fast as your hardware or provider can manage, with native levels of performance.  You pay for what you use, you never use more than you need, and your costs trend down as compute and storage get cheaper.  Oh, and maybe everyone gets to be root… and that’s ok… but I digress, because surely there are some things which will never happen.

We’re glad to have you along for the ride.