Software Engineer

Jason Hancock

Puppet Camp LA 2012 is being held on May 19th and hosted by Media Temple in Culver City. I’m excited to be a speaker this year. I’ll be talking about running Puppet on CloudStack instances and automating other parts of your infrastructure. Although the details will be focused on what it takes to run Puppet specifically on CloudStack instances, the methodology I will be presenting translates to other clouds and bare-metal infrastructure.

I wanted a Subversion pre-commit hook script that did the following:

- Ensures all *.pp files in the transaction can be validated by the parser
- Ensures all *.pp files pass a puppet-lint check
- Ensures all *.erb files pass a syntax check

I poked around a bit, but most of the existing pre-commit hook scripts were out of date (they wouldn't work on Puppet >= 2.7), and I didn't see one that also ran puppet-lint.
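The hook boils down to dispatching each changed file to the right validator. A minimal sketch of that dispatch logic is below; the hook arguments, the `svnlook` invocation shown in the comments, and the `erb_syntax_check` name are illustrative, not the post's actual script.

```shell
#!/bin/sh
# Sketch of the dispatch half of a Subversion pre-commit hook.
# Assumptions: puppet >= 2.7 and puppet-lint are on the PATH; the hook
# itself would be invoked by svn as: pre-commit REPOS TXN

# Map a changed file to the validator that should run on it.
validator_for() {
    case "$1" in
        *.pp)  echo "puppet parser validate" ;;
        *.erb) echo "erb_syntax_check" ;;      # placeholder name, see below
        *)     echo "" ;;
    esac
}

# In the real hook, something along these lines (not run here):
#   svnlook changed -t "$TXN" "$REPOS" | awk '{print $2}' | while read f; do
#       cmd=$(validator_for "$f")
#       ...
#   done
# *.pp files would additionally be run through puppet-lint, and *.erb
# templates could be checked with:  erb -P -x -T '-' "$f" | ruby -c

validator_for manifests/site.pp
```

Rejecting the commit is then just a matter of exiting nonzero (with the error on stderr) when any validator fails.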

We have a couple of extra monitors lying around at work, as well as some Mac minis and other assorted hardware that isn't being used. We wanted to mount the monitors to the wall and display the typical performance graphs, red/green status monitors, etc. on them. After attending SCALE 10X and seeing @lolcatstevens' talk on HAProxy, where he used impress.js as the presentation software, I knew I wanted to use impress.

When we were building our CloudStack environment, we wanted newly created instances to check into the puppetmaster and receive their configurations automatically. The goals for accomplishing this were:

- No human intervention
- Do not have to update a separate asset database/spreadsheet/text file/etc.
- Do NOT use Puppet's auto-signing feature
- Instances receive all config via Puppet, thus minimizing the number of CloudStack templates we have to maintain by only having to keep base/minimal images for each OS that we are supporting (one el5 image, one el6 image, etc.)

If you're using Puppet with stored configurations to automatically generate configuration for other parts of your infrastructure, then one thing you have to do is clean up the stored configurations database once you have destroyed a node; otherwise, whatever stored config that node created will persist even after the node is gone. Luckily for us, the folks at Puppet Labs ship a script that we can use to clean up after a node.

After spinning up CloudStack and playing with it for a bit, I decided it was time to add some monitoring. We're currently using Nagios, although I may start looking at Zenoss. Anyhow, I built a couple of plugins to mimic some of the functionality of the Zenoss ZenPack for CloudStack. For example, I can graph how many instances are in my cloud, in addition to memory/CPU/storage/IP allocations. You can find all of the code in my nagios-cloudstack project on GitHub.
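A plugin like this follows the standard Nagios convention: exit 0/1/2 for OK/WARNING/CRITICAL, with performance data after the `|` so the value can be graphed. Here is a minimal sketch of that convention; fetching the real count from the CloudStack API (e.g. its listVirtualMachines call) is deliberately stubbed out, and the function name and thresholds are illustrative.

```shell
#!/bin/sh
# Sketch of the Nagios plugin contract: status text + perfdata on stdout,
# exit code 0 (OK), 1 (WARNING), or 2 (CRITICAL).
# In a real check, $count would come from the CloudStack API.

check_instance_count() {
    count=$1 warn=$2 crit=$3
    if [ "$count" -ge "$crit" ]; then
        echo "CRITICAL - $count instances|instances=$count"
        return 2
    elif [ "$count" -ge "$warn" ]; then
        echo "WARNING - $count instances|instances=$count"
        return 1
    fi
    echo "OK - $count instances|instances=$count"
    return 0
}

check_instance_count 12 50 80
```

The perfdata (`instances=12`) is what lets Nagios addons like PNP4Nagios turn the check into a graph.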

As I mentioned in my last post, I've been toying with a CloudStack-based private cloud. Let me try to paint a picture of what we're trying to accomplish with CloudStack:

- Our dispatch software decides it needs a box of flavor 'foo' and calls out to the API to deploy a new box
- The dispatch software waits until the box is up before dispatching a job to it
- The dispatch software monitors the running job
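The "wait until the box is up" step is essentially a poll loop around a reachability probe. A toy sketch of that loop is below; `is_up` is a stub standing in for whatever real probe the dispatch software would use (an SSH attempt, a CloudStack API status query, etc.), here hard-wired to succeed on the third poll.

```shell
#!/bin/sh
# Sketch of the deploy-then-wait step of the dispatch flow.
# is_up is a stub: in real life it would probe the new instance.

attempts=0
is_up() {
    attempts=$((attempts + 1))
    [ "$attempts" -ge 3 ]       # stub: "box" answers on the 3rd poll
}

wait_for_box() {
    until is_up; do
        :                       # real code would: sleep 10, with a timeout
    done
}

wait_for_box
echo "box up after $attempts polls"
```

A production version would also cap the number of attempts so a box that never comes up doesn't block the dispatcher forever.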

Update 12/06/2011: The facter code has been updated to support Fedora and to also load the instance-id of the VM from the metadata available on the virtual router. Grab the latest from GitHub. I’ve been playing with a proof-of-concept CloudStack based cloud at work. One of the things that caught my eye was that you can associate userdata to an instance. What I wanted to do was exploit this and use the userdata to populate facts that I could then use in my Puppet manifests.
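A common shape for that userdata is a string of key=value pairs that gets split apart on the instance. The sketch below shows only the parsing half; the virtual-router URL in the comment is the general CloudStack pattern, and the `cloudstack_` fact-name prefix and sample keys are made up for illustration.

```shell
#!/bin/sh
# Sketch: turn key=value userdata into fact-style name=value lines.
# On a CloudStack instance the raw string would be fetched from the
# virtual router, roughly:
#   userdata=$(curl -s "http://$ROUTER_IP/latest/user-data")
# ($ROUTER_IP and the exact path depend on your setup.)

userdata="role=worker,env=production"     # sample data for the sketch

facts=$(printf '%s\n' "$userdata" | tr ',' '\n' |
    while IFS='=' read -r key val; do
        echo "cloudstack_${key}=${val}"   # hypothetical fact prefix
    done)

echo "$facts"
```

Each emitted line maps naturally onto a Facter fact, which is exactly what makes the userdata usable from Puppet manifests.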

Let's say that you wanted to have a Puppet fact that contained the version number of a particular package installed on each server. For this example, let's use Nagios' nrpe package. I put my custom facts into <modulepath>/custom/lib/facter/, so I'll call it <modulepath>/custom/lib/facter/nrpe.rb. The code looks like this:

    require 'facter'

    result = %x{/bin/rpm -q --queryformat "%{VERSION}-%{RELEASE}" nrpe}

    Facter.add('nrpe') do
      setcode do
        result
      end
    end

This creates a fact called 'nrpe' that can be accessed in your Puppet modules via $nrpe (or, to be completely correct, $::nrpe).

I've updated my script that orders images shot from multiple cameras chronologically based on EXIF data, which was originally found here. The script now supports Canon CR2 and CRW raw files, Nikon NEF raw files, and JPEGs. I've moved the script to my GitHub account. You can find it here.
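One convenient property that makes this kind of ordering easy: EXIF timestamps are formatted `YYYY:MM:DD HH:MM:SS`, so once you have a "timestamp filename" line per image, a plain lexical sort is already chronological. The sketch below demonstrates that with hard-coded sample lines; in real use the timestamps would be extracted per file (for example with `exiftool -s3 -DateTimeOriginal`), and the filenames here are invented.

```shell
#!/bin/sh
# Sketch: EXIF dates (YYYY:MM:DD HH:MM:SS) sort chronologically with a
# plain lexical sort. Sample "timestamp filename" lines stand in for
# output you would build by running an EXIF tool over each image.

sorted=$(printf '%s\n' \
    '2012:03:04 10:15:00 nikon/DSC_0044.NEF' \
    '2012:03:04 09:58:12 canon/IMG_0210.CR2' \
    '2012:03:04 10:02:40 canon/IMG_0211.CR2' \
    | sort)

echo "$sorted"
```

From there, renaming or sequentially numbering the files in sorted order interleaves the shots from all cameras correctly.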