Software Engineer

Jason Hancock


When we were building our CloudStack environment we wanted newly created instances to check into the puppetmaster and receive their configurations automatically. The goals for accomplishing this were:

- No human intervention
- No separate asset database/spreadsheet/text file/etc. to update
- Do NOT use Puppet’s auto-signing feature
- Instances receive all config via Puppet, minimizing the number of CloudStack templates we have to maintain by keeping only base/minimal images for each OS we support (one el5 image, one el6 image, etc.)

If you’re using Puppet, and using stored configurations to automatically generate configuration for other parts of your infrastructure, then you need to clean up the stored configuration database whenever you destroy a node; otherwise whatever stored config that node exported will persist even after the node is gone. Luckily for us, the folks at PuppetLabs ship a script that we can use to clean up after a node.
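To make that concrete, here is a minimal, hypothetical example of the kind of stored configuration that lingers: an exported Nagios host definition. The exporting node writes it into the stored-config database and the monitoring server collects it, so the record outlives the node unless you purge that node’s stored configs.

```puppet
# Hypothetical exported resource declared on every managed node.
# It lands in the stored-config database and is collected elsewhere
# (e.g. Nagios_host <<| |>> on the monitoring server), so destroying
# the node does not remove it by itself.
@@nagios_host { $::fqdn:
  ensure  => present,
  address => $::ipaddress,
  use     => 'generic-host',
}
```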

After spinning up CloudStack and playing with it for a bit, I decided it was time to add some monitors. We’re currently using Nagios, although I may start looking at Zenoss. Anyhow, I built a couple of plugins to mimic some of the functionality of the Zenoss ZenPack for CloudStack. For example, I can graph how many instances are in my cloud, in addition to memory/CPU/storage/IP allocations. You can find all of the code in my nagios-cloudstack project on GitHub.

As I mentioned in my last post, I’ve been toying with a CloudStack-based private cloud. Let me try to paint a picture of what we’re trying to accomplish with CloudStack:

- Our dispatch software decides it needs a box of flavor ‘foo’ and calls out to the API to deploy a new box
- The dispatch software waits until the box is up before dispatching a job to it
- The dispatch software monitors the running job

Update 12/06/2011: The facter code has been updated to support Fedora and to also load the instance-id of the VM from the metadata available on the virtual router. Grab the latest from GitHub. I’ve been playing with a proof-of-concept CloudStack-based cloud at work. One of the things that caught my eye was that you can associate userdata with an instance. What I wanted to do was exploit this and use the userdata to populate facts that I could then use in my Puppet manifests.
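As a sketch of why that matters (the fact name here is hypothetical, not necessarily what the published code exposes), a fact populated from userdata can drive node classification directly in a manifest:

```puppet
# Hypothetical: assumes the userdata facter code exposes a 'cs_role' fact
# whose value was set in the instance's userdata at deploy time.
case $::cs_role {
  'web':   { include webserver }
  'queue': { include queueworker }
  default: { include base }
}
```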

Let’s say that you wanted to have a puppet fact that contained the version number of a particular package installed on each server. For this example, let’s use Nagios’ nrpe package. I put my custom facts into <modulepath>/custom/lib/facter/, so I’ll call this one <modulepath>/custom/lib/facter/nrpe.rb. The code looks like this:

```ruby
require 'facter'

# Query the installed nrpe package version/release via rpm.
result = %x{/bin/rpm -q --queryformat "%{VERSION}-%{RELEASE}" nrpe}

Facter.add('nrpe') do
  setcode do
    result
  end
end
```

This creates a fact called ‘nrpe’ that can be accessed in your puppet modules via $nrpe (or, to be completely correct, $::nrpe).
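A trivial usage sketch (not from the original post), just to show the fact being consumed in a manifest:

```puppet
# Interpolate the custom fact into a resource parameter.
notify { "nrpe package version: ${::nrpe}": }
```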

I’ve updated my script that orders images shot from multiple cameras chronologically based on EXIF data, which was originally found here. The script now supports Canon CR2 and CRW raw files, Nikon NEF raw files, and JPEGs. I’ve moved the script to my GitHub account. You can find it here.

It’s no secret that I use WordPress for this blog. One of the reasons I like WordPress is the wide variety of plugins that are available. Since I blog a lot about perl/php/puppet code, I like to have a plugin that does syntax highlighting. For this task, I use the WP-Syntax plugin which is built on top of GeSHi. Unfortunately, it doesn’t have a language file for puppet. A quick search turned up this language file.

One thing I’ve struggled with in puppet in the past is managing iptables firewalls. I used to build a few different templates for the various firewalls I had to manage and just pass in a list of ports to open up, but that became a nightmare as more and more applications with different requirements were added. The number of templates began to sprawl.
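One common alternative to per-application templates (a hedged sketch, not necessarily the approach I ended up with) is to manage individual rules as resources with the puppetlabs-firewall module, so each application’s module declares only the rules it needs. Parameter names vary between module versions:

```puppet
# Assumes the puppetlabs-firewall module is installed.
# Each application module declares just its own rules instead of
# every port combination needing its own iptables template.
firewall { '100 allow http and https':
  proto  => 'tcp',
  dport  => [80, 443],
  action => 'accept',
}
```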

puppet and xinetd

I ran into an interesting problem with puppet today involving xinetd on CentOS 5. In one of my manifests, I had declared the following service resource:

```puppet
service { "xinetd":
  ensure    => stopped,
  enable    => false,
  hasstatus => true,
}
```

The problem I was seeing was that every time the puppet client ran, it stopped the xinetd service:

```
Sep 5 01:10:21 someserver puppet-agent[2300]: (/Stage[main]/Xinetd/Service[xinetd]/ensure) ensure changed 'running' to 'stopped'
Sep 5 01:10:21 someserver puppet-agent[2300]: Finished catalog run in 5.82 seconds
Sep 5 01:20:33 someserver puppet-agent[2300]: (/Stage[main]/Xinetd/Service[xinetd]/ensure) ensure changed 'running' to 'stopped'
Sep 5 01:20:36 someserver puppet-agent[2300]: Finished catalog run in 11.
```