dingus problems

This weekend I decided to tackle both learning Ruby and working with AWS
via the Ruby API. Having only played with both of these in the past,
this presents two learning challenges at once. However, from past
projects, this is how I learn best. I am somewhat familiar with AWS
terms and once made a script in Python to fire up an instance. This was
before Amazon came out with their management console, so I imagine
things have come a long way since then (hopefully easier). I also played
with Ruby for a while, but didn't have a decent project for it. Having a
project with goals will hopefully keep me on track and give me a way to
measure my progress.

My goals for this project are as follows:

  1. Utilize a web based interface. Using rails seems to be the popular
    way to do this, and I'd like to base my template interface off of
    boilerstrap5, a combination of twitter-bootstrap and
    html5boilerplate. This will probably have the most trial and error
    to get it right.
  2. Connect to the AWS api and pull some basic information such as my
    account name.
  3. Fetch details about an AMI image. Maybe I'll be able to parse a list
    of public images, or maybe I can just punch in an image ID and pull
    up the details.
  4. Start an instance from an AMI image. This might require some steps
    like setting up an S3 bucket -- we'll see.
  5. List my running instances.
  6. Control a running instance - ie, power cycle it.
  7. Destroy an instance.
  8. BONUS: Do something similar with S3 buckets - create, list, destroy.

First off, I need to set up a Ruby development environment. Since I have
used PyCharm in the past, I will try JetBrains' RubyMine for my
editor environment. After installing this, the first thing I learned is
that rails is not installed. I could install it using apt-get, but
JetBrains recommends using [RVM]. It looks like a nice way to manage
different versions of Ruby, rails, and gems. I know when I have
installed Ruby applications requiring gems, gem versions were always a
source of concern. It is very easy to get mismatched gem versions in the
wild.

RVM installs locally to ~/.rvm on Linux, which is nice - you don't mess
up any system-wide Ruby installation, and everything stays local to your
development environment. After installation, I had to figure out a
couple bits with rvm.

  • rvm install 1.9.2 # installs ruby 1.9.2
  • rvm list # lists versions of ruby installed
  • rvm use 1.8.7 # use ruby 1.8.7

First, your terminal has to be set up as a login shell. This tripped me
up for a while until I changed the settings in my terminal emulator.
Terminator has this as a checkbox option.

ytjohn@freshdesk:~$ rvm list

rvm rubies

   ruby-1.8.7-p371 [ x86_64 ]
   ruby-1.9.2-p320 [ x86_64 ]

# => - current
# =* - current && default
#  * - default

ytjohn@freshdesk:~$ rvm use 1.8.7

RVM is not a function, selecting rubies with 'rvm use ...' will not work.

You need to change your terminal emulator preferences to allow login shell.
Sometimes it is required to use `/bin/bash --login` as the command.
Please visit https://rvm.io/integration/gnome-terminal/ for a example.

After switching to a login shell:

ytjohn@freshdesk:~$ rvm use 1.8.7
Using /home/ytjohn/.rvm/gems/ruby-1.8.7-p371
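
RVM can also isolate gems per project with gemsets, which should help with the mismatched-gem-version worry above (standard rvm gemset commands; the project name here is made up by me):

ytjohn@freshdesk:~$ rvm use 1.9.2@awsproject --create  # create and switch to a project gemset
ytjohn@freshdesk:~$ rvm gemset list                    # list gemsets for the current ruby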

Finally, once you get ruby and rails working, you can create your rails
project. I'm starting with a rails project because it's "all the rage"
and gives you a decent running start. Later, I'll work on replacing the
supplied templates with boilerplate + bootstrap based ones.

This gets me started. Next up, I'll actually create the project from
within RubyMine and just work on basic web functionality.

[RVM]: https://rvm.io/

Bootstrap and CDNs

Often when creating a "modern" web page, it's very common to find yourself reinventing the wheel over and over again. I know any time I wanted to create a two-column layout, I would have to look at previous works of mine or search the Internet for a decent example. However, I recently came across Twitter's Bootstrap framework. At its core, it's just a css file that divides your web page into a 12-column "grid". You create a "row" div, and inside that row you place your "span*" columns. Each span element spans from 1 to 12 columns, and the spans should always add up to 12 for each row. You can also offset columns. There are css classes for large displays (1200px or higher), normal/default displays (980px), and smaller displays such as tablets (768px) or phones (480px). Elements can be made visible or hidden based on the device accessing the site (phone, tablet, or desktop). There is also a javascript component you can use for making the page more interactive.
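
For example, a two-column row using the grid classes just described (my own minimal snippet, not taken from the Bootstrap docs):

<div class="row">
  <div class="span8">Main content (8 of 12 columns)</div>
  <div class="span3 offset1">Sidebar (3 columns, offset by 1)</div>
</div>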

If you download bootstrap, you get a collection of files to choose from. There's js/bootstrap.js, img/glyphicons-halflings.png, img/glyphicons-halflings-white.png, css/bootstrap.css, and css/bootstrap-responsive.css. There are also compressed .min versions of the javascript and css files. You can read further about the responsive version of the css, or how to use the icons.

Normally, one would take these downloaded files and put them into their own web application directory tree. However, there is a better way. Unless you are planning to use this on an Intranet with limited Internet access, you can use a copy of these files hosted on a "content delivery network" (CDN). A good example of this is the jQuery library hosted on Google's CDN. Google has a number of hosted libraries on their network. This has several advantages, one of which is caching. If everyone is pointing at a common hosted library, that library gets cached on the end-user's machine instead of being reloaded on every site that uses it.

While bootstrap is not hosted on google, there is another CDN running on CloudFlare called cdnjs that provides a lot of the "less popular" frameworks, including bootstrap. Here are the URLs to the most current version of the bootstrap files (they currently host versions 2.0.0 through 2.1.1).

  • http://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/2.1.1/css/bootstrap.css
  • http://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/2.1.1/css/bootstrap.min.css
  • http://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/2.1.1/css/bootstrap-responsive.css
  • http://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/2.1.1/css/bootstrap-responsive.min.css
  • http://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/2.1.1/bootstrap.js
  • http://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/2.1.1/bootstrap.min.js
  • http://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/2.1.1/img/glyphicons-halflings-white.png
  • http://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/2.1.1/img/glyphicons-halflings.png

All one has to do in order to use these is to add the css and the javascript (optional) to their page. Since most CDNs support both http and https, you can leave the protocol identifier out.

<link rel="stylesheet" href="//cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/2.1.1/css/bootstrap.min.css">
<script src="//ajax.googleapis.com/ajax/libs/jquery/1.8.3/jquery.min.js"></script>
<script src="//cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/2.1.1/bootstrap.min.js"></script>

Here's an example you can use on your own.

<!DOCTYPE html>
<html lang="en">

<body>
<div class="container-fluid">
        <div class="row-fluid">
         <div class="span12 label label-info">
                <h1>Header</h1>
         </div>
        </div>

        <div class="row-fluid">
         <div class="span2">
                left column
                <i class="icon-arrow-left"></i>
         </div>
         <div class="span6">

                <p>center column

                <i class="icon-tasks"></i></p>

                <div class="hero-unit">
                 <h1>This is hero unit</h1>
                 <p>It is pretty emphasized</p>
                </div>

                <p>still in the center, but not so heroic</p>

         </div>
         <div class="span4">
                right column
                <i class="icon-arrow-right"></i>
         </div>
        </div>
</div><!-- end container -->

<!-- load everything at the end for fast content loading -->
<link rel="stylesheet" href="//cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/2.1.1/css/bootstrap.min.css">
<script src="//ajax.googleapis.com/ajax/libs/jquery/1.8.3/jquery.min.js"></script>
<script src="//cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/2.1.1/bootstrap.min.js"></script>
</body>
</html>

Finally, I found that NetDNA also hosts bootstrap on their CDN at [www.bootstrapcdn.com]. I would say that either host should be fairly reliable, as each is sponsored by the CDN it runs on. One advantage of this site is that it provides a lot more than basic bootstrap hosting, such as custom themes and fonts.

To use them, you can simply swap out your css and js scripts.

<link href="//netdna.bootstrapcdn.com/twitter-bootstrap/2.1.1/css/bootstrap-combined.min.css" rel="stylesheet">
<script src="//ajax.googleapis.com/ajax/libs/jquery/1.8.3/jquery.min.js"></script>
<script src="//netdna.bootstrapcdn.com/twitter-bootstrap/2.1.1/js/bootstrap.min.js"></script>

UPDATE: I added jquery into the above examples (loaded before bootstrap's javascript, since bootstrap depends on it) because several parts of bootstrap rely on it (such as the Modal dialogs).

20120918

python console import tip

Tags: python console, python, import, pysphere

This is a quick little python tip. When experimenting with python commands and modules, it's usually easiest to use the python console interactively, then create your programs later. The downside of this is that sometimes you have to do a bit of typing before you get to the specific command you want to try.

Imagine the following example:

johnh@puppet2:~$ python
Python 2.7.3 (default, Aug  1 2012, 05:14:39)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from pysphere import *
>>> server = VIServer()
>>> server.connect("vc1.example.com", "username", "password")
>>> print server.get_server_type(), server.get_api_version()
VMware vCenter Server 5.0

Here, I had to type in 3 lines, including my password in plaintext, to test out querying the server. I can't demonstrate this live, because then I would reveal my password. Well, last week I made a [test1.py] file that reads a yaml configuration file and runs the commands I just did. Here's the smart bit: I can import that file directly into the python console. Once it imports, it runs each python command and leaves me in the console, ready to query the system again. The only caveat is that my "server" variable is now part of the test1 module as "test1.server".

johnh@puppet2:~$ python
Python 2.7.3 (default, Aug  1 2012, 05:14:39)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import test1
VMware vCenter Server 5.0
puppet1-centos6 10.100.0.206
>>> test1.server.is_connected()
True
>>> vmlist = test1.server.get_registered_vms()
>>> for v in vmlist:
...     print v
...
[iscsi-244gb-raid10] rhel-puppet1/rhel-puppet1.vmx  
[iscsi-244gb-raid10] puppet2-ubuntu 12.04LTS server/puppet2-ubuntu 12.04LTS server.vmx
[iscsi-488gb-raid10] puppet3-solaris11/puppet3-solaris11.vmx
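
If the test1. prefix gets tedious, a plain Python import can pull the variable straight into the console's namespace (nothing pysphere-specific here):

>>> from test1 import server
>>> server.is_connected()
True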

pysphere: VMWare in Python

Tags: api, python, programming, vmware, pysphere

I do a good bit of work with VMWare vSphere and I've been wanting to
work more with their API. Everything (except the console) that you can
do in the vSphere client, you should be able to do through a web-based
API. Unfortunately, it seems that VMWare does not provide an SDK for
Python, my current language of choice. I could work in Perl or Java, but
I want to develop a web application, which I don't want to do in Perl.
Fortunately, I found pysphere, a fairly active project implementing the
VI API in Python. It might not fully implement the API, but it looks
relatively stable and easy to work with. Plus, if I find any
functionality missing, I can extend the class directly.

I followed their Getting Started page to get it installed and get
connected, but I didn't like having my password right there in my
working code. This was easily resolved by installing PyYaml and creating
a config.yaml file. Then, it was just a matter of following along with
the examples to make a good test script.

My config.yaml:


server: esxi1.example.com
user: john
pass: password

My test.py:

#!/usr/bin/python

import yaml
from pysphere import *

# read connection settings from config.yaml
f = open('config.yaml')
config = yaml.load(f)
f.close()

# connect and show what we're talking to
server = VIServer()
server.connect(config["server"], config["user"], config["pass"])

print server.get_server_type(), server.get_api_version()

# look up a VM by name and print a couple of its properties
vm1 = server.get_vm_by_name("puppet1-centos6")
print vm1.get_property('name'), vm1.get_property('ip_address')

And does it work?

$ ./test.py
VMware vCenter Server 5.0
puppet1-centos6 10.100.0.206

I was even able to go so far as cloning a vm (vm2 = vm1.clone('new vm'))
and can already see massive possibilities with this library in its
current state. The API can be queried much like a simple database, and
objects acted upon with simple statements. Operations like my vm clone
can be set up as a task and run asynchronously. I could easily see
integrating this with something like tornado, twisted, or even
cyclone to make a non-blocking web framework.
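
As a rough sketch of that asynchronous style (pysphere exposes a sync_run flag on operations like clone; the exact states and timeout below are my assumptions, so treat this as untested):

# start the clone without blocking; a task object comes back
task = vm1.clone('new vm', sync_run=False)
# poll the task until it finishes (or errors out)
status = task.wait_for_state(['success', 'error'], timeout=600)
print status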

Upgrade Redmine

Currently, I have Redmine 1.3.3 installed via the ondrej/redmine PPA. I
have been wanting to upgrade to the 2.x series of redmine, but no PPA
currently exists for it. Redmine is officially provided by Ubuntu, but
the version for Precise is 1.3.2, and Ondřej's PPA is on 1.4.3. While I
usually prefer to have my software installation and updates handled by
packages, it looks like to get to the 2.x series, I'll have to go back
to source.

I will be following the official upgrade guide closely, but with a
few variations.

  1. The apt-get/ppa version uses multiple file locations for source code
    and configuration. I'll have to consolidate to one place.
  2. My ruby and passenger modules were installed and modified to work
    with the ppa version of redmine. Adjustments will be needed.

My ruby version is 1.8.7 (1.8.7 min), rails 2.3.14 (2.3.6 min) and gem
1.8.15 (1.8 min). Already having the minimum requirements makes this a
bit easier.

After performing a mysql backup (hint: database.yml is in
/etc/redmine/default), I downloaded redmine to /usr/local/redmine-2.0. I
also decided to stop Apache so that Passenger wouldn't be running the
existing redmine instance. If I had other sites running on this server,
I would have disabled this virtual host or put up a maintenance page.

cp /etc/redmine/default/database.yml /usr/local/redmine-2.0/config
cp /var/lib/redmine/default/files/* /usr/local/redmine-2.0/files

I didn't have any plugins, but if I did they would either be in
/usr/share/redmine/vendor/plugins or /usr/share/redmine/lib/plugins. I
do intend to install a couple plugins when I get into 2.x though.

I found in step 3 that the rake commands didn't work. This is probably
because I wasn't working from an existing rails directory. I went to the
Redmine Installer page, which gave me the answer. "Since 1.4.0,
Redmine uses Bundler to manage gems dependencies. You need to install
Bundler first.".

gem install bundler
# run the next command from /usr/local/redmine-2.0
# it reads the file "Gemfile"
bundle install --without development test

I ran into an error when bundle was installing json 1.7.4:

/usr/bin/ruby1.8 extconf.rb
extconf.rb:1:in `require': no such file to load -- mkmf (LoadError)
from extconf.rb:1

According to an about.com page, I need build-essential,
libopenssl-ruby, and ruby1.8-dev installed. The one I was missing was
ruby1.8-dev. This is easily fixed with an apt-get install ruby1.8-dev.

I had to install the following other packages for certain gems. The
Gemfile includes items for postgresql and sqlite, even if you don't use
them. The install guide lets you know that you can skip these with the
--without option. You would just add "pg sqlite rmagick" to the end of
your bundle install line, as shown after the list.

  • json: build-essential, libopenssl-ruby, and ruby1.8-dev
  • mysql: libmysqlclient-dev
  • pg: libpq-dev (alternatively: add pg to the --without list)
  • rmagick: libmagickcore-dev, libmagickwand-dev (alternatively: add
    rmagick to the --without list)
  • sqlite: libsqlite3-dev
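
Putting that together, a bundle line that skips the database and imaging gems I don't use looks like this:

bundle install --without development test pg sqlite rmagick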

Once Bundler and all the required gems were installed, we switch back
to the Upgrade Guide to update our session store and migrate the
database. I had no plugins, so I'm skipping that step.

Let's start this locally before we mess with passenger or apache (be
sure to allow port 3000 via iptables or ufw).

rake generate_secret_token
rake db:migrate RAILS_ENV=production 
# unnecessary, as this is a new directory, but why not clean up?
rake tmp:cache:clear
rake tmp:sessions:clear
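
Then, to actually serve it on port 3000, the install guide's test command is (assuming the Redmine 2.0 script layout):

ruby script/rails server webrick -e production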

This worked without a hitch for me. Now on to my passenger setup. I
already had this configured and installed previously, so all I had to
do was change my VirtualHost directory.

<VirtualHost *:80>
    ServerName projects.example.com
    DocumentRoot /usr/local/redmine-2.0/public
    RailsSpawnMethod smart
    # Keep the application instances alive longer. Default is 300 (seconds)
    PassengerPoolIdleTime 1000
    RailsAppSpawnerIdleTime 0
    RailsFrameworkSpawnerIdleTime 0
    PassengerMaxRequests 5000
    PassengerStatThrottleRate 5

    <Directory /usr/local/redmine-2.0/public>
        AllowOverride all
        Options -MultiViews
    </Directory>
</VirtualHost>

I did have to change a few permissions (all files were installed as
owned by root):

chgrp -R www-data config
chown -R www-data files
chown -R www-data log

Markdown Blogging

I have recently started the process of migrating my website over to
blogger.com. One of the main reasons for this was that in my last
server move, I had broken my Movable Type installation, and found myself
too busy to fix it. I found I didn't want to spend my time fixing and
updating blogging software. I wanted to work on my projects, write them
up, and post them. It was time to move my content to an existing
platform that handled the back end. I looked at a few, and decided
blogger.com would be as good as any other service.

It only took a short time to set up a blog, point a CNAME at it, and
then to import my existing posts. When I started creating some new
posts, I immediately ran into some limitations.

  1. You used to be able to edit permalinks on blogger. Now, you can only
    do that before you publish. The only way to change a permalink after
    publishing is to create a new post with the desired permalink and
    delete the old one.
  2. Blogger has no built in formatting for code blocks. So if I want to
    show a config file, source code, or terminal session log, I have to
    fiddle with changing fonts, size, and "blockquote" to get it
    presentable. Even then, you run the risk of strange formatting of
    your raw text.

I found a solution that other bloggers use called SyntaxHighlighter.
This is a combination of javascript and css code that takes text within
your <pre> tags and gives you nice looking blocks of code, highlighted
and (optionally) with line numbers. The catch is that your pre tags need
to have a class name, along with the language (perl/c/bash/text) "brush"
to use. If you go with pre tags, you have to change any angle brackets
to their HTML-escaped equivalents of &lt; and &gt;. They have a
work-around using SCRIPT/CDATA, but it takes some getting used to.
Adding this to your blog only requires a few steps.

I rather liked syntaxhighlighter, but it still seemed like I had to do a
lot of manual work with the code. Also, I had to select the brush each
time. Couldn't it guess? Notepad++ and some others will guess at what
language you're using and highlight accordingly. I found something
called prettify that does just that. You only need to load one js
file and one css file. Prettify works off of either <pre> or <code>
tags and has similar limitations to SyntaxHighlighter regarding html
tags. However, it has the advantage of being able to guess the language
automatically.

Being able to use this code made my posts look much nicer, but the
entire process got me thinking. The way I "document" most of my projects
typically involves using a notepad editor like Geany or
Notepad++. As I work, I add notes, copy in source code or shell
commands, and do everything in a plain text editor. Later, I add
commentary and clean up the document. I take this and paste it into the
WYSIWYG editor on blogger. Finally, I have to keep switching between
compose and html mode to get my text looking suitable. There are too
many steps for me to want to do this consistently. All I really want to
do is take my text file, add a little formatting in a few spots, a few
hyperlinks in others, and post it.

Enter markdown. Markdown is a text-to-HTML conversion tool for web
writers. Markdown allows you to write using an easy-to-read,
easy-to-write plain text format, then convert it to structurally valid
XHTML (or HTML). I have used this before, but didn't pay it close
enough attention. It's used on github and reddit, and there are plugins
for it in dokuwiki and redmine. The idea is you write in text, adding
formatting using the markdown syntax. This format is both human readable
and machine readable. When read by the appropriate library, clean html
is generated. It also has a feature for wrapping blocks of code inside
of <pre><code> tags and html-escaping any html inside of those tags.

Within the Markdown project is a page called "dingus", which means
"placeholder". You can paste your markdown text into one textarea and
get the generated html plus a preview back. I tested pasting that
generated html into Blogger's HTML box and it seems to work perfectly
fine. What this means is that I can type up my documentation completely
within my text editor of choice, save it locally, and then generate my
html code to paste into blogger.

Some of you may have realized that the <pre> tags coming out of the
dingus are missing that prettyprint class name. Well, I could copy the
generated html and do a search and replace of <pre> with
<pre class="prettyprint linenums"> before pasting it, but that's adding
more steps. Instead, I sought to make my own dingus that does this
automatically. I found that there is an extension of markdown called
Markdown Extra written in PHP. Extra adds a few features such as simple
tables, but remains consistent with original Markdown formatting. Using
that library, I was able to create my own dingus rather easily and alter
the <pre> tag with one line of code:
$render2 = str_replace("<pre>", "<pre class=\"prettyprint linenums\">", $render);
In my experimentation, I made a parser that reads a text file and
outputs html, and three dingus parsers. Dingus1 does a straightforward
conversion of markdown extra to html. Dingus2 and 3 provide the class
names for prettified code, with #3 going the extra step of applying
stylesheets for the preview.

With this setup, I can quickly paste in my text document and pull html
code to paste into blogger.com's html edit box. With some more research,
I can modify the dingus to interact with blogger's API and post on my
behalf. There are also some WYSIWYM live editors that show you an
instant render of your markdown as you type (you type in a textarea
while your html renders in a nearby div). This would be a good way to do
some tweaking to the markdown text before posting the html to the web.
My next plans are to make a better dingus, possibly with a live preview
and a "post to blogger" option.
Some other links:

  • http://balupton.github.com/jquery-syntaxhighlighter/demo/
  • http://code.google.com/p/pagedown/wiki/PageDown
  • http://markitup.jaysalvat.com/examples/markdown/

Gate One supervisor script

Yesterday, I set up gateone to run as a non-root user. I also spent
some time looking at potential init scripts for starting and stopping
it. The gateone project does not currently provide any init scripts,
but this is planned for the future ([Issue #47]). I tried to use one
of the scripts in that thread, but I wasn't really pleased with them.
The big issue is that gateone.py doesn't fork. However, I believe there
is a better solution.

supervisord is a python tool designed to control and monitor
non-daemonizing processes. As Gate One is a foreground-only process, it
seems particularly suited to this task - more so than writing my own
script in python or using daemontools.

Installation can be done with python-pip or easy_install. On newer
systems, pip is recommended.

sudo pip install supervisor

On Ubuntu, pip installs supervisord to /usr/local/bin. By default,
/usr/local/bin is not in root's path, so it makes sense (to me at least)
to create symlinks to /usr/sbin.

johnh@puppet2:~$ ls /usr/local/bin
echo_supervisord_conf  pidproxy  supervisorctl  supervisord
johnh@puppet2:~$ sudo ln -s /usr/local/bin/supervisord /usr/sbin
johnh@puppet2:~$ sudo ln -s /usr/local/bin/supervisorctl /usr/sbin

Now, we need to create a configuration file. Supervisord has a utility
to generate a sample one.

echo_supervisord_conf  > supervisord.conf

To get started, we can use the sample configuration and just add a
couple lines to the bottom for gateone.

 [program:gateone]
 command=/opt/gateone/gateone.py
 directory=/opt/gateone
 ;user=johnh   ; Default is root. Add a user= to setuid

Now, copy supervisord.conf to /etc/supervisord.conf and start
supervisord. Make sure gateone.py is not currently running. Then we'll
run supervisorctl to test things out.

johnh@puppet2:~$ sudo cp supervisord.conf /etc
johnh@puppet2:~$ sudo supervisord
johnh@puppet2:~$ sudo supervisorctl status
gateone                          RUNNING    pid 9549, uptime 0:00:05
johnh@puppet2:~$ ps ax | grep gateone
 9549 ?        Sl     0:00 python /opt/gateone/gateone.py
johnh@puppet2:~$ sudo supervisorctl stop gateone
gateone: stopped
johnh@puppet2:~$ ps ax | grep gateone
 9605 ?        Ss     0:00 dtach -c /opt/gateone/tmp/gateone/../dtach_3 -E -z -r none /opt/gateone/plugins/ssh/scripts/ssh_connect.py -S /tmp/gateone/.../%SHORT_SOCKET% --sshfp -a -oUserKnownHostsFile=/opt/gateone/users/user1@puppet2/ssh/known_hosts
 9606 pts/3    Ss+    0:00 python /opt/gateone/plugins/ssh/scripts/ssh_connect.py -S /tmp/gateone/.../%SHORT_SOCKET% --sshfp -a -oUserKnownHostsFile=/opt/gateone/users/user1@puppet2/ssh/known_hosts

In this example, we see that gateone.py is started and stopped by
supervisorctl, but because we have dtach enabled, our sessions are still
in place. If we restart gateone.py, we can connect to it again and have
our sessions resumed. While we could probably configure supervisord to
kill these terminals, I believe we'd normally want to keep them running.
The few times I would want to stop those terminals would be (a) manually
reconfiguring/troubleshooting gateone, (b) updating software, or (c)
rebooting the server. For (a) and (b), running the command "gateone.py
-kill" will kill those terminals. For a server shutdown or reboot, the
act of shutting down the OS will kill these terminals.

Finally, we need a way to start and stop supervisord itself.
Fortunately, the supervisord project has provided a number of init
scripts. I was able to use the Debian script in Ubuntu with only a few
minor changes.

  1. I had symlinked supervisord and supervisorctl to /usr/sbin. The
    script expects them in /usr/bin (but even says that /usr/sbin is a
    better location). I had to change /usr/bin to /usr/sbin.
    Alternatively, you can symlink the files into /usr/bin
  2. I added a status option that runs $SUPERVISORCTL status
  3. If you started supervisord manually, you must shut it down and start
    it with the script. The script won't be able to stop supervisord
    unless /var/run/supervisord.pid is current.

Here is my complete init script for Ubuntu:

#!/bin/sh
### BEGIN INIT INFO
# Provides:          supervisord
# Required-Start:    $local_fs $remote_fs $networking
# Required-Stop:     $local_fs $remote_fs $networking
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Starts supervisord - see http://supervisord.org
# Description:       Starts and stops supervisord as needed 
### END INIT INFO
# Author: Leonard Norrgard 
# Version 1.0-alpha
# Based on the /etc/init.d/skeleton script in Debian.
# Please note: This script is not yet well tested. What little testing
# that actually was done was only on supervisor 2.2b1.
# Do NOT "set -e"
# PATH should only include /usr/* if it runs after the mountnfs.sh script
PATH=/sbin:/usr/sbin:/bin:/usr/bin
DESC="Run a set of applications as daemons."
NAME=supervisord
# Supervisord is installed in /usr/bin by default, but /usr/sbin would 
# make more sense
DAEMON=/usr/sbin/$NAME   
SUPERVISORCTL=/usr/sbin/supervisorctl
PIDFILE=/var/run/$NAME.pid
DAEMON_ARGS="--pidfile ${PIDFILE}"
SCRIPTNAME=/etc/init.d/$NAME
# Exit if the package is not installed
[ -x "$DAEMON" ] || exit 0
# Read configuration variable file if it is present
[ -r /etc/default/$NAME ] && . /etc/default/$NAME
# Load the VERBOSE setting and other rcS variables
. /lib/init/vars.sh
# Define LSB log_* functions.
# Depend on lsb-base (>= 3.0-6) to ensure that this file is present.
. /lib/lsb/init-functions
#
# Function that starts the daemon/service
#
do_start()
{
        # Return
        #   0 if daemon has been started
        #   1 if daemon was already running
        #   2 if daemon could not be started
        [ -e $PIDFILE ] && return 1
        start-stop-daemon --start --quiet --pidfile $PIDFILE --exec $DAEMON -- \
                $DAEMON_ARGS \
                || return 2
        # Add code here, if necessary, that waits for the process to be ready
        # to handle requests from services started subsequently which depend
        # on this one.  As a last resort, sleep for some time.
}
#
# Function that stops the daemon/service
#
do_stop()
{
        # Return
        #   0 if daemon has been stopped
        #   1 if daemon was already stopped
        #   2 if daemon could not be stopped
        #   other if a failure occurred
        [ -e $PIDFILE ] || return 1
        # Stop all processes under supervisord control.
        $SUPERVISORCTL stop all
        start-stop-daemon --stop --quiet --retry=TERM/30/KILL/5 --pidfile $PIDFILE \
             --name $NAME
        RETVAL="$?"
        [ "$RETVAL" = 2 ] && return 2
        # Wait for children to finish too if this is a daemon that forks
        # and if the daemon is only ever run from this initscript.
        # If the above conditions are not satisfied then add some other code
        # that waits for the process to drop all resources that could be
        # needed by services started subsequently.  A last resort is to
        # sleep for some time.
        start-stop-daemon --stop --quiet --oknodo --retry=0/30/KILL/5 --exec $DAEMON
        [ "$?" = 2 ] && return 2
        # Many daemons don't delete their pidfiles when they exit.
        rm -f $PIDFILE
        return "$RETVAL"
}
#
# Function that sends a SIGHUP to the daemon/service
#
do_reload() {
        #
        # If the daemon can reload its configuration without
        # restarting (for example, when it is sent a SIGHUP),
        # then implement that here.
        #
        start-stop-daemon --stop --signal 1 --quiet --pidfile $PIDFILE --name $NAME
        return 0
}
case "$1" in
  start)
        [ "$VERBOSE" != no ] && log_daemon_msg "Starting $DESC" "$NAME"
        do_start
        case "$?" in
                0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
                2) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
        esac
        ;;
  stop)
        [ "$VERBOSE" != no ] && log_daemon_msg "Stopping $DESC" "$NAME"
        do_stop
        case "$?" in
                0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
                2) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
        esac
        ;;
  #reload|force-reload)
        #
        # If do_reload() is not implemented then leave this commented out
        # and leave 'force-reload' as an alias for 'restart'.
        #
        #log_daemon_msg "Reloading $DESC" "$NAME"
        #do_reload
        #log_end_msg $?
        #;;
  restart|force-reload)
        #
        # If the "reload" option is implemented then remove the
        # 'force-reload' alias
        #
        log_daemon_msg "Restarting $DESC" "$NAME"
        do_stop
        case "$?" in
          0|1)
                do_start
                case "$?" in
                        0) log_end_msg 0 ;;
                        1) log_end_msg 1 ;; # Old process is still running
                        *) log_end_msg 1 ;; # Failed to start
                esac
                ;;
          *)
                # Failed to stop
                log_end_msg 1
                ;;
        esac
        ;;
  status)
        $SUPERVISORCTL status
        RETVAL=$?
        ;;
  *)
        #echo "Usage: $SCRIPTNAME {start|stop|restart|reload|force-reload}" >&2
        echo "Usage: $SCRIPTNAME {start|stop|restart|force-reload|status}" >&2
        exit 3
        ;;
esac

And here is a complete copy of my supervisord.conf file:

[unix_http_server]
file=/tmp/supervisor.sock   ; (the path to the socket file)
[supervisord]
logfile=/tmp/supervisord.log ; (main log file;default $CWD/supervisord.log)
logfile_maxbytes=50MB        ; (max main logfile bytes b4 rotation;default 50MB)
logfile_backups=10           ; (num of main logfile rotation backups;default 10)
loglevel=info                ; (log level;default info; others: debug,warn,trace)
pidfile=/tmp/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
nodaemon=false               ; (start in foreground if true;default false)
minfds=1024                  ; (min. avail startup file descriptors;default 1024)
minprocs=200                 ; (min. avail process descriptors;default 200)
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL  for a unix socket
[program:gateone]
command=/opt/gateone/gateone.py
directory=/opt/gateone
stdout_logfile=/opt/gateone/logs/supervisor.log
user=johnh
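
With the init script saved as /etc/init.d/supervisord and marked executable, day-to-day control looks like this (update-rc.d registers it to start at boot):

sudo update-rc.d supervisord defaults
sudo /etc/init.d/supervisord start
sudo /etc/init.d/supervisord status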

Exploring GateOne Browser SSH terminal

I came across a program called Gate One by LiftOff Software that
just amazed me. This is an open-source, web-based ssh terminal. It is
capable of multiple users, sessions, and bookmarks. I've tried a number
of AJAX terminals or Java applet based ones in the past. The javascript
ones usually did not have very good terminal emulation, while the Java
apps worked, but worked just like a local desktop app (making its own
connection to port 22). Gate One uses WebSockets, allowing for full
duplex communication through your web browser over the same port 80 or
443 used to serve up the web page.

Installation

Gate One is a python application using the Tornado framework. As
such, it runs independently of an existing web server and handles
connections from browsers internally. If you already have a web server
running on your system, you will need to tell Gate One to use a
different IP or a different port.

Installation using pre-built binaries or the source is fairly
straightforward and detailed in the documentation.

The installer creates a directory of /opt/gateone and places all
necessary files there. You can run it by changing to that directory and
running gateone.py as root.

johnh@puppet2:/opt/gateone$ sudo ./gateone.py
[W 120801 13:52:06 terminal:166] Could not import the Python Imaging Library (PIL) so images will not be displayed in the terminal
[W 120801 13:52:06 gateone:2232] dtach command not found.  dtach support has been disabled.
[I 120801 13:52:06 gateone:1800] No authentication method configured. All users will be ANONYMOUS
[I 120801 13:52:06 gateone:1876] Loaded plugins: bookmarks, help, logging, logging_plugin, notice, playback, ssh
[I 120801 13:52:06 gateone:2329] Listening on https://*:443/

At this point, gateone is running in the foreground and you can watch
connections occur and errors appear. Pressing Ctrl-C stops it. If you
connect to gateone using your web browser, you are logged in as user
ANONYMOUS and can connect to any ssh host, either localhost or something
remote.

If you edit /opt/gateone/server.conf, you can change authentication to
"pam" or "google". Using pam will perform a Basic HTTP style
authentication requiring a system-level username and password. Using
google will log you in with your google account. Both of these "just
work" without complicated setup.

Running as a Non-Root User

Before I put something like this in production, I wanted to apply some
additional security. First off, I want to see if I can get this to run
as a non-root user.

Since gateone initially ran as root, there are a few things to address:

  1. It has files owned by root.
  2. Only UID 0 can open ports below 1024.
  3. gateone may need permission to write to system directories.

To solve the first one, I chowned the /opt/gateone directory to my
username. In the future, I'll want to run it under its own user, but
I'll use mine for now for simplicity. To solve the second and third, I
edited server.conf.

johnh@puppet2:/opt/gateone$ sudo chown -R johnh:johnh .
johnh@puppet2:/opt/gateone$ vi server.conf
# change/add the following lines appropriately
port = 2443
session_dir = "/opt/gateone/tmp/gateone"
pid_file = "/opt/gateone/tmp/gateone.pid"
uid = 1000
gid = 1000
johnh@puppet2:/opt/gateone$ ./gateone.py
[W 120801 14:06:01 terminal:166] Could not import the Python Imaging Library (PIL) so images will not be displayed in the terminal
[W 120801 14:06:01 gateone:2232] dtach command not found.  dtach support has been disabled.
[I 120801 14:06:01 gateone:1802] No authentication method configured. All users will be ANONYMOUS
[I 120801 14:06:01 gateone:1876] Loaded plugins: bookmarks, help, logging, logging_plugin, notice, playback, ssh
[I 120801 14:06:01 gateone:2329] Listening on https://*:2443/

Authentication

Running as a lower uid, you can use authentication of None or "google"
without issue. If you use "pam", you discover you can only log in with
the username that gateone is running under. If you are the only intended
user of the service, this may not matter, but it becomes a problem if
you want to allow other users. If you are fine with running as root or
using Google as your authentication provider, you can skip this next
step.

Fortunately, pam is highly configurable. You aren't required to
authenticate against shadow passwords. You can also authenticate against
db4 files with pam_userdb, mysql, or even htpasswd files. To start off,
I'm going to use htpasswd files. Note that Ubuntu doesn't provide
pam_pwdfile.so by default. You need to install libpam-pwdfile ("sudo
apt-get install libpam-pwdfile").

Note - in testing, I discovered gateone expects crypt-style hashes while htpasswd defaults to MD5. Use -d to switch to crypt hashing.

johnh@puppet2:/opt/gateone$ htpasswd -c -d users.passwd user1
New password:
Re-type new password:
Adding password for user user1
johnh@puppet2:/opt/gateone$ cat users.passwd
user1:KKEPyZtUf9sadf9

Create a PAM service file called gateone under /etc/pam.d:

johnh@puppet2:/opt/gateone$ cat /etc/pam.d/gateone
#%PAM-1.0
# Login using a htpasswd file
@include common-session
auth    required pam_pwdfile.so pwdfile /opt/gateone/users.passwd
account required pam_permit.so

Modify server.conf to use pam and pam_service of gateone:

auth = "pam"
pam_service = "gateone"

Now start gateone and log in.

johnh@puppet2:~/g1/gateone$ ./gateone.py
[W 120801 14:59:16 terminal:168] Could not import the Python Imaging Library (PIL) so images will not be displayed in the terminal
[W 120801 14:59:16 gateone:2577] dtach command not found.  dtach support has been disabled.
[I 120801 14:59:16 gateone:2598] Connections to this server will be allowed from the following origins: 'http://localhost https://localhost http://127.0.0.1 https://127.0.0.1 https://puppet2 https://127.0.1.1 https://puppet2:2443'
[I 120801 14:59:16 gateone:2023] Using pam authentication
[I 120801 14:59:16 gateone:2101] Loaded plugins: bookmarks, help, logging, logging_plugin, mobile, notice, playback, ssh
[I 120801 14:59:16 gateone:2706] Listening on https://*:2443/
[I 120801 14:59:16 gateone:2710] Process running with pid 32591
[I 120801 14:59:17 gateone:949] WebSocket opened (user1@gateone).

One additional nice feature with authentication enabled is the ability
to resume sessions - even across different computers or browsers.

Reverse Proxy

(I failed on this part, but felt it was worth recording)

Once I got it working in single user mode, I wanted to go ahead and set
this up under a reverse proxy under Apache. This would allow me to
integrate it into my existing web server under a sub-directory.

First, I edited server.conf to use a URL prefix of /g1/

Second, I tried setting up a ReverseProxy in Apache.

# GateOne
SSLProxyEngine On
ProxyPass /g1/ https://localhost:2443/g1/
ProxyPassReverse /g1/ https://localhost:2443/g1/
ProxyPassReverseCookieDomain localhost localhost
ProxyPassReverseCookiePath / /g1/

This almost worked. I had no errors, but the resulting page was
unreadable. However, at the bottom was a clue. "The WebSocket connection
was closed. Will attempt to reconnect every 5 seconds... NOTE: Some web
proxies do not work properly with WebSockets." The problem was Apache
not properly proxying my websocket connection. People have managed to
get this working under nginx, but not Apache.

Searching for a solution led me to a similar question on ServerFault, an
apache-websocket module on github, and a websocket tcp proxy based on
that module.

  • http://serverfault.com/questions/290121/configuring-apache2-to-proxy-websocket
  • https://github.com/disconnect/apache-websocket
  • http://blog.alex.org.uk/2012/02/16/using-apache-websocket-to-proxy-tcp-connection/

In order to get this to work, I'll need to download and compile some
code. The apxs command requires the apache2-prefork-dev package in
Debian/Ubuntu. Install it with "sudo apt-get install
apache2-prefork-dev".

Now we are ready to download the code and install the module:

johnh@puppet2:~$ git clone https://github.com/disconnect/apache-websocket.git
Cloning into 'apache-websocket'... done
johnh@puppet2:~$ wget http://blog.alex.org.uk/wp-uploads/mod_websocket_tcp_proxy.tar.gz
johnh@puppet2:~$ cd apache-websocket
johnh@puppet2:~/apache-websocket$ sudo apxs2 -i -a -c mod_websocket.c
*snip*
johnh@puppet2:~/apache-websocket$ sudo apxs2 -i -a -c mod_websocket_draft76.c
*snip*
johnh@puppet2:~/apache-websocket$ cd examples
johnh@puppet2:~/apache-websocket/examples$ tar -xzvf ../../mod_websocket_tcp_proxy.tar.gz
mod_websocket_tcp_proxy.c
johnh@puppet2:~/apache-websocket/examples$ sudo apxs2 -c -i -a -I.. mod_websocket_tcp_proxy.c
*snip*
chmod 644 /usr/lib/apache2/modules/mod_websocket_tcp_proxy.so
[preparing module `websocket_tcp_proxy' in /etc/apache2/mods-available/websocket_tcp_proxy.load]
Enabling module websocket_tcp_proxy.
To activate the new configuration, you need to run:
  service apache2 restart
johnh@puppet2:~/apache-websocket/examples$

Before we restart, I want to remove my Proxy lines and replace them with
the mod_websocket_tcp_proxy lines.

<Location /g1/>
    SetHandler websocket-handler
    WebSocketHandler /usr/lib/apache2/modules/mod_websocket_tcp_proxy.so tcp_proxy_init
    WebSocketTcpProxyBase64 on
    WebSocketTcpProxyHost 127.0.0.1
    WebSocketTcpProxyPort 2443
    WebSocketTcpProxyProtocol base64
</Location>

Despite all this, I was still unable to get this to work. I even
attempted using the web root (/) as my location. If the Location matches
and your HTTP request is handled by mod_websocket, you get a 404. If
you use Proxy, then your websocket request is handled by mod_proxy;
mod_proxy wins out over Location matches. Perhaps you can modify the
gateone code to have one URL for the web interface and one for
websockets (or maybe it's already in place and we just need to know),
but I don't see a way at this time to get this working under Apache. I
may be able to work with the gateone author and the
mod_websocket_tcp_proxy.c author to come up with a solution. Or I
could try installing nginx. In the meantime, I can continue to run Gate
One as a non-root user on a non-standard port. Alternatively, I could
find a way to redirect port 443 to 2443, as sketched below.
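
One way to do that redirect with plain iptables (a hedged sketch, not something from the Gate One docs):

# send incoming HTTPS traffic to the port gateone actually listens on
sudo iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 2443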

Puppet Dashboard and selinux

Tags: puppet selinux

Once I got a rough handle on setting up Puppet, I decided to get Puppet
Dashboard working on my puppetmaster (a Centos 6 server) to have a sort
of web interface to view puppet statuses. Since I already have the
puppetlabs repositories setup from when I installed puppet, this was
almost as simple as running "yum install puppet-dashboard".

You then go to /usr/share/puppet-dashboard and follow the install guide
to get the database working. Then you can run it locally using the
"sudo -u puppet-dashboard ./script/server -e production" command or by
starting the service (service puppet-dashboard start). I recommend
running it manually the first few times to make sure everything is
working.

At this point, I was able to browse to http://puppetmaster:3000/ and
view a nice looking dashboard with no hosts checked in. I then made sure
to add the following to puppetmaster's puppet.conf [master] section and
restart puppet.

reports = http, store
reporturl = http://localhost:3000/reports/upload

I performed a "puppet agent --test" on one of my puppets, and here is
where I ran into trouble. Everything appeared to work, but no report or
"pending task" showed up in the Dashboard. I ran the Dashboard locally
so I could see the http request coming in. No request. I double checked
my configuration, everything looked good.

Finally, I ran my puppetmaster in debug/no-daemonize mode.
(/usr/sbin/puppetmaster --debug --no-daemonize) so I could watch what
was happening. However, in this mode, it worked fine. I ran
/usr/sbin/puppetmaster without debugging and it still worked. The
reports would get submitted to dashboard if I ran puppetmaster directly,
but not if I started it with the init scripts.

I couldn't find any differences between how the init script was starting
puppetmaster vs me starting it manually. However, I did come across this
entry in /var/log/messages:

 Jul 25 10:15:16 puppetmasterj puppet-master[11988]: Compiled catalog for puppet2.lab in environment production in 1.16 seconds
 Jul 25 10:15:17 puppetmasterj puppet-master[11988]: Report processor failed: Permission denied - connect(2)

Which led me to the following entry in audit.log

 type=AVC msg=audit(1343225819.078:1582): avc:  denied  { name_connect } for  pid=11988 comm="puppetmasterd" dest=3000 scontext=unconfined_u:system_r:puppetmaster_t:s0 tcontext=system_u:object_r:ntop_port_t:s0 tclass=tcp_socket

This was an selinux issue. I quickly ascertained that turning enforcing
off in selinux allowed my reports to get through. I couldn't find anyone
else online encountering this issue. However, many people disable
selinux enforcing early on and I guess the cross-section of
puppet-dashboard users and those using selinux enforcing is somewhat
small.

How to fix this? There is a set of python programs called "audit2why"
and "audit2allow" as part of the policycoreutils-python package. These
tools will parse entries from the audit.log and either explain why an
action was denied or provide a solution. You can get these tools by
doing a "yum install policycoreutils-python".

Now we can use audit2allow to parse our audit.log error. You'll want to
run the tool, paste in your log entry, and then hit Ctrl+D on a blank
line.

 [root@puppetmasterj tmp]# audit2allow -m puppetmaster
 type=AVC msg=audit(1343232143.497:1617): avc:  denied  { name_connect } for  pid=12552 comm="puppetmasterd" dest=3000 scontext=unconfined_u:system_r:puppetmaster_t:s0 tcontext=system_u:object_r:ntop_port_t:s0 tclass=tcp_socket
 module puppetmaster 1.0;
 require {
     type puppetmaster_t;
     type ntop_port_t;
     class tcp_socket name_connect;
 }
 #============= puppetmaster_t ==============
 allow puppetmaster_t ntop_port_t:tcp_socket name_connect;

The above gives you a textual view of a module you can create to allow
puppetmaster to make an outbound connection. audit2allow will even
generate that module with the -M option.

 [root@puppetmasterj tmp]# audit2allow -M puppetmaster
 type=AVC msg=audit(1343232143.497:1617): avc:  denied  { name_connect } for  pid=12552 comm="puppetmasterd" dest=3000 scontext=unconfined_u:system_r:puppetmaster_t:s0 tcontext=system_u:object_r:ntop_port_t:s0 tclass=tcp_socket
 ******************** IMPORTANT ***********************
 To make this policy package active, execute:
 semodule -i puppetmaster.pp

That generated the module "puppetmaster.pp". Despite the pp extension,
this is not a puppet manifest, but an selinux binary module. Let's
install it.

 [root@puppetmasterj tmp]# semodule -i puppetmaster.pp
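
To double-check that the module actually loaded, semodule can list what is installed (the version shown is whatever was stamped into the module):

 [root@puppetmasterj tmp]# semodule -l | grep puppetmaster
 puppetmaster    1.0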

With that, puppetmaster can communicate with dashboard and reports are
flowing. The only remaining thing left to do is to file a bug report. As
it happened, someone had posted a bug report similar to this for
documentation on the puppetdb project. I decided to append to that
issue, but I suggested migrating the issue to the main puppet project.
Issue #15567.

Using puppet to install djbdns

This is a basic walkthrough of getting a slightly complex "step by step
to install" program like djbdns to install under puppet (in this case,
under Ubuntu 12.04). It shows building the manifest, testing it, and
some possible gotchas.

I am generally following the guide put together by Higher Logic[1], with
a few changes of my own.

Step 1: Installation

I use the dbndns fork of djbdns, which has a few patches that plain
djbdns lacks. In fact, the djbdns package in Debian/Ubuntu is a virtual
package that really installs dbndns. To install it normally, you would
type "sudo apt-get install dbndns". This would also install daemontools
and daemontools-run. However, we'll also need make and ucspi-tcp.

We're going to do this the puppet way. I'm assuming my puppet
configuration in in /etc/puppet, node manifests are in
/etc/puppet/nodes, and modules are in /etc/puppet/modules.

a. Create the dbndns module with a package definition to install

    sudo mkdir -p /etc/puppet/modules/dbndns/manifests
    sudo vi /etc/puppet/modules/dbndns/manifests/init.pp

    class dbndns {
        package {
            dbndns:
                ensure => present;
            ucspi-tcp:
                ensure => present;
            make:
                ensure => present;
        }
    }

b. Create a file for your node (ie: puppet2.example.net)

    sudo vi /etc/puppet/nodes/puppet2.example.net.pp

    node 'puppet2.lab.example.net' {
        include dbndns
    }

c. Test

Ok, to test on your puppet client, run "sudo puppet agent --test":

johnh@puppet2:~# sudo puppet agent --test
    info: Retrieving plugin
    info: Loading facts in /var/lib/puppet/lib/facter/facter_dot_d.rb
    info: Loading facts in /var/lib/puppet/lib/facter/root_home.rb
    info: Loading facts in /var/lib/puppet/lib/facter/puppet_vardir.rb
    info: Caching catalog for puppet2.lab.example.net
    info: Applying configuration version '1340213237'
    notice: /Stage[main]/Dbndns/Package[dbndns]/ensure: created
    notice: Finished catalog run in 3.39 seconds

Here we can see our dbndns package installed. But is it running? Well,
djbdns uses daemontools, which runs svscan, and some searching online
shows that in Ubuntu 12.04/Precise, this is now an upstart job. svscan
is not running. So let's make it run. Add the following to your init.pp
(within the module definition):

        service { "svscan":
                ensure  => "running",
                provider => "upstart",
                require => Package["dbndns"],
        }

Now back on puppet2, let's test it.

johnh@puppet2:~# sudo puppet agent --test
    info: Retrieving plugin
    info: Loading facts in /var/lib/puppet/lib/facter/facter_dot_d.rb
    info: Loading facts in /var/lib/puppet/lib/facter/root_home.rb
    info: Loading facts in /var/lib/puppet/lib/facter/puppet_vardir.rb
    info: Caching catalog for puppet2.lab.example.net
    info: Applying configuration version '1340213237'
    notice: /Stage[main]/Dbndns/Service[svscan]/ensure: ensure changed 'stopped' to 'running'
    notice: Finished catalog run in 0.47 seconds

We now told puppet to ensure that svscan is running. The "provider"
option tells it to use upstart instead of /etc/init.d/ scripts or the
service command. Also, we make sure that it doesn't attempt to start
svscan unless dbndns is already installed.

Now we have daemontools running, but we haven't got it start our tinydns
service yet. To do that, we need to create some users and configure the
service.

Step 2: Create users

Going back to our guide, our next step is to create users. We can do
that in puppet as well.

    # Users for the chroot jail
    adduser --no-create-home --disabled-login --shell /bin/false dnslog
    adduser --no-create-home --disabled-login --shell /bin/false tinydns
    adduser --no-create-home --disabled-login --shell /bin/false dnscache

So in our init.pp module file, we need to define our users:

    user { "dnslog":
        shell => "/bin/false",
        managehome => "no",
        ensure => "present",
    }

    user { "tinydns":
        shell => "/bin/false",
        managehome => "no",
        ensure => "present",
    }

    user { "dnscache":
        shell => "/bin/false",
        managehome => "no",
        ensure => "present",
    }

Back on puppet2, we can give that a test.

johnh@puppet2:~$ sudo puppet agent --test
    info: Retrieving plugin
    info: Loading facts in /var/lib/puppet/lib/facter/facter_dot_d.rb
    info: Loading facts in /var/lib/puppet/lib/facter/root_home.rb
    info: Loading facts in /var/lib/puppet/lib/facter/puppet_vardir.rb
    info: Caching catalog for puppet2.lab.example.net
    info: Applying configuration version '1340215757'
    notice: /Stage[main]/Dbndns/User[dnscache]/ensure: created
    notice: /Stage[main]/Dbndns/User[tinydns]/ensure: created
    notice: /Stage[main]/Dbndns/User[dnslog]/ensure: created
    notice: Finished catalog run in 0.86 seconds
    johnh@puppet2:~$ cat /etc/passwd | grep dns
    dnscache:x:1001:1001::/home/dnscache:/bin/false
    tinydns:x:1002:1002::/home/tinydns:/bin/false
    dnslog:x:1003:1003::/home/dnslog:/bin/false

So far, so good. Now we have to do the configuration, which will require
executing some commands.

Step 3 - Configuration

Our next steps are the following commands:

    tinydns-conf tinydns dnslog /etc/tinydns/ 1.2.3.4
    dnscache-conf dnscache dnslog /etc/dnscache 127.0.0.1
    cd /etc/dnscache; touch /etc/dnscache/root/ip/127.0.0
    mkdir /etc/service ; cd /etc/service ; ln -sf /etc/tinydns/ ; ln -sf /etc/dnscache

The first two commands create our service directories. Authoritative
tinydns is set to listen on 1.2.3.4 and dnscache is set to listen on
127.0.0.1. The 3rd command creates a file that restricts dnscache to
only respond to requests from IPs starting with 127.0.0. This isn't
strictly necessary, but the challenge is interesting.

What we want to do first is see if /etc/tinydns and /etc/dnscache exist
and if not, run the corresponding -conf program. We also need to know
the IP address. Fortunately, puppet provides this as the variable
"$ipaddress". Try running the "facter" command.
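
For example (the address shown is assumed to match this lab host):

    johnh@puppet2:~$ facter ipaddress
    10.100.0.178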

Puppet has a property called creates that is ideal here. If the file or
directory specified by creates does not exist, puppet will run the
associated command. Here are our new lines:

    exec { "configure-tinydns":
        command => "/usr/bin/tinydns-conf tinydns dnslog /etc/tinydns $ipaddress",
        creates => "/etc/tinydns",
        require => Package['dbndns'],
    }

    exec { "configure-dnscache":
        command => "/usr/bin/dnscache-conf dnscache dnslog /etc/dnscache 127.0.0.1",
        creates => "/etc/dnscache",
        require => Package['dbndns'],
    }

Those will configure tinydns and dnscache, and then we can restrict
dnscache:

    file { "/etc/dnscache/root/ip/127.0.0":
        ensure => "present",
        owner => "dnscache",
        require => Exec["configure-dnscache"],
    }

Then, we need to create the /etc/service directory and bring tinydns and
dnscache under svscan's control.

    file { "/etc/service":
        ensure => "directory",
        require => Package["dbndns"],
    }

    file { "/etc/service/tinydns":
        ensure => "link",
        target => "/etc/tinydns",
        require => [ File['/etc/service'], Exec["configure-tinydns"] ],
    }

    file { "/etc/service/dnscache":
        ensure => "link",
        target => "/etc/dnscache",
        require => [ File['/etc/service'], Exec["configure-dnscache"] ],
    }

And our tests:

johnh@puppet2:~$ sudo puppet agent --test
    info: Retrieving plugin
    info: Loading facts in /var/lib/puppet/lib/facter/facter_dot_d.rb
    info: Loading facts in /var/lib/puppet/lib/facter/root_home.rb
    info: Loading facts in /var/lib/puppet/lib/facter/puppet_vardir.rb
    info: Caching catalog for puppet2.lab.example.net
    info: Applying configuration version '1340218775'
    notice: /Stage[main]/Dbndns/Exec[configure-dnscache]/returns: executed successfully
    notice: /Stage[main]/Dbndns/File[/etc/dnscache/root/ip/127.0.0]/ensure: created
    notice: /Stage[main]/Dbndns/File[/etc/service/dnscache]/ensure: created
    notice: /Stage[main]/Dbndns/Exec[configure-tinydns]/returns: executed successfully
    notice: /Stage[main]/Dbndns/File[/etc/service/tinydns]/ensure: created
    notice: Finished catalog run in 0.59 seconds
    johnh@puppet2:~$ ls /etc/service/tinydns/root/
    add-alias  add-alias6  add-childns  add-host  add-host6  add-mx  add-ns  data  Makefile
    johnh@puppet2:~$ ps ax | grep supervise
     7932 ?        S      0:00 supervise dnscache
     7933 ?        S      0:00 supervise log
     7934 ?        S      0:00 supervise tinydns
     7935 ?        S      0:00 supervise log

Doing a dig www.example.net @localhost returns 192.0.43.10, so dnscache
works.

Now, let's check tinydns. No domains are configured yet, so let's put
example.com in there. Edit /etc/tinydns/root/data and put these lines in
it, substituting your own "public" IP address for 10.100.0.178. The "&"
line creates the NS record for example.com, the "Z" line provides the
SOA record, and the "+" line adds an A record for ns0 itself.

&example.com::ns0.example.com.:3600
Zexample.com:ns0.example.com.:hostmaster.example.com.:1188079131:16384:2048:1048576:2560:2560
+ns0.example.com:10.100.0.178:3600

Then "make" the data.cdb file:

cd /etc/tinydns/root ; sudo make

Now test:

johnh@puppet2:/etc/tinydns/root$ dig ns0.example.com @10.100.0.178

; <<>> DiG 9.8.1-P1 <<>> ns0.example.com @10.100.0.178
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 25433
    ;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 0
    ;; WARNING: recursion requested but not available


;; QUESTION SECTION:
    ;ns0.example.com.               IN      A

;; ANSWER SECTION:
    ns0.example.com.        3600    IN      A       10.100.0.178

;; AUTHORITY SECTION:
    example.com.            3600    IN      NS      ns0.example.com.

Ok, for a final test, let's remove everything and run it again.

sudo service svscan stop
    sudo apt-get purge daemontools daemontools-run ucspi-tcp dbndns
    sudo rm -rf /etc/service /etc/tinydns /etc/dnscache
    sudo userdel tinydns
    sudo userdel dnslog
    sudo userdel dnscache

With everything removed, let's run the agent again:

johnh@puppet2:~$ sudo puppet agent --test
    info: Retrieving plugin
    info: Loading facts in /var/lib/puppet/lib/facter/facter_dot_d.rb
    info: Loading facts in /var/lib/puppet/lib/facter/root_home.rb
    info: Loading facts in /var/lib/puppet/lib/facter/puppet_vardir.rb
    info: Caching catalog for puppet2.lab.example.net
    info: Applying configuration version '1340220032'
    notice: /Stage[main]/Dbndns/Service[svscan]/ensure: ensure changed 'stopped' to 'running'
    err: /Stage[main]/Dbndns/Exec[configure-dnscache]/returns: change from notrun to 0 failed: /usr/bin/dnscache-conf dnscache dnslog /etc/dnscache 127.0.0.1 returned 111 instead of one of [0] at /etc/puppet/modules/dbndns/manifests/init.pp:47
    notice: /Stage[main]/Dbndns/User[dnscache]/ensure: created
    notice: /Stage[main]/Dbndns/User[tinydns]/ensure: created
    notice: /Stage[main]/Dbndns/File[/etc/service]/ensure: created
    notice: /Stage[main]/Dbndns/File[/etc/service/dnscache]: Dependency Exec[configure-dnscache] has failures: true
    warning: /Stage[main]/Dbndns/File[/etc/service/dnscache]: Skipping because of failed dependencies
    notice: /Stage[main]/Dbndns/User[dnslog]/ensure: created
    notice: /Stage[main]/Dbndns/File[/etc/dnscache/root/ip/127.0.0]: Dependency Exec[configure-dnscache] has failures: true
    warning: /Stage[main]/Dbndns/File[/etc/dnscache/root/ip/127.0.0]: Skipping because of failed dependencies
    notice: /Stage[main]/Dbndns/Exec[configure-tinydns]/returns: executed successfully
    notice: /Stage[main]/Dbndns/File[/etc/service/tinydns]/ensure: created
    notice: Finished catalog run in 0.98 seconds

Looks like we had something fail. Oops! configure-dnscache failed: the
dnscache, tinydns, and dnslog users were only created after the exec
ran. So we need to make sure that the users exist before we configure
the service. This applies to tinydns as well as dnscache. Good thing we
did this test so it doesn't bite us in the future. Let's adjust our
init.pp:

exec { "configure-tinydns":
                command => "/usr/bin/tinydns-conf tinydns dnslog
/etc/tinydns $ipaddress",
                creates => "/etc/tinydns",
                require => [ Package['dbndns'], User['dnscache'],
User['dnslog'] ],
        }



exec { "configure-dnscache":
                command => "/usr/bin/dnscache-conf dnscache dnslog
/etc/dnscache 127.0.0.1",
                creates => "/etc/dnscache",
                require => [ Package['dbndns'],  User['dnscache'],
User['dnslog'] ],
        }



Also, let's go ahead and run our commands above to get rid of everything
again.

johnh@puppet2:~$ sudo puppet agent --test
    info: Retrieving plugin
    info: Loading facts in /var/lib/puppet/lib/facter/facter_dot_d.rb
    info: Loading facts in /var/lib/puppet/lib/facter/root_home.rb
    info: Loading facts in /var/lib/puppet/lib/facter/puppet_vardir.rb
    info: Caching catalog for puppet2.lab.example.net
    info: Applying configuration version '1340220641'
    notice: /Stage[main]/Dbndns/Service[svscan]/ensure: ensure changed 'stopped' to 'running'
    notice: /Stage[main]/Dbndns/User[dnscache]/ensure: created
    notice: /Stage[main]/Dbndns/User[tinydns]/ensure: created
    notice: /Stage[main]/Dbndns/File[/etc/service]/ensure: created
    notice: /Stage[main]/Dbndns/User[dnslog]/ensure: created
    notice: /Stage[main]/Dbndns/Exec[configure-dnscache]/returns: executed successfully
    notice: /Stage[main]/Dbndns/File[/etc/service/dnscache]/ensure: created
    notice: /Stage[main]/Dbndns/File[/etc/dnscache/root/ip/127.0.0]/ensure: created
    notice: /Stage[main]/Dbndns/Exec[configure-tinydns]/returns: executed successfully
    notice: /Stage[main]/Dbndns/File[/etc/service/tinydns]/ensure: created
    notice: Finished catalog run in 1.05 seconds

Everything looks good, but when we run "ps ax | grep svscan" we don't
see svscan running. So we check /var/log/syslog and see this:

Jun 20 19:31:35 puppet2 kernel: [ 9646.348251] init: svscan main process ended, respawning
    Jun 20 19:31:35 puppet2 kernel: [ 9646.359074] init: svscan respawning too fast, stopped

If we start it by hand, it works. So what happened? When puppet first
started svscan, /etc/service didn't exist yet, so svscan died
immediately and upstart gave up respawning it.


johnh@puppet2:~$ sudo service svscan start
    svscan start/running, process 9726
    johnh@puppet2:~$ ps ax | grep supervise
     9730 ?        S      0:00 supervise dnscache
     9731 ?        S      0:00 supervise log
     9732 ?        S      0:00 supervise tinydns
     9733 ?        S      0:00 supervise log

Let's fix that by making the svscan service require the /etc/service
directory:

        service { "svscan":
                ensure  => "running",
                provider => "upstart",
                require => [ Package["dbndns"], File["/etc/service"] ]
        }
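
Note that the upstart provider matches how Ubuntu's daemontools-run
package runs svscan (as an upstart job, which is why those respawn
messages came from init); puppet's default provider would go looking
for an init script instead.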

Now, let's give it a go:

johnh@puppet2:~$ sudo puppet agent --test
    info: Retrieving plugin
    info: Loading facts in /var/lib/puppet/lib/facter/facter_dot_d.rb
    info: Loading facts in /var/lib/puppet/lib/facter/root_home.rb
    info: Loading facts in /var/lib/puppet/lib/facter/puppet_vardir.rb
    info: Caching catalog for puppet2.lab.example.net
    info: Applying configuration version '1340220885'
    notice: /Stage[main]/Dbndns/User[dnscache]/ensure: created
    notice: /Stage[main]/Dbndns/User[tinydns]/ensure: created
    notice: /Stage[main]/Dbndns/File[/etc/service]/ensure: created
    notice: /Stage[main]/Dbndns/Service[svscan]/ensure: ensure changed 'stopped' to 'running'
    notice: /Stage[main]/Dbndns/User[dnslog]/ensure: created
    notice: /Stage[main]/Dbndns/Exec[configure-dnscache]/returns: executed successfully
    notice: /Stage[main]/Dbndns/File[/etc/service/dnscache]/ensure: created
    notice: /Stage[main]/Dbndns/File[/etc/dnscache/root/ip/127.0.0]/ensure: created
    notice: /Stage[main]/Dbndns/Exec[configure-tinydns]/returns: executed successfully
    notice: /Stage[main]/Dbndns/File[/etc/service/tinydns]/ensure: created
    notice: Finished catalog run in 1.24 seconds
    johnh@puppet2:~$ ps ax | grep svscan
    10613 ?        Ss     0:00 /bin/sh /usr/bin/svscanboot
    10615 ?        S      0:00 svscan /etc/service
    10639 pts/0    S+     0:00 grep --color=auto svscan
    johnh@puppet2:~$ ps ax | grep supervise
    10630 ?        S      0:00 supervise dnscache
    10631 ?        S      0:00 supervise log
    10632 ?        S      0:00 supervise tinydns
    10633 ?        S      0:00 supervise log
    10641 pts/0    S+     0:00 grep --color=auto supervise

Excellent! We now have a working puppet class that will install dbndns,
configure it, and get it up and running. At this point, we don't have
any records being served by tinydns, but it wouldn't be hard to push a
file to /etc/tinydns/root/data and execute a command to perform the
make. In my case, I will be using VegaDNS's update-data.sh[2] to pull
the data remotely.
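
If you wanted puppet to manage the zone data too, a sketch might look
like this (the puppet:/// source path is hypothetical, and the
refreshonly exec rebuilds data.cdb only when the data file changes):

file { "/etc/tinydns/root/data":
            source => "puppet:///modules/dbndns/data",
            require => Exec["configure-tinydns"],
            notify => Exec["make-tinydns-data"],
    }

exec { "make-tinydns-data":
            command => "/usr/bin/make",
            cwd => "/etc/tinydns/root",
            refreshonly => true,
    }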

Here is our completed modules/dbndns/init.pp:


class dbndns {

        package {
                "dbndns": ensure => present;
                "ucspi-tcp": ensure => present;
                "make": ensure => present;
        }

        # define the service to restart

        service { "svscan":
                ensure  => "running",
                provider => "upstart",
                require => [ Package["dbndns"], File["/etc/service"] ]
        }




user { "dnslog":
                        shell => "/bin/false",
                        managehome => false,
                        ensure => "present",
                }



user { "tinydns":
                        shell => "/bin/false",
                        managehome => false,
                        ensure => "present",
                }



user { "dnscache":
                        shell => "/bin/false",
                        managehome => false,
                        ensure => "present",
                }



exec { "configure-tinydns":
                command => "/usr/bin/tinydns-conf tinydns dnslog
/etc/tinydns $ipaddress",
                creates => "/etc/tinydns",
                require => [ Package['dbndns'], User['dnscache'],
User['dnslog'] ],
        }



exec { "configure-dnscache":
                command => "/usr/bin/dnscache-conf dnscache dnslog
/etc/dnscache 127.0.0.1",
                creates => "/etc/dnscache",
                require => [ Package['dbndns'],  User['dnscache'],
User['dnslog'] ],
        }



file { "/etc/dnscache/root/ip/127.0.0":
                ensure => "present",
                owner => "dnscache",
                require => Exec["configure-dnscache"],
        }



file { "/etc/service":
                ensure => "directory",
                require => Package["dbndns"],
        }


file { "/etc/service/tinydns":
                ensure => "link",
                target => "/etc/tinydns",
                require => [ File['/etc/service'],
                                        Exec["configure-tinydns"],
                                ],
        }



file { "/etc/service/dnscache":
                ensure => "link",
                target => "/etc/dnscache",
                require => [  File['/etc/service'],
                                        Exec["configure-dnscache"]
                                ],
        }


}
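
To apply the class, include it from the node's definition in the
puppetmaster's site.pp; something along these lines works (the node
name here matches the lab machine used above):

node 'puppet2.lab.example.net' {
        include dbndns
}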


[1] http://higherlogic.com.au/2011/djbdns-on-ubuntu-10-04-server-migration-from-bind-and-zone-transfers-to-secondaries-bind/
[2] https://github.com/shupp/VegaDNS/blob/master/update-data.sh