C270 Review

Acer C270 Chromebook

Last week I picked up a Chromebook, specifically the Acer C270. One of my coworkers uses one and convinced me it was worth checking out. I compared what was out there, and other than the Pixel, the C270 had the best specs, including battery life (8.5 hours). It has an Intel chipset, 16GB of storage, 2GB of RAM, and a 1.4GHz processor.

The good:

  • You get a lot of functionality for $200.
  • 7 seconds from power off to browser up and running.
  • Both an ssh client and vim are installed by default.
  • The keyboard is fairly responsive.
  • Runs forever without needing a charge.
  • You can install a real OS alongside ChromeOS (using Crouton) and switch between the two without rebooting.
  • It can browse the web and has a built-in terminal shell.
  • With Ubuntu installed in a chroot environment, it does everything my regular computer does (and I almost never have to fire up the GUI, which would be a battery drain).
  • It plays Netflix, Amazon Prime, and Hulu.
  • I haven’t done this, but one could replace the drive with a 128GB SSD for $100.

The bad:

  • Unfortunately the SD card slot isn’t very deep. Any SD card you insert will stick out the side. I was hoping to drop a 64GB SD card in here and store videos/music/projects on it. But since it sticks out so far, I might as well use a USB drive.
  • The trackpad requires an annoying amount of pressure to right click or click and hold (dragging/highlighting). It also makes a loud click when you do this.

After working on this for the last week, I decided that this Chromebook is exactly what I was hoping my Asus 904 netbook would be: a responsive, long-lasting, carry-anywhere type of computer. It has made me regret purchasing my Nexus 7. I get a lot more daily use out of the 11″ Chromebook.

So what am I using this for? A lot of things. Anything, really. My primary use case was actually amateur radio. In my Ubuntu chroot environment, I installed CHIRP to program my radios, fldigi for operating digital modes, xlog for QSO logging, and Wine+APRSISCE for APRS action. So far these programs seem to work without issue. I’ve been able to program radios, send PSK data to PSKDroid on my cell phone using acoustic coupling, and view APRS activity using APRSISCE. When I’m not doing ham radio activity, I leave the X session shut down (it only takes about 5 seconds to start it up again).

Beyond ham radio, I’m using it to browse the web (duh), ssh into my servers, write Python code, and right now… write this post. Earlier today I fired up a Jenkins server on AWS. I’ve been meaning to set one up for a long time, and I currently have a year of AWS’s free tier. So from this Chromebook, I selected a Jenkins instance from the AWS Marketplace, SSH’d into it, and added Pelican. Then I set up a simple Jenkins job that publishes my website from GitHub, and added a commit hook on GitHub that calls the new job. Now, any time I commit to my website’s GitHub repository (including editing it from GitHub directly), Jenkins automatically publishes it. Doing this on my tablet, even with the keyboard, would have been frustrating.

Basically, despite even Google’s marketing, the Chromebook is NOT a web-browser-only device. It is a full computer. If you’re used to Linux desktops and servers, you can do anything on it that you can on your existing setup. Yes, the impressive battery life assumes you stay in ChromeOS, and running X or other CPU-heavy applications will bring that battery life down. But even when I left X up for long periods (for instance, letting the Linux side download updates), it only had a moderate impact on my battery life.

If you’re looking for a “carry anywhere, do anything” computing device, the C270 is well worth the price.

Foreman managed virtual datacenter

I ordered the KS2 from <www.ovh.com> – $50/month, 3.4GHz, 16GB RAM, and a 1TB software RAID setup. My plan is to set this up as a single-server virtual datacenter. They also have the SP2 with twice the RAM and storage for $90, but I figured I’d test the cheaper option first. I can always upgrade and migrate to a larger server later if I get into heavier usage. The prices are rather cheap, and they have scripts that will automatically provision physical servers.

I had it installed with CentOS 6. I first tried the “ovh secure” kernel, but I could not get KVM working with it, so I had it reinstalled with the “vendor” kernel. I allocated 20GB to “/” and the remainder (898GB) to “/data”.

Installing kvm and libvirt is a simple yum command.

yum install kvm libvirt python-virtinst qemu-kvm

Then, on my workstation, I installed virt-manager, which allowed me to graphically create and install virtual machines (I can do this by hand on the server, but virt-manager is a nice way to get started). The connection is done over ssh, so it will either ask for your username/password, or it can use ssh-key authentication (preferred).

I created /data/isos and /data/vms to hold my installation ISOs and virtual machines respectively. The trick I had to work out is that I couldn’t just add “/data” as a directory-based storage volume; I had to make one pool for isos and one for vms. I also found that the default directory (/var/lib/libvirt/images) is rather difficult to remove. I disabled and removed it, but it showed back up later. When creating through the dialog, virt-manager wants to put your VM image in “default”.

Creating a new virtual machine using virt-manager and a downloaded Ubuntu 12.04 ISO image (in /data/isos) was rather slick. I created a new volume in /data/vms, set the memory and CPU, and started it. The default networking is a NAT’d network on 192.168.122.0/24. As OVH only provides 3 IP addresses for free, I’m content to start with this network for testing, but I plan to move to a different subnet.

If I need to nat ports, the libvirt site has a useful page on forwarding incoming connections.

iptables -t nat -A PREROUTING -p tcp --dport $HOSTPORT -j DNAT --to $GUESTIP:$GUESTPORT
iptables -I FORWARD -d $GUESTIP/32 -p tcp -m state --state NEW -m tcp --dport $GUESTPORT -j ACCEPT
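These rules are easy to typo when substituting values by hand, so here is a small Python sketch (the helper name is mine, not libvirt’s) that just renders the pair of rule strings for a given guest:

```python
# Render the libvirt port-forwarding rule pair shown above.
# Purely illustrative: it only builds the command strings;
# it does not run iptables.
def nat_rules(host_port, guest_ip, guest_port):
    prerouting = (
        "iptables -t nat -A PREROUTING -p tcp "
        f"--dport {host_port} -j DNAT --to {guest_ip}:{guest_port}"
    )
    forward = (
        f"iptables -I FORWARD -d {guest_ip}/32 -p tcp -m state "
        f"--state NEW -m tcp --dport {guest_port} -j ACCEPT"
    )
    return [prerouting, forward]

for rule in nat_rules(8122, "172.16.25.5", 22):
    print(rule)
```

Feeding it 8122/172.16.25.5/22 produces the same ssh forward used for the builder VM below.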

I have been reading some good things about The Foreman and how you can manage an infrastructure with it, so my next real VM will be a Foreman install. This will hopefully let me set up an environment where I can build virtual machines and provision them automatically. I don’t know (yet) if The Foreman will handle iptables rules on the host, but it seems able to call external scripts and be customized, so I should be able to provision NAT on the host when provisioning a new VM.

Foreman utilizes DHCP and PXE to install “bare metal” VMs, so we need a network without DHCP. To create my non-DHCP-managed NAT network, I copied the default network XML file, modified it with my new range, and removed the DHCP section.

cd /usr/share/libvirt/networks
cp default.xml netmanaged.xml

Modified netmanaged.xml:

<network>
  <name>managednat</name>
  <bridge name="virbr1" />
  <forward/>
  <ip address="172.16.25.1" netmask="255.255.255.0">
  </ip>
</network>
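If I end up scripting more networks later, the same definition can be built with Python’s standard library rather than hand-editing XML; a minimal sketch using the values above:

```python
# Build the managednat libvirt network definition with ElementTree.
import xml.etree.ElementTree as ET

net = ET.Element("network")
ET.SubElement(net, "name").text = "managednat"
ET.SubElement(net, "bridge", name="virbr1")
ET.SubElement(net, "forward")
ET.SubElement(net, "ip", address="172.16.25.1", netmask="255.255.255.0")

# This string is what you would save as netmanaged.xml
xml_text = ET.tostring(net, encoding="unicode")
print(xml_text)
```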

It should show up with virsh net-list --all and I can activate it.

# virsh net-list --all
Name                 State      Autostart     Persistent
--------------------------------------------------
default              active     yes           yes
managednat           inactive   no            yes
# virsh net-autostart managednat
Network managednat marked as autostarted

# virsh net-list --all
Name                 State      Autostart     Persistent
--------------------------------------------------
default              active     yes           yes
managednat           inactive   yes           yes

# virsh net-start managednat
Network managednat started

# virsh net-list --all
Name                 State      Autostart     Persistent
--------------------------------------------------
default              active     yes           yes
managednat           active     yes           yes

The gateway will be 172.16.25.1, and I will assign the IP 172.16.25.5 to my Foreman virtual machine, aptly called “builder”. Once the basic Ubuntu machine is installed by hand (hopefully the last one we do in this environment), I’ll want access to it. Ideally, this would be behind a firewall with VPN access, but I haven’t gotten that far yet. So for now, I’ll just set up some NAT for ports 22 and 443.

iptables -t nat -A PREROUTING -p tcp --dport 8122 -j DNAT --to 172.16.25.5:22
iptables -I FORWARD -d 172.16.25.5/32 -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
iptables -t nat -A PREROUTING -p tcp --dport 8143 -j DNAT --to 172.16.25.5:443
iptables -I FORWARD -d 172.16.25.5/32 -p tcp -m state --state NEW -m tcp --dport 443 -j ACCEPT

Using “foreman-installer” is the recommended method, ensuring that we have current packages directly from theforeman.org. I’ve installed 12.04 LTS (precise), so it’s fairly straightforward, though I deviated slightly from the installation documentation. The original instructions rely on running as root.

# get some apt support programs 
sudo apt-get install python-software-properties

# add the deb.theforeman.org repository
sudo bash -c "echo deb http://deb.theforeman.org/ precise stable > /etc/apt/sources.list.d/foreman.list"
# add the key
wget -q http://deb.theforeman.org/foreman.asc -O- | sudo apt-key add -
# install the installer
sudo apt-get update && sudo apt-get install foreman-installer

# run the installer
sudo ruby /usr/share/foreman-installer/generate_answers.rb

At this point, Foreman is running on port 443, or in my case “https://externalip:8143/”. I can log in with the username “admin” and the password “changeme”.

I’ve been reading the manual more at this point, but I think my next step is to watch the video Foreman Quickstart: unattended installation. If I can grok that (and it looks nicely step-by-step), I’ll try to set up an unattended install.

Starting with Ruby and AWS

This weekend I decided to tackle both learning Ruby and working with AWS
via the Ruby API. Having only played with both of these in the past,
this presents two learning challenges at once. However, from past
projects, this is how I learn best. I am somewhat familiar with AWS
terms and once made a script in Python to fire up an instance. This was
before Amazon came out with their management console, so I imagine
things have come a long way since then (hopefully easier). I also played
with Ruby for a while, but didn’t have a decent project for it. Having a
project with goals will hopefully keep me on track and give me a way to
measure my progress.

My goals for this project are as follows:

  1. Utilize a web based interface. Using rails seems to be the popular
    way to do this, and I’d like to base my template interface off of
    boilerstrap5, a combination of twitter-bootstrap and
    html5boilerplate. This will probably have the most trial and error
    to get it right.
  2. Connect to the AWS API and pull some basic information such as my
    account name.
  3. Fetch details about an AMI image. Maybe I’ll be able to parse a list
    of public images, or maybe I can just punch in an image ID and pull
    up the details.
  4. Start an instance from an AMI image. This might require some steps
    like setting up an S3 bucket — we’ll see.
  5. List my running instances.
  6. Control a running instance – ie, power cycle it.
  7. Destroy an instance.
  8. BONUS: Do something similar with S3 buckets – create, list, destroy.

First off, I need to set up a Ruby development environment. Since I have
used PyCharm in the past, I will try JetBrains’ RubyMine as my editor.
After installing it, the first thing I learned is that Rails is not
installed. I could install it using apt-get, but JetBrains recommends
using RVM. It looks like a nice way to manage different versions of
Ruby, Rails, and gems. Whenever I have installed Ruby applications
requiring gems, gem versions were always a source of concern. It is
very easy to get mismatched gem versions in the wild.

RVM installs locally to ~/.rvm on Linux, which is nice – you don’t mess
up any system-wide Ruby installations and keep everything local to your
development environment. After installation, I had to figure out a
couple of bits with rvm.

  • rvm install 1.9.2 # installs ruby 1.9.2
  • rvm list # lists versions of ruby installed
  • rvm use 1.8.7 # use ruby 1.8.7

First, your terminal has to be set up as a login shell. This tripped me
up for a while until I changed the settings in my terminal emulator.
Terminator has this as a checkbox option.

$ rvm list

rvm rubies

   ruby-1.8.7-p371 [ x86_64 ]
   ruby-1.9.2-p320 [ x86_64 ]

# => - current
# =* - current && default
#  * - default

$ rvm use 1.8.7

RVM is not a function, selecting rubies with 'rvm use ...' will not work.

You need to change your terminal emulator preferences to allow login shell.
Sometimes it is required to use `/bin/bash --login` as the command.
Please visit https://rvm.io/integration/gnome-terminal/ for a example.

After switching to a login shell:

$ rvm use 1.8.7
Using /home/ytjohn/.rvm/gems/ruby-1.8.7-p371

Finally, once you get Ruby and Rails working, you can create your Rails
project. I’m starting with a Rails project because it’s “all the rage”
and gives you a decent running start. Later, I’ll work on swapping the
supplied templates for boilerplate + bootstrap based ones.

This gets me started. Next up, I’ll actually create the project from
within RubyMine and just work on basic web functionality.

Bootstrap and CDNs

Often when creating a “modern” web page, it’s very common to find yourself reinventing the wheel over and over again. Any time I wanted to create a two-column layout, I would have to look at previous works of mine or search the Internet for a decent example. However, I recently came across Twitter’s Bootstrap framework. At its core, it’s just a CSS file that divides your web page into a 12-column “grid”. You create a “row” div, and inside that row you place your “span*” columns. Each span element spans from 1 to 12 columns, and the spans should always add up to 12 for each row. You can also offset columns. There are CSS classes for large displays (1200px or higher), normal/default displays (980px), and smaller displays such as tablets (768px) or phones (480px). Elements can be made visible or hidden based on the device accessing the site (phone, tablet, or desktop). There is also a javascript component you can use for making the page more interactive.

If you download Bootstrap, you get a collection of files to choose from: js/bootstrap.js, img/glyphicons-halflings.png, img/glyphicons-halflings-white.png, css/bootstrap.css, and css/bootstrap-responsive.css. There are also compressed .min versions of the javascript and CSS files. You can read further about the responsive version of the CSS, or how to use the icons.

Normally, one would take these downloaded files and put them into their own web application directory tree. However, there is a better way. Unless you are planning to use this on an intranet with limited Internet access, you can use a copy of these files hosted on a content delivery network (CDN). A good example of this is the jQuery library hosted on Google’s CDN. Google hosts a number of libraries on its network. This has several advantages, one of which is caching: if everyone points at a common hosted library, that library gets cached on the end user’s machine instead of being re-downloaded on every site that uses it.

While Bootstrap is not hosted on Google, there is another CDN running on CloudFlare called cdnjs that provides a lot of the “less popular” frameworks, including Bootstrap. Here are the URLs to the current Bootstrap files (they have versions 2.0.0 through 2.1.1 at the moment).

  • http://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/2.1.1/css/bootstrap.css
  • http://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/2.1.1/css/bootstrap.min.css
  • http://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/2.1.1/css/bootstrap-responsive.css
  • http://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/2.1.1/css/bootstrap-responsive.min.css
  • http://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/2.1.1/js/bootstrap.js
  • http://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/2.1.1/js/bootstrap.min.js
  • http://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/2.1.1/img/glyphicons-halflings-white.png
  • http://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/2.1.1/img/glyphicons-halflings.png

All one has to do in order to use these is add the CSS and (optionally) the javascript to their page. Note that jQuery should load before bootstrap.js, since parts of Bootstrap depend on it. Since most CDNs support both http and https, you can leave the protocol identifier out.

<link rel="stylesheet" href="//cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/2.1.1/css/bootstrap.min.css">
<script src="//ajax.googleapis.com/ajax/libs/jquery/1.8.3/jquery.min.js"></script>
<script src="//cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/2.1.1/js/bootstrap.min.js"></script>
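Making a URL protocol-relative is just a matter of trimming the scheme; a throwaway Python sketch (the function name is mine):

```python
# Strip "http:" or "https:" so the browser reuses the page's own scheme.
def protocol_relative(url):
    for scheme in ("https:", "http:"):
        if url.startswith(scheme):
            return url[len(scheme):]
    return url

print(protocol_relative(
    "http://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap"
    "/2.1.1/css/bootstrap.min.css"
))
```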

Here’s an example you can use on your own.

<!DOCTYPE html>
<html lang="en">

<body>
<div class="container-fluid">
        <div class="row-fluid">
         <div class="span12 label label-info">
                <h1>Header</h1>
         </div>
        </div>

        <div class="row-fluid">
         <div class="span2">
                left column
                <i class="icon-arrow-left"></i>
         </div>
         <div class="span6">

                <p>center column

                <i class="icon-tasks"></i></p>

                <div class="hero-unit">
                 <h1>This is hero unit</h1>
                 <p>It is pretty emphasized</p>
                </div>

                <p>still in the center, but not so heroic</p>

         </div>
         <div class="span4">
                right column
                <i class="icon-arrow-right"></i>
         </div>
        </div>
</div><!-- end container -->

<!-- load everything at end for fast content loading -->
<link rel="stylesheet" href="//cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/2.1.1/css/bootstrap.min.css">
<script src="//ajax.googleapis.com/ajax/libs/jquery/1.8.3/jquery.min.js"></script>
<script src="//cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/2.1.1/js/bootstrap.min.js"></script>
</body>
</html>

Finally, I found that NetDNA also hosts Bootstrap on their CDN at [www.bootstrapcdn.com]. I would say either CDN is fairly reliable, as each is sponsored by the CDN it runs on. One advantage of this site is that it provides a lot more than basic Bootstrap hosting, such as custom themes and fonts.

To use them, you can simply swap out your css and js scripts.

<link href="//netdna.bootstrapcdn.com/twitter-bootstrap/2.1.1/css/bootstrap-combined.min.css" rel="stylesheet">
<script src="//netdna.bootstrapcdn.com/twitter-bootstrap/2.1.1/js/bootstrap.min.js"></script>
<script src="//ajax.googleapis.com/ajax/libs/jquery/1.8.3/jquery.min.js"></script>

UPDATE: I added jquery into the above examples because several parts of bootstrap rely on it (such as the Modal dialogs).

20120918

python console import tip

Tags: python console, python, import, pysphere

This is a quick little Python tip. When experimenting with Python commands and modules, it’s usually easiest to use the Python console interactively, then create your programs later. The downside is that sometimes you have to do a bit of typing before you get to the specific command you want to try.

Imagine the following example:

$ python
Python 2.7.3 (default, Aug  1 2012, 05:14:39)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from pysphere import *
>>> server = VIServer()
>>> server.connect("vc1.example.com", "username", "password")
>>> print server.get_server_type(), server.get_api_version()
VMware vCenter Server 5.0

Here, I had to type 3 lines, including my password in plaintext, to test querying the server. I can’t demonstrate this live, because I would reveal my password. Well, last week I made a [test1.py] file that reads a YAML configuration file and runs the commands I just did. Here’s the smart bit: I can import that file directly into the Python console. On import, it runs each Python command and leaves me at the console, ready to query the system again. The only caveat is that my “server” variable is now part of the test1 module, as “test1.server”.

$ python
Python 2.7.3 (default, Aug  1 2012, 05:14:39)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import test1
VMware vCenter Server 5.0
puppet1-centos6 10.100.0.206
>>> test1.server.is_connected()
True
>>> vmlist = test1.server.get_registered_vms()
>>> for v in vmlist:
...     print v
...
[iscsi-244gb-raid10] rhel-puppet1/rhel-puppet1.vmx  
[iscsi-244gb-raid10] puppet2-ubuntu 12.04LTS server/puppet2-ubuntu 12.04LTS server.vmx
[iscsi-488gb-raid10] puppet3-solaris11/puppet3-solaris11.vmx
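The trick generalizes to any setup script. Here’s a self-contained demonstration with a throwaway module standing in for test1.py (the file name and variable are made up for illustration):

```python
# Write a tiny setup script, then import it: its top-level statements
# run once, and its variables live on as attributes of the module.
import importlib
import pathlib
import sys

pathlib.Path("setup_demo.py").write_text(
    "connected = True\n"
    "print('setup ran; connected =', connected)\n"
)
sys.path.insert(0, ".")
importlib.invalidate_caches()  # make sure the new file is found

import setup_demo              # runs the script's code at import time
print(setup_demo.connected)    # -> True
```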

pysphere: VMWare in Python

Tags: api, python, programming, vmware, pysphere

I do a good bit of work with VMware vSphere and I’ve been wanting to
work more with their API. Everything (except the console) that you can
do in the vSphere client, you should be able to do through a web-based
API. Unfortunately, it seems that VMware does not provide an SDK for
Python, my current language of choice. I could work in Perl or Java,
but I want to develop a web application, which I don’t want to do in
Perl. Fortunately, I found pysphere, a fairly active project
implementing the VI API in Python. It might not fully implement the
API, but it looks relatively stable and easy to use. Plus, if I find
any functionality missing, I can extend the classes directly.
I followed their Getting Started page to get it installed and
connected, but I didn’t like having my password right there in my
working code. This was easily resolved by installing PyYAML and
creating a config.yaml file. Then it was just a matter of following
along with the examples to make a good test script.
My config.yaml:


server: esxi1.example.com
user: john
pass: password

My test.py:

#!/usr/bin/python

import yaml
from pysphere import *

f = open('config.yaml')
config = yaml.safe_load(f)  # safe_load: the config is plain data
f.close()

server = VIServer()
server.connect(config["server"], config["user"], config["pass"])

print server.get_server_type(), server.get_api_version()
vm1 = server.get_vm_by_name("puppet1-centos6")
print vm1.get_property('name'), vm1.get_property('ip_address')

And does it work?

$ ./test.py
VMware vCenter Server 5.0
puppet1-centos6 10.100.0.206

I was even able to go so far as cloning a VM (vm2 = vm1.clone(‘new vm’))
and can already see massive possibilities for this library in its
current state. The API can be queried much like a simple database, and
objects acted upon with simple statements. Operations like my VM clone
can be set up as a task and run asynchronously. I could easily see
integrating this with something like Tornado, Twisted, or even
Cyclone to make a non-blocking web framework.

Upgrade Redmine

Currently, I have Redmine 1.3.3 installed via the ondrej/redmine PPA. I
have been wanting to upgrade to the 2.x series of Redmine, but no PPA
currently exists for it. Redmine is officially provided by Ubuntu, but
the version for Precise is 1.3.2, and Ondřej’s PPA is on 1.4.3. While I
usually prefer to have my software installation and updates handled by
packages, it looks like to get to the 2.x series, I’ll have to go back
to source.

I will be following the official upgrade guide closely, but with a
few variations.

  1. The apt-get/ppa version uses multiple file locations for source code
    and configuration. I’ll have to consolidate to one place.
  2. My ruby and passenger modules were installed and modified to work
    with the ppa version of redmine. Adjustments will be needed.

My Ruby version is 1.8.7 (1.8.7 min), Rails 2.3.14 (2.3.6 min), and gem
1.8.15 (1.8 min). Already having the minimum requirements makes this a
bit easier.

After performing a MySQL backup (hint: database.yml is in
/etc/redmine/default), I downloaded Redmine to /usr/local/redmine-2.0.
I also decided to stop Apache so that Passenger wouldn’t keep running
the existing Redmine instance. If I had other sites running on this
server, I would have disabled this virtual host or put up a maintenance
page.

cp /etc/redmine/default/database.yml /usr/local/redmine-2.0/config
cp /var/lib/redmine/default/files/* /usr/local/redmine-2.0/files

I didn’t have any plugins, but if I did, they would be in either
/usr/share/redmine/vendor/plugins or /usr/share/redmine/lib/plugins. I
do intend to install a couple of plugins once I’m on 2.x, though.

I found in step 3 that the rake commands didn’t work. This is probably
because I wasn’t working from an existing Rails directory. I went to the
Redmine installer page, which gave me the answer: “Since 1.4.0, Redmine
uses Bundler to manage gems dependencies. You need to install Bundler
first.”

gem install bundler
# run the next command from /usr/local/redmine-2.0
# it reads the file "Gemfile"
bundle install --without development test

I ran into an error when bundle was installing json 1.7.4.

According to an about.com page, I need build-essential,
libopenssl-ruby, and ruby1.8-dev installed. The one I was missing was
ruby1.8-dev, easily fixed with an apt-get install ruby1.8-dev.

I had to install the following other packages for certain gems. The
Gemfile includes entries for PostgreSQL and SQLite even if you don’t
use them. The install guide lets you know that you can skip these with
the --without option: just add “pg sqlite rmagick” to the end of your
bundle install line (above).

  • json: build-essential, libopenssl-ruby, and ruby1.8-dev
  • mysql: libmysqlclient-dev
  • pg: libpq-dev (alternatively: add pg to the --without list)
  • rmagick: libmagickcore-dev, libmagickwand-dev (alternatively: add
    rmagick to the --without list)
  • sqlite: libsqlite3-dev

Once Bundler and all the required gems were installed, we switch back
to the Upgrade Guide to update our session store and migrate the
database. I had no plugins, so I’m skipping that step.

For reference, the json build failure mentioned above came from the
missing ruby1.8-dev (mkmf ships with it):

/usr/bin/ruby1.8 extconf.rb
extconf.rb:1:in `require': no such file to load -- mkmf (LoadError)
from extconf.rb:1

Let’s start this locally before we mess with Passenger or Apache (be
sure to allow port 3000 via iptables or ufw).

rake generate_secret_token
rake db:migrate RAILS_ENV=production 
# unnecessary, as this is a new directory, but why not clean up?
rake tmp:cache:clear
rake tmp:sessions:clear

This worked without a hitch for me. Now on to my Passenger setup. I
already had this configured and installed previously, so all I have to
do is change my VirtualHost directory.

<VirtualHost *:80>
    ServerName projects.example.com
    DocumentRoot /usr/local/redmine-2.0/public
    RailsSpawnMethod smart
    # Keep the application instances alive longer. Default is 300 (seconds)
    PassengerPoolIdleTime 1000
    RailsAppSpawnerIdleTime 0
    RailsFrameworkSpawnerIdleTime 0
    PassengerMaxRequests 5000
    PassengerStatThrottleRate 5

    <Directory /usr/local/redmine-2.0/public>
        AllowOverride all
        Options -MultiViews
    </Directory>
</VirtualHost>

I did have to change a few permissions (all files were installed as
owned by root):

chgrp -R www-data config
chown -R www-data files
chown -R www-data log

Markdown Blogging

I recently started migrating my website over to blogger.com. One of the
main reasons was that in my last server move, I had broken my Movable
Type installation and found myself too busy to fix it. I didn’t want to
spend my time fixing and updating blogging software; I wanted to work
on my projects, write them up, and post them. It was time to move my
content to an existing platform that handled the back end. I looked at
a few and decided blogger.com would be as good as any other service.
It only took a short time to set up a blog, point a CNAME at it, and
import my existing posts. When I started creating some new posts, I
immediately ran into some limitations.

  1. You used to be able to edit permalinks on blogger. Now, you can only
    do that before you publish. The only way to change a permalink after
    publishing is to create a new post with the desired permalink and
    delete the old one.
  2. Blogger has no built-in formatting for code blocks. So if I want to
    show a config file, source code, or terminal session log, I have to
    fiddle with fonts, sizes, and “blockquote” to get it presentable.
    Even then, you run the risk of strange formatting of your raw text.

I found a solution that other bloggers use called SyntaxHighlighter.
This is a combination of javascript and css code that takes the text
within your <pre> tags and gives you nice looking blocks of code,
highlighted and (optionally) with line numbers. The catch is that your
pre tags need a class name specifying which language “brush”
(perl/c/bash/text) to use. If you go with pre tags, you also have to
change any angle brackets to their HTML-escaped equivalents of &lt; and
&gt;. There is a work-around using SCRIPT/CDATA tags, but it takes some
getting used to. Adding this to your blog only requires a few steps.
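If you stick with pre tags, the escaping step can be scripted instead of
done by hand. A minimal sketch (the file names are placeholders):

```shell
# Escape &, < and > so source code can sit safely inside <pre> tags.
# The & substitution must run first, or it would re-escape the entities
# produced by the other two.
sed -e 's/&/\&amp;/g' -e 's/</\&lt;/g' -e 's/>/\&gt;/g' code.txt > code.html
```

The output can then be pasted directly between the pre tags in Blogger’s
HTML editor.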

I rather liked SyntaxHighlighter, but it still seemed like I had to do a
lot of manual work with the code. Also, I had to select the brush each
time. Couldn’t it guess? Notepad++ and some other editors will guess at
what language you’re using and highlight accordingly. I found something
called prettify that does just that. You only need to load one js file
and one css file. Prettify works off of either <pre> or <code> tags and
has similar limitations to SyntaxHighlighter regarding html tags.
However, it has the advantage of being able to guess the language
automatically.

Being able to use this code made my posts look much nicer, but the
entire process got me thinking. The way I “document” most of my projects
typically involves a plain text editor like Geany or
Notepad++. As I work, I add notes, copy in source code or shell
commands, and do everything in plain text. Later, I add
commentary and clean up the document. I take this and paste it into the
WYSIWYG editor on blogger. Finally, I have to keep switching between
compose and html mode to get my text looking suitable. There are too
many steps for me to want to do this consistently. All I really want to
do is take my text file, add a little formatting in a few spots, a few
hyperlinks in others, and post it.
Enter Markdown. As the project describes it, “Markdown is a text-to-HTML
conversion tool for web writers. Markdown allows you to write using an
easy-to-read, easy-to-write plain text format, then convert it to
structurally valid XHTML (or HTML).” I have used this before, but didn’t
pay it close enough attention. It’s used on github and reddit, and there
are plugins for it in dokuwiki and redmine. The idea is that you write
in plain text, adding formatting using the markdown syntax. This format
is both human readable and machine readable. When read by the
appropriate library, clean html is generated. It also has a feature for
wrapping blocks of code inside of <pre><code> tags and html-escaping any
html inside of those tags.
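As a quick illustration (a minimal sample, not from my own posts),
indenting a block by four spaces is all Markdown needs to mark it as
code:

```markdown
Here is a normal paragraph.

    if (x < y) { x++; }
```

The indented line comes back wrapped in <pre><code> tags with the angle
brackets and ampersands already HTML-escaped.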

Within the Markdown project is a page called “dingus”, a word that means
“placeholder”. You can paste your markdown text into one textarea and
get back the generated html plus a preview. I tested pasting that
generated html into Blogger’s HTML box and it seems to work perfectly
fine. What this means is that I can type up my documentation completely
within my text editor of choice, save it locally, and then generate my
html code to paste into blogger.
Some of you may have realized that my <pre> tags are missing that class
name (prettyprint). I could copy the generated html and do a search and
replace of <pre> with <pre class="prettyprint linenums"> before pasting
it, but that’s adding more steps. Instead, I sought to make my own
dingus that does this automatically. I found that there is an extension
of Markdown called Markdown Extra written in PHP. Extra adds a few
features such as simple tables, but remains consistent with original
Markdown formatting. Using that library, I was able to create my own
dingus rather easily and alter the <pre> tag with one line of code:
$render2 = str_replace("<pre>", '<pre class="prettyprint linenums">', $render);.
In my experimentation, I made a parser that reads a text file and
outputs html, and three dingus parsers. Dingus1 does a straightforward
conversion of Markdown Extra to html. Dingus2 and 3 provide the class
names for prettified code, with #3 going the extra step of applying
stylesheets for the preview.
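The same substitution can be done outside the dingus as well; here is a
one-line sketch with sed (file names are placeholders):

```shell
# Give every plain <pre> tag the class attribute that prettify expects.
sed 's/<pre>/<pre class="prettyprint linenums">/g' post.html > post-pretty.html
```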

With this setup, I can quickly paste in my text document and pull html
code to paste into blogger.com’s html edit box. With some more research,
I can modify the dingus to interact with blogger’s API and post on my
behalf. There are also some WYSIWYM live editors that show you an
instant render of your markdown as you type (you type in a textarea
while your html renders in a nearby div). This would be a good way to do
some tweaking to the markdown text before posting the html to the web.
My next plans are to make a better dingus, possibly with a live preview
and a “post to blogger” option.
Some other links:

  • http://balupton.github.com/jquery-syntaxhighlighter/demo/
  • http://code.google.com/p/pagedown/wiki/PageDown
  • http://markitup.jaysalvat.com/examples/markdown/

Gate One supervisor script

Yesterday, I set up Gate One to run as a non-root user. I also spent
some time looking at potential init scripts for starting and stopping
this. The gateone project does not currently provide any init scripts,
but this is planned for the future ([Issue #47]). I tried to use one
of the scripts in that thread, but I wasn’t really pleased with them.
The big issue is that gateone.py doesn’t fork. However, I believe there
is a better solution.

supervisord is a python utility designed to control and monitor
processes that don’t daemonize themselves. As Gate One is a
foreground-only process, supervisord seems particularly suited to this
task – more so than writing my own script in python or using
daemontools.

Installation can be done with python-pip or easy_install. On newer
systems, pip is recommended.

sudo pip install supervisor

On Ubuntu, pip installs supervisord to /usr/local/bin. By default,
/usr/local/bin is not in root’s path, so it makes sense (to me at least)
to create symlinks to /usr/sbin.

johnh@host:~$ ls /usr/local/bin
echo_supervisord_conf  pidproxy  supervisorctl  supervisord
johnh@host:~$ sudo ln -s /usr/local/bin/supervisord /usr/sbin
johnh@host:~$ sudo ln -s /usr/local/bin/supervisorctl /usr/sbin

Now, we need to create a configuration file. Supervisord has a utility
to generate a sample one.

echo_supervisord_conf  > supervisord.conf

To get started, we can use the sample configuration and just add a
couple lines to the bottom for gateone.

 [program:gateone]
 command=/opt/gateone/gateone.py
 directory=/opt/gateone
 ;user=johnh   ; Default is root. Add a user= to setuid
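Supervisord supports many more per-program options than these two lines.
A sketch of a fuller stanza (the values shown are optional; autostart
and autorestart here match supervisord’s usual behavior, and the log
path is just an example):

```
[program:gateone]
command=/opt/gateone/gateone.py
directory=/opt/gateone
autostart=true               ; start gateone when supervisord starts
autorestart=unexpected       ; restart gateone.py if it exits unexpectedly
stdout_logfile=/opt/gateone/logs/supervisor.log
user=johnh                   ; setuid to this user instead of running as root
```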

Now, copy supervisord.conf to /etc/supervisord.conf and start
supervisord. Make sure gateone.py is not currently running. Then we’ll
run supervisorctl to test things out.

johnh@host:~$ sudo cp supervisord.conf /etc
johnh@host:~$ sudo supervisord
johnh@host:~$ sudo supervisorctl status
gateone                          RUNNING    pid 9549, uptime 0:00:05
johnh@host:~$ ps ax | grep gateone
 9549 ?        Sl     0:00 python /opt/gateone/gateone.py
johnh@host:~$ sudo supervisorctl stop gateone
gateone: stopped
johnh@host:~$ ps ax | grep gateone
 9605 ?        Ss     0:00 dtach -c /opt/gateone/tmp/gateone/../dtach_3 -E -z -r none /opt/gateone/plugins/ssh/scripts/ssh_connect.py -S /tmp/gateone/.../%SHORT_SOCKET% --sshfp -a -oUserKnownHostsFile=/opt/gateone/users/johnh@host/ssh/known_hosts
 9606 pts/3    Ss+    0:00 python /opt/gateone/plugins/ssh/scripts/ssh_connect.py -S /tmp/gateone/.../%SHORT_SOCKET% --sshfp -a -oUserKnownHostsFile=/opt/gateone/users/johnh@host/ssh/known_hosts

In this example, we see that gateone.py is started and stopped by
supervisorctl, but because we have dtach enabled, our sessions are still
in place. If we restart gateone.py, we can connect to it again and have
our sessions resumed. While we could probably configure supervisord to
kill these terminals, I believe we’d normally want to keep them running.
The few times I would want to stop those terminals would be a) manually
reconfiguring/troubleshooting Gate One, b) updating software, or c)
rebooting the server. For (a) and (b), running the command “gateone.py
-kill” will kill those terminals. For a server shutdown or reboot, the
act of shutting down the OS will kill these terminals anyway.

Finally, we need a way to start and stop supervisord itself.
Fortunately, the supervisord project provides a number of init scripts.
I was able to use the Debian script on Ubuntu with only a few minor
changes.

  1. I had symlinked supervisord and supervisorctl to /usr/sbin. The
    script expects them in /usr/bin (but even notes that /usr/sbin is a
    better location), so I had to change /usr/bin to /usr/sbin.
    Alternatively, you can symlink the files into /usr/bin.
  2. I added a status option that runs $SUPERVISORCTL status.
  3. If you started supervisord manually, you must shut it down and start
    it with the script. The script won’t be able to stop supervisord
    unless /var/run/supervisord.pid is current.

Here is my complete init script for Ubuntu:

#!/bin/sh
### BEGIN INIT INFO
# Provides:          supervisord
# Required-Start:    $local_fs $remote_fs $networking
# Required-Stop:     $local_fs $remote_fs $networking
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Starts supervisord - see http://supervisord.org
# Description:       Starts and stops supervisord as needed 
### END INIT INFO
# Author: Leonard Norrgard 
# Version 1.0-alpha
# Based on the /etc/init.d/skeleton script in Debian.
# Please note: This script is not yet well tested. What little testing
# that actually was done was only on supervisor 2.2b1.
# Do NOT "set -e"
# PATH should only include /usr/* if it runs after the mountnfs.sh script
PATH=/sbin:/usr/sbin:/bin:/usr/bin
DESC="Run a set of applications as daemons."
NAME=supervisord
# Supervisord is installed in /usr/bin by default, but /usr/sbin would 
# make more sense
DAEMON=/usr/sbin/$NAME   
SUPERVISORCTL=/usr/sbin/supervisorctl
PIDFILE=/var/run/$NAME.pid
DAEMON_ARGS="--pidfile ${PIDFILE}"
SCRIPTNAME=/etc/init.d/$NAME
# Exit if the package is not installed
[ -x "$DAEMON" ] || exit 0
# Read configuration variable file if it is present
[ -r /etc/default/$NAME ] && . /etc/default/$NAME
# Load the VERBOSE setting and other rcS variables
. /lib/init/vars.sh
# Define LSB log_* functions.
# Depend on lsb-base (>= 3.0-6) to ensure that this file is present.
. /lib/lsb/init-functions
#
# Function that starts the daemon/service
#
do_start()
{
        # Return
        #   0 if daemon has been started
        #   1 if daemon was already running
        #   2 if daemon could not be started
        [ -e $PIDFILE ] && return 1
        start-stop-daemon --start --quiet --pidfile $PIDFILE --exec $DAEMON -- \
                $DAEMON_ARGS \
                || return 2
        # Add code here, if necessary, that waits for the process to be ready
        # to handle requests from services started subsequently which depend
        # on this one.  As a last resort, sleep for some time.
}
#
# Function that stops the daemon/service
#
do_stop()
{
        # Return
        #   0 if daemon has been stopped
        #   1 if daemon was already stopped
        #   2 if daemon could not be stopped
        #   other if a failure occurred
        [ -e $PIDFILE ] || return 1
        # Stop all processes under supervisord control.
        $SUPERVISORCTL stop all
        start-stop-daemon --stop --quiet --retry=TERM/30/KILL/5 --pidfile $PIDFILE \
             --name $NAME
        RETVAL="$?"
        [ "$RETVAL" = 2 ] && return 2
        # Wait for children to finish too if this is a daemon that forks
        # and if the daemon is only ever run from this initscript.
        # If the above conditions are not satisfied then add some other code
        # that waits for the process to drop all resources that could be
        # needed by services started subsequently.  A last resort is to
        # sleep for some time.
        start-stop-daemon --stop --quiet --oknodo --retry=0/30/KILL/5 --exec $DAEMON
        [ "$?" = 2 ] && return 2
        # Many daemons don't delete their pidfiles when they exit.
        rm -f $PIDFILE
        return "$RETVAL"
}
#
# Function that sends a SIGHUP to the daemon/service
#
do_reload() {
        #
        # If the daemon can reload its configuration without
        # restarting (for example, when it is sent a SIGHUP),
        # then implement that here.
        #
        start-stop-daemon --stop --signal 1 --quiet --pidfile $PIDFILE --name $NAME
        return 0
}
case "$1" in
  start)
        [ "$VERBOSE" != no ] && log_daemon_msg "Starting $DESC" "$NAME"
        do_start
        case "$?" in
                0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
                2) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
        esac
        ;;
  stop)
        [ "$VERBOSE" != no ] && log_daemon_msg "Stopping $DESC" "$NAME"
        do_stop
        case "$?" in
                0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
                2) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
        esac
        ;;
  #reload|force-reload)
        #
        # If do_reload() is not implemented then leave this commented out
        # and leave 'force-reload' as an alias for 'restart'.
        #
        #log_daemon_msg "Reloading $DESC" "$NAME"
        #do_reload
        #log_end_msg $?
        #;;
  restart|force-reload)
        #
        # If the "reload" option is implemented then remove the
        # 'force-reload' alias
        #
        log_daemon_msg "Restarting $DESC" "$NAME"
        do_stop
        case "$?" in
          0|1)
                do_start
                case "$?" in
                        0) log_end_msg 0 ;;
                        1) log_end_msg 1 ;; # Old process is still running
                        *) log_end_msg 1 ;; # Failed to start
                esac
                ;;
          *)
                # Failed to stop
                log_end_msg 1
                ;;
        esac
        ;;
  status)
        $SUPERVISORCTL status
        RETVAL=$?
        ;;
  *)
        #echo "Usage: $SCRIPTNAME {start|stop|restart|reload|force-reload}" >&2
        echo "Usage: $SCRIPTNAME {start|stop|restart|force-reload|status}" >&2
        exit 3
        ;;
esac

And here is a complete copy of my supervisord.conf file:

[unix_http_server]
file=/tmp/supervisor.sock   ; (the path to the socket file)
[supervisord]
logfile=/tmp/supervisord.log ; (main log file;default $CWD/supervisord.log)
logfile_maxbytes=50MB        ; (max main logfile bytes b4 rotation;default 50MB)
logfile_backups=10           ; (num of main logfile rotation backups;default 10)
loglevel=info                ; (log level;default info; others: debug,warn,trace)
pidfile=/tmp/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
nodaemon=false               ; (start in foreground if true;default false)
minfds=1024                  ; (min. avail startup file descriptors;default 1024)
minprocs=200                 ; (min. avail process descriptors;default 200)
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL  for a unix socket
[program:gateone]
command=/opt/gateone/gateone.py
directory=/opt/gateone
stdout_logfile=/opt/gateone/logs/supervisor.log
user=johnh