Worst Practice Lab VM Automation

I've started the process of switching my lab over from unmanaged to ansible. I've used Puppet and Salt quite extensively through work, but after a handful of false starts with the lab, I think ansible is the way to go. This is a series of what
many (including myself) would consider "worst practices", but which are more along the lines of "rapid iteration". The goal here
is to get something working in a short period of time, without spending hours, days, or weeks researching best practices.
This is instead something someone can put together on a Sunday afternoon, in between chasing after a 3 year old.

These are a handful of manual steps, each of which could be easily automated once you determine your "starting point".

Background: When I clone a VM in proxmox, it comes up with the hostname "xenial-template". I should be able to do something like I do with cloud-init under kvm, but I haven't gotten that far under the proxmox setup. Additionally, these hosts are not in dns until they are entered into the freeipa server. Joining a client to IPA will automatically create the entry. So the first thing I need to do to any VM is to set the hostname, fqdn, and then register it with IPA. My template
has a user called "yourtech", which I can use to login and configure the VM.

First, create an ansible vault password file: echo secret > ~/.vault_pass.txt. Next, create an inventory directory and set up an encrypted group_vars/all.

mkdir -p inventory/group_vars
touch inventory/group_vars/all

Add some common variables to all:

---
ansible_ssh_user: yourtech
ansible_ssh_pass: secret
ansible_sudo_pass: secret
freeipaclient_server: dc01.lab.ytnoc.net
freeipaclient_domain: lab.ytnoc.net
freeipaclient_enroll_user: admin
freeipaclient_enroll_pass: supersecret

Then encrypt it: ansible-vault --vault-password-file=~/.vault_pass.txt encrypt inventory/group_vars/all
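Should you need to check or change those values later, ansible-vault can open the encrypted file in place, using the same password file:

# view or edit the encrypted vars file later on
ansible-vault --vault-password-file=~/.vault_pass.txt view inventory/group_vars/all
ansible-vault --vault-password-file=~/.vault_pass.txt edit inventory/group_vars/all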

Generate inventory files.

With the following script, I can run ./add-new.sh example 192.168.0.121. If ansible fails, then I need to
troubleshoot. A better approach would be to add these entries into a singular inventory file (sketched after the
script below), or better yet, a database, providing a constantly updated and dynamic inventory. Put that on the later pile.

#!/usr/bin/env bash
# add-new.sh: generate a one-host inventory file and verify ansible connectivity

NEWNAME=$1
IP=$2
DOMAIN=lab.ytnoc.net
FQDN="${NEWNAME}.${DOMAIN}"
ANSIBLE_VAULT_PASSFILE=~/.vault_pass.txt
BASEDIR=~/projects/ytlab/inventory
FILENAME="${BASEDIR}/${NEWNAME}"
LINE="${FQDN} ansible_host=${IP}"

# freshly cloned VMs come up with new host keys, so skip host key checking
export ANSIBLE_HOST_KEY_CHECKING=False

echo "${LINE}" > "${FILENAME}"

echo "Removing any prior host keys"
ssh-keygen -R "${NEWNAME}"
ssh-keygen -R "${FQDN}"
ssh-keygen -R "${IP}"

echo "${FILENAME} created, testing"
ansible --vault-password-file "${ANSIBLE_VAULT_PASSFILE}" -i "${FILENAME}" "${FQDN}" -m ping -vvvv
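For reference, the "singular inventory file" alternative I mentioned would look something like this (the hostnames and addresses here are just made-up examples):

# inventory/lab - one consolidated static inventory file (sketch)
[lab]
pgdb01.lab.ytnoc.net ansible_host=192.168.0.120
pgdb02.lab.ytnoc.net ansible_host=192.168.0.121
sstorm01.lab.ytnoc.net ansible_host=192.168.0.122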

Let's go to work.

At this point, I should have a working inventory file for a single host and I've validated that ansible can
connect. Granted, I haven't tested sudo, but in my situation, I'm pretty sure that will work. But I haven't
actually done anything with the VM. It's still just this default template.
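If you do want to verify sudo before running anything, a quick ad-hoc check against the generated inventory file would be something like this (the "example" host is just the one created by the script; expect "root" back if sudo works, and use -s/--sudo on older ansible versions):

ansible --vault-password-file ~/.vault_pass.txt -i inventory/example example.lab.ytnoc.net -b -m command -a whoami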

FQDN

Ansible provides a module to set the hostname, but does not modify /etc/hosts to get the FQDN resolving. As with
many things, I'm not the first to encounter this, so I found a premade role holms/ansible-fqdn.

mkdir roles
cd roles
git clone https://github.com/holms/ansible-fqdn.git fqdn

This role will read inventory_hostname for fqdn, and inventory_hostname_short for hostname. You can override
this, but these are perfect defaults based on my script above.
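To see exactly what values the role will pick up, you can print those two variables with the debug module against the inventory file from the script:

ansible --vault-password-file ~/.vault_pass.txt -i inventory/pgdb02 all -m debug -a "var=inventory_hostname"
ansible --vault-password-file ~/.vault_pass.txt -i inventory/pgdb02 all -m debug -a "var=inventory_hostname_short"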

FreeIPA

Once again, we're saved by the Internet. alvaroaleman/ansible-freeipa-client is a ready-made role that installs the necessary freeipa packages and runs the
ipa-join commands.

# assuming still in roles
git clone https://github.com/alvaroaleman/ansible-freeipa-client.git freeipa

The values this role needs happen to perfectly match the freeipaclient_* variables I put in my all file earlier. I
think that's just amazing luck.

Make a playbook.

I call mine bootstrap.yml.

---
- hosts: all
  become: yes
  roles:
    - fqdn
    - freeipa

Execute

Let's run our playbook against host "pgdb02":

ansible-playbook -i inventory/pgdb02 --vault-password-file=~/.vault_pass.txt bootstrap.yml

Output:

ytjohn@corp5510l:~/projects/ytlab$ ansible-playbook -i inventory/pgdb02 --vault-password-file=~/.vault_pass.txt bootstrap.yml

PLAY ***************************************************************************

TASK [setup] *******************************************************************
ok: [pgdb02.lab.ytnoc.net]

TASK [fqdn : fqdn | Configure Debian] ******************************************

TASK [fqdn : fqdn | Configure Redhat] ******************************************
skipping: [pgdb02.lab.ytnoc.net]

TASK [fqdn : fqdn | Configure Linux] *******************************************
included: /home/ytjohn/projects/ytlab/roles/fqdn/tasks/linux.yml for pgdb02.lab.ytnoc.net

TASK [fqdn : Set Hostname with hostname command] *******************************
changed: [pgdb02.lab.ytnoc.net]

TASK [fqdn : Re-gather facts] **************************************************
ok: [pgdb02.lab.ytnoc.net]

TASK [fqdn : Build hosts file (backups will be made)] **************************
changed: [pgdb02.lab.ytnoc.net]

TASK [fqdn : restart hostname] *************************************************
ok: [pgdb02.lab.ytnoc.net]

TASK [fqdn : fqdn | Configure Windows] *****************************************
skipping: [pgdb02.lab.ytnoc.net]

TASK [freeipa : Assert supported distribution] *********************************
ok: [pgdb02.lab.ytnoc.net]

TASK [freeipa : Assert required variables] *************************************
ok: [pgdb02.lab.ytnoc.net]

TASK [freeipa : Import variables] **********************************************
ok: [pgdb02.lab.ytnoc.net]

TASK [freeipa : Set DNS server] ************************************************
skipping: [pgdb02.lab.ytnoc.net]

TASK [freeipa : Update apt cache] **********************************************
ok: [pgdb02.lab.ytnoc.net]

TASK [freeipa : Install required packages] *************************************
changed: [pgdb02.lab.ytnoc.net] => (item=[u'freeipa-client', u'dnsutils'])

TASK [freeipa : Check if host is enrolled] *************************************
ok: [pgdb02.lab.ytnoc.net]

TASK [freeipa : Enroll host in domain] *****************************************
changed: [pgdb02.lab.ytnoc.net]

TASK [freeipa : Include Ubuntu specific tasks] *********************************
included: /home/ytjohn/projects/ytlab/roles/freeipa/tasks/ubuntu.yml for pgdb02.lab.ytnoc.net

TASK [freeipa : Enable mkhomedir] **********************************************
changed: [pgdb02.lab.ytnoc.net]

TASK [freeipa : Enable sssd sudo functionality] ********************************
changed: [pgdb02.lab.ytnoc.net]

RUNNING HANDLER [freeipa : restart sssd] ***************************************
changed: [pgdb02.lab.ytnoc.net]

RUNNING HANDLER [freeipa : restart ssh] ****************************************
changed: [pgdb02.lab.ytnoc.net]

PLAY RECAP *********************************************************************
pgdb02.lab.ytnoc.net : ok=18 changed=8 unreachable=0 failed=0

Recap

Essentially, we created a rather basic inventory generator script, we encrypted some
credentials into a variables file using ansible-vault, and we downloaded some roles
"off the shelf" and executed them both with a single "bootstrap" playbook.

If I was doing this for work, I would first create at least one Vagrant VM and work through
an entire development cycle. I would probably rewrite these roles I downloaded to make them
more flexible and variable driven.

In case you got lost where these files go:

.
├── add-new.sh
├── bootstrap.yml
├── inventory
│   ├── group_vars
│   │   └── all
│   ├── pgdb01
│   ├── pgdb02
│   └── sstorm01
└── roles
    ├── fqdn
    └── freeipa

Salt The Earth

Learning Salt

I am just beginning to read up on SaltStack, but I am really liking it. It has a number of things I like from Ansible (separation of code and state), the targeting abilities of mcollective, and the centralized control of Puppet/Chef. Salt can be run masterless, but in a master/minion configuration, it uses a message queue (0mq) to control minions and get information back. All messages in this queue are encrypted using keys on both the minion and the master. If you distribute this key, you can consume salt-generated data in other programs.

Running commands across all minions could look like this:

salt '*' test.ping
salt '*' disk.percent

In these, test.ping and disk.percent are known as 'execution modules', which are essentially Python modules that contain defined functions. For example, disk is the execution module and percent is a function defined within it. Here, test.ping runs on all target hosts and returns True; disk.percent returns the percentage of disk usage on all minions.

You can also run ad-hoc commands.

salt '*' cmd.run 'ls -l /etc'

Of course, you can target blocks of systems by hostname (web* will match web, web1, web02, webserver, ...). Salt also has something called grains, which everyone else calls facts. You can target based on the grains, or use salt to provide a report based on them. The following command will return the number of CPUs for every 64-bit minion.

salt -G 'cpuarch:x86_64' grains.item num_cpus

I think this would work as well:

salt -G 'cpuarch:x86_64 and num_cpus:4' test.ping

If not, these would work (Compound match):

salt -C 'G@cpuarch:x86_64 and G@num_cpus:4' test.ping
salt -C 'G@os:Ubuntu or [email protected]' test.ping
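Either way, to see which grains are actually available to target on, salt can list them for you:

salt '*' grains.ls      # grain names only
salt '*' grains.items   # grain names and values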

Salt States

In Puppet, your manifests and modules are very closely coupled in Puppet code. In Ansible, things are separated into modules and "playbooks": YAML files detailing which modules and values ansible should use. Salt follows this pattern as well, separating execution modules from "Salt States", aka SLS formulas (aka state modules).

A sample SLS formula for installing nginx would look like this:

nginx:
  pkg:
    - installed
  service:
    - running
    - require:
      - pkg: nginx

Assuming you save that in the right spot (/srv/salt/nginx/init.sls), you can apply this state module to all servers starting with the name 'web'.

salt 'web*' state.sls nginx
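To have that state applied automatically rather than ad-hoc, the usual next step is a top file mapping targets to states; a minimal sketch, assuming the nginx state above lives at /srv/salt/nginx/init.sls:

# /srv/salt/top.sls
base:
  'web*':
    - nginx

With that in place, running salt 'web*' state.highstate applies everything assigned to those minions.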

So with one tool, you can query a large amount of data from all your minions, move them to a specific state, run ad-hoc style modules, etc. This can also all be expanded by writing your own python modules. Also, I'm mostly interested in the ability to target modules and groups of hosts in one command, but it's worth noting that salt will do scheduling of jobs just like puppet and chef do.

Foreman managed virtual datacenter

I ordered the KS2 from www.ovh.com - $50/month, 3.4GHz, 16GB RAM, and a 1TB software RAID setup. My plan is to set this
up as a single-server virtual datacenter. They also have the SP2 with twice the RAM and storage for $90, but I figured I'd test out
the cheaper option first. I can always upgrade and migrate to a larger server later if I get into heavier usage. The prices are rather cheap and they have scripts that will automatically provision physical servers.

I had this installed with CentOS 6. I tried first using the "ovh secure" kernel, but I could not get KVM working with that kernel, so I had it reinstalled with the "vendor" kernel. I allocated 20GB to "/" and the remainder (898GB) to "/data".

Installing kvm and libvirt is a simple yum command.

yum install kvm libvirt python-virtinst qemu-kvm
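On CentOS 6, the libvirt daemon also needs to be started and set to start on boot:

service libvirtd start
chkconfig libvirtd on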

Then, on my workstation, I installed virt-manager, which allowed me to graphically create and install virtual machines (I can do this by hand on the server, but virt-manager is a nice way to get started). The connection is done over ssh, so it will either ask for your username/password, or it can use ssh-key authentication (preferred).
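For reference, pointing virt-manager at the remote host over ssh looks something like this (replace the user and hostname with your own):

virt-manager -c qemu+ssh://root@yourserver/system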

I created /data/isos and /data/vms to hold my installation isos and virtual machines respectively. The trick I had to work out is that I couldn't just add "/data" as a directory-based storage volume; I had to make one pool for isos and one for vms. I also found that the default directory (/var/lib/libvirt/images) is rather difficult to remove. I disabled it and removed it, but it showed back up later. When creating through the dialog, virt-manager wants to put your vm image in "default".
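The same two directory pools can also be defined from the command line with virsh if you prefer (the pool names here are my own choice):

virsh pool-define-as isos dir --target /data/isos
virsh pool-define-as vms dir --target /data/vms
virsh pool-start isos && virsh pool-autostart isos
virsh pool-start vms && virsh pool-autostart vms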

Creating a new virtual machine using virt-manager and a downloaded ubuntu 12.04 iso image (in /data/isos) was rather slick. I created a new volume in /data/vms, set the memory and cpu and started it. The default networking is a NAT'd network in the 192.168.122.x/24 network. As ovh only provides 3 IP addresses for free, I'm content to start with this network for testing, but I plan to move to a different subnet mask.

If I need to nat ports, the libvirt site has a useful page on forwarding incoming connections.

iptables -t nat -A PREROUTING -p tcp --dport $HOSTPORT -j DNAT --to $GUESTIP:$GUESTPORT
iptables -I FORWARD -d $GUESTIP/32 -p tcp -m state --state NEW -m tcp --dport $GUESTPORT -j ACCEPT

I have been reading some good things about The Foreman, and how you can manage an infrastructure with it, so my next real VM will be an install of Foreman. This will hopefully let me set up an environment where I can build virtual machines and provision them automatically. I don't know (yet) if The Foreman will handle iptables rules on the host, but it seems to have the ability to call external scripts and be customized, so I should be able to provision NAT on the host when provisioning a new VM.

Foreman utilizes DHCP and PXE to install "bare metal" VMs, which means it wants to run DHCP itself, so we need a libvirt network without DHCP. To create my non-DHCP managed NAT, I copy the default network xml file, modify it with my new range, and remove the dhcp section.

cd /usr/share/libvirt/networks
cp default.xml netmanaged.xml

Modified netmanaged.xml:

<network>
  <name>managednat</name>
  <bridge name="virbr1" />
  <forward/>
  <ip address="172.16.25.1" netmask="255.255.255.0">
  </ip>
</network>
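If the new network doesn't show up on its own after dropping the file in place, it can be registered explicitly with virsh using the file created above:

virsh net-define /usr/share/libvirt/networks/netmanaged.xml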

It should show up with virsh net-list --all and I can activate it.

# virsh net-list --all
Name                 State      Autostart     Persistent
--------------------------------------------------
default              active     yes           yes
managednat           inactive   yes           yes
# virsh net-autostart managednat
Network managednat marked as autostarted

# virsh net-list --all
Name                 State      Autostart     Persistent
--------------------------------------------------
default              active     yes           yes
managednat           inactive   yes           yes

# virsh net-start managednat
Network managednat started

# virsh net-list --all
Name                 State      Autostart     Persistent
--------------------------------------------------
default              active     yes           yes
managednat           active     yes           yes

The gateway will be 172.16.25.1, and I will assign the IP 172.16.25.5 to my Foreman virtual machine, aptly called "builder". Once the basic ubuntu machine is installed by hand (hopefully the last one we do in this environment), I'll want access to it. Ideally, this would be behind a firewall with vpn access, but I haven't got that far yet. So for now, I'll just setup some NAT for port 22 and 443.

iptables -t nat -A PREROUTING -p tcp --dport 8122 -j DNAT --to 172.16.25.5:22
iptables -I FORWARD -d 172.16.25.5/32 -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
iptables -t nat -A PREROUTING -p tcp --dport 8143 -j DNAT --to 172.16.25.5:443
iptables -I FORWARD -d 172.16.25.5/32 -p tcp -m state --state NEW -m tcp --dport 443 -j ACCEPT
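These rules won't survive a reboot by themselves; on CentOS 6 the iptables service can save the running ruleset to /etc/sysconfig/iptables:

service iptables save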

Using "foreman-installer" is the most recommended method, ensuring that we have current packages directly from theforeman.org. I've installed 12.04 LTS (precise), so it's fairly straightforward, though I modified slightly from the installation documentation. The original instruction rely on running as root.

# get some apt support programs 
sudo apt-get install python-software-properties

# add the deb.theforeman.org repository
sudo bash -c "echo deb http://deb.theforeman.org/ precise stable > /etc/apt/sources.list.d/foreman.list"
# add the key
wget -q http://deb.theforeman.org/foreman.asc -O- | sudo apt-key add -
# install the installer
sudo apt-get update && sudo apt-get install foreman-installer

# run the installer
sudo ruby /usr/share/foreman-installer/generate_answers.rb

At this point, Foreman is running on port 443, or in my case "https://externalip:8143/". I can login with the username "admin" and the password "changeme".

I've been reading the manual more at this point, but I think my next step is to watch the video Foreman Quickstart: unattendend installation. If I can grok that (and it looks nicely step by step) I'll try and setup an unattended install.