Mocking Consul for fun and profit.

I’ve been creating a fun microservice tool that provides a single API frontend and merges data from multiple backends. Since the app relies entirely on external data, I was wondering how in the world I would write unit tests for it. It’s written in Python using the amazing apistar framework, and all of my external data so far is gathered using the requests library. The answer, it turns out, is requests-mock, which lets you create mock responses for requests calls.

The documentation is pretty straightforward, but I was having some trouble wrapping my head around how I would use it to test the code in my app. To start simple, I decided to mock consul, which is one of my data sources.

Get a value into consul

First off, let’s go ahead and set up. Go to https://www.consul.io/ and download the consul binary for your OS.

  1. Start consul in dev mode/foreground: consul agent -dev
  2. Insert a key: consul kv put foo bar
  3. Let’s request that key with curl. Be verbose, because there are some headers you’ll want later.
$ curl http://localhost:8500/v1/kv/foo -v
* Connected to localhost (127.0.0.1) port 8500 (#0)
> GET /v1/kv/foo HTTP/1.1
> Host: localhost:8500
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: application/json
< X-Consul-Index: 7
< X-Consul-Knownleader: true
< X-Consul-Lastcontact: 0
< Date: Thu, 04 Jan 2018 11:04:20 GMT
< Content-Length: 158
<
[
    {
        "LockIndex": 0,
        "Key": "foo",
        "Flags": 0,
        "Value": "YmFy",
        "CreateIndex": 7,
        "ModifyIndex": 7
    }
]

You may be wondering why “Value” is YmFy. That’s because consul uses base64 encoding. Running echo YmFy | base64 -d will give you the string bar.

Read consul with python requests

Awesome, now I can write a fancy python script to show off my foo. Create a file called requestkey.py.

import base64

import requests

URL = "http://127.0.0.1:8500/v1/kv"

def requestkey(key):
  url = "{}/{}".format(URL, key)
  r = requests.get(url)
  data = r.json()
  v = base64.b64decode(data[0]['Value'])
  # FYI b64decode returns bytes (b'bar') rather than a string, so decode it
  return v.decode()

if __name__ == '__main__':
  v = requestkey('foo')
  print("Here is my foo: {}".format(v))

Run it:

$ python requestkey.py
Here is my foo: bar

Live unit test

Let’s create a test.py file. This will ensure the value of foo is equal to bar.

import unittest
from requestkey import requestkey

class TestStringMethods(unittest.TestCase):

    def test_foo(self):
        v = requestkey('foo')
        self.assertEqual(v, 'bar')

if __name__ == '__main__':
    unittest.main()

And run it:

$ python test.py
.
----------------------------------------------------------------------
Ran 1 test in 0.011s

OK

This works great! But what if the consul server on the CI/CD box running my test.py has a different value for foo? Or no consul server at all?

(py3) jh1:mocktests ytjohn$ consul kv delete foo
Success! Deleted key: foo
(py3) jh1:mocktests ytjohn$ consul kv put foo bard
Success! Data written to: foo
(py3) jh1:mocktests ytjohn$ python test.py
F
======================================================================
FAIL: test_foo (__main__.TestStringMethods)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test.py", line 8, in test_foo
    self.assertEqual(v, 'bar')
AssertionError: 'bard' != 'bar'
- bard
?    -
+ bar


----------------------------------------------------------------------
Ran 1 test in 0.012s

FAILED (failures=1)

This is a problem. My code is still perfectly fine, but because the live data has changed, my test fails. That is what we hope to solve.

Let’s mock consul.

As I alluded to at the top of this post, I hope to solve this with requests-mock. I see some fancy features like registering URIs, but to start, I am just going to use the Mocker example from the documentation. It’s a good thing I did a curl request earlier to see what the actual response looks like.

import json
import requests_mock
import unittest

from requestkey import requestkey

class TestStringMethods(unittest.TestCase):

    def setUp(self):
        self.baseurl = 'http://127.0.0.1:8500/v1/kv'

    def test_foo(self):
        key = 'foo'
        url = '{}/{}'.format(self.baseurl, key)
        response = [{
            "LockIndex": 0,
            "Key": "foo",
            "Flags": 0,
            "Value": "YmFy",
            "CreateIndex": 7,
            "ModifyIndex": 7}]

        with requests_mock.Mocker() as m:
            m.get(url, text=json.dumps(response))
            v = requestkey('foo')
            self.assertEqual(v, 'bar')

if __name__ == '__main__':
    unittest.main()

Let’s try our test:

(py3) jh1:mocktests ytjohn$ consul kv get foo
bard
(py3) jh1:mocktests ytjohn$ python test.py
.
----------------------------------------------------------------------
Ran 1 test in 0.010s

OK

This is great. I can develop locally, store working examples in my test code, and test against that.

Requests is fun, but what about python-consul?

The truth is, I don’t talk to consul using requests and base64 decoding. For some reason, I thought it would be easier for you to follow along if I did straight requests. But in reality, most people are going to use python-consul. In fact, here is my getkey.py file doing just that.

import consul

def getkey(key):
  c = consul.Consul() # consul defaults to 127.0.0.1:8500
  index, data = c.kv.get(key, index=None)
  return data['Value']

if __name__ == '__main__':
  v = getkey('foo')
  print("Here is my foo: {}".format(v))

Now I’m going to rewrite my test.py to test both of these. I moved my response into the setUp method, renamed test_foo to test_request, and added a test_get to test my getkey function.

import json
import requests_mock
import unittest

from requestkey import requestkey
from getkey import getkey

class TestStringMethods(unittest.TestCase):

    def setUp(self):
        self.baseurl = 'http://127.0.0.1:8500/v1/kv'
        self.response_foo = [{
            "LockIndex": 0,
            "Key": "foo",
            "Flags": 0,
            "Value": "YmFy",
            "CreateIndex": 7,
            "ModifyIndex": 7}]

    def test_request(self):
        key = 'foo'
        url = '{}/{}'.format(self.baseurl, key)

        with requests_mock.Mocker() as m:
            m.get(url, text=json.dumps(self.response_foo))
            v = requestkey('foo')
            self.assertEqual(v, 'bar')

    def test_get(self):
        key = 'foo'
        url = '{}/{}'.format(self.baseurl, key)

        with requests_mock.Mocker() as m:
            m.get(url, text=json.dumps(self.response_foo))
            v = getkey('foo')
            self.assertEqual(v, 'bar')

if __name__ == '__main__':
    unittest.main()

Let’s see how this does:

$ python test.py
E.
======================================================================
ERROR: test_get (__main__.TestStringMethods)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test.py", line 35, in test_get
    v = getkey('foo')
  File "/Users/ytjohn/vsprojects/unsafe/mocktests/getkey.py", line 7, in getkey
    index, data = c.kv.get(key, index=None)
  File "/Users/ytjohn/.venvs/py3/lib/python3.6/site-packages/consul/base.py", line 538, in get
    params=params)
  File "/Users/ytjohn/.venvs/py3/lib/python3.6/site-packages/consul/std.py", line 22, in get
    self.session.get(uri, verify=self.verify, cert=self.cert)))
  File "/Users/ytjohn/.venvs/py3/lib/python3.6/site-packages/consul/base.py", line 227, in cb
    return response.headers['X-Consul-Index'], data
  File "/Users/ytjohn/.venvs/py3/lib/python3.6/site-packages/requests/structures.py", line 54, in __getitem__
    return self._store[key.lower()][1]
KeyError: 'x-consul-index'

----------------------------------------------------------------------
Ran 2 tests in 0.016s

FAILED (errors=1)

Oh, what disaster! My fancy getkey code is failing tests! What is x-consul-index anyway? It is response.headers['X-Consul-Index'], one of the headers we saw in the curl request. Fortunately, requests-mock allows you to provide headers as well.

  1. Add headers to setUp: self.headers_foo = {'X-Consul-Index': "7"} (yes, the value must be a string)
  2. Add the headers to your mock: m.get(url, text=json.dumps(self.response_foo), headers=self.headers_foo)
$ python test.py
..
----------------------------------------------------------------------
Ran 2 tests in 0.012s

OK

Outstanding. And for completeness, here is the final test.py:

import json
import requests_mock
import unittest

from requestkey import requestkey
from getkey import getkey

class TestStringMethods(unittest.TestCase):

    def setUp(self):
        self.baseurl = 'http://127.0.0.1:8500/v1/kv'
        self.headers_foo = {'X-Consul-Index': "7"}
        self.response_foo = [{
            "LockIndex": 0,
            "Key": "foo",
            "Flags": 0,
            "Value": "YmFy",
            "CreateIndex": 7,
            "ModifyIndex": 7}]

    def test_request(self):
        key = 'foo'
        url = '{}/{}'.format(self.baseurl, key)

        with requests_mock.Mocker() as m:
            m.get(url, text=json.dumps(self.response_foo))
            v = requestkey('foo')
            self.assertEqual(v, 'bar')

    def test_get(self):
        key = 'foo'
        url = '{}/{}'.format(self.baseurl, key)

        with requests_mock.Mocker() as m:
            m.get(url, text=json.dumps(self.response_foo), headers=self.headers_foo)
            v = getkey('foo')
            self.assertEqual(v, 'bar')

if __name__ == '__main__':
    unittest.main()

What is the point?

There isn’t much point in having a block of code produce a static value and then checking that it is that value. However, when we start taking actions based on values (live, maintenance, true, false, -1), we can definitely check whether our code behaves in an expected way based on a collection of sample data we store. I can also check how I handle incomplete data. A big part of my microservice correlates devices with network interfaces, ip addresses, and vlans. Not every interface has an ip, not every ip has a vlan, and not every network has a default gateway. I have to determine which ip is “primary”. So as I collect examples of devices with different configurations, I should be able to register urls and responses for each device. If my code is expecting a vlan to be a number, but instead receives a “None” – will I handle that or will I throw an exception?

Looking forward, I can envision having sample json data stored with functions to provide the desired response and headers needed.
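As a rough sketch of that idea (the fixture layout and the load_fixture/register_fixtures helpers here are hypothetical, not from my actual app), each sample response could live on disk next to the headers it needs, with one function registering everything:

import json

BASEURL = 'http://127.0.0.1:8500/v1/kv'

def load_fixture(name):
    # hypothetical helper: each fixture file stores a sample response
    # body plus the headers captured from a real curl request
    with open('fixtures/{}.json'.format(name)) as f:
        return json.load(f)

def register_fixtures(m, keys):
    # m is a requests_mock.Mocker(); register a mock URL for every sample key
    for key in keys:
        fixture = load_fixture(key)
        m.get('{}/{}'.format(BASEURL, key),
              text=json.dumps(fixture['body']),
              headers=fixture['headers'])

A test would then open a requests_mock.Mocker() as m, call register_fixtures(m, ['foo']), and exercise the code under test as usual.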

First impressions with emacs

Learning emacs really sucks. Let’s do this.

My current stack

Because I’m insane, I decided to give Emacs a try. While I’m still pretty pleased with Omnifocus, I do find limitations with it. I also store all kinds of notes in Quiver. Anytime I’m working on anything, I keep my notes in Quiver. I had been keeping a worklog in it, but it got goofy when work (and the associated notes) spanned multiple days. I also use PyCharm as my IDE. That I definitely like, but if I want to open a single file, I’m going to pop open vim, sublime text, or even just textedit.

Why I’m looking at Emacs

For years, I’ve been hearing amazing things about orgmode. It’s used for todo lists, project planning, documents, and journals. People also like using emacs as a python ide. Everyone that uses emacs seems to really like it, and it keeps showing up as a good-looking solution to different problems I’d like to solve. There’s even a slack client for it. I decided that I should really give emacs a shot before discarding any solution just because it happened to be emacs-based. You see, emacs has a super steep learning curve, and when you use it, you essentially join a cult.

Round 1

So I decided to dive in. I found a number of people recommending How to learn emacs as a good place for beginners. The first page has this juicy tidbit:

What I need from you is commitment (a couple of dedicated hours of study per week, and to use Emacs as your day-to-day editor) and patience (be willing to give up your favorite IDE’s features, at least for a month or two).

That’s pretty intimidating, but to be fair, Vim takes a long time to master. Those first starting out need to learn what a modal editor is, how to switch between insert and command mode, how to navigate, how to do search and replace, how to delete lines, and possibly how to use vim’s internal clipboard operations. That’s all before you get into customizing and extending the program to turn it into an ide.

I put a couple hours in the first weekend, and a little bit of time the following week going through the examples. But I got bored and real life kept me away.

Round 2

Seeing some time ahead of me, I figured I’d try again. I went back and forth between plain emacs, spacemacs, and prelude, and did research all over about how people got started with emacs. There are lots of heavy opinions on “starting pure” versus using a starter pack like spacemacs/prelude. For those with vim leanings, there is an “evil mode” that provides some vim keybindings and emulation. I came across the Mastering Emacs book, which got some good feedback on reddit.
I started reading a copy of the book with pure emacs open. It’s 277 pages long and I got to about page 30 before I started falling asleep. However, here are some key things to know:

  • Emacs isn’t based on files, but based on buffers (which may contain a file).
  • What we call a window, emacs calls a frame.
  • A frame contains multiple windows (or window panes).
  • You can assign any buffer to any window, or even have multiple windows showing the same buffer.

Round 3 – Just edit some python already

Screw reading books and working through tutorials. I’m just going to go ahead and start using emacs and then google every time I get stuck. That’s how I learned vim back in the late 90s, except I didn’t have google back then. In fact, I didn’t even have Internet at home back then. I had to go to school, search altavista, download manuals and deb packages to floppy disk, then take them home and transfer them.

I figure, just use the stupid program this week and expect normal operations to take longer while I work things out.

I don't know how to exit vim

So first things first, I took the popular advice and installed spacemacs, which gives me fancy color themes and evil mode.

  1. If you fire up emacs, it’s a decent gui, complete with helpful menus and mouse integration. You can open a file, edit it, save it, and exit almost as easily as in any other text editor. File -> Visit new file and File -> Open do make you type in the path to the file instead of giving you a file-open dialog, but there is a sort of autocomplete/directory-listing interface.
  2. Emacs with spacemacs takes a long time to load. It’s on par with pycharm’s slow load times, which kind of sucks if emacs is closed and you just want to open a file. The one book says that I should run emacs in a server mode and just use the emacsclient binary for all subsequent starts. Ok, fine – if I have emacs up all day long, that’s doable.
  3. Emacs can run a shell. You can run shell, which runs your default shell (bash in my case) in a buffer. Emacs fans call this the inferior shell. The “emacs shell”, or eshell, is promoted as superior. It’s a bash-inspired shell written in elisp. Both shells suck. I thought I’d be able to run a terminal in a window below the file I’m editing like I do in pycharm, but it’s extremely frustrating working in this shell. Ctrl-C is “C-c C-c”, and it’s really easy to end up typing over output from a previous command. Worst of all, I could not activate a virtualenv in either shell. This means I couldn’t drop to a shell and run ad-hoc python tools. While there may be some amazing tweaks and features these shells bring, I found it much like working on a bad serial connection.
  4. When I opened a python file, spacemacs detected this and asked if I wanted to install the python layer. This gave me nice syntax highlighting, but I didn’t get any autocomplete like I was hoping for. I know that “helm” is enabled, but there is perhaps something else I have to do for autocomplete to work.

projectile for projects

Spacemacs bundled an add-on called projectile. This is pretty nice. Incidentally, “bbatsov” writes a lot of stuff for emacs, including projectile and the previously mentioned prelude. People recommend prelude over spacemacs because they feel spacemacs could add complexity that confuses a beginner once they get past the initial learning curve. I.e., spacemacs is good for super beginners, but bad for intermediate users. Or so I’ve heard.

Anyways, this adds some nice functionality. Open a file inside a directory that is under git control, and it establishes all files in that directory as a project. If you have all your projects in a directory like ~/projects, you can teach emacs about all of them at once.

M-x projectile-discover-projects-in-directory ~/projects

Once you scan them all, you can run C-c p F to show a list of all known projects and select one to open. Open a file in any project and it puts you in project mode. There are shortcuts to see all files in the project, and if you open a shell it drops you in the project directory. You can also quickly switch between recently opened files and perform in-project search and replace.

org-mode

So far, org-mode has been my most positive experience. I wrote up a general outline of some software I’m working on, and I found it much easier to write and organize than when I write in markdown.

It’s not markdown, and expecting to be able to use things like markdown code blocks will disappoint you. But it’s definitely learnable and I can see myself using it.

You just go ahead and open a file ending in .org and start writing. Headers start with * instead of # but otherwise will be familiar to a markdown user.

The real nice bit of org mode comes as you learn the hotkeys and easy shortcuts. Key combinations will create new headings and list entries, or you can move an entire section up or down, indent or outdent it.

If you type <s and hit <TAB>, it expands to a ‘src’ code block:

#+BEGIN_SRC 

#+END_SRC
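
You can also put a language name after the src marker so the block gets that language’s syntax highlighting and editing support. For example (python here is just an illustration):

#+BEGIN_SRC python
def hello():
    return "hello from org-mode"
#+END_SRC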

I only did some basic outlining, but it seemed workable. I can see emacs/orgmode possibly replacing quiver as my primary notebook. It won’t be easy, because quiver has this nice feature where you just start writing a note, and that note may or may not get a title. There is no need to save that note to a file, because it’s saved within the quiver datastore. Emacs will want me to save a file for each note.

Probably a next step is to test out the orgmode-journal. After that, dive into orgmode and Getting Things Done. If I can put my omnifocus tasks into emacs and use it as a daily work notebook, then the time invested so far won’t entirely go to waste.

Follow up: I came across this orgmode day planner approach, which seems even more workable than the GTD approach linked above.

bits and bobs for 2017

Abusing my blog to record a few things. This is kind of a year-end wrap-up before Christmas, or a pre-pre New Years Eve post. I am off work till the end of the year, so this is kind of a good week to reflect and prepare for the upcoming year.

I cover a couple of different things in this post; find the section you want.

  • Home Security Cameras
  • Financial Tools (quicken, ledger, hledger, beancount)

Security Cameras

tl;dr – I recommend Q-See NVR, refurbished 16-channel for $250 with 2TB drive

I’ve got a few cameras around my house. Two of them are analog cameras on a cheap LA View dvr bought off of woot (I’ve replaced it twice). I’ve run zoneminder to pull the RTSP stream from it.

I also have some really cheap ESCAM 720p cameras. These things are amazingly cheap, and can be had for under $40. Zoneminder can pull these in as well.

The problem is that I keep finding zoneminder in some broken state. I am also not a fan of the “each frame is a file” approach. I’ve started using shinobi. I like that better, but feel limited. Also, several times a week, one of the camera feeds goes dark and I have to re-enable it. I got a Windows PC set up and tried out iSpy, GenisuVision, and Xeoma. None of them have really stood out as a great system.

I decided to try out a more expensive hardware NVR. First I tried an Amcrest NVR, but it couldn’t work with any of my existing cameras. Returned. I have a colleague who is a big fan of Q-See analog DVRs, and the mobile app is pretty slick. I found a good deal on a refurbished 16-channel NVR for $250 with a 2TB hard drive included. This was an instant success. It picked up my ESCAM right away and started recording.

The mobile app is a dream.

I do get higher resolution images, but when scaled for mobile playback and in a screenshot, obviously the resolution suffers.

Downsides: accessing it with a web browser sucks – on MacOS it requires Safari and a binary plugin. However, they do have desktop clients for Mac and Windows. Once again, Linux is left out in the cold. I might still end up running Shinobi against the Q-See (or the cams directly) for a simple remote web interface.

The other downside is that I couldn’t get my analog dvr added to the system. This isn’t too big of a downer, because I’m going to replace those analog 480p cams with the higher quality 720p ESCams (maybe eventually 1080p cams).

Finances

tl;dr – I’m going to switch from Quicken to beancount for double-entry plain-text accounting.

I am planning to get a better handle on my finances in 2018. We’re meeting all our bill payments, but I’m definitely not where I want to be with knowing where our money is going and planning for the future. For years, I used nothing substantial. I would do some balancing in spreadsheets, or try to use Mint, and I had Quicken for my business. In 2016, I used hledger for about 4 months to track finances. I really liked the concept of plain text accounting, but ultimately ended up purchasing Quicken and using that through the end of 2016 and all of 2017. I can essentially take a Tuesday and sync all my transactions down and reconcile it. In the world of personal finance, there are several camps, but two big ones are those that prefer syncing historical data (Mint, Quicken) and those that want you to budget every transaction in advance, such as You Need A Budget and Every Dollar. There is overlap, especially since YNAB and EveryDollar have added syncing to their offerings. Plain text accounting/ledger/hledger falls into the second camp, with no sync capabilities.

That being said, I have used a program called reckon to import my main bank account into hledger. You go onto your bank website, download a CSV for a certain date range, and import it. Even with reckon, it was time consuming, and that’s what led me to switch to Quicken. However, after using Quicken for 1.5 years, that can get time consuming as well. My family and I have a handful of credit cards, a mortgage, a car loan, checking accounts, savings accounts, a 401k, a Roth IRA, college savings, student loans, and a lot of transactions. For the most part, the bulk of our activity centers around a joint checking account. Just maintaining that one account in Quicken is a big time sink.

If I don’t update every Tuesday, it can take several hours to catch it up. This is because Quicken might miss or duplicate a transaction from the bank. Or something weird will happen. I might have everything caught up perfectly, and then the next time I’m in, I’ll discover my balances are off going back 3 months. I’ll have to spend time comparing statements and daily balances, going almost transaction by transaction – finding the most recent time when the balances match, then moving forward and fixing whatever caused it to diverge. I’ll get things looking correct, then I’ll jump forward a month and realize I had missed a divergence somewhere and I’ll have to go back. By the time I get the main account squared away, I don’t really feel like validating all the other accounts. If my Discover card balance is off, I’ll just go in and add a BALANCE ADJUSTMENT entry to bring it in line. I was trying to split my loan payments between principal and interest, but that went by the wayside.

Since I’m spending all of this time on Quicken reading every statement anyways, I decided I wouldn’t be losing much by going back to ledger. In fact, some banks such as US Bank have stopped offering integration with Quicken. So I’m going to start a brand new file and start tracking. This time, I’m going to dig around into web scraping. There are a lot of people out there who write tools to automatically log into their bank and download their CSV files. If I can semi-automate their retrieval, that will be a big win. I will also continue to use Quicken to at least sync the data, mainly to keep it as a backup if I decide to stop using ledger. I probably will not use it, but there is also a Quicken-to-ledger converter.

While I was reviewing hledger, I found another system called beancount. It is another plain-text double-entry accounting system, but it’s designed to have less trust in the user entering data (https://docs.google.com/document/d/1dW2vIjaXVJAf9hr7GlZVe3fJOkM-MtlVjvCO1ZpNLmg/edit?pli=1#heading=h.2ax1dztqboy7). There is a ledger2beancount tool, so I can import any ledger files I had previously or make along the way (though right now I’m looking at a fresh start), and beancount itself provides a solid export to ledger.
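
To give a flavor of what double-entry plain text looks like, a beancount transaction is roughly this (accounts and numbers invented for illustration):

2018-01-02 * "Giant Foods" "Weekly groceries"
  Liabilities:CreditCard:Discover  -45.23 USD
  Expenses:Food:Groceries           45.23 USD

Every transaction has to balance to zero across its postings, which is exactly the kind of check that would have caught the missed and duplicated entries I kept fighting in Quicken.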

I’m going to start with beancount and see where it takes me. I might bounce a bit between beancount and ledger/hledger along the way. Beancount has some really nice web reports, and their example user in the tutorial sounds rather familiar.

Worst case, I can drift back to Quicken.

Vault Standup

This is a little walkthrough of setting up a “production-like” vault server with an etcd backend (not really production: no TLS, and one person holds all the keys). Hashicorp Vault is incredibly easy to set up. Going through the dev walkthrough is pretty easy, but when you want to get a little more advanced, you start getting bounced around the documentation. So these are my notes from setting up a vault server with an etcd backend and a few policies/tokens for access. Consider this part 1; in “part 2”, I’ll set up an LDAP backend.

Q: Why etcd instead of consul?
A: Most of the places I know that run consul run it across multiple datacenters and a few thousand servers, and it interacts with lots of different services. Even if the secrets are protected, the metadata is quite visible. I want a rather compact and isolated backend for my eventual cluster.

Let’s get started.

First off, create a configuration file for vault.

vaultserver.hcl:

ytjohn@vaultcore01:~$ cat vaultserver.hcl
storage "etcd" {
  address  = "http://localhost:2379"
  etcd_api = "v2"
  path = "corevault"
}

listener "tcp" {
  address = "0.0.0.0:8200"
  tls_disable = 1
}

disable_mlock = true
cluster_name = "corevault"

Start the server (in its own terminal)

ytjohn@vaultcore01:~$ vault server -config=vaultserver.hcl
==> Vault server configuration:

                     Cgo: disabled
              Listener 1: tcp (addr: "0.0.0.0:8200", cluster address: "0.0.0.0:8201", tls: "disabled")

Init the server

dfzmbp:~ ytjohn$ export VAULT_ADDR=http://vaultcore01.pool.lab.ytnoc.net:8200
dfzmbp:~ ytjohn$ vault init
Unseal Key 1: f9XJwuxla/H86t8pbWVPnI6Tfi3nQtkasq303Oi8B+ep
Unseal Key 2: jFqEmE1c/lei+C1aIju6JM2t5fSI534g26E7Nv83t9RV
Unseal Key 3: ty/P+Jubm1BukPcdZ16eJFD0JQ9BFGqOSgft35/fvHXr
Unseal Key 4: 6k4aPjuKgz0UNe+hTVAOKUzrIvbS9w8UszB0HX3Au496
Unseal Key 5: PYNjRe9vBvHAGE9peiotrtjoYuVlAV/9QJ0NvqZScd2a
Initial Root Token: b6eac78d-f278-4d32-6894-a8168d055340

That Initial Root Token is your only means of accessing the vault once it’s unsealed. Don’t lose it until you replace it.

And this creates a directory in etcd (or consul)

ytjohn@vaultcore01:~$ etcdctl ls
/test1
/corevault
ytjohn@vaultcore01:~$ etcdctl ls /corevault
/corevault/sys
/corevault/core

Unseal it:

dfzmbp:~ ytjohn$ vault unseal
Key (will be hidden):
Sealed: true
Key Shares: 5
Key Threshold: 3
Unseal Progress: 1
Unseal Nonce: d860cb16-f084-925d-6f41-d80ef15e297c
dfzmbp:~ ytjohn$ vault unseal
Key (will be hidden):
Sealed: true
Key Shares: 5
Key Threshold: 3
Unseal Progress: 2
Unseal Nonce: d860cb16-f084-925d-6f41-d80ef15e297c
dfzmbp:~ ytjohn$ vault unseal
Key (will be hidden):
Sealed: false
Key Shares: 5
Key Threshold: 3
Unseal Progress: 0
Unseal Nonce:
dfzmbp:~ ytjohn$ vault unseal
Vault is already unsealed.

Now let’s take that root token and save it in our home directory. This isn’t exactly safe – it’s the all-powerful root token, and you should create a user token for yourself. But that’s later.

Save your token (or export it as VAULT_TOKEN), then write and read some secrets.

echo b6eac78d-f278-4d32-6894-a8168d055340 > ~/.vault-token
dfzmbp:~ ytjohn$ vault write secret/hello value=world
Success! Data written to: secret/hello
dfzmbp:~ ytjohn$ vault read secret/hello
Key                 Value
---                 -----
refresh_interval    768h0m0s
value               world

dfzmbp:~ ytjohn$ vault read -format=json secret/hello
{
    "request_id": "a4b199e7-ff7c-e249-2944-17424bf1f05c",
    "lease_id": "",
    "lease_duration": 2764800,
    "renewable": false,
    "data": {
        "value": "world"
    },
    "warnings": null
}

dfzmbp:~ ytjohn$ helloworld=`vault read -field=value secret/hello`
dfzmbp:~ ytjohn$ echo $helloworld
world
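
If you want to read that secret from code instead of the CLI, Vault’s HTTP API takes the token in an X-Vault-Token header. Here’s a minimal Python sketch with requests, reusing the address and root token from above (again, in a real setup you’d use a non-root token):

import requests

VAULT_ADDR = 'http://vaultcore01.pool.lab.ytnoc.net:8200'
TOKEN = 'b6eac78d-f278-4d32-6894-a8168d055340'  # the initial root token from init

# GET /v1/secret/hello returns JSON; the secret's key/value pairs live under 'data'
r = requests.get('{}/v1/secret/hello'.format(VAULT_ADDR),
                 headers={'X-Vault-Token': TOKEN})
print(r.json()['data']['value'])  # world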

Ok, that’s the basics of getting vault up and running. Now we want to let more users access it. What I want is to create three “users” and give them each a path.

  • infra admins – able to create, read, and write to secret/infra/*
  • infra compute – work within the secret/infra/compute area
  • infra network – work within the secret/infra/network area

infraadmin.hcl

path "secret/infra/*" {
  capabilities = ["create"]
}

path "auth/token/lookup-self" {
  capabilities = ["read"]
}

infracompute.hcl

path "secret/infra/compute/*" {
  capabilities = ["create"]
}

path "auth/token/lookup-self" {
  capabilities = ["read"]
}

infranetwork.hcl

path "secret/infra/network/*" {
  capabilities = ["create"]
}

path "secret/infra/compute/obm/*" {
  capabilities = ["read"]
}

path "auth/token/lookup-self" {
  capabilities = ["read"]
}

Now, we write these policies in.

dfzmbp:vault ytjohn$ vault policy-write infraadmin infraadmin.hcl
Policy 'infraadmin' written.
dfzmbp:vault ytjohn$ vault policy-write infracompute infracompute.hcl
Policy 'infracompute' written.
dfzmbp:vault ytjohn$ vault policy-write infranetwork infranetwork.hcl
Policy 'infranetwork' written.

Let’s create a token “user” for each policy.

dfzmbp:vault ytjohn$ vault token-create -policy="infraadmin"
Key             Value
---             -----
token           d16dd3dc-cd9e-15e1-8e41-fef4168a429e
token_accessor  50a1162f-58a2-474c-466d-ec68fac9a2f9
token_duration  768h0m0s
token_renewable true
token_policies  [default infraadmin]

dfzmbp:vault ytjohn$ vault token-create -policy="infracompute"
Key             Value
---             -----
token           d156326d-1ee6-7a93-d9d3-428e2211962d
token_accessor  daf3beb4-6c31-4115-2d00-ba811c50b05b
token_duration  768h0m0s
token_renewable true
token_policies  [default infracompute]

dfzmbp:vault ytjohn$ vault token-create -policy="infranetwork"
Key             Value
---             -----
token           84faa448-20d9-b472-349f-1053c81ff4c9
token_accessor  68eea7ec-78c0-4be1-03c4-f2ec155b66de
token_duration  768h0m0s
token_renewable true
token_policies  [default infranetwork]

Let’s log in with the infranetwork token and attempt to write to compute. I have not yet created secret/infra/compute or secret/infra/network, and I’m curious whether infraadmin is needed to make those first.

dfzmbp:vault ytjohn$ vault auth 84faa448-20d9-b472-349f-1053c81ff4c9
Successfully authenticated! You are now logged in.
token: 84faa448-20d9-b472-349f-1053c81ff4c9
token_duration: 2764764
token_policies: [default infranetwork]
dfzmbp:vault ytjohn$ vault write secret/infra/compute/notallowed try=wemust
Error writing data to secret/infra/compute/notallowed: Error making API request.

URL: PUT http://vaultcore01.pool.lab.ytnoc.net:8200/v1/secret/infra/compute/notallowed
Code: 403. Errors:

* permission denied
dfzmbp:vault ytjohn$ vault write secret/infra/network/allowed alreadyexists=maybe
Success! Data written to: secret/infra/network/allowed

I was blocked from creating a path inside of compute, and I didn’t need secret/infra/network created before making a child path. That infraadmin account is really not needed at all. Let’s go ahead and try infracompute.

$ vault auth d156326d-1ee6-7a93-d9d3-428e2211962d # auth as infracompute
$ vault write secret/infra/compute/obm/idrac/oem username=root password=calvin
Success! Data written to: secret/infra/compute/obm/idrac/oem
$ vault read secret/infra/compute/obm/idrac/oem
Error reading secret/infra/compute/obm/idrac/oem: Error making API request.

URL: GET http://vaultcore01.pool.lab.ytnoc.net:8200/v1/secret/infra/compute/obm/idrac/oem
Code: 403. Errors:

* permission denied

Oh my. I gave myself create, but not read, permissions. New policies.

infranetwork.hcl

path "secret/infra/network/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}

path "secret/infra/compute/obm/*" {
  capabilities = ["read", "list"]
}

path "auth/token/lookup-self" {
  capabilities = ["read"]
}

infracompute.hcl

path "secret/infra/compute/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}

path "auth/token/lookup-self" {
  capabilities = ["read"]
}

Let’s update our policy list and clean up.

vault auth b6eac78d-f278-4d32-6894-a8168d055340 # auth as root token
vault policy-delete infraadmin # delete unneeded infraadmin policy
vault token-revoke d16dd3dc-cd9e-15e1-8e41-fef4168a429e # remove infraadmin token
vault policy-write infranetwork infranetwork.hcl
vault policy-write infracompute infracompute.hcl

Try again:

$ vault auth d156326d-1ee6-7a93-d9d3-428e2211962d # auth as infracompute
Successfully authenticated! You are now logged in.
token: d156326d-1ee6-7a93-d9d3-428e2211962d
token_duration: 2762315
token_policies: [default infracompute]
$ vault read secret/infra/compute/obm/idrac/oem
Key                 Value
---                 -----
refresh_interval    768h0m0s
password            calvin
username            root

And as network

$ vault auth 84faa448-20d9-b472-349f-1053c81ff4c9 #infranetwork
$ vault list secret/infra/compute
Error reading secret/infra/compute/: Error making API request.

URL: GET http://vaultcore01.pool.lab.ytnoc.net:8200/v1/secret/infra/compute?list=true
Code: 403. Errors:

* permission denied
$ vault list secret/infra/compute/obm
Keys
----
idrac/

$ vault list secret/infra/compute/obm/idrac
Keys
----
oem

$ vault read secret/infra/compute/obm/idrac/oem
Key                 Value
---                 -----
refresh_interval    768h0m0s
password            calvin
username            root

Camp night

My wife and son will be spending the night camping outside. I’d love to join them, but someone has to stay inside and have the entire bed to himself.

The Cheap Machete Problem

This weekend, I was finally clearing out the bit of walk between my pole barn and the hill behind it. Erosion has been filling it in, making it hard to get back there with any sort of push mower. Vines and brush have been filling in around it. I did what I could with the trimmer, then I went back in with the machete. Now, this machete was $6 at Harbor Freight and I haven’t used it much because the cheap plastic handle was starting to break.

I didn’t get a before photo, but one can see the kind of brush I was dealing with slightly up the hill. It wasn’t long before the handle came completely apart. The blade was fine, but the only thing holding the handle on the blade was.. my hand. I eventually ditched the handle, wrapped the base with 550 cord, and was able to finish, though my rope handle also started to unravel.

Last night I considered running up to Harbor Freight today to pick up another cheap machete. They’re $6 and I don’t use them that often. But I would really like to be sure that the machete gets through a full job. There’s also this adage concerning tools: buy the cheap one first, and if you use it enough to wear it out, then buy a more expensive one. I went and looked on amazon, and they started off around $14. On these, I saw some reviews about the blades bending right away or rusting. Well.. this started me on a pretty dangerous journey to find the best machete to buy.

My first step on this journey was a pair of articles on bushcraftpro.com, one focused on clearing brush and another on chopping wood. Both are written by the same author and follow a similar format (buying a better machete for the husband to use). They certainly opened my eyes to the much larger world of machetes. The machetes I’m used to seeing, with a blade and saw on the back, are common but never near the top of the list. A number of people prefer machetes over hatchets for chopping wood. There’s a youtube video showing proper chopping technique. I also learned that “the best” machete you can get is probably anything by Condor Tool & Knife. From their line, expect to pay around $50 to $90. This was a far cry from $14. I said “that’s too much, and this article is bunk” and started looking for other opinions. I was also awoken to the idea that instead of the hatchet I keep in my car, I might want a machete.

I came across survival prep forums, with people making their machete picks based on the impending collapse of society and/or the zombie apocalypse. I learned that machetes are “old hat” and I should consider Malaysian parangs or Nepalese kukris instead. I found the woodman’s pal, which looks amazing and has stories of WW2 servicemen using them against enemies with katanas. I discovered Gerber used to make a good machete, but then they lowered the quality, so now it sucks.

After all this research, I went to bed thinking of what kind of awesome machete, Parang, or Kukri I would buy. Something I could keep in the car for brush clearing emergencies, perhaps with a sheath I could hook to my belt. Something I could use while camping or clearing out more brush from around the buildings. I had machete fever.

This morning, I woke with a slightly clearer head. I’ve used a machete about twice in the last 6 years. I’ve only once encountered a fallen tree on the road that I can recall. In that case, I had no tools to clear it with, but was able to edge the car around it and leave the problem for the next unfortunate traveler. I keep thinking I’d like to go camping, but I haven’t done that in 20 years. I doubt buying an $80 knife would change that. If anyone learned how much I paid for my expensive knife, they would ridicule me, and rightly so. After a couple months, I would have forgotten all the details of why it was so incredible and not be able to justify it other than “better handle”.

This has been about machetes, but it’s really about the danger of learning too much. You start looking for something a little better. Then you learn that the one simple tool you never really thought about has an entire world of clashing opinions around it, and suddenly you start identifying with people who (claim to) use that tool for things that you never actually do.

So what blade am I planning to get? I’m not sure yet, but I do have it narrowed down.

  • The most sensible blade would probably be the $17 Whetstone Machete, recommended in the first brush-clearing link above. It is full-tang with a reliable handle. The price is under $20, and it would probably do anything I plan to do with it.
  • The $20 Ontario Knife Co 1-18″ Military Machete has slightly better reviews than the Whetstone above, but some point out that the handle is a bit slippy.
  • The third (more expensive) choice, available for $40, is the Condor Eco Parang Machete. This has a shorter, 11-inch blade and claims to have an unbreakable handle. “This tool’s high impact Polypropylene handle is strong and indestructible. These handles are molded directly into the machetes and knives blades making them impossible to separate”. It only has 11 reviews on Amazon, but I watched a youtube video of reviewers going from “meh” to impressed while chopping wood, shaving wood, and clearing brush. They mention the longer Condor Bushcraft, which is about $10 more.
  • The Condor Eco Golok as reviewed by the same guy can be gotten for $35 and also looks pretty impressive. I think I like the Eco Parang a bit more.

Frustrations

This morning I took a meter I was working on outside so I could take it apart and watch my son run around the yard. I planned ahead and took a box to hold the parts. After I had gotten a couple screws out, the wind picked up and blew my box into the yard. I can’t find the funny screws in the grass. I should have left them out of the box.

All The Changes

tl;dr

Here’s a quick summary of changes that have taken place:

  • A new site Nifty Noodle People has been launched
  • BCARS has been moved from mezzanine to wordpress and re-organized
  • A new community forum site has been launched: https://community.yourtech.us/
  • Comments for BCARS and YourTech use the community site now.
  • YourTech.us and YourTech Community now live on a dedicated VPS.
  • YourTech.us is now using WordPress, though all content is still generated in Markdown.
  • Soon, other sites will migrate as well.

Launching A New Site

Over the course of this last week (really a good bit of this year) I’ve been doing a lot more web work. In February, I launched Nifty Noodle People, an event website to promote BCARS‘s rebranded Skills Night 2.0. After trying many, many different systems, I settled on WordPress. WordPress is something I moved away from back in 2012. However, for a single-purpose site, WordPress really impressed me. It impressed me so much that I decided I should redo the BCARS site under wordpress as well. I had been using Mezzanine, a Django-based CMS, to manage their site and mine. But Mezzanine has been showing its age and often causing more problems than it’s worth when it comes to doing updates or adding things like an event calendar.

BCARS Changes

I set up a BCARS development wordpress site and started importing content into it. I spent a lot of time looking at different calendars. For Nifty Noodles, I had used The Events Calendar, and it’s a really nice calendaring system. But when I tried to use it for BCARS, I ended up not liking the formatting options. I went back into research mode and ultimately settled on time.ly. I even picked up their Core+ package, which lets me re-use vendors and organizers. This let me add in recurring events like meetings and weekly nets, and it allows people viewing the site to filter between regular and featured events (like a VE session).

As I was secretly working on this, it was brought up at a club meeting that the club would like to see a way to buy and sell gear on the site. So I added a bbPress forum to the development site. Then I launched it silently on April 24th. It has gotten pretty solid reviews from people visiting it.

Server Move

As I was doing all this work, I observed that my Dreamhost VPS was prone to crashing, often running out of memory and getting rebooted. I also made the alarming discovery that I was paying a lot more each year than I had remembered. I decided it was time to go searching. I had stuck with Dreamhost because of their nice control panel. They made it easy to spin up new sites, sub-domains, and “unlimited” everything. But it’s time to move on.

I looked at web hosts, then I looked at plain VPSes. I discovered that OVH had some really good pricing on SSD VPSes. A couple years ago, I would have balked at “wasting time” managing a server in order to do something simple like pushing web content. But my skills with config management have come a long way over the last 5 years. I decided I would use Ansible to manage the VPS, using the myriad of roles out there to do so. I’ll hopefully write more on that later. But in short, I’ve got roles installing mongodb, mysql, nginx, letsencrypt, and managing users. I couldn’t find a suitable role to manage nginx vhosts, especially in a way that starts with a vhost on port 80 and doesn’t clobber the information letsencrypt puts in when it acquires a certificate. I hope to make a role that maintains the http and https config separately, only putting in the https configuration if the certificate files exist.

But I digress.

Community Forums

During all this, I had been giving lots of thought to moving YourTech to wordpress as well. It’s a bit more challenging because I write all my notes in Markdown, which I then convert into posts. I started markdown blogging in 2012 and have shifted platforms several times since, most recently to Mezzanine. I was also thinking of better ways to engage the audiences of YourTech, BCARS, and Nifty Noodles. I had come across this article about replacing Disqus (which I had used) with Github comments. While I liked the idea, I knew it wouldn’t work for my goals. I kept coming back to forum software. I found three modern variants: Discourse, NodeBB, and Flarum. Of the three, I like Flarum the best. Unfortunately, Flarum is still under heavy development and not recommended for production use. The authors can’t yet guarantee that you’ll be able to preserve data through upgrades; they want the flexibility to make changes to the database structure as they develop features. So I went with the next best, which is NodeBB.

NodeBB has a blog comments plugin that allows you to use NodeBB to handle comments on your blog. The pieces all started coming together. I installed NodeBB on my VPS as https://community.yourtech.us/. I changed the links on the BCARS forums to point to this new community site, and integrated comments for BCARS.

YourTech Move

This weekend, I decided to pull the plug on YourTech.us and migrate it simultaneously into wordpress and onto the new server. I knew this would cause downtime, but since my blog is not commercial, and not exactly in the Alexa Top 500, I wasn’t too concerned. If anyone did notice downtime between the 5th and 7th, let me know below.

The move was not without hitches. I did have a markdown copy of all my posts, but I had to add yaml frontmatter to the top of them for the github-to-wordpress sync plugin to work. Then I discovered that the plugin ignores my post_date and just makes all my posts match the time of the sync. Using the same repository I had been using in development caused issues as well. But eventually, I got all my posts imported with their original post dates.
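
For the curious, the frontmatter is just a small YAML block at the top of each markdown file, something along these lines (the exact field names here are illustrative, not necessarily what the plugin expects):

---
post_title: All The Changes
post_date: 2017-05-07 20:00:00
layout: post
---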

What I didn’t import was my resume and personal history. My contact page I did import, but it is rather out of date, so I feel I should update it soon. I want to rethink what I have on all three pages and how I present them, so that’s a future project.

Finally, I discarded the handful of disqus comments I had and integrated the comment system with YourTech Community.

Future Plans

  • I still need to migrate BCARS, Nifty Noodle People, and other sites away from Dreamhost. But I hope those moves will be pretty painless, since each will be a direct copy and a DNS change.
  • I made YourTech.us look similar to how it did before the move, but I am not sure I’ll keep that look going forward.
  • Once Flarum becomes more production-like and they build a NodeBB importer (and comment integration), I’ll quite possibly move to that.
  • Ultimately, I hope these changes will motivate me to write more frequently, now that I can easily post from my phone or web.

things are in a state of flux

UPDATE: Content has been re-added, but the published date information is still being corrected.

I am migrating the site to a new server and from mezzanine to wordpress.

There’s always a few things to work out, and I should be able to restore the content sometime this weekend.