
These are custom PC-based firewalls running OPNsense, an open-source firewall system. They are deployed at BCARS sites, connected to Crowsnest, and provide firewall and routing services to the rest of the site.


Specs

  • Micro-ITX PC, EMB-CV1 motherboard
  • 12V/5A DC power supply
  • Intel Atom D2550 @ 1.86 GHz
  • 8 GB DDR3 RAM
  • 64 GB mSATA drive

GMRS is pretty cheap and easy

I got myself a GMRS license 2 years ago. They are $80 for 5 years, and they allow an entire "immediate" family to use it. That's spouse, children, parents, in-laws. I thought I was going to play around with some GMRS repeater/text data modes, but I never did.

Fast forward to more recently. I picked up a 4-pack of Baofeng BF-888S radios. These go for about $13 apiece, or $42 for a 4-pack. They're 2-watt, channelized (16 channels) radios. You program them with a computer, which makes them dead simple for others to use. I gave one to my wife's sister, who lives 2 miles down the road. She can talk crystal clear to my son from inside her house. I drove around town with one and was able to talk to my son as well.

Adding a frequency list to the back is a good idea if you want to coordinate channels with someone else.

FRS is limited to specific radios and 1/2 watt, and these BF-888S radios are NOT FRS compliant. GMRS allows 5 watts on the FRS shared frequencies and 50 watts on dedicated ones. Amateur radio requires a control operator to be physically near the radio. With today's cheap radios, you can get high-powered FRS radios, use MURS frequencies, or find some off-the-books frequencies; there is a lot of space between the FRS channels, and there are some old airplane-to-ground cellular frequencies that have been phased out. No one is monitoring, and even at 5 watts, you're not going to bother anyone enough to draw enforcement.

However, GMRS is probably the cheapest and easiest way to get long-range legal communications going for a family (or small business). In my case, if you consider that 4 people will be using it, it comes to $4/user/year. We could even put a repeater on the roof, or some higher-powered vehicle antennas if desired. We probably won't, because the whole goal is to replace my son's (now dead) walkie-talkies with something that really works. The fact that he can talk to his aunt down the road is a big bonus.

ansible flush handlers

In Ansible playbooks, handlers (such as one to restart a service) normally run at the end of a play.
If you need Ansible to run the notified handlers between two tasks, there is "flush_handlers".

  - name: flush handlers
    meta: flush_handlers
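For instance, in a hypothetical playbook (the task names, template, and health-check URL here are invented), flush_handlers makes the restart happen before the verification task instead of at the end of the play:

```yaml
- hosts: web
  tasks:
    - name: deploy nginx config
      template:
        src: nginx.conf.j2
        dest: /etc/nginx/nginx.conf
      notify: restart nginx

    # force notified handlers to run NOW instead of at the end of the play
    - name: flush handlers
      meta: flush_handlers

    # by this point nginx has already been restarted with the new config
    - name: verify nginx is serving requests
      uri:
        url: http://localhost/health

  handlers:
    - name: restart nginx
      service:
        name: nginx
        state: restarted
```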

Serial port console

This is how to get a serial port console working on Ubuntu 16.04 (or any systemd-based OS) and how to access it via iDRAC/ssh.

To get the serial console working on a running system (iDRAC's "com2" corresponds to ttyS1):

systemctl enable serial-getty@ttyS1.service
systemctl start serial-getty@ttyS1.service

Update Grub:

To get serial console during boot up, including the grub menu:

Go ahead and edit /etc/default/grub, then run update-grub afterwards to apply the changes.

GRUB_CMDLINE_LINUX_DEFAULT="splash quiet"
# send kernel output to both the VGA console and the serial port
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS1,115200n8"
GRUB_TERMINAL="console serial"
GRUB_SERIAL_COMMAND="serial --unit=1 --speed=115200"
# also, it takes so long to boot a server that adding 10
# seconds to the grub menu does more good than harm
GRUB_TIMEOUT=10

Access it via idrac

ssh <idrac-ip> console com2

#yourtech-dailies

I browse pornhub for the articles. Not only is this an interesting article on the drop in traffic during the Hawaii missile alert, I also discovered that their Insights blog has a lot of good data-analysis articles. The blog itself is SFW with no bad images, though of course the logo and items mentioned in the articles themselves are not. #yourtech-dailies

Mocking Consul for fun and profit.

I've been creating a fun microservice tool that provides a single API frontend and merges data from multiple backends. Since the app itself relies entirely on external data, I was wondering how in the world I would write unit tests for it. It's written in Python using the amazing apistar framework. All of my external data so far is gathered using the requests library. The answer, it turns out, is requests-mock, which lets you create mock responses for requests.

The documentation is pretty straightforward, but I was having some trouble wrapping my head around how I would use it to test the code in my app. To start simple, I decided to mock consul, which is one of my datasources.

Get a value into consul

First off, let's go ahead and set up. Go to https://www.consul.io/ and download the consul binary for your OS.

  1. Start consul in dev mode/foreground: consul agent -dev
  2. Insert a key: consul kv put foo bar
  3. Let's request that key with curl. Be verbose, because there are some headers you'll want later.
$ curl http://127.0.0.1:8500/v1/kv/foo -v
* Connected to localhost (127.0.0.1) port 8500 (#0)
> GET /v1/kv/foo HTTP/1.1
> Host: localhost:8500
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: application/json
< X-Consul-Index: 7
< X-Consul-Knownleader: true
< X-Consul-Lastcontact: 0
< Date: Thu, 04 Jan 2018 11:04:20 GMT
< Content-Length: 158
<
[
    {
        "LockIndex": 0,
        "Key": "foo",
        "Flags": 0,
        "Value": "YmFy",
        "CreateIndex": 7,
        "ModifyIndex": 7
    }
]

You may be wondering why "Value" is YmFy. That's because consul uses base64 encoding. Running echo YmFy | base64 -d will give you the string bar.
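You can do the same decoding in Python, which is exactly what the script below will do:

```python
import base64

# Consul base64-encodes KV values; decode "YmFy" back to its original string
decoded = base64.b64decode("YmFy").decode()
print(decoded)  # bar
```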

Read consul with python requests

Awesome, now I can write a fancy python script to show off my foo. Create a file called requestkey.py.

import base64
import requests

URL = "http://127.0.0.1:8500/v1/kv"

def requestkey(key):
  url = "{}/{}".format(URL, key)
  r = requests.get(url)
  data = r.json()
  v = base64.b64decode(data[0]['Value'])
  # b64decode returns bytes (b'bar'), so decode it to a string
  return v.decode()

if __name__ == '__main__':
  v = requestkey('foo')
  print("Here is my foo: {}".format(v))

Run it:

$ python requestkey.py
Here is my foo: bar

Live unit test

Let's create a test.py file. This will ensure the value of foo is equal to bar.

import unittest
from requestkey import requestkey

class TestStringMethods(unittest.TestCase):

    def test_foo(self):
        v = requestkey('foo')
        self.assertEqual(v, 'bar')

if __name__ == '__main__':
    unittest.main()

And run it:

$ python test.py
.
----------------------------------------------------------------------
Ran 1 test in 0.011s

OK

This works great! But what if the consul server on the CI/CD box running my test.py has a different value for foo? Or no consul server at all?

(py3) jh1:mocktests ytjohn$ consul kv put foo bard
Success! Data written to: foo
(py3) jh1:mocktests ytjohn$ python test.py
F
======================================================================
FAIL: test_foo (__main__.TestStringMethods)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test.py", line 8, in test_foo
    self.assertEqual(v, 'bar')
AssertionError: 'bard' != 'bar'
- bard
?    -
+ bar


----------------------------------------------------------------------
Ran 1 test in 0.012s

FAILED (failures=1)

This is a problem. My code is still perfectly fine, but because the live data has changed, my test fails. That is what we hope to solve.

Let's mock consul.

As I alluded to at the top of my post, I hope to solve this with requests-mock. There are some fancy things I see, like registering URIs, but to start, I am just going to use the Mocker example they have. It's a good thing I did a curl request earlier to see what the actual response looks like.

import json
import requests_mock
import unittest

from requestkey import requestkey

class TestStringMethods(unittest.TestCase):

    def setUp(self):
       self.baseurl = 'http://127.0.0.1:8500/v1/kv'

    def test_foo(self):
        key = 'foo'
        url = '{}/{}'.format(self.baseurl, key)
        response = [{
                    "LockIndex": 0,
                    "Key": "foo",
                    "Flags": 0,
                    "Value": "YmFy",
                    "CreateIndex": 7,
                    "ModifyIndex": 7}]

        with requests_mock.Mocker() as m:
          m.get(url, text=json.dumps(response))
          v = requestkey('foo')
          self.assertEqual(v, 'bar')

if __name__ == '__main__':
    unittest.main()

Let's try our test:

(py3) jh1:mocktests ytjohn$ consul kv get foo
bard
(py3) jh1:mocktests ytjohn$ python test.py
.
----------------------------------------------------------------------
Ran 1 test in 0.010s

OK

This is great. I can develop locally, store working examples in my test code, and test against that.

Requests is fun, but what about python-consul?

The truth is, I don't talk to consul using requests and base64 decoding. For some reason, I thought it would be easier for you to follow along if I did straight requests. But in reality, most people are going to use python-consul. In fact, here is my getkey.py file doing just that.

import consul

def getkey(key):
  c = consul.Consul() # consul defaults to 127.0.0.1:8500
  index, data = c.kv.get(key, index=None)
  return data['Value']

if __name__ == '__main__':
  v = getkey('foo')
  print("Here is my foo: {}".format(v))
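One thing to watch for: depending on your Python and python-consul versions, the Value may come back as bytes rather than str, in which case you need to decode it yourself. A tiny illustration:

```python
# If the KV value comes back as bytes, decode it before comparing to a str
value = b"bar"
print(value.decode())   # bar
print(value == "bar")   # False -- bytes never equal str in Python 3
```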

Now I'm going to rewrite my test.py to test both of these. I moved my response into the setUp
method, renamed test_foo to test_request, and added a test_get to test my getkey function.

import json
import requests_mock
import unittest

from requestkey import requestkey
from getkey import getkey

class TestStringMethods(unittest.TestCase):

    def setUp(self):
       self.baseurl = 'http://127.0.0.1:8500/v1/kv'
       self.response_foo = [{
                    "LockIndex": 0,
                    "Key": "foo",
                    "Flags": 0,
                    "Value": "YmFy",
                    "CreateIndex": 7,
                    "ModifyIndex": 7}]

    def test_request(self):
        key = 'foo'
        url = '{}/{}'.format(self.baseurl, key)

        with requests_mock.Mocker() as m:
          m.get(url, text=json.dumps(self.response_foo))
          v = requestkey('foo')
          self.assertEqual(v, 'bar')

    def test_get(self):
        key = 'foo'
        url = '{}/{}'.format(self.baseurl, key)

        with requests_mock.Mocker() as m:
          m.get(url, text=json.dumps(self.response_foo))
          v = getkey('foo')
          self.assertEqual(v, 'bar')

if __name__ == '__main__':
    unittest.main()

Let's see how this does:

$ python test.py
E.
======================================================================
ERROR: test_get (__main__.TestStringMethods)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test.py", line 35, in test_get
    v = getkey('foo')
  File "/Users/ytjohn/vsprojects/unsafe/mocktests/getkey.py", line 7, in getkey
    index, data = c.kv.get(key, index=None)
  File "/Users/ytjohn/.venvs/py3/lib/python3.6/site-packages/consul/base.py", line 538, in get
    params=params)
  File "/Users/ytjohn/.venvs/py3/lib/python3.6/site-packages/consul/std.py", line 22, in get
    self.session.get(uri, verify=self.verify, cert=self.cert)))
  File "/Users/ytjohn/.venvs/py3/lib/python3.6/site-packages/consul/base.py", line 227, in cb
    return response.headers['X-Consul-Index'], data
  File "/Users/ytjohn/.venvs/py3/lib/python3.6/site-packages/requests/structures.py", line 54, in __getitem__
    return self._store[key.lower()][1]
KeyError: 'x-consul-index'

----------------------------------------------------------------------
Ran 2 tests in 0.016s

FAILED (errors=1)

Oh, what a disaster! My fancy getkey code is failing tests! What is x-consul-index anyway? Well, it looks
to be response.headers['X-Consul-Index'], which is a header we saw in the curl request. Fortunately,
requests-mock allows you to provide headers as well.

  1. Add headers to setUp: self.headers_foo = {'X-Consul-Index': "7"} (yes, the value must be a string)
  2. Add the header response to your mock: m.get(url, text=json.dumps(self.response_foo), headers=self.headers_foo)
$ python test.py
..
----------------------------------------------------------------------
Ran 2 tests in 0.012s

OK

Outstanding. And for completeness, here is the final test.py:

import json
import requests_mock
import unittest

from requestkey import requestkey
from getkey import getkey

class TestStringMethods(unittest.TestCase):

    def setUp(self):
       self.baseurl = 'http://127.0.0.1:8500/v1/kv'
       self.headers_foo = {'X-Consul-Index': "7"}
       self.response_foo = [{
                    "LockIndex": 0,
                    "Key": "foo",
                    "Flags": 0,
                    "Value": "YmFy",
                    "CreateIndex": 7,
                    "ModifyIndex": 7}]

    def test_request(self):
        key = 'foo'
        url = '{}/{}'.format(self.baseurl, key)

        with requests_mock.Mocker() as m:
          m.get(url, text=json.dumps(self.response_foo))
          v = requestkey('foo')
          self.assertEqual(v, 'bar')

    def test_get(self):
        key = 'foo'
        url = '{}/{}'.format(self.baseurl, key)

        with requests_mock.Mocker() as m:
          m.get(url, text=json.dumps(self.response_foo), headers=self.headers_foo)
          v = getkey('foo')
          self.assertEqual(v, 'bar')

if __name__ == '__main__':
    unittest.main()

What is the point?

There isn't much point in having a block of code produce a static value and then checking to see if it is that value. However, when we start taking actions based on values (live, maintenance, true, false, -1), we can definitely check that our code behaves in an expected way based on a collection of sample data we store. I can also check how I handle incomplete data. A big part of my microservice correlates devices with network interfaces, IP addresses, and VLANs. Not every interface has an IP, not every IP has a VLAN. Not every network has a default gateway. I have to determine which IP is "primary". So as I collect examples of devices with different configurations, I should be able to register URLs and responses for each device. If my code is expecting a VLAN to be a number, but instead I receive a None - will I handle that, or will I throw an exception?
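As a rough sketch of that idea (the function and field names here are invented for illustration, not taken from my actual service), a helper that tolerates missing data is easy to pin down with tests:

```python
def primary_ip(interfaces):
    """Pick the first interface that has an IP and is flagged primary."""
    for iface in interfaces:
        # .get() tolerates interfaces with no "ip" or no "primary" key
        if iface.get("ip") and iface.get("primary"):
            return iface["ip"]
    return None  # incomplete data: no exception, just "no primary"

print(primary_ip([{"name": "eth0"}, {"name": "eth1", "ip": "10.0.0.5", "primary": True}]))  # 10.0.0.5
print(primary_ip([{"name": "eth0"}]))  # None
```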

Looking forward, I can envision having sample json data stored with functions to provide the desired response and headers needed.

First impressions with emacs

Learning emacs really sucks. Let's do this.

My current stack

Because I'm insane, I decided to give Emacs a try. While I'm still pretty pleased with OmniFocus, I do find limitations with it. I also store all kinds of notes in Quiver. Anytime I'm working on anything, I keep my notes in Quiver. I had been keeping a worklog in it, but it got goofy when work (and the associated notes) spanned multiple days. I also use PyCharm as my IDE. That I definitely like, but if I want to open a single file, I'm going to pop open vim, Sublime Text, or even just TextEdit.

Why I'm looking at Emacs

For years, I've been hearing amazing things about orgmode. It's used for todo lists, project planning, documents, and journals. People also like using emacs as a python IDE. Everyone who uses emacs seems to really like it, and it keeps showing up as a good-looking solution to different problems I'd like to solve. There's even a slack client for it. I decided that I should really give emacs a shot before discarding any solution just because it happened to be emacs based. You see, emacs has a super steep learning curve, and when you use it, you essentially join a cult.

Round 1

So I decided to dive in. I found a number of people recommending How to learn emacs as a good place for beginners. The first page has this juicy tidbit:

What I need from you is commitment (a couple of dedicated hours of study per week, and to use Emacs as your day-to-day editor) and patience (be willing to give up your favorite IDE’s features, at least for a month or two).

That's pretty intimidating, but to be fair, Vim takes a long time to master too. Those first starting out need to learn what a modal editor is, how to switch between insert and command mode, how to navigate, how to do search and replace, how to delete lines, and possibly how to use vim's internal clipboard operations. That's all before you get into customizing and extending the program to turn it into an IDE.

I put a couple hours in the first weekend, and a little bit of time the following week going through the examples. But I got bored and real life kept me away.

Round 2

Seeing some time ahead of me, I figured I'd try again. I went back and forth between plain emacs, spacemacs, and prelude. I researched all over how people got started with emacs. There are lots of heavy opinions on "starting pure" versus using a starter pack like spacemacs/prelude. For those with vim leanings, there is an "evil mode" that provides some vim keybindings and emulation. I came across the Mastering Emacs book, which got some good feedback on reddit.
I started reading a copy of the book with pure emacs open. It's 277 pages long, and I got to about page 30 before I started falling asleep. However, here are some key things to know:

  • Emacs isn't based on files, but based on buffers (which may contain a file).
  • What we call a window, emacs calls a frame.
  • A frame contains multiple windows (or window panes)
  • You can assign any buffer to any window, or even have multiple windows showing the same buffer.

Round 3 - Just edit some python already

Screw reading books and working through tutorials. I'm just going to go ahead and start using emacs and google every time I get stuck. That's how I learned vim back in the late 90s, except I didn't have google back then. In fact, I didn't even have Internet at home back then. I had to go to school, search altavista, download manuals and deb packages to floppy disk, then take them home and transfer them.

I figure, just use the stupid program this week and expect normal operations to take longer while I work things out.

I don't know how to exit vim

So first things first, I took the popular advice and installed spacemacs, which gives me fancy color themes and evil mode.

  1. If you fire up emacs, it's a decent gui, complete with helpful menus and mouse integration. You can open a file, edit it, save it, and exit almost as easily as in any other text editor. File -> Visit new file and File -> Open do make you type in the path to the file instead of using a file-open dialog, but there is a sort of autocomplete/directory-listing interface.
  2. Emacs with spacemacs takes a long time to load, on par with pycharm's slow load times. Kind of sucks if emacs is closed and you just want to open a file. One book I read says that I should run emacs in server mode and just use the emacsclient binary for all subsequent starts. Ok, fine - if I have emacs up all day long, that's doable.
  3. Emacs can run a shell. You can run shell, which runs your default shell (bash in my case) in a buffer. Emacs fans call this the inferior shell. The "emacs shell", eshell, is promoted as superior. It's a bash-inspired shell written in elisp. Both shells suck. I thought I'd be able to run a terminal in a window below the file I'm editing like I do in pycharm, but it's extremely frustrating working in this shell. Ctrl-C is "C-c C-c", and it's really easy to end up typing over output from a previous command. Worst of all, I could not activate a virtualenv in either shell. This means I couldn't drop to a shell and run ad-hoc python tools. While there may be some amazing tweaks and features these shells bring, I found it much like working over a bad serial connection.
  4. When I opened a python file, spacemacs detected this and asked if I wanted to install the python layer. This gave me nice syntax highlighting, but I didn't get any autocomplete like I was hoping for. I know that "helm" is enabled, but there is perhaps something else I have to do for autocomplete to work.

projectile for projects

Spacemacs bundles an add-on called projectile. This is pretty nice. Incidentally, "bbatsov" writes a lot of stuff for emacs, including projectile and the previously mentioned prelude. People recommend prelude over spacemacs because they feel spacemacs adds complexity that could confuse a beginner once they get past the initial learning curve. I.e., spacemacs is good for super beginners, but bad for intermediate users. Or so I've heard.

Anyway, this adds some nice functionality. Open a file inside a directory that is under git control, and it establishes all files in the directory as a project. If you have all your projects in a directory like ~/projects, you can teach emacs about all of them at once.

M-x projectile-discover-projects-in-directory ~/projects

Once you scan them all, you can run C-c p F to show a list of all known projects and select one to open. Open a file in any project and it puts you in project mode. There are shortcuts to see all files in the project, and if you open a shell it drops you in the project directory. You can also quickly switch between recently opened files and perform in-project search and replace.

org-mode

So far, org-mode has been my most positive experience. I wrote up a general outline of a piece of software I'm working on, and I found it much easier to write and organize than when I write in markdown.

It's not markdown, and expecting to be able to use things like markdown code blocks will disappoint you. But it's definitely learnable, and I can see myself using it.

You just go ahead and open a file ending in .org and start writing. Headers start with * instead of #, but otherwise it will be familiar to a markdown user.
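For example (the contents here are invented), a small outline looks like this:

```org
* Project overview
** Goals
- ship the MVP
- write docs
** Notes
Plain prose under a heading, much like markdown.
```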

The really nice bit of org mode appears as you learn the hotkeys and shortcuts. Key combinations will create new headings and list entries, or move an entire section up or down, indent or outdent.

If you type <s <TAB>, it expands to a 'src' code block:

#+BEGIN_SRC 

#+END_SRC

I only did some basic outlining, but it seemed workable. I can see emacs/orgmode possibly replacing quiver as my primary notebook. It won't be easy, because quiver has this nice feature where you just start writing a note, and that note may or may not get a title. There is no need to save the note to a file, because it's saved within the quiver datastore. Emacs will want me to save a file for each note.

A probable next step is to test out the orgmode journal. After that, dive into orgmode and Getting Things Done. If I can put my omnifocus tasks into emacs and use it as a daily work notebook, then the time invested so far won't be entirely wasted.

Follow up: I came across this orgmode day planner approach, which seems even more workable than the GTD approach linked above.

bits and bobs for 2017

Abusing my blog to record a few things. This is kind of a year-end wrap-up before Christmas, or a pre-pre New Years Eve post. I am off work till the end of the year, so this is kind of a good week to reflect and prepare for the upcoming year.

I cover a couple of different things in this post; find the section you want.

  • Home Security Cameras
  • Financial Tools (quicken, ledger, hledger, beancount)

Security Cameras

tl;dr - I recommend Q-See NVR, refurbished 16-channel for $250 with 2TB drive

I've got a few cameras around my house. Two of them are analog cameras on a cheap LaView DVR bought off of woot (I've replaced it twice). I've run zoneminder to pull the RTSP stream from it.

I also have some really cheap ESCAM 720p cameras. These things are amazingly cheap, and can be had for under $40. Zoneminder can pull these in as well.

The problem is that I keep finding zoneminder in some broken state. I'm also not a fan of the "each frame is a file" approach. I've started using shinobi. I like that better, but feel limited. Also, several times a week, one of the camera feeds goes dark and I have to re-enable it. I got a Windows PC set up and tried out iSpy, GeniusVision, and Xeoma. None of them have really stood out as a great system.

I decided to try out a more expensive hardware NVR. First I tried an Amcrest NVR, but it couldn't work with any of my existing cameras. Returned. I have a colleague who is a big fan of Q-See analog DVRs, and the mobile app is pretty slick. I found a good deal on a refurbished 16-channel NVR for $250 with a 2TB hard drive included. This was an instant success. It picked up my ESCAMs right away and started recording.

The mobile app is a dream.


I do get higher resolution images, but when scaled for mobile playback and in a screenshot, the resolution obviously suffers.

Downsides: accessing it with a web browser sucks - on MacOS it requires Safari and a binary plugin. However, they do have desktop clients for Mac and Windows. Once again, Linux is left out in the cold. I might still end up running Shinobi against the Q-See (or the cams directly) for a simple remote web interface.

The other downside is that I couldn't get my analog dvr added to the system. This isn't too big of a downer, because I'm going to replace those analog 480p cams with the higher quality 720p ESCams (maybe eventually 1080p cams).

Finances

tl;dr - I'm going to switch from Quicken to beancount for double-entry plain-text accounting.

I am planning to get a better handle on my finances in 2018. We're meeting all our bill payments, but I'm definitely not where I want to be with knowing where our money is going and planning for the future. For years, I used nothing substantial. I would do some balancing in spreadsheets, or try to use Mint, and I had Quicken for my business. In 2016, I used hledger for about 4 months to track finances. I really liked the concept of plain text accounting, but ultimately ended up purchasing Quicken and using that through the end of 2016 and all of 2017. I can essentially take a Tuesday, sync all my transactions down, and reconcile them. In the world of personal finance, there are several camps, but two big ones are those that prefer syncing historical data (Mint, Quicken) and those that want you to budget every transaction in advance, such as You Need A Budget and EveryDollar. There is overlap, especially since YNAB and EveryDollar have added syncing to their offerings. Plain text accounting/ledger/hledger fall into the second camp, with no sync capabilities.

That being said, I have used a program called reckon to import my main bank account into hledger. You go onto your bank website, download a CSV for a certain date range, and import it. Even with reckon, it was time consuming, and that's what led me to switch to Quicken. However, after using Quicken for 1.5 years, that can get time consuming as well. My family and I have a handful of credit cards, a mortgage, a car loan, checking accounts, savings accounts, a 401k, a Roth IRA, college savings, student loans, and a lot of transactions. For the most part, the bulk of our activity centers around a joint checking account. Just maintaining that one account in Quicken is a big time sink. If I don't update every Tuesday, it can take several hours to catch it up. This is because Quicken might miss or duplicate a transaction from the bank. Or something weird will happen. I might have everything caught up perfectly, and then the next time I'm in, I'll discover my balances are off going back 3 months. I'll have to spend time comparing statements and daily balances, going almost transaction by transaction - finding the most recent time when the balances match, then moving forward and fixing whatever caused them to diverge. I'll get things looking correct, then I'll jump forward a month and realize I had missed a divergence somewhere and have to go back. By the time I get the main account squared away, I don't really feel like validating all the other accounts. If my Discover card balance is off, I'll just go in and add a BALANCE ADJUSTMENT entry to bring it in line. I was trying to split my loan payments between principal and interest, but that went by the wayside.

Since I'm spending all of this time in Quicken reading every statement anyway, I decided I wouldn't be losing much by going back to ledger. In fact, some banks, such as US Bank, have stopped offering integration with Quicken. So I'm going to start a brand new file and start tracking. This time, I'm going to dig into web scraping. There are a lot of people out there who write tools to automatically log into their bank and download their CSV files. If I can semi-automate their retrieval, that will be a big win. I will also continue to use Quicken to at least sync the data, mainly to keep it as a backup if I decide to stop using ledger. I probably will not use it, but there is a Quicken-to-ledger converter.

While I was reviewing hledger, I found another system called beancount. It is another plaintext double-entry accounting system, but it's designed to have [less trust in the user entering data](https://docs.google.com/document/d/1dW2vIjaXVJAf9hr7GlZVe3fJOkM-MtlVjvCO1ZpNLmg/edit?pli=1#heading=h.2ax1dztqboy7). There is a ledger2beancount tool, so I can import any ledger files I had previously or make along the way (though right now I'm looking at a fresh start), and beancount itself provides a solid export to ledger.
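If you haven't seen double-entry plain text accounting before, here's roughly what a beancount file looks like (the accounts and amounts are invented); every transaction's postings must sum to zero:

```beancount
2017-12-20 open Assets:Checking            USD
2017-12-20 open Expenses:Groceries         USD

2017-12-26 * "Grocery Store" "Weekly shopping"
  Assets:Checking     -54.20 USD
  Expenses:Groceries   54.20 USD
```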

I'm going to start with beancount and see where it takes me. I might bounce a bit between beancount and ledger/hledger along the way. Beancount has some really nice web reports, and their example user in the tutorial sounds rather familiar.

Worst case, I can drift back to Quicken.

Vault Standup

This is a little walkthrough of setting up a "production-like" vault server with an etcd backend (not really production: no TLS, and one person holds all the keys). Hashicorp Vault is incredibly easy to set up. Going through the dev walkthrough is pretty easy, but when you want to get a little more advanced, you start getting bounced around the documentation. So these are my notes on setting up a vault server with an etcd backend and a few policies/tokens for access. Consider this part 1; in "part 2", I'll set up an LDAP backend.

Q: Why etcd instead of consul?
A: Most of the places I know that run consul run it across multiple datacenters and a few thousand servers, and it interacts with lots of different services. Even if the secrets are protected, the metadata is quite visible. I want a rather compact and isolated backend for my eventual cluster.

Let's get started.

First off, create a configuration file for vault.

vaultserver.hcl:

metaladmin@vaultcore01:~$ cat vaultserver.hcl
storage "etcd" {
  address  = "http://localhost:2379"
  etcd_api = "v2"
  path = "corevault"
}

listener "tcp" {
  address = "0.0.0.0:8200"
  tls_disable = 1
}

disable_mlock = true
cluster_name = "corevault"

Start the server (in its own terminal)

metaladmin@vaultcore01:~$ vault server -config=vaultserver.hcl
==> Vault server configuration:

                     Cgo: disabled
              Listener 1: tcp (addr: "0.0.0.0:8200", cluster address: "0.0.0.0:8201", tls: "disabled")

Init the server

dfzmbp:~ ytjohn$ export VAULT_ADDR=http://vaultcore01.pool.lab.ytnoc.net:8200
dfzmbp:~ ytjohn$ vault init
Unseal Key 1: f9XJwuxla/H86t8pbWVPnI6Tfi3nQtkasq303Oi8B+ep
Unseal Key 2: jFqEmE1c/lei+C1aIju6JM2t5fSI534g26E7Nv83t9RV
Unseal Key 3: ty/P+Jubm1BukPcdZ16eJFD0JQ9BFGqOSgft35/fvHXr
Unseal Key 4: 6k4aPjuKgz0UNe+hTVAOKUzrIvbS9w8UszB0HX3Au496
Unseal Key 5: PYNjRe9vBvHAGE9peiotrtjoYuVlAV/9QJ0NvqZScd2a
Initial Root Token: b6eac78d-f278-4d32-6894-a8168d055340

That Initial Root Token is your only means of accessing the vault once it's unsealed. Don't lose it until you replace it.

And this creates a directory in etcd (or consul)

metaladmin@vaultcore01:~$ etcdctl ls
/test1
/corevault
metaladmin@vaultcore01:~$ etcdctl ls /corevault
/corevault/sys
/corevault/core

Unseal it:

dfzmbp:~ ytjohn$ vault unseal
Key (will be hidden):
Sealed: true
Key Shares: 5
Key Threshold: 3
Unseal Progress: 1
Unseal Nonce: d860cb16-f084-925d-6f41-d80ef15e297c
dfzmbp:~ ytjohn$ vault unseal
Key (will be hidden):
Sealed: true
Key Shares: 5
Key Threshold: 3
Unseal Progress: 2
Unseal Nonce: d860cb16-f084-925d-6f41-d80ef15e297c
dfzmbp:~ ytjohn$ vault unseal
Key (will be hidden):
Sealed: false
Key Shares: 5
Key Threshold: 3
Unseal Progress: 0
Unseal Nonce:
dfzmbp:~ ytjohn$ vault unseal
Vault is already unsealed.
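
The transcript shows the threshold behavior: the vault stays sealed until 3 of the 5 key shares have been entered, and the progress counter resets to 0 once it opens. A toy model of that bookkeeping (this only mimics the counter; the real thing reconstructs the master key with Shamir's Secret Sharing):

```python
class ToyUnseal:
    """Mimics Vault's unseal progress counter (not real key reconstruction)."""

    def __init__(self, shares=5, threshold=3):
        self.shares = shares
        self.threshold = threshold
        self.sealed = True
        self.entered = set()  # a set, so a repeated share doesn't advance progress

    def unseal(self, key):
        if not self.sealed:
            return "Vault is already unsealed."
        self.entered.add(key)
        if len(self.entered) >= self.threshold:
            self.sealed = False
            self.entered.clear()  # progress resets, as in the transcript
        return {"sealed": self.sealed, "progress": len(self.entered)}

v = ToyUnseal()
v.unseal("key1")           # sealed True, progress 1
v.unseal("key2")           # sealed True, progress 2
status = v.unseal("key3")  # threshold reached: sealed False, progress back to 0
```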

Now let's take that root token and save it in our home directory. This isn't safe, because it's the all-powerful root token; you should create a user token for yourself. But that's for later.

Save your token (or export it as VAULT_TOKEN), then write and read some secrets.

dfzmbp:~ ytjohn$ echo b6eac78d-f278-4d32-6894-a8168d055340 > ~/.vault-token
dfzmbp:~ ytjohn$ vault write secret/hello value=world
Success! Data written to: secret/hello
dfzmbp:~ ytjohn$ vault read secret/hello
Key                 Value
---                 -----
refresh_interval    768h0m0s
value               world

dfzmbp:~ ytjohn$ vault read -format=json secret/hello
{
    "request_id": "a4b199e7-ff7c-e249-2944-17424bf1f05c",
    "lease_id": "",
    "lease_duration": 2764800,
    "renewable": false,
    "data": {
        "value": "world"
    },
    "warnings": null
}

dfzmbp:~ ytjohn$ helloworld=`vault read -field=value secret/hello`
dfzmbp:~ ytjohn$ echo $helloworld
world
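
The -format=json output is handy in scripts: the same field extraction that -field=value does can be done with any JSON parser. A quick sketch using the response printed above:

```python
import json

# The JSON document printed by `vault read -format=json secret/hello` above.
response = json.loads("""
{
    "request_id": "a4b199e7-ff7c-e249-2944-17424bf1f05c",
    "lease_id": "",
    "lease_duration": 2764800,
    "renewable": false,
    "data": {
        "value": "world"
    },
    "warnings": null
}
""")

# Equivalent of `vault read -field=value secret/hello`
helloworld = response["data"]["value"]
```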

Ok, that's the basics of getting Vault up and running. Now we want to let more users access it. What I want is to create three "users" and give each of them a path.

infra admins: able to create, read, and write within secret/infra/*
infra compute: work within the secret/infra/compute area
infra network: work within the secret/infra/network area

infraadmin.hcl

path "secret/infra/*" {
  capabilities = ["create"]
}

path "auth/token/lookup-self" {
  capabilities = ["read"]
}

infracompute.hcl

path "secret/infra/compute/*" {
  capabilities = ["create"]
}

path "auth/token/lookup-self" {
  capabilities = ["read"]
}

infranetwork.hcl

path "secret/infra/network/*" {
  capabilities = ["create"]
}

path "secret/infra/compute/obm/*" {
  capabilities = ["read"]
}

path "auth/token/lookup-self" {
  capabilities = ["read"]
}
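
In these policies, a trailing * acts as a glob: the path grants access to everything under that prefix. A simplified model of the matching (real Vault picks the longest-prefix match among all of a token's policies; this sketch covers only the trailing-glob case used here):

```python
def matches(policy_path: str, request_path: str) -> bool:
    """Simplified Vault policy path match: a trailing '*' is a prefix glob."""
    if policy_path.endswith("*"):
        return request_path.startswith(policy_path[:-1])
    return request_path == policy_path

# infranetwork's glob covers its own tree...
matches("secret/infra/network/*", "secret/infra/network/allowed")     # True
# ...but does not reach into the compute tree:
matches("secret/infra/network/*", "secret/infra/compute/notallowed")  # False
```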

Now, we write these policies in.

dfzmbp:vault ytjohn$ vault policy-write infraadmin infraadmin.hcl
Policy 'infraadmin' written.
dfzmbp:vault ytjohn$ vault policy-write infracompute infracompute.hcl
Policy 'infracompute' written.
dfzmbp:vault ytjohn$ vault policy-write infranetwork infranetwork.hcl
Policy 'infranetwork' written.

Let's create a token "user" for each policy.

dfzmbp:vault ytjohn$ vault token-create -policy="infraadmin"
Key             Value
---             -----
token           d16dd3dc-cd9e-15e1-8e41-fef4168a429e
token_accessor  50a1162f-58a2-474c-466d-ec68fac9a2f9
token_duration  768h0m0s
token_renewable true
token_policies  [default infraadmin]

dfzmbp:vault ytjohn$ vault token-create -policy="infracompute"
Key             Value
---             -----
token           d156326d-1ee6-7a93-d9d3-428e2211962d
token_accessor  daf3beb4-6c31-4115-2d00-ba811c50b05b
token_duration  768h0m0s
token_renewable true
token_policies  [default infracompute]

dfzmbp:vault ytjohn$ vault token-create -policy="infranetwork"
Key             Value
---             -----
token           84faa448-20d9-b472-349f-1053c81ff4c9
token_accessor  68eea7ec-78c0-4be1-03c4-f2ec155b66de
token_duration  768h0m0s
token_renewable true
token_policies  [default infranetwork]

Let's log in with the infranetwork token and attempt to write to compute. I have not yet created secret/infra/compute or secret/infra/network, and I'm curious whether infraadmin is needed to create those first.

dfzmbp:vault ytjohn$ vault auth 84faa448-20d9-b472-349f-1053c81ff4c9
Successfully authenticated! You are now logged in.
token: 84faa448-20d9-b472-349f-1053c81ff4c9
token_duration: 2764764
token_policies: [default infranetwork]
dfzmbp:vault ytjohn$ vault write secret/infra/compute/notallowed try=wemust
Error writing data to secret/infra/compute/notallowed: Error making API request.

URL: PUT http://vaultcore01.pool.lab.ytnoc.net:8200/v1/secret/infra/compute/notallowed
Code: 403. Errors:

* permission denied
dfzmbp:vault ytjohn$ vault write secret/infra/network/allowed alreadyexists=maybe
Success! Data written to: secret/infra/network/allowed

I got blocked from creating a path inside of compute, and I didn't need secret/infra/network to exist before writing a child path. That infraadmin account is really not needed at all. Let's go ahead and try infracompute.

$ vault auth d156326d-1ee6-7a93-d9d3-428e2211962d # auth as infracompute
$ vault write secret/infra/compute/obm/idrac/oem username=root password=calvin
Success! Data written to: secret/infra/compute/obm/idrac/oem
$ vault read secret/infra/compute/obm/idrac/oem
Error reading secret/infra/compute/obm/idrac/oem: Error making API request.

URL: GET http://vaultcore01.pool.lab.ytnoc.net:8200/v1/secret/infra/compute/obm/idrac/oem
Code: 403. Errors:

* permission denied

Oh my. I gave myself create, but not read, permissions. Time for new policies.

infranetwork.hcl

path "secret/infra/network/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}

path "secret/infra/compute/obm/*" {
  capabilities = ["read", "list"]
}

path "auth/token/lookup-self" {
  capabilities = ["read"]
}

infracompute.hcl

path "secret/infra/compute/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}

path "auth/token/lookup-self" {
  capabilities = ["read"]
}
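
For reference, this is why read had to be added: each capability authorizes specific operations against a path (create for writing a path that doesn't exist yet, update for overwriting, read for GET, and so on), and a grant of one doesn't imply the others. A tiny sketch of the check that produced the 403 above (simplified; sudo and deny are omitted):

```python
def allowed(operation: str, capabilities: list) -> bool:
    """Does a policy grant this operation? (simplified: one capability per op)"""
    return operation in capabilities

# The original infracompute policy only had "create":
old_policy = ["create"]
allowed("create", old_policy)  # True  -- the write to a brand-new path succeeded
allowed("read", old_policy)    # False -- the read came back 403 permission denied

new_policy = ["create", "read", "update", "delete", "list"]
allowed("read", new_policy)    # True
```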

Let's update our policy list and clean up.

vault auth b6eac78d-f278-4d32-6894-a8168d055340 # auth as root token
vault policy-delete infraadmin # delete unneeded infraadmin policy
vault token-revoke d16dd3dc-cd9e-15e1-8e41-fef4168a429e # remove infraadmin token
vault policy-write infranetwork infranetwork.hcl
vault policy-write infracompute infracompute.hcl

Try again:

$ vault auth d156326d-1ee6-7a93-d9d3-428e2211962d # auth as infracompute
Successfully authenticated! You are now logged in.
token: d156326d-1ee6-7a93-d9d3-428e2211962d
token_duration: 2762315
token_policies: [default infracompute]
$ vault read secret/infra/compute/obm/idrac/oem
Key                 Value
---                 -----
refresh_interval    768h0m0s
password            calvin
username            root

And as infranetwork:

$ vault auth 84faa448-20d9-b472-349f-1053c81ff4c9 #infranetwork
$ vault list secret/infra/compute
Error reading secret/infra/compute/: Error making API request.

URL: GET http://vaultcore01.pool.lab.ytnoc.net:8200/v1/secret/infra/compute?list=true
Code: 403. Errors:

* permission denied
$ vault list secret/infra/compute/obm
Keys
----
idrac/

$ vault list secret/infra/compute/obm/idrac
Keys
----
oem

$ vault read secret/infra/compute/obm/idrac/oem
Key                 Value
---                 -----
refresh_interval    768h0m0s
password            calvin
username            root