My Topics
September 22, 2020•0 words
I'm a big fan of open source software, a die-hard Linux user since 2K, and a life-long Christian :-D
January 19, 2019•849 words
My laptop is my main computing device and I have lots of passwords that I keep in a KeePass store. I access this store interactively through the Kee Firefox extension, which works great. However, a bunch of applications on my laptop need access to my plain-text groupware password, and this didn't work so well with KeePass. For a long time I kept the passwords in various configuration files, including netrc. This is not very secure, and it's also a pain to change my password because of the many places it's stored in.
To improve this situation I probably could've used the system password store. I decided against it because I didn't want to deal with the various wallet applications that come attached to it, e.g. KWallet. Instead, I decided to give vault a try. It's less a single-user application and more an enterprise-grade password store with very detailed access control mechanisms that might come in handy if I decide to store many more things in there.
In this first exploration I set up a local vault installation and integrate it with msmtp, offlineimap and vdirsyncer for a comprehensive and secure groupware and email setup.
Make sure that the vault executable on Linux has the appropriate capabilities to use mlock without root privileges. This command sets the capabilities:
sudo setcap cap_ipc_lock=+ep $(readlink -f $(which vault))
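The readlink -f step resolves the actual binary behind any symlinks before setcap operates on it. A minimal sketch of just that resolution, using sh as a stand-in binary so it runs without vault installed (command -v is the POSIX equivalent of which):

```shell
# Resolve the real path of a binary behind symlinks, as done for vault
# above; sh is only a stand-in example here.
real=$(readlink -f "$(command -v sh)")
echo "$real"
```

On most systems this prints an absolute path such as /usr/bin/dash or /usr/bin/bash, i.e. the file setcap would actually modify.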
At first I got the impression that consul is required in order to run vault. Only later did I realize that this is not the case, because vault also supports storing everything on the local filesystem, which is sufficient for me. I'll leave the configuration here in case I ever need to set up a distributed vault server.
Consul is the storage backend for Vault and should therefore be set up first.
/home/jceb/.config/vault/consul.json:
{
"datacenter": "jc",
"data_dir": "/home/jceb/.config/vault/consul",
"log_level": "INFO", <-- has to be adjusted later on to WARN or so
"node_name": "torch",
"server": true,
"bootstrap": true, <-- Key to have it run as the first (only) server
"bind_addr": "127.0.0.1", <-- Run it only privately on this computer
"ports": { <-- disable unused services
"dns": -1,
"https": -1
}
}
Run server with consul agent -config-file /home/jceb/.config/vault/consul.json
Actually, vault doesn't need consul. It can also run with a file backend.
/home/jceb/.config/vault/vault.json:
{
"storage": {
"file": {
"path": "~/.config/vault/vault"
}
},
"listener": {
"tcp": {
"address": "127.0.0.1:8200",
"tls_disable": 1
}
}
}
Run server with vault server -log-level=info -config /home/jceb/.config/vault/vault.json
Initialize vault with
export VAULT_ADDR=http://127.0.0.1:8200
vault operator init
vault operator unseal
For convenience reasons I created the lvault command that accesses the local vault server without having to export VAULT_ADDR globally:
#!/bin/bash
VAULT_ADDR=http://127.0.0.1:8200 exec vault "$@"
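The wrapper relies on the shell's per-command environment assignment: VAULT_ADDR is set only for that single invocation and doesn't leak into the calling shell. A quick demonstration with printenv standing in for vault:

```shell
# The variable is visible to the prefixed command ...
VAULT_ADDR=http://127.0.0.1:8200 printenv VAULT_ADDR
# ... but is not set in the shell afterwards:
echo "${VAULT_ADDR:-unset}"
```

The first command prints the URL, the second prints "unset".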
Policies are there to restrict/grant access in one place for as many concrete tokens as you like. Policies can be combined into roles that make it even easier to control access.
First, create a policy that only grants access to the password that will be stored in the vault: lvault policy write password -
path "secret/password" {
capabilities = ["read"]
}
Second, create a role that incorporates the just-created policy: lvault write auth/token/roles/simpleapp @..
{
"allowed_policies": "password",
"name": "simpleapp",
"orphan": false,
"renewable": true
}
Create the new token: lvault token create -role=simpleapp -display-name=msmtp
Store the token (not the accessor token, which is only the identifier of the token) in an encrypted file: gpg -eq > ~/.msmtp.key.gpg.
Authenticate as root and store the secret:
lvault login
lvault write secret/password @..
{
"username": "<username>",
"value": "<password>"
}
Unfortunately, the vault command stores the current token in ~/.vault-token, which causes different programs to compete with one another for storing their token there:
gpg -dq ~/.msmtp.key.gpg | lvault auth -
lvault read -field=value secret/password
Solution: don't store the token; use curl directly instead. Store the following line in the file lvault-read:
#!/bin/bash
gpg -dq "$1" | paste ~/.local/bin/lvault-read.header - | curl -s --header @- --request GET "http://127.0.0.1:8200/v1/${2:-secret/password}" | jq -r "${3:-.data.value}"
File ~/.local/bin/lvault-read.header contains:
X-Vault-Token:
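To see what the pipeline assembles before it reaches curl, the decrypted token can be simulated with echo (the token value below is made up): paste joins the header prefix from the file with the token, separated by a tab, producing a complete header line that curl reads from stdin via --header @-.

```shell
# Simulate the header assembly of lvault-read; the token is a made-up
# example standing in for the output of gpg -dq.
printf 'X-Vault-Token:' > /tmp/lvault-read.header
echo 's.exampleToken123' | paste /tmp/lvault-read.header -
```

This prints X-Vault-Token: followed by a tab and the token, exactly the header vault expects.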
Add the following line to your account in ~/.msmtprc:
passwordeval lvault-read ~/.msmtp.key.gpg
Add the following line to your repository in ~/.offlineimaprc:
remotepasseval = lvault_read()
And this line to the general section:
pythonfile = ~/.offlineimap/remotepasseval.py
And store these contents in ~/.offlineimap/remotepasseval.py:
from subprocess import check_output
from os.path import expanduser

def lvault_read():
    # Decrypt the stored token and fetch the password via lvault-read
    res = check_output(['lvault-read', expanduser('~/.offlineimap.key.gpg')])
    return res.decode().strip()
Add the following lines to the storage configuration in ~/.config/vdirsyncer/config:
username.fetch = ["command", "lvault-read", "~/.config/vdirsyncer/vdirsyncer.key.gpg", "", ".data.email"]
password.fetch = ["command", "lvault-read", "~/.config/vdirsyncer/vdirsyncer.key.gpg"]
January 18, 2019•879 words
tl;dr WiFi doesn't work because you have Docker installed on your laptop! Shut down Docker and surf happily ever after.
Big shout-out to Armbruster IT, whose blog post led me to the issue of Docker's network configuration overlapping with Deutsche Bahn WiFi. In the following post I'll walk you through moving Docker's networks to different IP address ranges.
On the trains of Deutsche Bahn, the WiFi uses the IP networks 172.16.0.0/16 to 172.18.0.0/16. Docker's default network 172.17.0.0/16 sits right in the middle and might interfere with DB WiFi on some trains. In addition, Docker allows user-defined bridge networks that occupy additional IP networks. If you're using docker-compose, these additional networks are automatically created right behind the default network, e.g. starting at 172.18.0.0/16. This increases the chances of Docker interfering with DB WiFi. In fact, I wasn't able to use DB WiFi on my laptop for a long time.
There are two ways of finding out whether your laptop is affected by the issue. First, connect to DB WiFi.
Option 1: Right-click on the network icon in the system tray and open Connection Information. Compare the IP addresses of all network interfaces. If the same IP network is used on multiple interfaces, your laptop is affected by the issue.
Option 2: Open a terminal and list all IP network routes by running the command ip r s. The output should look something like this. In my case multiple network bridges have been created by Docker, one of them using the same IP network (172.18.0.0) as my WiFi interface (wlp59s0).
% ip r s
default via 172.18.0.1 dev wlp59s0 proto dhcp metric 600
169.254.0.0/16 dev wlp59s0 scope link metric 1000
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
172.18.0.0/24 dev wlp59s0 proto kernel scope link src 172.18.154.222 metric 600
172.18.0.0/16 dev br-1364b6d8194f proto kernel scope link src 172.18.0.1 linkdown
172.19.0.0/16 dev br-c66b62063149 proto kernel scope link src 172.19.0.1 linkdown
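The clash can also be spotted mechanically by extracting the first two octets of every route and looking for duplicates. A small sketch over a captured copy of the routing table above; the awk/sort/uniq pipeline is my own helper, not a Docker tool, and against the live system you'd pipe ip r s into it instead:

```shell
# Find address prefixes that appear on more than one interface.
routes='172.17.0.0/16 dev docker0
172.18.0.0/24 dev wlp59s0
172.18.0.0/16 dev br-1364b6d8194f
172.19.0.0/16 dev br-c66b62063149'
echo "$routes" \
  | awk '{ split($1, a, "."); print a[1] "." a[2] }' \
  | sort | uniq -d
```

It prints 172.18, the prefix shared by the WiFi interface and one of the Docker bridges.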
BTW, I didn't experience any network issues in the DB Lounge. These WiFi networks use completely different IP address ranges that are most common in home network settings.
The easiest solution is to temporarily shut down Docker. The following terminal command should do that on all Linux systems that use systemd: sudo systemctl stop docker.service
Now, reconnect to DB WiFi and enjoy the trip :-D
Start Docker again after leaving the train: sudo systemctl start docker.service
In order to fix the issue, the Docker configuration file /etc/docker/daemon.json has to be adjusted (or created if it doesn't exist yet), and the currently configured Docker bridges need to be cleaned up.
Let's do the cleanup first. If you've used docker-compose before, a number of networks have been created that need to be removed manually. Let's list all Docker networks: sudo docker network ls
% sudo docker network ls
NETWORK ID NAME DRIVER SCOPE
428fdda9c2b5 bridge bridge local
1364b6d8194f project_network bridge local
82bc0405ed96 host host local
029f363dbb5a none null local
You can ignore the networks with the names bridge, host, and none because they're internal Docker networks. In my case the only relevant network is project_network. To remove it, take the network ID and feed it into the remove command: sudo docker network rm [NETWORK ID]
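Picking out the removable networks can be scripted as well. A sketch over a captured copy of the listing above; with Docker running you'd feed it the output of sudo docker network ls instead of the captured variable:

```shell
# Print the IDs of networks that are candidates for removal, skipping
# Docker's internal bridge, host and none networks.
networks='428fdda9c2b5 bridge
1364b6d8194f project_network
82bc0405ed96 host
029f363dbb5a none'
echo "$networks" \
  | awk '$2 != "bridge" && $2 != "host" && $2 != "none" { print $1 }'
```

Only 1364b6d8194f, the ID of project_network, is printed.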
Now, the network list should only contain the internal Docker networks:
% sudo docker network ls
NETWORK ID NAME DRIVER SCOPE
428fdda9c2b5 bridge bridge local
82bc0405ed96 host host local
029f363dbb5a none null local
Hint: Sometimes a network configuration is hard-coded in the docker-compose.yml configuration file. In this case removing the network will only fix the issue until you run docker-compose again. For a permanent fix, adjust the network configuration in your project.
daemon.json
The last step is to adjust the Docker daemon configuration. We'll set the IP address and network of the default Docker network bridge, and we'll also specify one or more IP address pools that are used by docker-compose to create networks. Pick two free IPv4 networks (one for the Docker network bridge and the other for the address pool) that are not in conflict with any of the networks you're using on a regular basis.
Stop Docker: sudo systemctl stop docker.service
Create the file /etc/docker/daemon.json with the following content and replace the IP address and the IP networks:
{
  "bip": "10.199.0.1/16",
  "fixed-cidr": "10.199.0.0/16",
  "default-address-pools": [{"scope":"local","base":"10.200.0.0/16","size":24}]
}
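With base 10.200.0.0/16 and size 24, Docker carves the pool into consecutive /24 subnets and hands out one per created network. A sketch of the first few subnets docker-compose would receive:

```shell
# Enumerate the first three /24 subnets carved out of 10.200.0.0/16.
for third in 0 1 2; do
  echo "10.200.$third.0/24"
done
```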
Hint: The IP address pool can only be set in the configuration if you're using a recent version of Docker; the feature was integrated in March 2018.
Start Docker again (sudo systemctl start docker.service) and test (ip r s) whether the configuration is correct. Connect to the WiFi and enjoy the ride :-D
It took a good deal of research to find the solution, and it also takes a bit of effort to configure. Unfortunately, the issue could easily pop up again in a different network. With IPv4 we have to stay on the watch; a truly permanent solution doesn't exist. I hope you won't run into it any time soon.