Too many 2FA tokens to retain control of

Recently we introduced a new security policy at Jimdo which requires every single GitHub account (bot accounts included) to have two-factor authentication enabled. As you might imagine there are many accounts for different purposes, and someone had to take care of enabling 2FA for all of them.

Luckily I’m only responsible for some of those bot users, but in the end I needed to touch several accounts. As I’m a huge fan of enabling every security measure I can find, my Authy app already holds a huge list of 2FA tokens. Adding even more tokens just to set them once and delete them afterwards didn’t seem very appealing to me. And even if I did that, I would still need to store those secrets somewhere to set up those accounts again as soon as I needed access to them.

After thinking about this problem I looked through all those 2FA tokens I already have in my Authy app and found that I’m using only a small number of them on a regular basis. All the other tokens are stored for use every once in a while (probably even as rarely as once per year). So in the end I would be fine with putting those secrets in a place where they are secure and not stored together with the password of the service. That means storing those secrets inside LastPass would be a bad idea because LastPass already holds all the passwords in its database.

As I’m hosting my own Vault instance I had the idea to put the secrets into Vault and then find a way to generate a one-time password from them as soon as I require access to those services. And luckily I like writing small utilities to do such things…

The idea of vault-totp (download on GitHub releases) was born and shortly afterwards put into code. What does it do? Quite simple: it takes a Vault key, reads the secret from it and generates the current one-time password. It can even take a wildcard in the last segment of the key and print a whole list of OTPs…
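
To give you an idea how that looks in practice, here is a short sketch. The key layout is made up for illustration and the exact invocation may differ, so check the project README:

# Store the TOTP secret of a bot account in Vault (example key layout)
vault write secret/totp/github-bot1 secret=ABCDEFGHIJKLMNOP

# Generate the current OTP for exactly that account
vault-totp secret/totp/github-bot1

# Wildcard in the last segment: print the OTPs of all matching keys
vault-totp "secret/totp/*"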

vault-totp console output

Now I can put all those tokens I rarely need into my Vault, and for the GitHub accounts mentioned earlier I can even put the secrets into the company Vault and restrict access to them using Vault ACLs. Whenever I need a one-time password for one of those accounts, a single command gets it for me while the account password stays in another secure location…
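
Such an ACL policy could look roughly like this sketch (policy name and paths are made up for illustration, and the exact syntax may differ between Vault versions):

# Allow a team to read only "their" TOTP secrets
cat > team-bots.hcl <<EOF
path "secret/totp/github-bot1" {
  policy = "read"
}
EOF
vault policy-write team-bots team-bots.hcl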

Keeping an overview of many git repos

If you’re a developer like me you are probably dealing with a huge number of different git repositories. Some for your private things (because having things organized with a version history is just nice), some for your private projects, and even more for company work. Then there are also distractions, and so maybe uncommitted changes…

At least that’s the situation on my dev machine: in total 684 git repositories for just about everything. Some are managed at Bitbucket or GitLab, but most of them at GitHub. One thing they all have in common: there are untracked files and modified things not yet committed, and somehow I got distracted, and now they are lying on my disk waiting to get pushed to the remote.

Even though I’m doing an hourly incremental backup using duplicity and my duplicity-backup wrapper, I don’t like that status. But managing that huge number of repositories and keeping an overview is hard. That’s the reason I came up with git-recurse-status.

In the end git-recurse-status is just a small Go binary walking through a tree of directories, collecting the current status of each repository and displaying it on the CLI. Sounds simple and could have been done with a small shell script. What I found too slow and also too complicated to put into a shell script is filtering those results. (684 lines are exhausting to read…)

So if you are like me you can put the binary (download on GitHub) into your PATH and just fire up git recurse-status -f changed in your homedir or in the directory keeping all your private projects (or wherever you like) and you’re given a list of repositories having changes (63 in my case). Similarly you can filter for repositories being ahead of their remote tracking branch and so on, as sketched below.
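
For example (the changed filter is the one mentioned above; the ahead filter name is an assumption on my side, the README lists the real ones):

# List repositories with uncommitted changes
git recurse-status -f changed

# List repositories ahead of their remote tracking branch
# (filter name is an assumption, see the README)
git recurse-status -f ahead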

For a detailed overview of what is possible, see the README file inside the repository…

About quitting projects

I’ve started so many projects and other smaller things over the years. Since I’ve been on GitHub I’ve created 120 repositories containing code. Not all of them are real projects; some are only tools I’ve invested a few hours in, but there are also many projects I’ve invested a lot of time in. Most of them have been neglected for quite a while and I don’t even remember all of them…

Recently I had a peer-group feedback session with my colleagues, and while preparing for that feedback I realized one of my goals for the next year should be to lower my off-work workload. Looking into my todo list, there are many things I need to do. Some of them are mainly “nice to have” things I can just skip over and over and nothing will happen. Maybe I should start with them. Nobody would notice if I just clicked that little trash can on those tasks and watched them vanish in a little animation.

Then there are those tasks I really need to do. Many of them are generated automatically every week or whenever I need to care about them. Though they also require time, I manage to do them quite well. But that list is only a small part of the grand total: there are issues in my repositories waiting for me to care about them. Some repositories even have whole roadmaps of things I need to do.

This already being a quite big list, I also work on (or rather should work on) finishing the new website of the VoxNoctem online radio. And then, not to forget, there is a whole bunch of ideas in my head I didn’t even write down. A large number of them vanished in the meantime, but a lot still stays with me.

Why am I telling you this, you might ask: as stated in the beginning, I want to reduce all of that workload. But I really don’t have a clue how to do this. Investing some time in all those small projects to make an improvement is not such a big deal, especially as most of the tools in my GitHub account serve a single purpose and don’t need that much maintenance.

But how to deal with the big projects? There is, for instance, my GoBuilder: back in 2015 I started to rewrite one component of the system but got distracted from that task. That task is also quite a big one, as it’s one of the two main components. And as with every big task, it’s hard to start working on it. Even though I forget a lot of things really fast (sometimes I have no clue what I did just 2 minutes ago), I still remember what I need to do to complete that task, and everything inside me resists it.

So in the end maybe I should just stop thinking about that project, as I’ve not worked on it for quite a long time, and let it go? It feels like the right thing to do, but on the other hand there are people using that project. Sure, they can use it in its current state, but is it fair to them to neglect a project they are using?

And even if I decide to stop working on those projects (and force myself to really stop caring about them), how do I communicate that? For projects I’m not hosting it’s fairly easy: they can be downloaded in their latest version. But what about the projects I’m also hosting? Taking down services puts all users in the position of adding another bunch of tasks to their task list: migrate from my service to something different…

So many questions, so few answers. Do you have hints or advice for me? Let me know via Twitter, Messenger, Discord, wherever you can find me…

Using Vault to unlock GPG keys

Some weeks ago I wrote about using LastPass-stored passwords to unlock SSH keys. Some of you gave feedback that using LastPass to store those quite confidential passwords might not be the best idea. Also, when there is no internet connection it’s just not possible to unlock the SSH keys (for example to access local VMs).

That’s the reason I thought about this and switched to using a local Vault instance to store those passwords. The unseal key for that Vault instance is still stored in LastPass, but I only need that key once per reboot / Vault reload, and even if someone got hold of my unseal keys they couldn’t use them without access to my local Vault instance.

Now that GitHub supports signed commits with a badge in its interface, I’m using my GPG key way more often to sign all the commits I’m creating, so I needed an easier way to enter the passphrase for it. Given that my GPG keys also don’t have passphrases someone could remember (especially as there is not only one key but seven of them), this too should be done using a script.

To use the script embedded below you need to have a gpg-agent running which was started with the parameter --allow-preset-passphrase. Also you need an unsealed Vault instance containing your GPG key passphrase, so you can do a vault read /secret/gpg-key/<your key-id>. To set up Vault please refer to the official documentation.
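
Instead of passing the parameter on the command line you can also enable the option in the gpg-agent configuration; the TTL values below are just examples:

# ~/.gnupg/gpg-agent.conf
allow-preset-passphrase
# Optional: how long (in seconds) cached passphrases survive
default-cache-ttl 3600
max-cache-ttl 7200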

When you’ve met all those requirements you can simply test whether it works by executing echo "hi" | gpg -sa before and after executing the script. If everything is working it should ask for a passphrase before the script execution but not after. The cache timeout after which the passphrase is dropped from the gpg-agent cache can be configured; for the configuration of the gpg-agent please refer to documentation you trust.
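
The script itself boils down to reading the passphrase from Vault and handing it to the running gpg-agent. A minimal sketch, assuming the passphrase lives in a field named passphrase and gpg-preset-passphrase sits at the path below (both vary between setups):

#!/usr/bin/env bash
# Sketch: preset the passphrase of a GPG key from Vault into gpg-agent.
# Assumptions: passphrase stored at secret/gpg-key/<key-id> in a field
# named "passphrase"; gpg-preset-passphrase path is distribution-specific.

KEY_ID=$1

# Read the passphrase from the (unsealed) Vault instance
PASS=$(vault read -field=passphrase "secret/gpg-key/${KEY_ID}")

# A key usually consists of several subkeys, each identified by a keygrip
# (the --with-keygrip option requires GnuPG 2.1+)
for GRIP in $(gpg --with-keygrip --list-secret-keys "${KEY_ID}" | awk '/Keygrip/ {print $3}'); do
  echo -n "${PASS}" | /usr/lib/gnupg2/gpg-preset-passphrase --preset "${GRIP}"
done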

Using LastPass to unlock SSH-keys

Since it’s good practice to use one SSH key for one purpose, I have a lot of SSH keys: one to access company servers, one to access my private servers, one for the servers running at VoxNoctem, one for code commits pushed to GitHub, and so on. All those keys have passphrases in order to keep them secure in case someone gets their hands on them.

Until some time ago I used to have one password for all of those SSH keys, because who can remember that many passwords? But there are password managers like Cloudkeys or LastPass. So I started to rotate my SSH keys and gave them different passwords. Those passwords were then stored in my password database, and to unlock the SSH keys I needed to do an ssh-add, then switch to my browser, open the password database, search for the password, copy it, switch back to the terminal and paste the password. Sounds complicated? It was.

This weekend I thought about making things easier by coupling all those steps together in one script. It should access my LastPass account, fetch the password and unlock the SSH key for it to be added to my ssh-agent. Using the lpass command line client for LastPass, I just had to figure out how to find the password in LastPass and how to add the SSH key with this automatically retrieved password.

In order to use the same mechanism for yourself, just install the lpass command line client, ensure you have expect on your system (it should be present on Linux and OSX by default) and copy the script below into /usr/local/bin/lpass-ssh (or any other location inside your $PATH).

The password for the corresponding key is found by name, so you need to name your keys differently and not call all of them id_rsa. If you have, for example, a key ~/.ssh/my_work_key, you need to create a secure note in LastPass with the type “SSH key” and name it SSH: my_work_key. Afterwards just execute lpass-ssh my_work_key to add the key to your current ssh-agent.

Of course you can also load keys not stored in ~/.ssh: just pass the full path to lpass-ssh and keep the naming scheme of SSH: <filename of your key>.
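
In case you want to build something similar yourself, a minimal sketch of such a script could look like this (the LastPass field name Passphrase is an assumption and may differ depending on how the secure note was created):

#!/usr/bin/env bash
# Sketch of lpass-ssh: fetch the key passphrase from LastPass and feed it
# to ssh-add via expect. Field name "Passphrase" is an assumption.

KEY=$1
# Accept both a bare key name (looked up in ~/.ssh) and a full path
[ -f "${KEY}" ] || KEY="${HOME}/.ssh/${KEY}"

# The secure note is named "SSH: <filename of the key>"
PASS=$(lpass show --field=Passphrase "SSH: $(basename "${KEY}")")

expect <<EOF
  spawn ssh-add "${KEY}"
  expect "Enter passphrase"
  send "${PASS}\r"
  expect eof
EOF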

Three more months with Withings

At the beginning of August I wrote a post about my experiences with the Pulse Ox. Since then there have been three more months of experiences with Withings products, their support, and also my attempt to integrate the data into the automation of my life.

Let’s start with something positive: the broken widget on the web dashboard has been fixed! The reason for the widget being broken was an error in the mobile app which didn’t send the required data. (Don’t ask me why one widget was able to show the data and one wasn’t. Obviously the app syncs data twice in different qualities…)

Now, back to those parts of my experiences this post is about. The mobile app was supposed to get background synchronization, or at least that is what I understood from a support ticket… Sadly, even several versions later, there is no background sync. So for the step data to be synced from the Pulse Ox to the Withings servers I need to wait for the Pulse to sync with my phone (or trigger a manual sync). Afterwards my phone knows about the data, but even after days the data is not synced with the server. To get this sync I need to manually open the app and do a “refresh pull” in the timeline, which triggers a sync between the app and the server. For me this is a show stopper when it comes to automatically processing that data. Getting my data with several days of delay, or having to do manual actions in the process, is just not acceptable.

Speaking about the API: Withings is using an API protected by OAuth 1.0. In general I don’t like the 1.0 version of OAuth, but I can get by. What really pisses me off is the fact that Withings requires the OAuth parameters to be sent inside the query string. Sure, this is something the OAuth standard allows, but it’s certainly not common practice.

After having found a way to convince the OAuth library to do this, there are more issues: either the documentation is outdated or they just broke their API. There are some endpoints which simply don’t work. Sadly I wanted to use one of those endpoints but couldn’t, which then required me to work around the broken endpoint and spend more resources (including the time to build those workarounds) to get to the same results.

Before writing down everything else that is wrong with that API, I’d like to reference an article by Kate Jenkins: Top Six Things You Need To Know About The Withings API

Having spoken about the support in my last article, I’d like to mention that their response times improved. Now they only need 8 days to respond to a ticket (instead of the previous 9 days), and they do respond with an answer instead of just asking whether the issue has gone away in the meantime. Sadly the answer they sent me had nothing to do with my question. Maybe I need to phrase my tickets in French instead of English? Too bad my French isn’t good enough for that…

After making the same bad experiences over months, my support for their products isn’t available anymore. Currently I’m strongly considering switching back to FitBit products and even ditching and replacing the Withings scale I had good experiences with over the last years.

Using multiple GOPATHs with fish-shell

Since I’m using Go in more and more projects and different contexts (private projects, company projects, contributions to other people’s open source projects, …) and all of them use different versions of libraries, I needed an approach to split up those library versions so I don’t have to test company projects against new versions of libraries whenever I update them for my private projects.

One approach would be to godep save those dependencies every time I switch projects and godep restore them as soon as I switch back. As I continuously have many different projects open at the same time, I would get confused quite fast and lose the overview of which versions of those libraries are currently checked out.

The approach I chose is a bit different and is mainly a port of an article by Herbert Fischer to the fish shell. It is fully interchangeable with his solution, so if you are using bash and fish mixed at the same time, you can use his version for bash and mine for fish.

As he explained in his article, you just create a .gopath file in the directory you want to be the $GOPATH, somewhere above the current one. So for example if you have your projects living in ~/gocode/src/... you want to create a .gopath file in ~/gocode/. Below is a small example of the results if you have your .gopath files at /tmp/test and ~/gocode/:

[13:39] luzifer ~> cd /tmp/test/foo/
[13:39] luzifer /t/t/foo> echo $GOPATH
/private/tmp/test
[13:39] luzifer /t/t/foo> cd
[13:39] luzifer ~> echo $GOPATH
/Users/luzifer/gocode
[13:39] luzifer ~> cd ~/gocode/src/github.com/Luzifer/password
[13:39] luzifer /~/g/s/g/L/password> echo $GOPATH
/Users/luzifer/gocode

The only change I made to Herbert’s code is that I defined a variable $default_GOPATH which $GOPATH will be set to if no .gopath is found in any directory above the current one. You can just leave it unset and your $GOPATH will get removed from the environment as soon as you leave the directories having a .gopath file above them.
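
Setting the fallback is a one-liner in your fish configuration; the path is just an example:

# e.g. in ~/.config/fish/config.fish
set -x default_GOPATH $HOME/gocode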

To enable this approach for yourself, just save the following code to ~/.config/fish/functions/cd.fish:

function cd
  # Perform the actual directory change first; keep cd's error status
  builtin cd $argv
  or return $status

  # Walk up the directory tree looking for a .gopath marker file
  set cdir (pwd)
  while [ "$cdir" != "/" ]
    if [ -e "$cdir/.gopath" ]
      # Found one: the directory containing .gopath becomes the GOPATH
      set -x GOPATH $cdir
      return 0
    end
    set cdir (dirname "$cdir")
  end

  # No marker found: fall back to $default_GOPATH (may be unset)
  set -x GOPATH $default_GOPATH
  return 0
end

(Terminal image: Zenith Z-19 Terminal by ajmexico)

Set phone wallpaper from Tumblr blog

For some time now I had been using a live wallpaper on my Android phone to cycle through a bunch of wallpapers, stored on the internal storage of my phone, at every unlock. But even if you’ve got several wallpapers, there comes a time when you’re fed up with seeing them every time you unlock your phone.

At least this was my situation when I thought about a source and a solution for having my wallpapers changed more often, without the hassle of searching for new wallpapers from different sources and copying them to the phone…

The solution I came up with is quite an easy one: first you need an IFTTT account and a recipe for this. You will also need to install the IFTTT app on your phone. Even though the recipe is called “from Tumblr” you can later use any other source for your wallpapers, as long as the images have about the same aspect ratio as your phone’s screen. You can find the recipe I used (which works with my script below) here:

IFTTT Recipe: Update phone wallpaper from Tumblr

The second component is a bit more complex to set up, as you will need a server or a computer able to run a Python script via a cron job. If you just want to update your phone wallpaper manually by executing a script, you can do that with the script below as well.

To get it running on your machine you need Python (available on every Linux and Mac OSX) and a small library you can get using this command: pip install requests

If you want to use the script as I did, you need to get your API key from Tumblr. For this just register an application on the Tumblr site and copy the “OAuth Consumer Key” into the tumblr_key field in the script. Additionally you will need the secret key (maker_key field) for the IFTTT Maker channel; you can find it on the channel’s page. When you’ve configured these two secrets you’re ready to run the script, and you will get an update of your phone’s wallpaper.

(To change the source just adjust the tumblr_blog field. If you don’t, be warned: images might be NSFW or even get you in trouble with your significant other…)

For a one-time execution just run python walltumblr.py (or whatever name you saved the script under). To set up a cron job periodically changing your phone’s wallpaper, please consult StackOverflow, for example.
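
For example, a crontab entry changing the wallpaper once per hour could look like this (the paths are examples and depend on your system):

# m h dom mon dow command
0 * * * * /usr/bin/python /home/user/walltumblr.py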

#!/usr/bin/env python

from __future__ import division
import random, requests, json

conf = {
  "tumblr_key": "<put your key here>",
  "tumblr_blog": "luziferus.tumblr.com",
  "tumblr_limit": 20,

  "maker_event": "walltumblr",
  "maker_key": "<put your key here>",

  # Aspect ratio of the phone screen and the maximum allowed deviation
  "ratio_target": 1440 / 2560,
  "ratio_max_deriv": 0.2,
}

random.seed()

tumblr_url = "http://api.tumblr.com/v2/blog/{tumblr_blog}/posts/photo?api_key={tumblr_key}&limit={tumblr_limit}".format(**conf)
maker_url = "https://maker.ifttt.com/trigger/{maker_event}/with/key/{maker_key}".format(**conf)

# Fetch the latest photo posts of the blog
tumblr = requests.get(tumblr_url).json()
posts = tumblr["response"]["posts"]

image_url = None

while True:
  # Pick a random post (the blog might contain fewer posts than tumblr_limit)
  post_no = random.randint(0, len(posts) - 1)

  img = posts[post_no]["photos"][0]["original_size"]

  ratio = img["width"] / img["height"]
  image_url = img["url"]

  # Accept the image only if its aspect ratio roughly matches the screen
  if abs(conf["ratio_target"] - ratio) <= conf["ratio_max_deriv"]:
    break

print("Sending image {}".format(image_url))

# Trigger the IFTTT Maker event which sets the wallpaper on the phone
maker_data = {
  "value1": image_url,
}
maker_headers = {
  "Content-Type": "application/json",
}
requests.post(maker_url, data=json.dumps(maker_data), headers=maker_headers)

About converting HTML5-slides into PDFs…

Since the day Ole, one of my colleagues, started promoting his service slidr.io in the office, I was curious how to get the slides of the few talks I’ve held into that service. Sadly I needed PDF files to get those slides into slidr.io, and what did I have? Shiny HTML5 slides built using the Google HTML5-slides library.

Today I finally managed to convert my slides into a PDF suitable for uploading to slidr.io, and because I tried several methods and failed at all except one, I want to share that knowledge with you in case you also want to put your slides online as a PDF or upload them to a service only accepting PDFs.

1st attempt: deck2pdf

deck2pdf is a tool written in Java (yeah, I know…) promising to be able to convert several types of HTML5 slides into a PDF. Sadly it did not work for me, as it spawned its built-in browser and then just sat there waiting for the end of the world… Even compiling the latest master myself (Java…) led to no changes…

2nd attempt: Use phantomjs

My second attempt was the one that finally led to success: use phantomjs to load the slides, save renderings of all slides, and step them forward one by one. Sadly I needed to put a small CSS snippet into my slides in order to hide the slides I did not want to see in my renderings:

.slides > article.next, .slides > article.past, .slides > article.far-next {
  display: none !important;
  -webkit-transform: none;
}

This snippet hides all but the current slide and ensures no transform is applied, which would otherwise lead to really huge renderings with a lot of wasted space at the edges that you would need to crop afterwards.

After manipulating my slides with that small snippet, the rest was just putting together a workflow using phantomjs to advance to the next slide every time and save the renderings in the right order. I ended up with some lines of JavaScript code doing exactly this:

var page = require('webpage').create(),
    system = require('system');

// URL or file of the slide deck is passed as the first argument
var address = system.args[1];
page.viewportSize = { width: 1024, height: 768 };

// Zero-pad the slide number so the PNG files sort correctly
function pad(n, width) {
  n = n + '';
  return n.length >= width ? n : new Array(width - n.length + 1).join('0') + n;
}

page.open(address, function (status) {
  if (status !== 'success') {
    console.log('Unable to load the address!');
    phantom.exit(1);
  } else {
    // Give the deck some time to initialize before counting the slides;
    // slideEls is a global provided by the Google HTML5-slides library
    window.setTimeout(function () {
      var numSlides = page.evaluate(function() {
        return slideEls.length - 1;
      });
      nextPage(0, numSlides);
    }, 2000);
  }
});

function nextPage(i, max_i) {
  // Render the current slide, then advance using the library's nextSlide()
  page.render("output" + pad(i, 3) + ".png");
  page.evaluate(function() { nextSlide(); });

  if (i < max_i) {
    // Wait a moment for the slide transition before rendering the next one
    window.setTimeout(function(){ nextPage(i + 1, max_i) }, 1000);
  } else {
    phantom.exit();
  }
}

It might look a bit confusing, but in the end it just does the job without much overhead. Sure, you could inject the CSS fix from the JavaScript in case you want to render slides you can’t change locally, but as I was able to edit every file locally I did not go that extra step and stopped at this state.

After you execute this script (saved as render.js) using phantomjs render.js index.html (index.html being the HTML file containing your slides), you will end up with a bunch of PNG images of your slides. If they look the way you want them to, you can now convert them into a PDF. For this task I used ImageMagick with just one simple command: convert output*.png output.pdf

Doing this might still be some manual work, but the only other way I can think of would be to take screenshots manually… Maybe this is not the way you want to convert your slides into a PDF… In my case the effort for the two slide decks I wanted to upload to my slidr.io account was maybe a bit too much, but it worked, and in case I’m holding talks in the future I’ll be able to use this method to convert them again…

My experiences with the Pulse Ox

As I’m continuously trying to get fitter and I like every kind of gadget, I was a user of the FitBit One. Lately I got an offer from Withings announcing a service to migrate all my data from my FitBit account and start tracking my steps, sleep and other stats using a Withings Pulse Ox.

Since 2011 I’ve been using a Withings WiFi scale to track my weight changes, and I’m really satisfied with that scale. It just does what I expect from it: it tracks my weight and body fat percentage and syncs them to my Withings account. I don’t have to take care of much (just change the batteries every now and then), and additionally I have an API to get all my data from that account and let applications or scripts (for example Libra for graphing the measurements) do things with the results.

Having had that experience with Withings products for years now, I didn’t think about the offer for long and bought a Withings Pulse Ox for 99.95 EUR. This was about a month ago. At the same time I connected the website promising to sync my FitBit data to my Withings account with my FitBit account, and the wait began.

Now, about a month later, in which I used the Pulse Ox on a daily basis, I can tell you a bit about my experiences and why I’m not really satisfied.

The Withings Pulse Ox

The device itself is about the size of the FitBit One (a bit shorter and also a bit wider) and doesn’t really hinder me in whatever I’m doing during the day. When I’m outside I normally wear an Android watch, so during that time the Pulse Ox sits in my pocket and works quite well, as far as I can tell by comparing the data it collects to other trackers: the number of steps doesn’t differ significantly from the number counted by Google Fit or the FitBit One.

During the night I wear the silicone wrist band holding the Pulse Ox. This wrist band is a really huge improvement compared to the wrist band of the FitBit One: the FitBit One had a fabric wrist band which lost its shape quite fast, while the silicone wrist band of the Pulse Ox stays as it was in the first place.

The sleep measurement of the Pulse Ox is a huge nope for me. The measurement is interrupted quite often, sometimes with gaps of 45 minutes without any data (just a white gap between the collected data). As for the collected data itself, I’m not able to confirm how accurate it is, as I don’t have access to professional equipment for detecting sleep phases.

In contrast to the FitBit One, the Withings Pulse Ox does not stay connected to the phone all the time (even when the app is active) but syncs at some interval I haven’t been able to figure out. You always have the chance to sync the data manually by pressing the button of the Pulse Ox for three seconds, which isn’t bad. The sync itself sadly is bad: sometimes it just stops mid-sync (with the phone about 50 cm away from the Pulse Ox), sometimes it does not sync all data (and does not sync it even on a second attempt), which I noticed after being at the gym with a count of nearly 6000 steps on the Pulse Ox itself while the app and also the website showed only slightly over 4000 steps, and sometimes everything works… (I don’t like things that only work sometimes.)

The mobile App: “Withings Healthmate”

First of all: the last versions of the app were a huge improvement. Some versions ago the sleep, for example, was only counted up to the last gap (which I mentioned above). Overall the app is now a way to visualize the data, which I don’t use often because there are better ways (just not for the sleep and the step counts), and a way to sync the data to my Withings account. Talking about syncs frequently not syncing all the data, here’s a good example:

Failed syncs and their display

As you can see, the sync was triggered at 09:12 that day, synced about 2:42h of sleep, and only after an additional manual sync did it sync the remaining data. How long did I sleep that night? Like 9 hours? No, just those 6:17h. The interface in general is relatively confusing and, for example, also does not remember the scroll position when entering a sub-view and jumping back. If you have scrolled through the timeline, you have to search again for the point in time you were viewing.

Additionally there are messages injected into the timeline with call-to-action items for participating in polls, pushing articles from the FAQ, and several other things.

The website

At the time the mobile application was not as usable as it is now, the website was the best place to view the data collected by the devices, and in some respects it still is. Sadly the quality of the website did not improve but worsened.

Some of the widgets are quite useful; for example, if you want to see how active you were during your day, the “Activity & sleep patterns” widget is great:

Activity & sleep patterns widget

Sadly, the steps goal displayed in the screenshot, for example, does not adjust and is fixed at 70k, while the goal I set in the app is 35k steps per week. Also, other widgets are currently totally broken and there is no fix in sight.

Broken widget on the web dashboard

All the things dealing with weight, where Withings has been in service for several years now, are quite sophisticated, but those dealing with steps and everything around the Pulse Ox are rather a work in progress…

The support

I wrote a support ticket about most of these points (those already known while writing the ticket) to the Withings support but got no answer for about 9 days. After those 9 days the response was like “hey, sorry for the late response, did your problem fix itself?”…

Seriously: what the fuck? This is not an acceptable answer in any way. When I’m telling you there are issues and naming them, why would those “vanish” by themselves? Did you work on them? Great, then you know what you’ve fixed and can tell me what you’ve fixed. You didn’t? Yeah okay, but then don’t ask me whether the problems are gone.

That answer after 9 days, with a half-hearted excuse for the long waiting time and the promise that the next support answer would not take another 9 days, without doing anything about the issue, looks like someone needed to respond just before an escalation deadline of 10 days. “We’ve met our support requirement to get back to the customer within 10 days!” Nope, you didn’t. Throwing predefined text blocks at all tickets open for more than 9 days is not support.

Conclusion / TL;DR

Even though I’m always up for test-driving new hardware and gadgets and am used to such things not being fully ready, the experience of the Withings Pulse Ox (which is a “finished” product, not a beta test or something) feels unready, like “hey, we are not ready for launch yet, but let’s launch anyway and do the last fixes and development while the product is already out there with the customers”.

As a customer I would not recommend that my friends buy this product, while as a developer I’m keeping it in action and hoping there will be major improvements in the near future.