Using LastPass to unlock SSH-keys

Since it's good practice to use one SSH key per purpose, I have a lot of SSH keys: one to access company servers, one for my private servers, one for the servers running at VoxNoctem, one for code commits pushed to GitHub and so on. All of those keys have passphrases so they stay secure in case someone gets their hands on them.

For a long time I used one password for all of those SSH keys, because who can remember that many passwords? But there are password managers like Cloudkeys or LastPass. So I started to rotate my SSH keys and gave them different passphrases. Those passphrases were then stored in my password database, and to unlock an SSH key I had to run ssh-add, switch to my browser, open the password database, search for the passphrase, copy it, switch back to the terminal and paste it. Sounds complicated? It was.

This weekend I thought about making things easier by coupling all those steps together in one script: it should access my LastPass account, fetch the passphrase and unlock the SSH key so it can be added to my ssh-agent. Using the lpass command line client for LastPass, I just had to figure out how to find the passphrase in LastPass and how to add the SSH key with the automatically retrieved passphrase.

To use the same mechanism yourself, just install the lpass command line client, make sure you have expect on your system (it should be present on Linux and OSX by default) and copy the script below into /usr/local/bin/lpass-ssh (or any other location inside your $PATH).

The passphrase for the corresponding key is looked up by name, so you need to give your keys different names instead of calling all of them id_rsa. If you have a key ~/.ssh/my_work_key, for example, you need to create a secure note in LastPass with the type “SSH key” and name it SSH: my_work_key. Afterwards just execute lpass-ssh my_work_key to add the key to your current ssh-agent.

Of course you can also load keys not stored in ~/.ssh: just pass the full path to lpass-ssh and keep the naming scheme of SSH: <filename of your key>.
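The script itself is not reproduced in this excerpt, so here is a minimal sketch of how such an lpass-ssh helper could work. The note naming follows the scheme described above; the exact structure, flags and prompts are assumptions, not the original script:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of an lpass-ssh helper -- not the original script.
set -eu

# Resolve a key argument: bare names are assumed to live in ~/.ssh,
# anything containing a slash is used as a path directly.
resolve_keyfile() {
  case "$1" in
    */*) printf '%s\n' "$1" ;;
    *)   printf '%s\n' "$HOME/.ssh/$1" ;;
  esac
}

# The matching LastPass secure note is named "SSH: <filename of the key>"
note_name() {
  printf 'SSH: %s\n' "$(basename "$1")"
}

if [ -n "${1:-}" ]; then
  keyfile="$(resolve_keyfile "$1")"
  # Fetch the passphrase from LastPass...
  passphrase="$(lpass show --password "$(note_name "$keyfile")")"

  # ...and use expect to type it at ssh-add's prompt
  expect <<EOF
spawn ssh-add "$keyfile"
expect "Enter passphrase"
send "$passphrase\r"
expect eof
EOF
fi
```

Running lpass-ssh my_work_key would then look up the note "SSH: my_work_key" and add ~/.ssh/my_work_key to the agent without any manual copy and paste.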

Three more months with Withings

At the beginning of August I already wrote a post about my experiences with the Pulse Ox. Since then there have been three more months of experience with Withings products, their support, and the attempt to integrate the data into the automation of my life.

Let's start with something positive: the broken widget on the web dashboard has been fixed! The reason the widget was broken was a bug in the mobile app, which didn't send the required data. (Don't ask me why one widget was able to show the data while another wasn't. Apparently the app syncs data twice in different qualities…)

Now, back to the parts of my experience this post is actually about. The mobile app was supposed to get background synchronization, or at least that's what I understood from a support ticket… Sadly, even several versions later there is no background sync. So for the step data to get from the Pulse Ox to the Withings servers, I need to wait for the Pulse to sync with my phone (or trigger a manual sync). Afterwards my phone knows about the data, but even after days the data is not synced to the server. To trigger that sync I need to manually open the app and do a pull-to-refresh in the timeline, which syncs the app with the server. For me this is a show stopper when it comes to automatically processing that data: getting my data with several days of delay, or having to perform manual actions, is just not acceptable.

Speaking about the API: Withings uses an API protected by OAuth 1.0. In general I don't like the 1.0 version of OAuth, but I can live with it. What really annoys me is that Withings requires the OAuth parameters to be sent inside the query string. Sure, the OAuth standard allows this, but it's certainly not common practice.

After finding a way to convince the OAuth library to do this, there were more issues: either the documentation is outdated or they simply broke their API, because some endpoints just don't work. Sadly I wanted to use one of those endpoints but couldn't, which forced me to work around it and spend more resources (including the time to build those workarounds) to get to the same results.
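To illustrate what sending OAuth 1.0 parameters in the query string actually involves, here is a rough, stdlib-only sketch of the HMAC-SHA1 signing defined in RFC 5849. The URL and parameter names below are placeholders, not Withings' actual endpoints:

```python
# Sketch of OAuth 1.0 (RFC 5849) HMAC-SHA1 signing with the signature
# shipped in the query string; example.com is a placeholder host.
import base64
import hashlib
import hmac
import urllib.parse


def percent_encode(value):
    # OAuth uses RFC 3986 encoding: only ALPHA / DIGIT / "-" / "." / "_" / "~"
    return urllib.parse.quote(str(value), safe="~")


def oauth_signed_url(method, base_url, params, consumer_secret, token_secret=""):
    # 1. Normalize parameters: percent-encode, sort, join with "&"
    normalized = "&".join(
        "{}={}".format(percent_encode(k), percent_encode(v))
        for k, v in sorted(params.items())
    )

    # 2. Signature base string: METHOD & enc(url) & enc(normalized params)
    base_string = "&".join(
        [method.upper(), percent_encode(base_url), percent_encode(normalized)]
    )

    # 3. Sign with HMAC-SHA1; key = enc(consumer_secret) & enc(token_secret)
    key = "{}&{}".format(
        percent_encode(consumer_secret), percent_encode(token_secret)
    ).encode()
    signature = base64.b64encode(
        hmac.new(key, base_string.encode(), hashlib.sha1).digest()
    ).decode()

    # 4. Everything, including oauth_signature, ends up in the query string
    return "{}?{}&oauth_signature={}".format(
        base_url, normalized, percent_encode(signature)
    )
```

A real request would then carry oauth_consumer_key, oauth_nonce, oauth_timestamp and friends as regular query parameters next to the API's own ones, which is exactly the unusual part Withings insists on.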

Before writing down everything else that is wrong with that API, I'd like to reference an article by Kate Jenkins: Top Six Things You Need To Know About The Withings API

Having written about their support in my last article, I'd like to mention that their response times have improved: now they only need 8 days to respond to a ticket (instead of the previous 9), and they respond with an actual answer instead of just asking whether the issue has gone away in the meantime. Sadly, the answer they sent me had nothing to do with my question. Maybe I need to phrase my tickets in French instead of English? Too bad my French isn't good enough for that…

After months of the same bad experiences, my support for their products is gone. Currently I'm strongly considering switching back to FitBit products and even ditching and replacing the Withings scale I've had good experiences with over the last years.

Using multiple GOPATHs with fish-shell

Since I'm using Go in more and more projects and in different contexts (private projects, company projects, contributions to other people's open source projects, …), all of which use different versions of libraries, I needed a way to separate those library versions so I don't have to test company projects against new library versions whenever I update them for my private projects.

One approach would be to godep save the dependencies every time I leave a project and godep restore them as soon as I switch back to it. As I continuously have many different projects open at the same time, I would get confused quite fast and lose track of which versions of those libraries are currently checked out.

The approach I chose is a bit different and is mainly a port of an article by Herbert Fischer to the fish shell. It is fully interchangeable with his solution, so if you use bash and fish side by side you can use his version for bash and mine for fish.

As he explained in his article, you just create a .gopath file in a directory somewhere above the one you want to be the $GOPATH. For example, if your projects live in ~/gocode/src/... you want to create a .gopath file in ~/gocode/. Below is a small example of the results if you have .gopath files at /tmp/test and ~/gocode/:

[13:39] luzifer ~> cd /tmp/test/foo/
[13:39] luzifer /t/t/foo> echo $GOPATH
/tmp/test
[13:39] luzifer /t/t/foo> cd ~/gocode/src/
[13:39] luzifer ~/g/src> echo $GOPATH
/home/luzifer/gocode
[13:39] luzifer ~/g/src> cd
[13:39] luzifer ~> echo $GOPATH

[13:39] luzifer ~>

The only change I made to Herbert's code is that I introduced a variable $default_GOPATH which is used when no .gopath file is found in any directory above the current one. You can also leave it unset, in which case your $GOPATH is cleared as soon as you leave the directories having a .gopath file above them.
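If you want such a fallback, $default_GOPATH can be set once in your fish configuration; the ~/gocode path here is just an assumed example, adjust it to your setup:

```fish
# In your fish configuration (e.g. config.fish): fallback GOPATH used
# whenever no .gopath file is found above the current directory
set -gx default_GOPATH $HOME/gocode
```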

To enable this approach for yourself, just save the following code to ~/.config/fish/functions/

function cd
  builtin cd $argv

  # Walk up from the current directory looking for a .gopath file
  set cdir (pwd)
  while [ "$cdir" != "/" ]
    if [ -e "$cdir/.gopath" ]
      set -x GOPATH $cdir
      return 0
    end
    set cdir (dirname "$cdir")
  end

  # No .gopath found anywhere above: fall back to the default
  set -x GOPATH $default_GOPATH
  return 0
end

(Terminal image: Zenith Z-19 Terminal by ajmexico)

Set phone wallpaper from Tumblr blog

For some time I was using a live wallpaper on my Android phone to cycle, at every unlock, through a bunch of wallpapers stored on the internal storage of my phone. But even if you have several wallpapers, there comes a time when you're fed up with seeing them every time you unlock your phone.

At least this was my situation when I started thinking about a source and a solution to have my wallpapers change more often, without the hassle of searching for new wallpapers from different sources and copying them to the phone…

The solution I came up with is quite easy: first you need an IFTTT account and a recipe. You will also need to install the IFTTT app on your phone. Even though the recipe is called “from Tumblr”, you can later use any other source for your wallpapers as long as the images have about the same aspect ratio as your phone's screen. You can find the recipe I used (which works with my script below) here:

IFTTT Recipe: Update phone wallpaper from Tumblr connects maker to android-device

The second component is a bit more complex to set up, as you will need a server or a computer able to run a Python script via a cron job. If you just want to update your phone wallpaper manually by executing a script, you can do that with the script below as well.

To get it running on your machine you need Python (available on every Linux and Mac OSX system) and a small library you can install using this command: pip install requests

If you want to use my script you need to get an API key from Tumblr. Just register an application on the Tumblr site and copy the “OAuth Consumer Key” into the tumblr_key field of the script. Additionally you will need the secret key (maker_key field) for the IFTTT Maker channel, which you can find on the channel's page. When you've configured these two secrets you're ready to run the script, and your phone's wallpaper will be updated.

(To change the source just adjust the tumblr_blog field. If you don't, be warned: images might be NSFW or even get you in trouble with your significant other…)

For a one-time execution just run python (or whatever name you saved the script under). To set up a cron job that periodically changes your phone's wallpaper, please consult StackOverflow, for example.
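For reference, a crontab entry for an hourly wallpaper change could look like the following; the path and script name are placeholders, since the post leaves the filename up to you:

```
# crontab -e: run the (hypothetically named) wallpaper script every hour
0 * * * * /usr/bin/python /home/you/tumblr-wallpaper.py
```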

#!/usr/bin/env python

from __future__ import division
import json
import random
import requests

conf = {
  "tumblr_key": "<put your key here>",
  "tumblr_blog": "",
  "tumblr_limit": 20,

  "maker_event": "walltumblr",
  "maker_key": "<put your key here>",

  # Target aspect ratio of the phone screen and the allowed deviation
  "ratio_target": 1440 / 2560,
  "ratio_max_deriv": 0.2,
}

tumblr_url = "https://api.tumblr.com/v2/blog/{tumblr_blog}/posts/photo?api_key={tumblr_key}&limit={tumblr_limit}".format(**conf)
maker_url = "https://maker.ifttt.com/trigger/{maker_event}/with/key/{maker_key}".format(**conf)

tumblr = requests.get(tumblr_url).json()
posts = tumblr["response"]["posts"]

image_url = None

# Draw random posts until one matches the expected screen ratio
# (assumes the blog contains at least one fitting image)
while True:
  post_no = random.randint(0, len(posts) - 1)

  img = posts[post_no]["photos"][0]["original_size"]

  ratio = img["width"] / img["height"]
  image_url = img["url"]

  if abs(conf["ratio_target"] - ratio) <= conf["ratio_max_deriv"]:
    break

print("Sending image {}".format(image_url))

# Hand the image URL to the IFTTT Maker channel which updates the wallpaper
maker_data = {
  "value1": image_url,
}
maker_headers = {
  "Content-Type": "application/json",
}
requests.post(maker_url, data=json.dumps(maker_data), headers=maker_headers)

About converting HTML5-slides into PDFs…

Since the day Ole, one of my colleagues, started promoting his service in the office, I was curious how to get the slides of the few talks I have held into that service. Sadly I needed PDF files to get my slides in, and what did I have? Shiny HTML5 slides built using the Google HTML5-slides library.

Today I finally managed to convert my slides into a PDF suitable for uploading, and because I tried several methods and failed at all but one, I want to share that knowledge with you, in case you also want to put your slides online as a PDF or upload them to a service that only accepts PDFs.

1st attempt: deck2pdf

deck2pdf is a tool written in Java (yeah, I know…) that promises to convert several types of HTML5 slides into a PDF. Sadly it did not work for me: it spawned its built-in browser and then just sat there waiting for the end of the world… Even compiling the latest master myself (Java…) changed nothing.

2nd attempt: Use phantomjs

My second attempt was the one that finally led to success: use phantomjs to load the slides, save renderings of all slides and step through them one by one. I needed to put a small CSS snippet into my slides to hide the slides I did not want to appear in the renderings:

.slides > article.far-past,
.slides > article.past,
.slides > article.next,
.slides > article.far-next {
  display: none !important;
  -webkit-transform: none;
}
This snippet hides all but the current slide and ensures no transform is applied, which would otherwise lead to really huge renderings with a lot of wasted space at the edges that you would need to crop afterwards.

After manipulating my slides with that small snippet, the rest was just putting together a workflow using phantomjs to advance to the next slide each time and save the renderings in the right order. I ended up with a few lines of JavaScript doing exactly this:

var page = require('webpage').create(),
    system = require('system');

var address = system.args[1];
page.viewportSize = { width: 1024, height: 768 };

// Zero-pad the slide number so the PNGs sort correctly
function pad(n, width) {
  n = n + '';
  return n.length >= width ? n : new Array(width - n.length + 1).join('0') + n;
}

page.open(address, function (status) {
  if (status !== 'success') {
    console.log('Unable to load the address!');
    phantom.exit(1);
  } else {
    window.setTimeout(function () {
      // slideEls is a global provided by the HTML5-slides library
      var numSlides = page.evaluate(function() {
        return slideEls.length - 1;
      });
      nextPage(0, numSlides);
    }, 2000);
  }
});

function nextPage(i, max_i) {
  page.render("output" + pad(i, 3) + ".png");
  // nextSlide() is also provided by the slide library
  page.evaluate(function() { nextSlide(); });

  if (i < max_i) {
    window.setTimeout(function(){ nextPage(i+1, max_i) }, 1000);
  } else {
    phantom.exit();
  }
}
It might look a bit confusing, but in the end it just does the job without much overhead. Sure, you could inject the CSS fix from the JavaScript in case you want to render slides you can't change locally, but as I was able to edit every file locally I did not take that step and stopped here.

After executing this script (saved as render.js) using phantomjs render.js index.html (index.html being the HTML file containing your slides), you will end up with a bunch of PNG images of your slides. If they look the way you want, you can then convert them into a PDF. For this task I used ImageMagick with one simple command: convert output*.png output.pdf

This might still involve some manual work, but the only other way I can think of would be to take screenshots by hand… Maybe this is not the way you want to convert your slides into a PDF. In my case the effort for the two slide decks I wanted to upload was perhaps a bit much, but it worked, and if I hold talks in the future I can use this method again.

My experiences with the Pulse Ox

As I'm continuously trying to get fitter and I like every kind of gadget, I was a user of the FitBit One. Recently I got an offer from Withings announcing a service to migrate all my data from my FitBit account and to start tracking my steps, sleep and other stats using a Withings Pulse Ox.

Since 2011 I've been using a Withings WiFi scale to track my weight changes, and I'm really satisfied with it. It just does what I expect from it: it tracks my weight and body fat percentage and syncs them to my Withings account. I don't have to take care of much (just change the batteries every now and then), and additionally I have an API to get all my data from that account and let applications or scripts (for example Libra, for graphing the measurements) work with the results.

Having had that experience with Withings products for years, I didn't think about the offer for long and bought a Withings Pulse Ox for 99.95 EUR. That was about a month ago. At the same time I connected the website promising to sync my FitBit data to my Withings account with my FitBit account, and the wait began.

Now, about a month later, in which I used the Pulse Ox on a daily basis, I can tell you a bit about my experiences and why I'm not really satisfied.

The Withings Pulse Ox

The device itself is about the size of the FitBit One (a bit shorter and also a bit wider) and doesn't really hinder me in whatever I'm doing during the day. When I'm outside I normally wear an Android watch, so during that time the Pulse Ox sits in my pocket and works quite well, as far as I can tell by comparing the data it collects to other trackers: the number of steps doesn't differ significantly from the number counted by Google Fit or the FitBit One.

During the night I wear the silicone wrist band holding the Pulse Ox. This wrist band is a huge improvement compared to the wrist band of the FitBit One: the FitBit One had a fabric wrist band which lost its shape quite fast, while the silicone wrist band of the Pulse Ox stays as it was on day one.

The sleep measurement of the Pulse Ox is a huge nope for me. The measurement is interrupted quite often, sometimes with gaps of about 45 minutes without any data (just a white gap between the collected data). As for the collected data itself, I'm not able to confirm how accurate it is, as I don't have access to professional equipment for detecting sleep phases.

In contrast to the FitBit One, the Withings Pulse Ox does not stay connected to the phone all the time (even when the app is active) but syncs at some interval I haven't been able to figure out. You can always trigger a manual sync by pressing the button of the Pulse Ox for three seconds, which isn't bad. The sync itself, sadly, is: sometimes it just stops mid-sync (with the phone about 50cm away from the Pulse Ox), and sometimes it does not sync all the data, not even on a second attempt. I noticed the latter after being at the gym: the Pulse Ox itself showed a count of nearly 6000 steps, but the app and also the website showed only slightly over 4000. And sometimes everything works… (I don't like things that work only sometimes.)

The mobile App: “Withings Healthmate”

First of all: the last versions of the app were a huge improvement. A few versions ago, sleep for example was only counted up to the last gap (which I mentioned above). Overall, the app is now a way to visualize the data, which I don't use often because there are better ways for everything except the sleep and step counts, and a way to sync the data to my Withings account. Speaking of syncs that frequently don't transfer all the data, here's a good example:

Failed syncs and their display

As you can see, the sync was triggered at 09:12 that day and transferred about 2:42h of sleep; only after an additional manual sync did it transfer the remaining data. How long did I sleep that night? Like 9 hours? No, just those 6:17h. The interface in general is relatively confusing and, for example, does not remember the scroll position when entering a sub-view and jumping back: if you have scrolled through the timeline, you have to search again for the point in time you were viewing.

Additionally, messages are injected into the timeline with call-to-action items for participating in polls, pushing articles from the FAQ and several other things.

The website

Back when the mobile application was not as usable as it is now, the website was the best place to view the data collected by the devices, and in some respects it still is. Sadly the quality of the website has not improved but worsened.

Some of the widgets are quite useful; for example, if you want to see how active you were during your day, the “Activity & sleep patterns” widget is great:

Activity & sleep patterns widget

Sadly, the steps goal displayed in the screenshot, for example, does not adjust and is fixed at 70k, while the goal I set in the app is 35k steps per week. Other widgets are currently totally broken, with no fix in sight.

Activity & sleep patterns widget

All the features dealing with weight, where Withings has been in service for several years, are quite sophisticated, but those dealing with steps and everything around the Pulse Ox feel rather like a work in progress…

The support

I wrote a support ticket about most of these points (those already known while writing the ticket) to the Withings support but got no answer for about 9 days. After those 9 days, the response was like “hey, sorry for the late response, did your problem fix itself?”…

Seriously: what the fuck? This is not an acceptable answer in any way. When I tell you there are issues and name them, why would they “vanish” by themselves? You worked on them? Great, then you know what you've fixed and can tell me. You didn't? Okay, but then don't ask me whether the problems are gone.

That answer after 9 days, with a half-hearted excuse for the long waiting time and the promise that the next support answer would not take another 9 days, all without doing anything about the issue, looks like someone needed to respond just before an escalation deadline of 10 days. “We've met our support requirement to get back to the customer within 10 days!” Nope, you didn't. Throwing predefined text blocks at all tickets open for more than 9 days is not support.

Conclusion / TL;DR

Even though I'm always up for test-driving new hardware and gadgets and am used to them not being quite ready, the experience of the Withings Pulse Ox (which is a “finished” product, not a beta test or something) feels unready, like “hey, we are not ready for launch yet, but let's launch anyway and do the last fixes and development while the product is out there with the customers”.

As a customer I would not recommend my friends buy this product, while as a developer I'm keeping it in use and hoping there will be major improvements in the near future.

New design and site merges

Yesterday I “finished” my work bringing my blog to a new and shiny design. This time it was meant to be a responsive template with bright colors, a more complete design that doesn't look like some third-grader put it together in about two hours, and all together it should fit my contact page as well as the blog.

Sure, we all know the term “finished” when it comes to web designs, and it will probably never be finished, but for now it's a huge improvement over the old design. You can now also find some of my projects, previously listed on a Trello board, and a new, way shorter “about me” text on the contact page.

The “new” blog itself still contains all my posts from 2008 until now (919 posts), so many of them will not really fit the new design, as they were created for several previous designs. For example, this is the first post containing a teaser image which is really responsive and does not force mobile devices to add horizontal scroll bars. (For the posts on the first index page I've also modified the posts to be responsive, but I will not do this for all of those over 900 posts.)

Old blog design

The old blog will be migrated to the new version soon by redirecting visitors to this new version, and will then cease to exist. So if you linked the old version in your sidebar or somewhere else, please change that link to point to in the future. (You don't need to go through all your posts to check whether you linked me in one of your articles, though; the redirects will remain in place for a long time…)

What’s your opinion on the new design? Do you like it? Please leave me a comment below…

Sorry for the spam…

Today might be the day I made one of the biggest mistakes of my online life. Until now I had managed not to send any mass mails or spam from my email address: multiple years, multiple email accounts and never such a disaster.

But what happened? Over a longer period I got a bunch of mails because someone (I don't know who) opened an account using my email address at a service I'm not going to name in this blog post, but those of you I'm apologizing to know what I'm talking about. To finally get rid of those mails I decided to log into that account and delete it.

Yeah, like I said in the first paragraph: it was the biggest mistake I've made while using the internet. Somehow (I still don't know how, because I definitely did not trigger such an action) that service started a mass mailing to all contacts in my address book. Before I got the first responses to that mail (they were so “nice” as to put my email address into the sender field), I noticed the spam wave because my address book contains some mail addresses for testing purposes which were never published anywhere.

That said: please do not accept the “invitation” from my address to join that “social discovery network”. Please treat this email as spam and throw it where it belongs: the trash bin.

Despite the fact that I've broken my own rule never to click any link in a spam mail, I hope this does not cause a larger spam wave through anyone subscribing to that service. Luckily the mail was sent in German, so all non-German-speaking contacts in my address book will not even understand it and will hopefully trash it as “foreign language spam”…

Update: Looks like I'm not the only one who has fallen for this… Although I didn't want to name the company sending out those mails, here is an article about the same problem from about two years ago: A Year Of Spam: The Twoo Experience. (And after searching for more results, there seems to be a huge list of people with the same problem…)

Update 2: Today I reviewed the list of apps connected to my Google account and indeed found an app I apparently authorized some day to access my address book. Although I'm a hundred percent sure I never gave that service any permissions on my Google account, there are reports they took over several other apps, one of them an app I authorized a long time ago. It looks like they used the already-authorized apps of the services they took over to gain access to the address book.

Small tools make everyday life easier


Those who know me know that, both professionally and privately, I work on optimizing work processes and that I don't like “monkey tasks”. If the term doesn't ring a bell: a monkey task is an activity performed by hand, exactly the same way, over and over again, and usually so demanding that a reasonably well-trained monkey could do it too. Then again, training the monkey would be a lot of effort, and there's animal welfare to consider…

That's exactly why it's best to leave monkey tasks to the computer, and for that purpose I'd like to present three of my tools that came into being recently. All of the projects can of course also be found in my GitHub profile and are released under an Apache 2.0 license, which allows you to use them commercially without any problems.


The first process that got on my nerves was logging into the AWS web console. I currently have access to 5 different AWS accounts, all of which naturally have their own credentials and are partly operated differently. So every time I had to look up which credentials apply to which account and what the account is called. You can do that once in a while, but in the long run it gets really annoying.

As I already mentioned, I like to get such tasks off my back, and thus awsenv was born. awsenv is a small tool operated from the terminal which can remember all those credentials. Of course everything is encrypted with AES256 (the current encryption level for “secret documents” of the DoD) so the data doesn't lie around unprotected somewhere on the machine.


If, like me, you run many small projects and keep starting new ones, you quickly reach the point where these projects should all be assigned licenses. What annoyed me here was that I kept searching the web for the license texts, or had to remember in which projects I had already used a license so I could copy the respective license file. So, a little time and a few lines of source code later, license was born.

license automates adding these license files and also takes care of the tedious filling-in of fields like the current year and the author of the software. The data for this comes from the git configuration of the current user. Of course you have to edit it a bit if you need the file for an organization, but well, not everything can be fully automated.


The only tool in this series not written in Go serves to keep things tidy on GitHub when it comes to which repositories are “watched”. By default, GitHub watches all newly created repositories for you. If, like me, you are in various organizations where repositories are created all the time that you may have nothing to do with, you disable this automatic watching rather quickly. (Or you live with clicking them away manually… Hello, monkey task!)

But once you've disabled this feature, no new repository is watched automatically anymore, and the ones you create yourself are no exception. This quickly leads to not getting notifications anymore when someone opens an issue in one of your own projects. Now the game starts the other way around: you click through the list and search for what you should be watching after all…

github-masswatch can take over this work, as long as you are able to describe the repositories you want to subscribe to with a regex. For example, if I want to automatically watch my own repositories, that would be ^Luzifer/.*$, as those repositories all start with my GitHub nick. You can even go as far as I did and set up a Jenkins job which once a day walks through a list of regexes and subscribes to all matching repositories…

SmartSecure, or: online banking as an obstacle

For some time now I have been a customer of ING-DiBa, after DKB's refusal to adopt new technologies increasingly annoyed me. At DiBa I can now finally use the mTan scheme, for example, and no longer have to bother with carrying an iTan list around or being unable to make transfers while on the road…

Recently, DiBa introduced their new app “SmartSecure”, which supposedly makes everything much faster and easier. Additionally, you are allowed to use a banking app and the SmartSecure app together on the same device, which is not permitted with the combination of banking app and mTan scheme.

The SmartSecure app is advertised as being especially secure and “implemented as an independent virtual device”. It also uses its own keyboard, which by now has annoyed me so much that I uninstalled the app and switched back to the mTan scheme. Why? Quite simple: the in-app keyboard has the usability of a brick. First, it always starts in numeric input mode: a fine thing if you want to set your app password to a sequence of digits, but very obstructive otherwise. Then, the app and its keyboard don't care in the slightest that my system language (and also my default input method) is set to US English. And finally, the keyboard layout is light years away from the usability of today's smartphone on-screen keyboards…

You could perhaps still come to terms with the keyboard, but here is what's absolutely unacceptable: this app is supposed to be a security feature of the online banking. Ergo, I have to confirm all transactions and other actions that previously required a Tan with this app. When the app, after unlocking, shows me the transaction to approve for a fraction of a second, then locks itself again and in doing so cancels the transaction on its own without my interaction, I see red. Luckily, so far it has only cancelled transactions this way and never approved one without my interaction. That may cause extra work, as the transfer has to be created anew, but at least no costs…

What also turned out, unfortunately: DiBa does not support using more than one authentication method at the same time. So, unlike for example the Google login with a hardware token or the Authenticator app or an SMS, there is only the SmartSecure app. To get back to mTan, you first have to switch from SmartSecure to the iTan scheme, find your iTan list, and then switch to the mTan scheme. (Each switch has to be authorized with the currently active authentication method.)