DIY: Homemade fizzy water

For quite a while I’ve been a loyal SodaStream customer and owned four different models of their devices to produce fizzy water at home. Over time some broke, and the last one (a SodaStream Crystal Silver) not only has a broken mechanism to lift its parts but also doesn’t look hygienic anymore after not being used for more than half a year.

So I tried to get the device clean again, but I can’t fit my hand in it properly to give it a good clean, and it’s also impossible to disassemble the device: some screws are tightened with huge force (or just covered in whatever keeps them in place) and some sit under plastic parts I’d break while disassembling.

The alternative

So cleaning wasn’t an option to get fizzy water again and I needed an alternative: Option A would have been to go to the local store (or Amazon) and just buy a new device. When I bought my Crystal Silver I spent around a hundred EUR on it, as it came with glass bottles and I don’t like the plastic ones: they are not dishwasher-safe. If I bought another one of those, Amazon would charge me 120 EUR…

After watching some YouTube videos of US guys building their own fizzy-water rigs, I began looking up prices and parts to build something similar myself. Luckily I stumbled upon a website for home-brewers (beer stuff) explaining how to get the best results for getting bubbles into beer. Well, my drink of choice is not beer, but it’s based on water (it is water), so that should work for me.

I still have the CO2 tank from the SodaStream (though I’m probably not supposed to use it in my own solution), so that’s a start. CO2 tanks hold around 65-72 bar (~1000 PSI), which is nothing you’d want to apply to any drink bottle out there (it would just burst and injure you). Therefore the first component is a pressure regulator, a quite common part thanks to all the home-brewers out there.

Then we need a vessel for the liquid that can withstand some pressure (the regulator I got goes up to 90 PSI, which is ~6 bar). In the YouTube videos the guy used old soda bottles with a “carbonator cap”. Those caps are US-made and only available in the US, but luckily over here we don’t get that plastic stuff but stainless-steel caps for PET bottles (in my case an empty Cola bottle). The offer I found even included a coupler for the cap and spared me searching for one.

Finally I went to the local DIY market looking for hose to connect the regulator and the coupler. Turns out DIY markets don’t carry hose made specifically for CO2 that withstands ~8 bar (120 PSI) with a ¼” inner diameter. But they do have transparent water hose rated for 10+ bar in exactly that diameter, so I got half a meter of it and two hose clamps to fix it in place.

In the end I spent 31 EUR for the regulator, 15 EUR for the cap and coupler, and 4 EUR for the hose and clamps. 50 EUR and around 2 minutes of assembly in total for a DIY device producing fizzy water, instead of 100 EUR for a new SodaStream.

Pros vs. Cons

Clear advantage: It will cost me another 2 minutes and some cash to exchange single parts of my solution. The regulator breaks? Get a new one! The cap is unhygienic? Put it into the dishwasher (or get a new one if it can’t be helped anymore). The hose breaks? Luckily 6 bar won’t hurt me much and I can get a new one from the DIY store in less than 10 minutes and for 4 EUR.

Disadvantages? For sure: I don’t have a well-designed housing (yet!) and I’m using PET bottles instead of glass bottles to get my water fizzy. But if the bottle needs exchanging I’ll just buy another Coke and have a new one. I could even do that on a weekly basis.

How to even use?

Now what special method am I using to get the water fizzy? A quite easy one: Fill the bottle with the liquid you want to carbonate until around 2 inches (5 cm) of air is left at the top. Then squeeze the bottle until the liquid nearly reaches the top and screw the cap on tight. Attach the coupler and apply 40 PSI of pressure to the bottle. Now, without releasing the pressure, shake the liquid to give it a bigger surface so it can absorb the CO2. The whole process works better with cooled liquids as they are better at dissolving CO2.

And if the liquid isn’t as fizzy as you wanted? Experiment with temperature, pressure, shaking, how long the pressure is applied and so on until you’ve got the perfect result for yourself.

Saving money

If you want to save even more money you can visit your local CO2 dealer and get, for example, a 2 kg bottle of CO2 and use that instead of the SodaStream ones: Just ask for food-grade CO2 and they will provide it. Most of the time you pay a one-time fee for the bottle and then swap it on each visit. If you have a big family and everyone loves fizzy water: get a 5 or even 10 kg bottle. Far fewer trips, much more water.

In case you don’t want a big bottle of CO2 in your apartment (which isn’t a good idea anyway), you can get a bottle made for filling smaller ones and then use, for example, a paintball CO2 tank for daily use. (You need to tell the CO2 dealer that you want to fill other bottles from it, as those are special bottles dispensing liquid CO2 instead of CO2 gas. NEVER use one of those liquid-dispensing bottles with a pressure regulator! In the best case it will just freeze and stop working; in the worst case you’ll end up in the ICU.)

Before experimenting with this, please inform yourself about the dangers of CO2 and how to handle it. As I mentioned before: you will be handling a gas at around 70 bar (1000 PSI), which in high concentrations is harmful to your health!


Go 1.11 dependency management: Hell NO!

Some days ago Go 1.11 was released, and with that release we now have a dependency management tool built into the go binary itself: we’ve got Go modules to play with. Go modules derived from the vgo experiment and take quite a different approach than all the previous vendoring tools (including dep).

But let’s start at the beginning of a new project: I created a new folder and added a main.go with some code and some imports. Good to go: let’s export GO111MODULE=on and do a go mod init. After some time thinking about my dependencies I now have a go.mod and a go.sum file. That’s all: no more vendor folder, no more vendoring commits adding a slim 500k lines of Go code to the project!
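For reference, the result of that init plus a few requirements is a tiny file; module path and dependency below are made up:

```
module github.com/example/hello

require github.com/pkg/errors v0.8.0
```

The go.sum next to it pins the checksums of every module involved in the build.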

Well, thinking twice: what was the initial reason we began vendoring Go code in a vendor folder and building tools around that folder? The reason is we’re not living in a perfect world but are working with careless people: they force-push to Git repos, don’t version their code, and if they do add versions they sometimes just pull them from the repository again. They also tend to change their Github usernames, delete repos or even whole accounts.

Now with Go modules we’re back at the point in time before all the vendoring happened: we don’t have a copy of the third-party source code. So if any of the above happens (or maybe Github is just down), our go build will look at us and run around confused about missing dependencies. Oh well, as the person who successfully built the last commit I do have the required dependencies. Your bad you don’t.

Go modules fetch dependencies basically through a go get like in the old times, so we will again experience all the problems we had before Go 1.5 introduced vendoring. Well, maybe we won’t: there is Project Athens, and other solutions will follow. They are proxies for Go modules, as Go is now capable of speaking a quite specific protocol to proxies: they can cache all the code required to build our projects.
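To get a feeling for how simple that protocol is: a proxy mostly answers four GET routes below a module path. A sketch of just the routing idea (not a complete proxy):

```go
package main

import (
	"fmt"
	"strings"
)

// routeKind classifies a request path according to the Go module
// proxy protocol: <module>/@v/list, <module>/@v/<version>.info,
// <module>/@v/<version>.mod and <module>/@v/<version>.zip.
func routeKind(path string) string {
	idx := strings.Index(path, "/@v/")
	if idx < 0 {
		return "unknown"
	}
	rest := path[idx+len("/@v/"):]
	switch {
	case rest == "list":
		return "list" // all known versions, one per line
	case strings.HasSuffix(rest, ".info"):
		return "info" // JSON metadata for one version
	case strings.HasSuffix(rest, ".mod"):
		return "mod" // the go.mod of that version
	case strings.HasSuffix(rest, ".zip"):
		return "zip" // the module source as a zip archive
	}
	return "unknown"
}

func main() {
	fmt.Println(routeKind("/github.com/pkg/errors/@v/list"))
	fmt.Println(routeKind("/github.com/pkg/errors/@v/v0.8.0.zip"))
}
```

A caching proxy answers these routes from local storage, which is exactly why it survives deleted or renamed upstream repositories.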

Now we have two working scenarios: The first is a company keeping all code in its own repositories, where every employee strictly follows a guideline never to force-push, never to delete a version, always to use SemVer, and of course never to delete or rename any repository. The second is a company that has set up a proxy speaking the Go modules proxy protocol: they have a cached copy of all code required to build the project. Someone deletes code? They don’t care, the proxy has a copy.

One of the main areas I’m working in is open source projects: will there be strict policies like in example one? Nope. Just look at what happened when a well-known repository was renamed… Half of the Go world imploded and the maintainer wrote an apology issue explaining everything and trying to help people get the rename right. Or look at the death of Google Code: in some old projects you will find code referenced from there, and it will not build, as Google pulled the plug and Google Code went dark.

So open source projects must follow the second approach: every project needs to set up a proxy to be used when compiling its source! No more plain go get (fine tool), but first fiddling with the right GOPROXY environment variable before fetching, as I need the proxy that has all the code required to build the tool. Or maybe I’m lucky, nothing bad happened to the required code, and I’m still able to just go get it.

You might be thinking the same as I am: what a damn pile of bullcrap!

Of course nobody will set up such a mirror for the public. Some people will set up personal proxies to keep the stuff they use running (I will be one of them in case this approach stays with us without its flaws being taken care of (again)), but every tool not distributed in binary form will at some point fail to build because some library or hosting platform was pulled from the net.

What is there to say after a 700-word rant? As long as Go modules are experimental / beta I will keep an eye on them but definitely not use them. I will stick with dep and ensure no go.mod file changes the build behaviour of my code: full vendoring of all required code. In case someone comes up with a miracle solving all this with Go modules I might be in, but until then: Hell NO!

State of my home automation

Lately I had a chat on Twitter about the current state of my home automation, triggered by a screenshot of temperature / humidity changes in my flat. In the end this led to me trying to explain my whole setup with a lot of links in a single direct message, and we came to the conclusion a blog post would be a better way to describe it. So here it is: a (more or less) short overview of my setup.

Basic services

The base of my setup is the software performing most of my automation tasks: Home Assistant, a software written in Python capable of directly communicating with 1120 different components, which are hardware as well as software products or custom scripts.

Most of the components I’m using are integrated through an MQTT server or talk directly to third-party APIs.

Additionally I have an “event system” set up, which in the end is a Redis pub/sub connected to different software components creating messages and reacting to them. Some of those messages also trigger switches inside Home Assistant via MQTT topics.

Hardware components

One hardware component I needed to write my own wrapper for is a USB-CUL connected to my server to receive status messages from older FS20 door/window contacts. Formerly there were also multiple FS20 power outlets, but as those devices only support a one-way connection and cannot report their current state back to the software, they got replaced.

The replacement for the FS20 as well as the EdiPlug power outlets are Sonoff S20 devices. They connect directly to my WiFi and report their state using MQTT, so I get direct feedback whether a command was executed successfully, which was a problem with the FS20 devices.

To measure temperature and humidity I’m using BME280 sensors connected to Sonoff Dev boards. Sadly those boards and sensors do not have a proper housing at the moment, which is definitely something to improve later on.

All of the Sonoff Dev boards also have at least one PIR motion sensor connected, which controls the light in every room. I’m not sure when I last touched a light switch, but it’s been quite a while.

In some places, instead of Sonoff S20 outlets, I set up Sonoff Pow devices to control the connected device and also measure its power consumption, which enables my automation, for example, to detect idle devices (power consumption below a certain limit for a certain time) and switch them off entirely.

All of the Sonoff components are delivered with a Chinese stock firmware talking to some Chinese servers, which is neither reliable nor trustworthy. Therefore the firmware on all Sonoff devices was replaced with Tasmota before connecting them to my WiFi. That way I’m in control of those systems and they do not leak data to the web.

As the last hardware component I’m currently using LifX light bulbs in all rooms. They integrate seamlessly with Home Assistant without requiring an internet connection, though they are indeed connected to the LifX cloud, as some left-over components of my former automation system still require access to the cloud API.

Additional sugar

All of the components above combined do quite a fine job and let me control them through the Home Assistant web interface on the one hand and through Home Assistant automations on the other. For example, when I leave my flat the lights are turned off instantly, and they are re-activated as soon as movement is detected.

To trigger more complex actions, which also involve systems not connected to Home Assistant, my “event system” also has a Telegram chat bot attached. There is no magic involved, just some regular expressions and reactions to them. This lets me control, for example, my laptop through the chat bot: when going to bed, a simple chat command is sufficient to trigger a whole bunch of actions, including locking my laptop, setting my home automation into a pre-defined night mode and so on. In turn, all of the components in my automation are capable of contacting me through the chat bot.
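The “no magic” part really is just a rule table. A sketch of how such a regex dispatcher could look (rule patterns and event names here are invented for the example):

```go
package main

import (
	"fmt"
	"regexp"
)

// A hypothetical excerpt of the rule table: every incoming chat line
// is matched against regular expressions and translated into an
// event name plus captured arguments.
var rules = []struct {
	re    *regexp.Regexp
	event string
}{
	{regexp.MustCompile(`^good ?night$`), "night_mode"},
	{regexp.MustCompile(`^lock laptop$`), "lock_laptop"},
	{regexp.MustCompile(`^timer ([0-9hms]+)$`), "start_timer"},
}

// dispatch returns the matching event and its arguments, or an empty
// event name when no rule applies.
func dispatch(line string) (string, []string) {
	for _, r := range rules {
		if m := r.re.FindStringSubmatch(line); m != nil {
			return r.event, m[1:]
		}
	}
	return "", nil
}

func main() {
	event, args := dispatch("timer 7m")
	fmt.Println(event, args) // the event would then go onto the Redis pub/sub
}
```

Each matched event would then be published to the pub/sub, where the interested component (laptop, night mode, …) picks it up.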

As for statistical data, for example from the temperature and humidity sensors: Home Assistant as well as the “event system” report metrics into an InfluxDB, which are then displayed together with other metrics on a Grafana dashboard in my flat.


Having been in the process of automating my flat since 2010, in a way that doesn’t require too much tampering with the existing electrical installation (in the end it’s a rented flat, so I’m not allowed to modify everything), my system is currently in a state where it is no longer glued together from several different error-prone components. Instead I’m mainly using components from two different hardware vendors and planning to reduce this to only one vendor by replacing the LifX bulbs.

Seeing the current state of the system, I’m quite pleased with how well all those components work together. Of course it is like every system: never finished, as there will always be room to improve…

Microsoft acquired Github...

So probably everyone reading this has already noticed that Github was / will be acquired by Microsoft, and I’m not quite sure what to think about it, so this is more of a brain-dump than anything else…

I’ve been an active user of Github for several years now and I’m quite happy with the possibilities there. I also have a bunch of tools related to and working with Github. On the other side, I turned my back on Microsoft quite a long time ago (yeah, except for the usual: my gaming PC still runs Windows). In the past, Microsoft wasn’t really associated with good products and/or decisions for me.

Reading the comments on Twitter regarding the acquisition, a lot of them are like “That’s it! I’m moving to GitLab!”, while others defend Microsoft, stating they’ve changed a lot in the last 5 years and that everyone against the acquisition is clearly thinking of the Microsoft of 5 years ago.

I’ve seen a lot of cool stuff people working at Microsoft are doing, and I do see contributions to projects I’m using myself: so they indeed seem to care about open source, which is the main argument of the people defending Microsoft mentioned above. On the other side, I’ve seen several products (Sunrise Calendar, Wunderlist, …) being acquired and then simply burned down.

So do I need to be worried about this acquisition? Will Github turn crappy within, let’s say, the next year? Do I need to move all my projects to another platform, and if so: which one? I’ve already tried some of them (BitBucket / GitLab) and was really unsatisfied. Maybe I should go the way other, mainly bigger, open source projects are going and set up my own platform (maybe using Gitea) to host my repositories…

Having sorted my thoughts a bit, I’m clearly not any step further: I think I’ll have to wait some time to see what Microsoft does with/to Github, meanwhile thinking about mirroring my repos to my own service and preparing an exit strategy in case those raging against the acquisition are right and Microsoft ruins Github…

AWS Fargate - I'm disappointed, again

It’s now close to seven years that I’ve been working with AWS technologies, and I really like the concept of having no more hardware to maintain. Just spawn instances and have them work for you: if they cause trouble they are automatically replaced and everything is fine again without me taking any action. I also worked with AWS ECS for several years at Jimdo and liked it.

A few days ago AWS announced Fargate, which in the end is a managed, shared ECS cluster. You don’t have to care about the stack, don’t pay for the instances and yeah… in the end you just run Docker containers without having to care about anything.

Yesterday I ran some calculations on whether it pays off to switch from my Hetzner machine at 50 EUR per month to running my already containerized services on Fargate. I shouldn’t have done that!

The smallest task size possible, with only 0.5 GB of memory and a quarter of a vCPU, already costs around 13 USD per month… So running my nearly 80 containers at the smallest possible size would cost around 1100 USD per month. More than 1000 USD for 80 containers, compared with a 50 EUR machine idling most of the time with capacity for several more services in terms of CPU, memory and network.
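For the curious, the math behind those numbers, using the per-hour rates from the Fargate announcement (0.0506 USD per vCPU-hour and 0.0127 USD per GB-hour; double-check the current price list, these may have changed):

```go
package main

import "fmt"

// monthlyUSD estimates the Fargate cost of one task, based on the
// launch pricing (assumption, see above): 0.0506 USD/vCPU-hour and
// 0.0127 USD/GB-hour, billed for every hour of a ~730-hour month.
func monthlyUSD(vCPU, memGB float64) float64 {
	const hoursPerMonth = 730
	return (vCPU*0.0506 + memGB*0.0127) * hoursPerMonth
}

func main() {
	perTask := monthlyUSD(0.25, 0.5) // smallest possible task size
	fmt.Printf("one task:  %.2f USD/month\n", perTask)
	fmt.Printf("80 tasks: %.2f USD/month\n", 80*perTask)
}
```

That lands at roughly 14 USD per task and just over 1100 USD for all 80, which is where the figures above come from.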

Well done, AWS. Another service way too pricey for non-commercial customers to use.

Sorry, I did not understand the request

As a huge fan of automating things (I’d automate everything if it were possible), I’ve been using a chat bot for quite a while doing something like “chat-ops”, but for my home automation. When I heard about Amazon releasing Alexa together with the Echo device series I was amazed: now I would no longer need to type my requests in a format my bot understands (it does not have natural language processing)…

Fast forward seven months to the present day: I’ve got more than half a year of experience with Alexa, not only using the Echo but also Reverb on my Android device, and I think it’s time to recap my experiences.

Let’s start with porting my chat bot to Alexa: that should be fairly easy, right? Or at least Alexa should be able to understand my keywords and trigger my bot to execute the appropriate actions, right? Nope. Just a plain nope. Sure, you can write your own skills for Alexa: they have to live on Lambda and are invoked through the skill name, so I probably could say “Alexa, tell Ava to turn on the phone charger!” and Alexa might understand that…

Well, then let’s at least use the existing skills to control my stuff. For example, Alexa can control my lights (LifX bulbs). Most of the time this works well enough, but in the end I have “scenes” inside my chat bot (called “Ava”) which control not just one bulb or one room but multiple rooms with different light settings. As an example, there is the scene “beforesleep”, which adjusts light colors, dims lights, switches off the lights in the living room and turns on those in the bedroom… Okay, this might be possible through Alexa, but then there is more advanced functionality: my lights are also controlled through location boundaries. I leave my flat, lights switch off… I enter, lights are adjusted and switched on. So do I really trigger scenes myself? Rarely.

But then I have remote-controlled power outlets. Those I do need to trigger manually, so yay, finally a use case for Alexa! And again a disappointing “Nope!”. I’m using a combination of FS20 outlets and EdiPlugs… You might already guess: neither is supported by Alexa. The FS20 are controlled using FHEM and a CUL attached to my home server. The EdiPlugs have an XML API which is controlled by a Go daemon I wrote myself. Are those wrappers attachable to Alexa? Maybe, by writing my own skill…

As support for the Philips Hue (which uses a bridge to control the Hue light bulbs) grew, there finally was a chance to attach those outlets to Alexa: I wrote another daemon to expose FHEM switches to Alexa. So now I’m able to tell Alexa to turn on my electric kettle! (Alexa then triggers the “bridge”, which triggers FHEM, which triggers the CUL, which sends a radio signal to the FS20 outlet attached to the kettle…)

Is everything fine now? Sadly, again another “Nope!”… I’m using Alexa in German. So calling the dashboard I have set up here “Dashboard” does not work: when set to German, Alexa does not understand a single word of English… And though I use the word “dashboard” in daily conversations with other German speakers, it’s still an English word. So let’s call the device “Bildschirm” (German for screen), because in the end a screen is being switched on and off. Now every time I tell Alexa to switch on the screen she asks me “Which device?”… Yeah, great. I just told you? Maybe listen to me? So I repeat “Bildschirm” for her to understand… Suddenly my bathroom light turns on. What?

Speaking of Alexa’s ability to understand me: I’m not speaking one of those dialects even a German will not understand. I speak quite clear German (we call that “Hochdeutsch”, which is what you would learn when learning German)… Yet when I tell Alexa to turn on my shower (she would trigger a change in my light automation, not the faucet itself), she feels insulted… When telling her to enable my phone charger, she doesn’t know how to help me or turns on my kettle… Well, nope? I clearly expressed what I’m expecting, and there is no way to mistake the German word “Ladegerät” (charger) for “Wasserkocher” (kettle). It’s just not possible.

But enough of home automation: obviously this does not work the way I expected… There are several million other skills! (Yeah, maybe I’m slightly exaggerating…) There must be a good use for Alexa… I enabled the skill (back when skills needed to be activated before use) to retrieve the tide gauge reading of the Elbe river. Did I use it? Nah… When there is no giant flood I just don’t care about that reading…

So what does Alexa do for me? She creates countdown timers! (Sometimes she creates a one-minute timer when I request a seven-minute timer - and those two are not even close words in German - but most of the time this works quite well, even though Ava can also do that for me: “timer 7m” and even “timer 12h7m45s”…)

And all the other tasks? Well, some are handled by motion sensors in my flat, and the rest is triggered through an event system I implemented to react to different types of messages from several services (something like IFTTT, but with more code and more possibilities), or, if it needs my input, the events are created by Ava based on my chat input…

Then there is another thing I’ve experienced with Alexa: my trigger word is indeed “Alexa”, but sometimes, when I’m in a voice chat on Teamspeak or Discord, Alexa suddenly responds to something I said in that conversation. I’ve asked the others whether they understood me as asking Alexa a question: they didn’t… Nobody could explain why Alexa was triggered… On other occasions my timer expired, Alexa signals my tea is done, and I try to “Alexa, stop!” her… Once, twice, thrice… Maybe she forgot her own name? Ah, the fourth try worked…

Another really cool feature: sometimes I’m watching Twitch streams with the sound routed to my sound system, and when the streamer decides to mess with Alexa owners, they just can! (There even are news articles about TV stations reporting on Alexa and triggering her in hundreds of flats…)

Overall, what did I buy when getting an Amazon Echo? I got a speaker I’m not using to blast music (because that’s a job for my sound system), which is good at misunderstanding my requests and most of the time isn’t used at all because it can’t trigger the things I want to trigger. Great deal!

(Header image “Robot” by william hartel)

Too many 2FA tokens to retain control of

Lately we introduced a new security policy at Jimdo which now requires every single Github account (bot accounts included) to have two-factor authentication enabled. As you might imagine, there are many accounts for different purposes, and someone had to take care of enabling 2FA for all of them.

Luckily I’m only responsible for some of those bot users, but in the end I needed to touch several accounts. As a huge fan of enabling every possible security measure I can find, my Authy app already holds a huge list of 2FA tokens. Adding even more tokens just to set them up once and delete them afterwards didn’t seem very appealing to me. And even if I did that, I would still need to store those secrets somewhere to set up the accounts again as soon as I need access to them.

After thinking about the problem, I looked through all the 2FA tokens I already have in my Authy app and found I’m using only a small number of them on a regular basis. All the other tokens are stored for use every once in a while (probably as rarely as once per year). So in the end I would be fine with putting those secrets in a place where they are secure and not stored together with the password of the service. That means storing them inside LastPass would be a bad idea, because LastPass already has all the passwords in its database.

As I’m hosting my own Vault instance, I came up with the idea to put the secrets into Vault and then find a way to generate a one-time password from them as soon as I need access to those services. And luckily I like writing small utilities for such things…

The idea of vault-totp (download on Github releases) was born and shortly afterwards put into code. What does it do? Quite simple: it takes a Vault key, reads the secret from it and generates the current one-time password. It can even take a wildcard in the last segment of the key and print a whole list of OTPs…

vault-totp console output

Now I can put all the tokens I rarely need into my Vault, and the Github accounts mentioned earlier I can even put into the company Vault and restrict access to them using Vault ACLs. When I need a one-time password for an account, a single command gets it for me, while the password is stored in another secure location…

Keeping overview of many git repos

If you’re a developer like me, you’re probably dealing with a huge number of different git repositories. Some for your private things (because having things organized with a version history is just nice), some for your private projects, and even more for company work. Then there are also distractions, and so maybe uncommitted changes…

At least that’s the situation on my dev machine: in total 684 git repositories for just about everything. Some are managed at BitBucket or GitLab, but most of them at Github. One thing they have in common: there are untracked files and modified things not yet committed, and somehow I got distracted, so now they are lying on my disk waiting to get pushed to the remote.

Even though I’m doing an hourly incremental backup using duplicity and my duplicity-backup wrapper, I don’t like that status. But managing that huge number of repositories and keeping an overview is hard. That’s the reason I came up with git-recurse-status.

In the end git-recurse-status is just a small Go binary walking through a tree of directories, collecting the current status of each repository and displaying it on the CLI. Sounds simple and could have been done with a small shell script. What I found too slow and too complicated to put into a shell script is filtering those results. (684 lines are exhausting to read…)

So if you are like me, you can put the binary (download on Github) into your PATH and just fire up git recurse-status -f changed in your home directory or in the directory holding all your private projects (or wherever you like), and you get a list of repositories having changes (63 in my case). Similarly you can filter for repositories being ahead of their remote tracking branch and so on.

For a detailed overview of what is possible, see the README file inside the repository…

About quitting projects

I’ve started so many projects and other smaller things over the years. Since I joined Github I’ve created 120 repositories containing code. Not all of them are real projects; some are just tools I invested a few hours in, but there are many projects I’ve put a lot of time into. Most of them have been neglected for quite a while now, and I don’t even remember all of them…

Recently I had a peer-group feedback session with the colleagues I work with, and while preparing for it I realized one of my goals for the next year should be to lower my off-work workload. Looking into my todo list, there are many things I need to do. Some of them are mainly “nice to have” things I can just skip over and over, and nothing will happen. Maybe I should start with those: nobody would notice if I just clicked that little trash can on those tasks and watched them vanish in a little animation.

Then there are the tasks I really need to do. Many of them are generated automatically every week or whenever I need to care about them. Though they also take time, I manage to handle them quite well. But that list is only a small part of the grand total: there are issues in my repositories waiting for me to care about them, and some repositories even have whole road-maps of things I need to do.

On top of this already quite long list, I also work (or rather should work) on finishing the new website of the VoxNoctem online radio. And not to forget, there is a whole bunch of ideas in my head I didn’t even write down. A large number of them have vanished in the meantime, but a lot still stays with me.

Why am I telling you this, you might ask: as stated in the beginning, I want to reduce all of that workload, but I really don’t have a clue how. Investing some time in all those small projects to make an improvement is not such a big deal, especially as most of the tools in my Github account serve a single purpose and don’t need that much maintenance.

But how to deal with the big projects? There is, for instance, my GoBuilder: back in 2015 I started to rewrite one component of the system but got distracted from that task. That task is also quite big, as it’s one of the two main components, and like every big task it’s hard to start working on. Even though I forget a lot of things really fast (sometimes I have no clue what I did just 2 minutes ago), I still remember what I need to do to complete that task, and everything inside me resists it.

So in the end, maybe I should just stop thinking about that project, as I haven’t worked on it for quite a long time, and let it go? It feels right to do so, but on the other hand there are people using that project. Sure, they can use it in its current state, but is it fair to them to neglect a project they are using?

And even if I decide to stop working on those projects (and force myself to really stop caring about them), how do I communicate that? And what about the projects I’m also hosting? For projects I’m not hosting it’s fairly easy: they can be downloaded in their latest version. But taking down services puts all users in the position of adding another bunch of tasks to their task list: migrating from my service to something different…

So many questions, so few answers. Do you have hints or advice for me? Let me know via Twitter, Messenger, Discord, wherever you can find me…

Using Vault to unlock GPG keys

Some weeks ago I wrote about using passwords stored in LastPass to unlock SSH keys. Some of you gave feedback that using LastPass to store those quite confidential passwords might not be the best idea. Also, when there is no internet connection it’s just not possible to unlock the SSH keys (for example to access local VMs).

That’s the reason I thought it over and switched to using a local Vault instance to store those passwords. The unseal key for that Vault instance is still stored in LastPass, but I only need that key once per reboot / Vault reload, and it’s also not possible for anyone (even if they get access to my unseal keys) to use them without reaching my local Vault instance.

Now that GitHub shows a badge for signed commits in its interface, I’m using my GPG key way more often to sign all the commits I create, so I needed an easier way to enter its password. Given that my GPG keys do not have passwords someone could remember (especially as there is not only one key but seven of them), this also should be done using a script.

To use the script embedded below you need to have a gpg-agent running that was started with the parameter --allow-preset-passphrase. You also need a Vault instance containing your GPG key password, unsealed so you can do a vault read /secret/gpg-key/<your key-id>. To set up Vault please refer to the official documentation.
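In essence the script does three things: read the passphrase from Vault, find the keygrips of the key, and hand the passphrase to the agent via gpg-preset-passphrase. A rough Go sketch of those steps (the key id, Vault path and helper location are examples, not my actual setup):

```go
package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

// extractKeygrips pulls all keygrips out of the output of
// `gpg --with-keygrip -K <key-id>`.
func extractKeygrips(gpgOutput string) []string {
	var grips []string
	sc := bufio.NewScanner(strings.NewReader(gpgOutput))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "Keygrip = ") {
			grips = append(grips, strings.TrimPrefix(line, "Keygrip = "))
		}
	}
	return grips
}

// presetPassphrase feeds one passphrase into the running gpg-agent;
// the agent must have been started with --allow-preset-passphrase.
// The helper's path differs between distributions.
func presetPassphrase(keygrip, passphrase string) error {
	cmd := exec.Command("/usr/lib/gnupg/gpg-preset-passphrase", "--preset", keygrip)
	cmd.Stdin = strings.NewReader(passphrase)
	return cmd.Run()
}

func main() {
	// Step 1: read the passphrase from Vault (path is an example).
	pass, err := exec.Command("vault", "read", "-field=passphrase",
		"secret/gpg-key/0xDEADBEEF").Output()
	if err != nil {
		fmt.Println("vault read failed:", err)
		return
	}
	// Step 2: list the keygrips of the key to unlock.
	out, err := exec.Command("gpg", "--with-keygrip", "-K", "0xDEADBEEF").Output()
	if err != nil {
		fmt.Println("gpg failed:", err)
		return
	}
	// Step 3: preset the passphrase for every keygrip found.
	for _, grip := range extractKeygrips(string(out)) {
		if err := presetPassphrase(grip, strings.TrimSpace(string(pass))); err != nil {
			fmt.Println("preset failed for", grip, ":", err)
		}
	}
}
```
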

When you’ve met all those requirements, you can simply test whether it works by executing echo "hi" | gpg -sa before and after running the script. If everything is working it should ask for a password before the script execution but not after. The cache timeout after which the password is dropped from the gpg-agent cache can be configured; for the configuration of gpg-agent please refer to documentation you trust.