DIY: Homemade Fizzy Water

For quite a while I’ve been a loyal SodaStream customer and owned four different models of their devices for making fizzy water at home. Over time some broke, and the last one (a SodaStream Crystal Silver) not only has a broken mechanism to lift its parts but also no longer looks hygienic after sitting unused for more than half a year.

So I tried to get the device clean again, but I can’t fit my hand inside properly to give it a good scrub, and it’s practically impossible to disassemble: some screws are tightened with huge force (or covered in who knows what to keep them in place) and others sit under plastic parts I’d break while taking them off.

The alternative

So cleaning wasn’t an option to get fizzy water again, and I needed an alternative. Option A would have been to go to the local store (or Amazon) and just buy a new device. Last time, I spent around a hundred EUR on my Crystal Silver because it came with glass bottles; I don’t like the plastic ones as they are not dishwasher-safe. If I bought another one today, Amazon would charge me 120 EUR…

After watching some YouTube videos of US guys building their own fizzy-water rigs, I began looking up prices and parts to build something similar myself. Luckily I stumbled upon a website for home-brewers (beer stuff) explaining how to get the best carbonation results for beer. My drink of choice isn’t beer, but it’s based on water (it is water), so the same approach should work for me.

I still have the CO2 tank from the SodaStream (though I’m probably not supposed to use it in my own setup), so that’s a start. A full CO2 tank holds around 65-72 bar (~1000 PSI), which is nothing you’d want to apply to any drink bottle out there (it would simply burst and injure you). Therefore the first component is a pressure regulator, which is easy to find since there are quite a few home-brewers out there.

Then we need a vessel to pour liquid into that can withstand some pressure (the regulator I got goes up to 90 PSI, which is ~6 bar). In the YouTube videos the guy used old soda bottles with a “carbonator cap”. Those caps are US-made and only available there, but luckily over here we don’t get that plastic stuff: we get stainless-steel caps for PET bottles (in my case an empty Coke bottle). The offer I found even included a coupler for the cap, sparing me the search for one.

Finally, I went to the local DIY market looking for a hose to connect the regulator to the coupler. It turns out DIY markets don’t stock hoses specifically made for CO2 that withstand ~8 bar (120 PSI) with a 1/4" inner diameter. They do, however, carry transparent water hoses rated for 10+ bar in exactly that diameter, so I got half a meter of that plus two hose clamps to secure it.

In the end I spent 31 EUR on the regulator, 15 EUR on the cap and coupler, and 4 EUR on the hose and clamps: 50 EUR and about two minutes of assembly in total for a DIY fizzy-water rig, instead of 100 EUR for a new SodaStream.

Pros vs. Cons

Clear advantage: it costs me only another two minutes and a little cash to swap out single parts of my solution. The regulator breaks? Get a new one! The cap gets unhygienic? Put it in the dishwasher (or replace it if it can’t be saved). The hose breaks? Luckily 6 bar won’t hurt me much, and a replacement is less than 10 minutes and 4 EUR away at the DIY store.

Disadvantages? For sure: I don’t have any well-designed housing (yet!) and I’m using PET bottles instead of glass to carbonate my water. But when the bottle needs replacing, I just buy another Coke and I’ve got a new one. I could even do that on a weekly basis.

How do you even use it?

Now what special method am I using to get the water fizzy? A quite simple one: fill the bottle with the liquid you want to carbonate until about 2 inches (5 cm) of air are left at the top. Then squeeze the bottle until the liquid nearly reaches the top and screw the cap on tight. Attach the coupler and apply 40 PSI of pressure to the bottle. Now, without releasing the pressure, shake the liquid to give it a bigger surface so it can absorb the CO2. The whole process works better with cooled liquids as they dissolve CO2 more readily.
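Since this post keeps mixing bar and PSI, here is a tiny conversion helper as a sketch; the only fixed fact in it is the definition of the two units, the 40 PSI and 6 bar values are just the numbers from above:

```go
package main

import "fmt"

// 1 PSI is defined as 6894.757 Pa; 1 bar is 100000 Pa.
const psiPerBar = 100000.0 / 6894.757 // ≈ 14.5038

func psiToBar(psi float64) float64 { return psi / psiPerBar }
func barToPsi(bar float64) float64 { return bar * psiPerBar }

func main() {
	// the 40 PSI used for carbonating works out to roughly 2.8 bar,
	// comfortably below the regulator's 6 bar (~90 PSI) maximum
	fmt.Printf("40 PSI = %.2f bar\n", psiToBar(40))
	fmt.Printf("6 bar = %.1f PSI\n", barToPsi(6))
}
```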

And if the liquid isn’t as fizzy as you wanted? Experiment with temperature, pressure, shaking, how long the pressure is applied, and so on until you’ve found the perfect result for yourself.

Saving money

If you want to save even more money, visit your local CO2 dealer and get, for example, a 2 kg bottle of CO2 to use instead of the SodaStream ones. Just ask for food-grade CO2 and they will provide it. Most of the time you pay a one-time fee for the bottle and then simply exchange it on each visit. If you have a big family and everyone loves fizzy water, get a 5 or even 10 kg bottle: far fewer trips, much more water.

In case you don’t want a big bottle of CO2 in your apartment (which isn’t a good idea anyway), you can get a CO2 bottle made for filling smaller ones and then use, for example, a paintball CO2 tank for daily use. (You need to tell the CO2 dealer you want to fill other bottles, as those are special bottles dispensing liquid CO2 instead of CO2 gas. NEVER use one of the liquid-dispensing bottles on a pressure regulator! In the best case it will just freeze and stop working; in the worst case you’ll end up in the ICU.)

Before experimenting with this, please inform yourself about the dangers of CO2 and how to handle it. As I mentioned before: you will be handling a gas at around 70 bar (~1000 PSI), and in high concentrations it is also harmful to your health!


Go 1.11 dependency management: Hell NO!

Some days ago Go 1.11 was released, and with that release we now have a dependency management tool built into the go binary itself: we’ve got Go modules to play with. Go modules derive from the vgo experiment and take quite a different approach than all the previous vendoring tools (including dep).

But let’s start at the beginning of a new project: I created a new folder and added a main.go with some code and some imports. Good to go: let’s export GO111MODULE=on and run a go mod init. After some time thinking about my dependencies I now have a go.mod and a go.sum file. That’s all: no more vendor folder, no more vendoring commits adding a slim 500k lines of Go code to the project!
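For a toy project the resulting go.mod is tiny; something like this (module path and dependency here are made up for illustration, not from my actual project):

```
module github.com/example/fizzy

require (
	github.com/pkg/errors v0.8.0
)
```

The go.sum file next to it then pins cryptographic hashes for that dependency, but notably neither file contains any of the third-party source code itself.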

Well, thinking twice: what was the initial reason we began vendoring Go code in a vendor folder and building tools around that folder? The reason is that we’re not living in a perfect world but are working with stupid people: they force-push to Git repos, don’t version their code, and when they do add versions they sometimes just pull them from the repository again. They also tend to change their GitHub usernames, delete repos, or even delete the whole account.

Now with Go modules we’re back at the point in time before all the vendoring happened: we don’t have a copy of the third-party source code. So if any of the above happens (or maybe GitHub is just down), our go build will look at us and run around confused about missing dependencies. Oh well, as the person who successfully built the last commit, I do have the required dependencies. Too bad you don’t.

Go modules fetch dependencies basically through a go get like in the old times, so we will again experience all the problems we had before Go 1.5 introduced vendoring. Well, maybe we won’t: there is Project Athens, and other solutions will follow. These are proxies for Go modules, as Go is now capable of speaking a quite specific protocol to proxies that can cache all the code required to build our projects.
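That proxy protocol is plain HTTP: for each module the proxy serves a handful of well-known paths under the GOPROXY base URL. A sketch of how those URLs are composed (simplified; real module paths containing upper-case letters additionally need !-escaping, which I skip here, and the proxy host is invented):

```go
package main

import "fmt"

// proxyURLs returns the endpoints the go command queries on a module
// proxy for a given module and version.
func proxyURLs(proxy, module, version string) []string {
	base := proxy + "/" + module + "/@v/"
	return []string{
		base + "list",            // all known versions
		base + version + ".info", // metadata for one version
		base + version + ".mod",  // that version's go.mod file
		base + version + ".zip",  // the source archive itself
	}
}

func main() {
	for _, u := range proxyURLs("https://proxy.example.com", "github.com/pkg/errors", "v0.8.0") {
		fmt.Println(u)
	}
}
```

The important bit is the .zip endpoint: a proxy answering it holds a full copy of the source, which is exactly the safety net vendoring used to provide.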

Now we have two working scenarios. The first is a company keeping all code in its own repositories, where all employees strictly follow a guideline never to force-push, never to delete a version, always to use SemVer, and of course never to delete or rename any repository. The second is a company that has set up a proxy speaking the Go modules proxy protocol: it has a cached copy of all code required to build the project. Someone deletes code? They don’t care, the proxy has a copy.

One of the main areas I work in is open source projects: will there be strict policies like in the first example? Nope. Just look at what happened when github.com/Sirupsen/logrus was renamed to github.com/sirupsen/logrus… Half the Go world imploded and the maintainer wrote an apology issue explaining everything and trying to help people get the rename right. Or look at the late code.google.com: in some old projects you will find code referenced from there, and it will not build because Google pulled the plug and Google Code went dark.

So open source projects would have to follow the second approach: every project needs to set up a proxy to be used when compiling its source! No more plain go get github.com/genuinetools/reg (a fine tool); instead, I first fiddle with setting the right GOPROXY environment variable before fetching, as I need a proxy that has all the code required to build the tool. Or maybe I’m lucky, nothing bad has happened to the required code, and I’m still able to just go get it.

You might think the same as I’m doing: What a damn pile of bullcrap!

Of course nobody will set up such a mirror for the public. Some people will set up personal proxies to keep the stuff they use running (I will be one of them, in case this approach stays with us without its flaws being addressed (again)), but every tool not distributed in binary form will at some point fail to build because some library or hosting platform was pulled from the net.

What is there to say after a roughly 700-word rant? As long as Go modules are experimental / beta I will keep an eye on them but definitely not use them. I will stick with dep and ensure no go.mod file changes the build behaviour of my code: full vendoring of all required code. In case someone comes up with a miracle that solves all this with Go modules I might be in, but until then: Hell NO!

State of my home automation

Lately I had a chat on Twitter about the current state of my home automation, triggered by a screenshot of temperature / humidity changes in my flat. In the end this led to me trying to explain my whole setup, with a lot of links, in just one direct message, and we came to the conclusion a blog post would be a better medium. So here it is: a (more or less) short overview of my setup.

Basic services

The base of my setup is the software performing most of my automation tasks: Home Assistant, a software written in Python capable of directly communicating with 1120 different components, which are hardware as well as software products or custom scripts.

Most of the components I’m using are integrated through an MQTT server or talk directly to third-party APIs.

Additionally I have an “event system” set up, which in the end is a Redis pub/sub connected to different software components that create messages and react to them. Some of those messages also trigger switches inside Home Assistant via MQTT topics.
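The core of such a bridge is just a routing decision: an event arrives on the pub/sub channel and gets translated into one or more MQTT messages. A minimal sketch of that mapping (the event names, topics, and payloads here are invented for illustration, not my real ones):

```go
package main

import (
	"fmt"
	"strings"
)

// mqttMessage is one message the bridge would publish to the broker.
type mqttMessage struct {
	Topic   string
	Payload string
}

// route translates an event-system message into MQTT messages for
// Home Assistant; unknown events simply produce nothing.
func route(event string) []mqttMessage {
	switch {
	case event == "night_mode":
		return []mqttMessage{
			{"home/lights/all", "OFF"},
			{"home/mode", "night"},
		}
	case strings.HasPrefix(event, "motion:"):
		room := strings.TrimPrefix(event, "motion:")
		return []mqttMessage{{"home/lights/" + room, "ON"}}
	}
	return nil
}

func main() {
	for _, m := range route("motion:kitchen") {
		fmt.Println(m.Topic, m.Payload) // home/lights/kitchen ON
	}
}
```

In the real setup the input side would be a Redis SUBSCRIBE loop and the output side an MQTT client, but the interesting logic is this translation layer in between.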

Hardware components

One hardware component I needed to write my own wrapper for is a USB-CUL connected to my server, which receives status messages from older FS20 door/window contacts. Formerly there were also multiple FS20 power outlets, but as those devices only support a one-way connection and cannot report their current state back to the software, they got replaced.

The replacement for the FS20 as well as the EdiPlug power outlets are Sonoff S20 devices. They connect directly to my WiFi and report their state via MQTT, so I get direct feedback on whether a command was executed successfully, which was a problem with the FS20 devices.

To measure temperature and humidity I’m using BME280 sensors connected to Sonoff Dev boards. Sadly, those boards and sensors do not have a proper housing at the moment, which is definitely something to improve later on.

All of the Sonoff Dev boards also have at least one PIR motion sensor connected, which controls the light in every room. I’m not sure when I last touched a light switch, but it’s been quite a while.

In some places, instead of Sonoff S20 outlets I set up Sonoff Pow devices, which can switch the connected device and also measure its power consumption. This enables my automation, for example, to react to idle devices (power consumption below a certain limit for a certain time) and switch them off entirely.

All of the Sonoff components are delivered by default with a Chinese firmware talking to some Chinese servers that are neither quite reliable nor trustworthy. Therefore the firmware on all my Sonoff devices was replaced with Tasmota before connecting them to my WiFi. That way I’m in control of those systems and they do not leak data to the web.

As the last hardware component, I’m currently using LifX light bulbs in all rooms. They integrate seamlessly with Home Assistant without requiring an internet connection, though mine are indeed connected to the LifX cloud, as some left-over components of my former automation system still require access to the cloud API.

Additional sugar

All of the components above combined do quite a fine job and let me control everything via the Home Assistant web interface on the one hand and through Home Assistant automations on the other. For example, when I leave my flat the lights are turned off instantly, and they are re-activated as soon as movement is detected again.

To trigger more complex actions involving systems not connected to Home Assistant, my “event system” also has a Telegram chat bot connected. There is no magic involved, just some regular expressions and reactions to their matches. This enables me, for example, to control my laptop through the chat bot: when going to bed, a simple chat command is sufficient to trigger a whole bunch of actions, including locking my laptop, setting my home automation into a pre-defined night mode, and so on. Conversely, all components in my automation are capable of contacting me through the chat bot.
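Such a regex-driven bot really is just a list of patterns paired with actions, tried in order against each incoming message. A minimal sketch (the commands and replies here are invented; the real actions would publish events instead of returning strings):

```go
package main

import (
	"fmt"
	"regexp"
)

// handler pairs a command pattern with the action to run on a match;
// the action receives the regex submatches for parameterized commands.
type handler struct {
	pattern *regexp.Regexp
	action  func(args []string) string
}

var handlers = []handler{
	{regexp.MustCompile(`^/night$`), func([]string) string {
		return "night mode activated" // would publish the night-mode event here
	}},
	{regexp.MustCompile(`^/lights (on|off)$`), func(args []string) string {
		return "lights turned " + args[1] // args[1] is the captured on/off
	}},
}

// dispatch runs the first handler whose pattern matches the message.
func dispatch(msg string) string {
	for _, h := range handlers {
		if m := h.pattern.FindStringSubmatch(msg); m != nil {
			return h.action(m)
		}
	}
	return "unknown command"
}

func main() {
	fmt.Println(dispatch("/lights off")) // lights turned off
}
```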

As for statistical data, for example from the temperature and humidity sensors: both Home Assistant and the “event system” report metrics into an InfluxDB, and those are afterwards displayed together with other metrics on a Grafana dashboard in my flat.

Conclusion

I’ve been automating my flat since 2010 in a way that doesn’t require too much tampering with the existing electrical installation (in the end it’s a rented flat, so I’m not allowed to modify everything). By now my system is no longer glued together from several different components and prone to errors; instead I’m mainly using components from two hardware vendors and planning to reduce this to just one by replacing the LifX bulbs.

Looking at the current state of the system, I’m quite pleased with how well all those components work together. Of course it is like every system: never finished, as there will always be room for improvement…

Microsoft acquired Github...

So probably everyone reading this has already noticed that GitHub was / will be acquired by Microsoft, and I’m not quite sure what to think about it, so this is more of a brain-dump than anything else…

I’ve been an active GitHub user for several years now and I’m quite happy with the possibilities there. I also have a bunch of tools related to and working with GitHub. On the other side, I turned my back on Microsoft quite a long time ago (yeah, except for the usual: my gaming PC still runs Windows). In the past, Microsoft wasn’t really associated with good products and/or decisions for me.

Reading the comments on Twitter regarding this acquisition, a lot are like “That’s it! I’m moving to GitLab!” while others defend Microsoft, stating they’ve changed a lot in the last five years and that everyone against the acquisition is clearly thinking of the Microsoft of five years ago.

I’ve seen a lot of cool stuff from people working at Microsoft, and I do see their contributions to projects I’m using myself: so they indeed seem to care about open source, which is the main argument of the Microsoft defenders mentioned above. On the other side I’ve seen several products (Sunrise calendar, Wunderlist, …) being acquired and then simply burned down.

So do I need to worry about this acquisition? Will GitHub get crappy within, let’s say, the next year? Do I need to move all my projects to another platform, and if so: which one? I’ve already tried some of them (Bitbucket / GitLab) and was really unsatisfied. Maybe I should go the way other, mainly bigger, open source projects are going: setting up my own platform (maybe using Gitea) to host my repositories…

Having sorted my thoughts a bit, I’m clearly not any step further: I think I’ll have to wait and see what Microsoft does with/to GitHub, meanwhile thinking about mirroring my repos in a service of my own and preparing an exit strategy in case those raging against the acquisition are right and Microsoft ruins GitHub…