Introducing: Kube-Door

I would like to introduce Kube-Door, a small side project that I have been working on for a little while. Kube-Door is a simple reverse proxy for Kubernetes services using HAProxy. It watches for services annotated with kube-door/ports and generates the appropriate HAProxy configuration. The idea is similar to Marathon-lb from Mesos.

My motivation for this project is that Kube-Door is easier to set up than a dedicated Ingress controller. If your Kubernetes cluster has a cloud configuration, then it's best to just use the LoadBalancer service type. In my case, the cluster is not configured with a cloud provider, so we usually set up the load balancer manually, which takes a lot of work.

How to use Kube-Door

First, you will need to build the Docker image. I will publish a pre-built image somewhere soon, but for now you will need to build it yourself.

Then run the Docker image in host network mode. Additionally, you can mount the kubeconfig and relevant certs into kube-door so it can talk to Kubernetes.
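Assuming the repository is checked out locally, a build-and-run sketch might look like this (the image name and mount paths are placeholders, not official ones; adjust them to your setup):

```shell
# Build the image from the repository root (image name is a placeholder)
docker build -t kube-door:latest .

# Run with host networking so HAProxy can bind the host's ports directly;
# mount your kubeconfig (and any certs it references) so kube-door can
# reach the Kubernetes API server. Paths are examples.
docker run -d \
  --net=host \
  -v $HOME/.kube/config:/root/.kube/config:ro \
  -v /etc/kubernetes/ssl:/etc/kubernetes/ssl:ro \
  kube-door:latest
```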

Now annotate your services with the kube-door/ports annotation to expose them. For example, below is a command to expose port 80 from your_service.
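A sketch of such a command, assuming your_service lives in the default namespace and the annotation value is simply the port number (check the repository README for the exact value format):

```shell
# Annotate the service so kube-door picks it up and generates an
# HAProxy frontend on port 80 (the value format here is an assumption)
kubectl annotate service your_service kube-door/ports=80
```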

After a few seconds, you will be able to access the service via port 80 of the instance where kube-door runs. You can also expose multiple hosts and proxy by domain; see the repository README for more detail.

In the future, Kube-Door will support TCP proxying and maybe also TLS/SSL proxying using Kubernetes TLS secrets. But for now, it is one quick way to expose your Kubernetes services for external access 😀


Lessons learnt from launching a Kubernetes cluster from scratch

There are many tools for launching a Kubernetes cluster; after trying several of them, my current favorite is Kargo. My main reason is that Kargo gives you control over both provisioning the instances (using kargo-cli) and installing Kubernetes (using Ansible directly). However, to understand more about how Kubernetes is set up, I challenged myself to do a setup from scratch, manually. The motivation is to get a better understanding of Kubernetes components and their dependencies.

There are already many posts on the internet about setting up a Kubernetes cluster from scratch (like this good one), so I will mainly summarize the lessons I learnt during the process, rather than diving into the details of the how.

The installation process is quite simple

I did some preparation before starting the installation: reading the Kubernetes documentation, as well as parts of the Kargo and kube-deploy/docker-multinode sources. But the process turned out to be simpler than I expected.

In one sentence: you install Docker, etcd, and flannel, then finally use hyperkube to launch kubelet. etcd is the database of Kubernetes, and flannel is a network plugin that provides an overlay network for Docker. The same steps are repeated on each node of a multi-node cluster. Just for information, I used Ubuntu 16.04 as the host OS and chose flannel because these are the tools I'm most familiar with.
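On Ubuntu 16.04 the sequence can be sketched roughly as follows; the versions, flags, and subnet are illustrative, not the exact values I used:

```shell
# 1. Install Docker from the Ubuntu 16.04 repositories
apt-get update && apt-get install -y docker.io

# 2. Start etcd, the datastore for both Kubernetes and flannel
#    (single-node example; run a proper etcd cluster in practice)
etcd --listen-client-urls http://0.0.0.0:2379 \
     --advertise-client-urls http://127.0.0.1:2379 &

# 3. Publish the overlay subnet config to etcd, then start flanneld
etcdctl set /coreos.com/network/config '{"Network": "10.244.0.0/16"}'
flanneld --etcd-endpoints=http://127.0.0.1:2379 &

# 4. Finally, launch kubelet via hyperkube; it brings up the remaining
#    control-plane components from static pod manifests
docker run -d --net=host --pid=host --privileged \
  -v /etc/kubernetes/manifests:/etc/kubernetes/manifests \
  gcr.io/google_containers/hyperkube-amd64:v1.5.2 \
  /hyperkube kubelet --pod-manifest-path=/etc/kubernetes/manifests
```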

The nicest thing I learnt from the installation process is that kubelet is amazingly cool. Kubelet is an agent that runs on every node of a Kubernetes cluster, and its job is quite simple: it consumes PodSpecs provided by various sources (a manifest folder or the API server) and manages the described containers. In some ways, it is like a supervisor for containers. There are some very good articles about kubelet here and here. So when you deploy a Kubernetes master, you just need to deploy kubelet, point it at a manifest folder, and use PodSpecs to describe the other Kubernetes services such as the apiserver, controller manager, and scheduler. Kubelet will set these up for you. On a Kubernetes slave, it pulls its PodSpecs from the apiserver.
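For example, dropping a PodSpec into kubelet's manifest folder is enough to have it run the apiserver. A trimmed-down sketch, where the image tag, flags, and addresses are illustrative rather than the exact manifest I used:

```shell
# Write a minimal static-pod manifest; kubelet watches this folder and
# starts/restarts whatever is described there
cat <<'EOF' > /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: gcr.io/google_containers/hyperkube-amd64:v1.5.2
    command:
    - /hyperkube
    - apiserver
    - --etcd-servers=http://127.0.0.1:2379
    - --service-cluster-ip-range=10.233.0.0/18
    - --insecure-bind-address=127.0.0.1
EOF
```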

Another interesting thing is that you can also have kubelet manage an addon-manager, which works similarly to kubelet itself but monitors an addon folder and creates the kube-proxy and kube-dns addons on top of the Kubernetes cluster. The whole setup can be visualized as the picture below; the services inside the dashed box are managed by kubelet.

Beware of the default values in Hyperkube

Hyperkube is quite helpful, since it is an all-in-one Docker image that has everything: kubelet, apiserver, controller manager, scheduler, proxy, and more. It is easy to get started with hyperkube, but once I wanted to make some modifications, such as changing the cluster CIDR, I found myself struggling a bit. To change the cluster CIDR, I needed to mount both the manifests and the addons into hyperkube, since the default CIDR and the default DNS address are hardcoded. Another thing is that the default manifest leaves the apiserver open to the world, which also does not make sense to me. And if you want to use your own certs, you will need to rewrite all the manifests and addon configurations to use them. By the way, Makecerts is quite useful for generating certs for Kubernetes. If I do the setup again, next time I will just use the binary files directly.

Make sure you enable IP forwarding for both IPv4 and IPv6

Another lesson I learnt is that you need to enable both IPv4 and IPv6 packet forwarding on all Kubernetes nodes! The surface problem was that I could not access services exposed via NodePort, while they were still accessible from inside the cluster! I thought it was my setup, something wrong with kube-proxy, and I spent many hours before finding the actual problem. Ubuntu 16.04 has IPv4 forwarding enabled by default, but not IPv6. Kubernetes listens on an IPv6 socket, which works for both IPv4 and IPv6, but since IPv6 forwarding was not enabled, external packets were dropped. Thanks to that, though, I learnt about iptables debugging using TRACE, which is very useful, and this picture will help you understand all the chains and steps inside iptables.
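Enabling both, persistently, on every node can be sketched as follows (the NodePort in the TRACE example is a placeholder):

```shell
# Enable forwarding immediately for the running kernel
sysctl -w net.ipv4.ip_forward=1
sysctl -w net.ipv6.conf.all.forwarding=1

# Persist the settings across reboots
cat <<'EOF' >> /etc/sysctl.conf
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
EOF
sysctl -p

# For debugging, TRACE a NodePort's packets through the iptables chains
# (30080 is an example port; the TRACE lines appear in the kernel log)
iptables -t raw -A PREROUTING -p tcp --dport 30080 -j TRACE
```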


The whole installation took me two days to end up with a proper cluster (half of that was finding the problem with IP forwarding …). For a production environment, I will surely just use Kargo, since it is a lot easier and less error-prone. However, going through the process helped me understand more about Kubernetes components and their dependencies. I would recommend that anyone try installing Kubernetes manually; I'm sure you will learn something from it!

Clean up untagged docker images

I usually build a lot of Docker images on my laptop, and over time the images start eating up my disk space. Here is a quick and simple way to clean up untagged Docker images.
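The cleanup boils down to a one-liner; the `dangling=true` filter matches exactly those `<none>` images:

```shell
# List the IDs of all untagged (dangling) images and remove them;
# -q prints only the image IDs, which are fed to `docker rmi`
docker rmi $(docker images -f "dangling=true" -q)
```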

Basically, it just lists all Docker images with no tag (<none>) and then removes them. The assumption is that an untagged Docker image is unused and safe to remove: when you build an image with the same tag again, the previous image is not removed but untagged. So this is a quick way to clean up my hard drive 😀

Exposing Kubernetes cluster over VPN

Another exercise I worked on in the last few weeks was setting up and testing a Kubernetes cluster, and one of the things that bothered me is that I could not access Kubernetes pods and services directly; I had to use kubectl port forwarding, which is really inconvenient. If you are not familiar with Kubernetes, it is as if Kubernetes sets up another network inside your cluster. You can read the Kubernetes networking documentation for more detail. So I set a small challenge for myself: make Kubernetes accessible over a VPN, meaning that once you connect to a VPN gateway you can easily access pods and services.

The deployment can be summarized as below:

There is one easy way: you can set up a VPN inside the Kubernetes cluster, then expose that VPN via NodePort. But I wanted to go the hard way, not because I like hard things, but because it would help me understand more about Kubernetes networking. And it really helped! In summary, there are three steps: connect your VPN node to the Kubernetes cluster, connect your VPN node to Kubernetes services, and adjust your VPN configuration accordingly. To give you more context: I am using Kubernetes 1.5.2 on CoreOS with the flannel network addon, and OpenVPN as the VPN server.

Connect the VPN node to the Kubernetes cluster

Kubernetes sets up an overlay network and uses it to manage the pod network. In my case, this is equivalent to connecting my VPN node to the flannel overlay network, which is quite easy. You need to download the flannel binary here and identify the correct etcd nodes and runtime parameters. For me, this translates to the following command.
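A sketch of that command; the etcd endpoints and certificate paths are placeholders for my actual values, while the prefix matches the cluster's:

```shell
# Join the VPN node to the existing flannel overlay. Endpoints and
# cert paths below are examples -- substitute your own.
flanneld \
  --etcd-endpoints=https://etcd1:2379,https://etcd2:2379,https://etcd3:2379 \
  --etcd-cafile=/etc/ssl/etcd/ca.pem \
  --etcd-certfile=/etc/ssl/etcd/client.pem \
  --etcd-keyfile=/etc/ssl/etcd/client-key.pem \
  --etcd-prefix=/cluster.local/network
```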

Some explanation: I am using multiple etcd servers with custom certs, and the etcd prefix is /cluster.local/network. After running the command, it takes about 10 seconds, and then I can start pinging the 10.233/16 subnet, which is the pod subnet of the Kubernetes cluster.

Connect the VPN node to Kubernetes services

After connecting to the flannel network, you still cannot access the Kubernetes services. Why? Because a service's cluster IP is a virtual IP: it is managed by kube-proxy and routed via iptables. You can read more detail in the Services documentation. So what I did is download kube-proxy and start it on the VPN node. You will need to make sure your kubeconfig is available so that kube-proxy can connect to the API server.
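A sketch, assuming the kube-proxy binary version matches the cluster (1.5.2 here) and the kubeconfig path is a placeholder:

```shell
# Download a kube-proxy binary matching the cluster version
wget https://storage.googleapis.com/kubernetes-release/release/v1.5.2/bin/linux/amd64/kube-proxy
chmod +x kube-proxy

# Run it with a kubeconfig that can reach the API server; kube-proxy
# installs the iptables rules that make cluster IPs routable locally
./kube-proxy --kubeconfig=/etc/kubernetes/kubeconfig
```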

After a few seconds, you will be able to access the cluster IPs of Kubernetes services from the VPN node.

Adjusting VPN configuration

So now, from the VPN node, we are able to connect to Kubernetes pods and services. I just need to adjust the OpenVPN configuration to declare that the Kubernetes subnets are available over the VPN.
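In the OpenVPN server config this is a couple of push directives; a sketch, where /etc/openvpn/server.conf is an assumed path, 10.233.0.0/16 is my pod subnet, and the DNS address is an example that will differ in your cluster:

```shell
# Append routes for the Kubernetes subnets to the OpenVPN server config
cat <<'EOF' >> /etc/openvpn/server.conf
push "route 10.233.0.0 255.255.0.0"
# Optionally push the cluster DNS so clients can resolve service names
push "dhcp-option DNS 10.233.0.3"
EOF
```

Restart the OpenVPN server afterwards so the new directives are pushed to clients on their next connection.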

However, you will soon notice that from a VPN client you still cannot access the service IPs. The main reason is that they are not real IPs; they are handled via iptables. So what I did is add an SNAT rule to rewrite the packets so that they will be handled by iptables, as follows:
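A sketch of the rule; both the service subnet and the source address are placeholders for my actual values:

```shell
# SNAT traffic destined for the service subnet so it appears to come
# from this node's flannel address and is matched by the kube-proxy
# iptables rules. 10.233.64.0/18 is an example service subnet and
# 10.233.70.0 an example flannel IP -- substitute your own values.
iptables -t nat -A POSTROUTING -d 10.233.64.0/18 -j SNAT --to-source 10.233.70.0
```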

One note: the source IP used in the SNAT rule is the VPN node's flannel IP. Additionally, you can also push the DNS option so that you can access the Kubernetes DNS over the VPN.


After these three simple steps (it actually took me almost half a day to figure out the whole thing and how it worked …), I managed to expose the whole cluster over the VPN. It is quite convenient, since I don't have to do port forwarding or SOCKS proxying anymore. I believe this is reusable for your cluster as well, maybe with small differences if you use Weave or another network plugin. I hope you find this helpful, and feel free to share your thoughts in the comments section.