Installing Kubernetes on bare metal, the easy way
From nothing all the way up to an application running in your cluster with SSL and all
It’s easy to get lost in the sheer complexity involved in creating a Kubernetes cluster from scratch. This post is what I would have liked to read while building my own cluster on bare metal, with a configuration that’s working fine for my production needs.
We will be working on:
- creating a Kubernetes cluster from scratch
- deploying an application that will be accessible from the internet, with SSL and all.
So let’s get started:
Step 1: Provision the servers & dependencies
To run a Kubernetes cluster, you need some machines and an SSH connection to each of them. For the sake of this post, I’m using the quite awesome Hetzner Cloud, where I’ve created 3 servers.
For easier reference, I’ve named those machines after the awesome Rick & Morty series:
echo "ip_of_master rick.node" >> /etc/hosts
echo "ip_of_woker1 morty.node" >> /etc/hosts
echo "ip_of_woker2 summer.node" >> /etc/hosts
We want to install on each machine all the components we will need:
- docker
- kubectl
- kubelet
- kubeadm
- sshfs
To do this, the fastest way is to SSH into each machine and use my install script, which aggregates all the commands you need to run into this one-liner:
curl https://gist.githubusercontent.com/mickael-kerjean/c69f0b17fcee16b0442b78c5a920628e/raw/402cd8fb45ceb837cd552d2fd2ef4434b8259c30/install_node.sh | bash
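If you’d rather not pipe a script straight into bash, this is roughly what it does on a Debian/Ubuntu box (a sketch only; the gist above is the source of truth, and the Kubernetes package repositories have moved since this was written):
apt-get update && apt-get install -y docker.io sshfs apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
apt-get update && apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
systemctl enable --now docker kubelet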
Optional:
- I also got a floating IP, which I like to use to easily route traffic to whatever machine I want without having to tweak DNS and wait for client caches to expire
- I also got a storage volume, which I use to store the state of the applications I run, making it easy to shut down my cluster and recreate one in a matter of minutes
Step 2: Creating the Kubernetes cluster
Set up the master node
SSH into the master and proceed to the installation:
ssh root@rick.node
kubeadm init
Once the above installation completes, you still need to follow the instructions shown on your terminal:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
It should also give you a join command that you’ll want to copy somewhere.
On the master, I also like to mount a volume that will later be shared with all the nodes:
mount /dev/sdb /mnt/
Set up the worker nodes
Now you want to SSH onto each node and paste the join command you copied earlier:
kubeadm join 116.203.202.210:6443 --token <redacted> \
--discovery-token-ca-cert-hash sha256:<redacted>
If you can’t find the join command, you can create a new one with:
kubeadm token create --print-join-command
As I don’t have any of the supported volume providers, I will be using hostPath volumes, with each node sharing a filesystem that is pulled from the master node over SSHFS:
sshfs -o allow_other root@rick.node:/mnt/ /mnt/
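If you want that SSHFS mount to survive a reboot, one option (my own habit, not part of the original setup, and it assumes key-based SSH auth from the workers to the master) is an fstab entry:
echo "root@rick.node:/mnt/ /mnt fuse.sshfs _netdev,allow_other,IdentityFile=/root/.ssh/id_rsa 0 0" >> /etc/fstab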
Step 3: Complete the setup
Now that we’ve created our Kubernetes cluster, we want to make it fully functional by installing the remaining components, including:
- some must-haves:
  - network plugin: there are many available; we’ll be using Weave
  - ingress controller: to expose our services to the outside world
- some nice-to-haves:
  - cert manager: to automatically generate SSL certificates
  - metrics server: an easy way to monitor resource consumption on the nodes and pods
  - dashboard: a nice UI from which you can do some click click
Network plugin (must have)
Kubernetes has many networking solutions available. Weave is quite decent, so let’s install this one:
# install network plugin weave
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
Ingress (must have)
To expose our applications to the outside world with a proper domain name, we will be creating an Ingress object, but for ingress to work, we need to install one of the many ingress controllers available.
The one I use is the nginx ingress controller. The installation I’ve followed is shown in the official nginx documentation.
Copy and paste the provided manifest files, but pay attention as there are two ways to work it out:
- with a deployment
- with a daemonset
Somehow, the default deployment method wants you to have a load balancer available, which I don’t have or need, so I went with the DaemonSet, which doesn’t require anything else.
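For reference, the manual install from the nginxinc/kubernetes-ingress repository looked roughly like this when I did it (the paths may have moved since, so treat the official docs as the source of truth):
git clone https://github.com/nginxinc/kubernetes-ingress
cd kubernetes-ingress/deployments
kubectl apply -f common/ns-and-sa.yaml
kubectl apply -f rbac/rbac.yaml
kubectl apply -f common/default-server-secret.yaml
kubectl apply -f common/nginx-config.yaml
kubectl apply -f daemon-set/nginx-ingress.yaml   # the DaemonSet flavour instead of deployment/nginx-ingress.yaml
Once applied, the DaemonSet pods show up: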
~ # kubectl get all -n nginx-ingress
NAME                        READY   STATUS    RESTARTS   AGE
pod/nginx-ingress-l8qgs     1/1     Running   0          6h12m
pod/nginx-ingress-zhbzl     1/1     Running   0          24h

NAME                            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/nginx-ingress    2         2         2       2            2           <none>          24h
The daemonset approach is quite convenient as it makes all the worker nodes look the same from the outside. You can leverage that property to route your traffic to any node you want without problems.
Tips:
- you probably want to edit your daemonset:
kubectl edit daemonset -n nginx-ingress nginx-ingress
and add hostNetwork: true to the pod template if you want nginx to show the originating IP in the logs and in the X-Forwarded-For header instead of the default internal IP.
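For reference, the relevant part of the pod template ends up looking roughly like this once edited (a trimmed excerpt, not a full manifest; the container image is whatever the official manifest ships):
spec:
  template:
    spec:
      hostNetwork: true   # nginx binds to the node's network, so client IPs are preserved
      containers:
      - name: nginx-ingress
        image: nginx/nginx-ingress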
Cert Manager (nice to have)
Cert manager is a component that makes it convenient to generate SSL certificates. To install it, you want to follow the documentation.
Once done, you should see a few more pods running:
~ # k get pod -n cert-manager
NAME READY STATUS RESTARTS AGE
cert-manager-7747db9d88-5ttvk 1/1 Running 0 6h49m
cert-manager-cainjector-87c85c6ff-7dls4 1/1 Running 0 30h
cert-manager-webhook-64dc9fff44-bkpcl 1/1 Running 0 6h27m
This component makes it easy to create SSL certificates. The way I have it set up is with multiple environments:
- a first environment for self-signed certificates
- a second environment using Let’s Encrypt staging certificates
- lastly, a production environment for Let’s Encrypt production certificates
Sometimes things can break, and having multiple environments makes it simple to pinpoint where errors are coming from. So let’s create those environments:
EMAIL=mickael@kerjean.me
cat <<EOF | kubectl apply -n cert-manager -f -
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: ssl-selfsigned
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: ssl-letsencrypt-staging
spec:
  acme:
    email: $EMAIL
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: ssl-key-staging
    solvers:
    - http01:
        ingress:
          class: nginx
---
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: ssl-letsencrypt-prod
spec:
  acme:
    email: $EMAIL
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: ssl-key-prod
    solvers:
    - http01:
        ingress:
          class: nginx
EOF
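Before going further, you can check that the three issuers were registered and are marked as ready:
kubectl get clusterissuers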
Cert manager can create your SSL certificates in two ways:
- by using some specific annotations in your ingress manifest. We will be using this technique while deploying our application later on
- by creating a Certificate manifest like this one:
cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: ssl-cert-selfsigned
spec:
  dnsNames:
  - google.com
  secretName: ssl-cert-selfsigned
  issuerRef:
    name: ssl-selfsigned
    kind: ClusterIssuer
EOF
Another trick is to use nip.io to get a usable domain and try to issue a certificate against the Let's Encrypt staging environment:
cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: ssl-cert-staging
spec:
  dnsNames:
  - archive.49.12.115.49.nip.io
  secretName: ssl-cert-staging
  issuerRef:
    name: ssl-letsencrypt-staging
    kind: ClusterIssuer
EOF
You can then see the status of your certificates:
~ # k get certificate
NAME READY SECRET AGE
ssl-cert-selfsigned True ssl-cert-selfsigned 21s
ssl-cert-staging True ssl-cert-staging 7s
Once ready, your certificate will be available in a secret that cert-manager will have created under the name you’ve specified in the Certificate manifest:
~ # k get secret
NAME TYPE DATA AGE
ssl-cert-selfsigned kubernetes.io/tls 3 50s
ssl-cert-staging kubernetes.io/tls 3 36s
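If a certificate ever stays stuck with READY at False, describing it is usually enough to see which step of the issuance failed:
kubectl describe certificate ssl-cert-staging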
Metrics server (nice to have)
The metrics server is quite useful for monitoring your cluster. It enables a few commands that show resource consumption on the cluster, like:
~ # kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
beth 148m 7% 2172Mi 57%
k8s 189m 18% 1205Mi 65%
morty 251m 12% 2330Mi 62%
or:
~ # kubectl top pod
NAME CPU(cores) MEMORY(bytes)
coredns-66bff467f8-9vc2d 4m 9Mi
coredns-66bff467f8-hmq4b 4m 9Mi
etcd-k8s 25m 75Mi
...
To install it, you should refer to the documentation.
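At the time of writing, that boiled down to applying the release manifest (check the metrics-server releases page for the current URL):
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml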
To get it to work, I had to:
kubectl edit deployment metrics-server -n kube-system
and change the args to:
...
spec:
  containers:
  - args:
    - --cert-dir=/tmp
    - --secure-port=4443
    - --kubelet-preferred-address-types=InternalIP
    - --kubelet-insecure-tls
...
Dashboard (nice to have)
Kubernetes has a web UI you can install on the cluster to do simple things without going through the command line. You can install it by following the documentation.
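At the time of writing, the install was a single manifest plus kubectl proxy to reach the UI locally (the version below was current back then, adjust as needed):
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
kubectl proxy
# UI then available at http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/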
Step 4: Deploy an application on the cluster
Now that we have a fully functional Kubernetes cluster, it’s time to deploy something onto it. For the sake of this post, we’ll be deploying Filestash.
Step 4.1: Create a Deployment
A deployment is a Kubernetes object that is responsible for running an application. Let’s create one:
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: app-filestash-testing
  name: app-filestash-testing
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-filestash-testing
  template:
    metadata:
      labels:
        app: app-filestash-testing
    spec:
      containers:
      - image: machines/filestash
        name: app-filestash-testing
        volumeMounts:
        - mountPath: /app/data/config/config.json
          name: app-filestash-testing
      volumes:
      - name: app-filestash-testing
        hostPath:
          path: /mnt/config.json
          type: FileOrCreate
EOF
And wait until the associated pod is in a ready state:
~ # kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
app-filestash-testing 1/1 1 1 4s
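If the pod doesn’t reach a ready state, its logs are the first place to look:
kubectl logs -l app=app-filestash-testing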
Step 4.2: Create a Service
Without a service, the only way to reach our application is to know the IP on which it is made available. That’s not ideal.
A service solves that problem by either:
- exposing your application on a port of your nodes that you can access from the outside. This kind of service is known as a NodePort and we won’t use it here.
- creating a new DNS name from which you can reach the application. Let’s do that:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  labels:
    app: app-filestash-testing
  name: app-filestash-testing
spec:
  ports:
  - port: 10000
    protocol: TCP
    targetPort: 8334
  selector:
    app: app-filestash-testing
EOF
and see the newly created service:
~ # kubectl get service
NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
app-filestash-testing   ClusterIP   10.104.114.162   <none>        10000/TCP   31s
The idea of such a service is that our application is now known inside our cluster as:
#[ service name ].[namespace].svc.cluster.local
app-filestash-testing.default.svc.cluster.local
Note that our application isn’t available to the outside world yet, only from inside our cluster. To demonstrate our service, we can:
- create a testing pod and connect to it:
kubectl run -ti alpine --image=machines/alpine --restart=Never --command sh
- check that the application is reachable through the newly created service:
nslookup app-filestash-testing.default.svc.cluster.local
curl -X GET -I http://app-filestash-testing.default.svc.cluster.local:10000
- clear the pod:
kubectl delete pod alpine
Step 4.3: Create an ingress
Without an ingress, the only way to access our application is through the service, from inside the cluster. An ingress is the last thing we need to make our application available from a web browser.
To avoid creating a new DNS domain, and for the sake of this post, we will be using nip.io to map a domain to a specific IP. In my case, I have a floating IP (49.12.115.49) which I can point to any machine of my cluster (thanks to the ingress controller daemonset we’ve installed).
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-filestash-testing
spec:
  rules:
  - host: filestash.49.12.115.49.nip.io
    http:
      paths:
      - backend:
          serviceName: app-filestash-testing
          servicePort: 10000
EOF
Tada, our application is now available in your browser at http://filestash.49.12.115.49.nip.io
The last thing we want to do is create an SSL certificate for our domain. To do this, we will be using cert-manager, and we have two ways of working this out:
- by creating a Certificate manifest manually
- by adding annotations to our ingress object
As I’ve already shown the first way, we will be using annotations on our ingress object. Let’s start by removing our ingress rule:
kubectl delete ingress app-filestash-testing
and create another ingress manifest:
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-filestash-testing
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "ssl-letsencrypt-prod"
    ingress.kubernetes.io/ssl-redirect: "true"
    acme.cert-manager.io/http01-edit-in-place: "true"
spec:
  tls:
  - hosts:
    - filestash.49.12.115.49.nip.io
    secretName: app-filestash-testing
  rules:
  - host: filestash.49.12.115.49.nip.io
    http:
      paths:
      - backend:
          serviceName: app-filestash-testing
          servicePort: 10000
EOF
and wait until your newly created certificate is issued and the READY state is set to True:
watch kubectl get certificate
Tada, our application is now available in your browser at https://filestash.49.12.115.49.nip.io
Note: A lot of people use the nip.io service, and Let’s Encrypt often applies rate limits, which can cause your certificate to never be issued. To verify this, you can read the logs with kubectl logs -f cert-manager-7747db9d88-5ttvk -n cert-manager, or use the ssl-letsencrypt-staging environment we’ve created earlier.
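cert-manager also exposes the intermediate ACME objects, which helps when an issuance is stuck somewhere between your cluster and Let’s Encrypt:
kubectl get certificaterequests,orders,challenges --all-namespaces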