Create Kubernetes Clusters with CRI-O via Vagrant

Wade Huang
8 min read · Jun 21, 2020

How to use Vagrant to quickly create Kubernetes clusters with the CRI-O runtime.

I have used Docker and Docker Swarm for years. Recently, I started learning Kubernetes to see how good it is and why people like it. At first, I used Docker Desktop and Minikube to get hands-on experience. However, both of those only give you a single-node cluster, which didn't feel like real Kubernetes to me. So I started creating VMs to build a more complete Kubernetes environment, and since Docker might be heavier than needed in a Kubernetes cluster, I chose CRI-O as the runtime. I spent a lot of time on research and trial and error, and if you are a beginner like me, you will probably hit the same problems I did. I'm sharing this post to help others play with Kubernetes more easily. It shows how to create a Kubernetes cluster with one manager and N workers, plus the Kubernetes Dashboard, and the whole installation can be done within 10 minutes.

Pre-requirements

  • A machine with virtualization enabled. Nowadays, most PCs and Macs support it.
  • Virtualization software installed, such as VirtualBox, Hyper-V, or VMware.
  • Vagrant installed.

Vagrant

Vagrant is a VM management tool. Its features are similar to Docker's, except that Vagrant manages VMs while Docker manages containers. These are the features we will use in this post:

  • Pull a base box from its hub, similar to docker pull. We don’t need to find and download a Linux image ourselves.
  • Define a Vagrantfile, similar to a Dockerfile. We can customize VMs in the Vagrantfile, and Vagrant manages the VMs from that file.
  • CLI. We will use vagrant up to create VMs and vagrant ssh to log in to them.

Vagrant has more features; you can visit its website to learn more.
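
For example, a typical Vagrant workflow looks like this (these are standard Vagrant commands; the box name is only an illustration):

vagrant init hashicorp/bionic64   # write a Vagrantfile that uses this box
vagrant up                        # download the box if needed and start the VM
vagrant ssh                       # log in to the VM
vagrant destroy                   # delete the VM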

🔔NOTE: I compare Vagrant to Docker because readers of this post probably already know Docker.

Follow the steps below to create a Kubernetes cluster with one manager and two workers.

1. Create a Project Folder

mkdir path_to_the_folder
cd path_to_the_folder

Vagrant puts VM-related files (hard disks, settings, etc.) in the current working directory, so it is better to create a new folder to keep these files together.

2. Download the Vagrantfile

You can go to this gist to download the file, or run this command:

wget https://gist.githubusercontent.com/wadehuang36/c31c27c1651261e7992f5dcc61b29c35/raw/Vagrantfile

The content of the Vagrantfile will be explained later in this post. It defines two types of VMs: a manager and workers.

🔔NOTE: The Vagrantfile only supports Hyper-V and VirtualBox. If you use another provider, you have to change the provider settings in the Vagrantfile.
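
If you have more than one provider installed, you can also tell Vagrant explicitly which one to use with the standard --provider flag, for example:

vagrant up manager --provider=virtualbox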

3. Create the Manager Node

vagrant up manager

We need to create the Kubernetes manager node first in order to get the join information.

Set the join command

After the VM is created, you should see a join command like the one below:

[image: output of kubeadm init]

You can either use this command to extract the join command and store it in the KUBE_JOIN_COMMAND environment variable, which will be used when creating the workers:

export KUBE_JOIN_COMMAND=`vagrant ssh manager -- tail -2 kubeadm.init.log`

Or just copy and paste it into your terminal:

export KUBE_JOIN_COMMAND="kubeadm join $the_address --token $the_token --discovery-token-ca-cert-hash $the_hash"

🔔NOTE:
- The \ and the line breaks have to be removed.
- The output of kubeadm init is saved to /home/vagrant/kubeadm.init.log on the manager VM.
- For PowerShell users, the syntax for setting an environment variable is $env:KUBE_JOIN_COMMAND="command"
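
If the token in kubeadm.init.log has already expired (kubeadm tokens are only valid for 24 hours by default), one alternative is to generate a fresh join command on the manager and capture it directly:

export KUBE_JOIN_COMMAND=`vagrant ssh manager -- sudo kubeadm token create --print-join-command`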

4. Create the Worker Nodes

You can use a regular expression to create multiple workers, or a specific name to create a single worker:

vagrant up /worker-[0-9]/

Set the number of workers [optional]

The default number of workers is 2. If you want a different number of workers, set the WORKERS environment variable before creating them.

export WORKERS=5
vagrant up /worker-[0-9]/

Add or Destroy a worker [optional]

# if you want to add another worker afterward, run these commands
export KUBE_JOIN_COMMAND="the join command"
export WORKERS=6
vagrant up worker-6
# or, if you want to destroy a worker, run this command
vagrant destroy worker-6
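
Note that destroying the VM does not remove the node object from the cluster. A quick way to clean it up, assuming the default kube-worker-N hostnames from the Vagrantfile, is:

vagrant ssh manager -- kubectl delete node kube-worker-6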

5. Get status

You can run this command to see the status of the nodes.

vagrant ssh manager -- kubectl get node

If the output lists the manager and all the workers, 🎉🎉🎉 the cluster has been created.
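
Since this cluster uses CRI-O, you can also ask for the wide output; the CONTAINER-RUNTIME column should show a cri-o:// prefix:

vagrant ssh manager -- kubectl get node -o wide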

6. Visit Kubernetes Dashboard

Find the IP address of the manager

When you run vagrant ssh manager, the login banner shows the machine information, including its IP address. Note that address down.

📢 If you use Windows and Hyper-V, you can just type kube-manager.mshome.net instead of the IP address, because Hyper-V provides DNS for vm-hostname.mshome.net (other virtualization software might as well, but I haven't tried; I would be happy to hear from you if you know).
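
If you prefer the command line, one way to print the VM's addresses is hostname -I (a standard Linux command; which of the listed addresses is reachable from your host depends on your provider's network setup):

vagrant ssh manager -- hostname -I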

Find the Access Token

Run vagrant ssh manager -- kubectl describe secret dashboard-admin and you will see the token. Copy it.
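
If you only want the token itself, a small trick is to filter the output (kubectl describe matches secrets by name prefix, so dashboard-admin finds the generated dashboard-admin-sa-token secret):

vagrant ssh manager -- kubectl describe secret dashboard-admin | grep "^token:"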

Open a browser and go to https://manager_ip:30000 or https://kube-manager.mshome.net:30000

After you paste the access token, you will see the dashboard. 🎉🎉🎉

7. Create Pod and Service

# if your current shell isn't on the manager, ssh into it
vagrant ssh manager
# create a deployment that runs an echo server
kubectl create deployment echo-server --image=k8s.gcr.io/echoserver:1.4
# expose it with a NodePort service on port 30010
kubectl create service nodeport echo-server --node-port=30010 --tcp=8080
# call the echo server
curl http://localhost:30010

If everything is right, you should see the pod echo the request back.
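
When you are done with the demo itself, you can remove the pod and the service with the matching kubectl delete commands:

kubectl delete service echo-server
kubectl delete deployment echo-server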

Clean up

When you are done playing, you can run the command below to clean everything up. Vagrant will delete the VMs (the manager and all workers) and their related files.

vagrant destroy

Summary

To me, Docker Swarm is like an all-in-one PC or laptop: there is little to customize, which is fine for people who don't have specific requirements. Kubernetes is like a DIY PC: we can choose the components we want. However, the learning curve is much steeper than Docker's, because there are so many choices and it takes more time to master.

The Vagrantfile

The Vagrantfile is written in Ruby. Below is the basic block.

Vagrant.configure("2") do |config|
end

Inside the block, we can define which box to use and the VM configurations. I use "hashicorp/bionic64", which is a small Ubuntu box of about 500 MB. Since a Kubernetes node needs 2 CPUs and 2 GB of memory, we need to define those as well.

  config.vm.box = "hashicorp/bionic64"
  config.vm.synced_folder ".", "/vagrant", disabled: true

  # a Kubernetes node needs at least 2 CPUs
  config.vm.provider "hyperv" do |h|
    h.cpus = 2
    h.memory = 2048
  end

  config.vm.provider "virtualbox" do |vb|
    vb.cpus = 2
    vb.memory = 2048
  end

Then we can define the manager and the workers. I extracted the script that runs on both the manager and the workers into COMMON_SCRIPT; each type of VM then runs its own script on top of it.

  # define the manager
  config.vm.define "manager" do |manager|
    manager.vm.hostname = "kube-manager"
    manager.vm.provision "shell", inline: $COMMON_SCRIPT
    manager.vm.provision "shell", inline: $MANAGER_SCRIPT
  end

  # define workers, default is two workers
  $WORKERS = (ENV["WORKERS"] || 2).to_i
  (1..$WORKERS).each do |i|
    config.vm.define "worker-#{i}" do |worker|
      worker.vm.hostname = "kube-worker-#{i}"
      worker.vm.provision "shell", inline: $COMMON_SCRIPT
      worker.vm.provision "shell",
        env: {
          "KUBE_JOIN_COMMAND" => ENV["KUBE_JOIN_COMMAND"],
        },
        inline: <<-'WORKER_SCRIPT'
          # run join cluster command
          eval "$KUBE_JOIN_COMMAND"
        WORKER_SCRIPT
    end
  end
end

🔔NOTE: I used two styles of inline shell: one puts the script inside the block, and one refers to a separate variable. I put long scripts in variables because it reads better when the scripts get long.

The COMMON_SCRIPT block configures and installs everything a Kubernetes node needs.

$COMMON_SCRIPT = <<-'COMMON_SCRIPT'
# switch to superuser so we don't have to prefix every command with sudo
sudo su
# Kubernetes requires swap to be off
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
modprobe overlay
modprobe br_netfilter
# Set up required sysctl params, these persist across reboots.
cat << EOF | tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
# config /etc/default/kubelet for CRI-O
cat << EOF | tee /etc/default/kubelet
KUBELET_EXTRA_ARGS=--feature-gates="AllAlpha=false,RunAsGroup=true" --container-runtime=remote --cgroup-driver=systemd --container-runtime-endpoint='unix:///var/run/crio/crio.sock' --runtime-request-timeout=5m
EOF
sysctl --system
# add CRI-O package repository
. /etc/os-release
sh -c "echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/x${NAME}_${VERSION_ID}/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list"
wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/x${NAME}_${VERSION_ID}/Release.key -O- | apt-key add -
# add kubernetes package repository
apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat << EOF | tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
# install CRI-O
apt-get install -y cri-o-1.17
## start CRI-O
systemctl daemon-reload
systemctl enable crio
systemctl start crio
# install kubelet kubeadm
apt-get install -y kubelet kubeadm
apt-mark hold kubelet kubeadm
COMMON_SCRIPT
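
Once a VM has been provisioned, a quick way to check that CRI-O actually came up on that node is to query its systemd unit (systemctl is-active is a standard systemd command and should print active):

vagrant ssh worker-1 -- systemctl is-active crio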

The MANAGER_SCRIPT block configures and installs what a Kubernetes manager node needs. It also sets up the things used for the demo in this post, such as the dashboard, the access secret, and the exposed port.

$MANAGER_SCRIPT = <<-'MANAGER_SCRIPT'
## install kubectl
apt-get install -y kubectl
apt-mark hold kubectl
# run kube init and output to file
kubeadm init --pod-network-cidr=10.88.0.0/16 | tee /home/vagrant/kubeadm.init.log
# copy config to root and vagrant
mkdir -p /root/.kube
cp -i /etc/kubernetes/admin.conf /root/.kube/config
mkdir -p /home/vagrant/.kube
cp -i /etc/kubernetes/admin.conf /home/vagrant/.kube/config
chown vagrant:vagrant /home/vagrant/.kube/config
# install network plugin
kubectl apply -f https://docs.projectcalico.org/v3.14/manifests/calico.yaml
# install dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
# expose dashboard port to 30000
kubectl patch service kubernetes-dashboard -n kubernetes-dashboard -p '{ "spec": { "type": "NodePort", "ports": [ { "protocol": "TCP", "port": 443, "targetPort": 8443, "nodePort": 30000 } ] } }'
# create user for dashboard
kubectl create serviceaccount dashboard-admin-sa
kubectl create clusterrolebinding dashboard-admin-sa --clusterrole=cluster-admin --serviceaccount=default:dashboard-admin-sa
MANAGER_SCRIPT
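
If the dashboard is not reachable on port 30000 after provisioning, the first thing I would check is whether the patched service really got the NodePort:

vagrant ssh manager -- kubectl get service -n kubernetes-dashboard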

This is the end of the post. I hope you enjoyed it.
