Kubernetes has become the go-to container orchestration platform. In this post, I follow the Rancher and K3s guides to install both and integrate them.
  • Rancher is an amazing GUI for managing and installing Kubernetes clusters. It addresses the operational and security challenges of managing multiple Kubernetes clusters across any infrastructure, while providing DevOps teams with integrated tools for running containerized workloads.
  • K3s is designed to be a single binary of less than 40MB that completely implements the Kubernetes API. To achieve this, the K3s team removed many drivers that do not need to be part of the core and can easily be replaced with add-ons.

Installing and integrating both turned out to be much easier than I expected: one command for Rancher, one command for K3S, then a small edit to the k3s service file to switch the container engine from containerd to Docker. After that, one more command imports the K3S cluster into Rancher. That's it.


Install Docker environment

Although K3S ships with containerd by default, for the convenience of the subsequent deployment we will replace containerd with Docker here.

curl -fsSL get.docker.com | sh
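The script from get.docker.com starts the Docker daemon on most distributions. As an optional sanity check (my addition, not part of the original guide), confirm the daemon is enabled and running before moving on:

systemctl enable --now docker
docker info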

Install Rancher Server


The name Rancher Server sounds as if a lot of components need to be installed, but that is not the case. Rancher Server is really just a Docker image; the entire Rancher program is packaged with Docker. The setup is therefore quite simple, and only one command is needed:
docker run -d -v /data/docker/rancher-server/var/lib/rancher/:/var/lib/rancher/ --restart=unless-stopped --name rancher-server -p 80:80 -p 443:443 rancher/rancher:stable
Wait a few minutes, then visit your server IP in a browser to reach the initial configuration screen of Rancher Server.
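If the page is not reachable yet, Rancher is most likely still starting up. Two standard Docker commands (my own suggestion, not from the original guide) let you watch the progress:

docker ps --filter name=rancher-server
docker logs -f rancher-server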

Add Cluster

Import an existing cluster

Copy the third command shown in the dialog; we will run it on the K3S node in a later step to import the cluster into Rancher.
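For reference, in my Rancher version that third command is the curl --insecure variant, which is needed here because Rancher runs with a self-signed certificate. Your copy will contain your own server address and token, but it has the general form:

curl --insecure -sfL https://<rancher-server>/v3/import/<token>.yaml | kubectl apply -f -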

Install K3S cluster

Let’s start the deployment of the K3S cluster.
The official website k3s.io provides a very handy one-command installation script; that single command is all we need to set up the K3S environment:
curl -sfL https://get.k3s.io | sh -
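As a side note (my addition, not from the original guide), the K3s install script also honors the INSTALL_K3S_EXEC environment variable, so the flags we are about to configure by editing the service file could instead be passed at install time:

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--docker --no-deploy traefik" sh -

The rest of this post follows the manual edit of the service file, which ends up with the same result.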
After the installation is complete, we need to adjust the K3S service configuration file to disable Traefik.
Note: Traefik is a modern HTTP reverse proxy and load balancer made to deploy microservices with ease. It simplifies networking complexity while designing, deploying, and running applications. Traefik is deployed by default when starting the server.
Modify the configuration file of the K3S service:
vim /etc/systemd/system/multi-user.target.wants/k3s.service
The contents of the file are as follows:
[Unit]
Description=Lightweight Kubernetes
Documentation=https://k3s.io
After=network-online.target

[Service]
Type=notify
EnvironmentFile=/etc/systemd/system/k3s.service.env
ExecStartPre=-/sbin/modprobe br_netfilter
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/k3s server
Here we need to change the value of ExecStart to:
/usr/local/bin/k3s server --docker --no-deploy traefik
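The --docker flag switches the container runtime from containerd to Docker, and --no-deploy traefik stops K3s from deploying its bundled Traefik ingress controller. After the edit, the [Service] section of the unit file should end with:

[Service]
Type=notify
EnvironmentFile=/etc/systemd/system/k3s.service.env
ExecStartPre=-/sbin/modprobe br_netfilter
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/k3s server --docker --no-deploy traefik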
After saving and exiting, execute the command to reload the new service configuration file:
systemctl daemon-reload
Restart the K3S service after completion:
service k3s restart
Wait a few tens of seconds, then confirm that the K3S cluster is ready with:
k3s kubectl get node
You will get a result similar to the following:

root@K3S-1:~# vim /etc/systemd/system/multi-user.target.wants/k3s.service
root@K3S-1:~# systemctl daemon-reload
root@K3S-1:~# service k3s restart
root@K3S-1:~# k3s kubectl get node
NAME    STATUS   ROLES    AGE   VERSION
k3s-1   Ready    master   58s   v1.18.6+k3s1
root@K3S-1:~#
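Because the runtime was switched to Docker with the --docker flag, the cluster's containers should now be visible to Docker itself. An optional extra check (not part of the original guide):

docker ps

This should list Kubernetes-managed containers (for example the pause and agent containers) instead of an empty table.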


Import the K3S cluster to Rancher

On the Rancher Server, the cluster status is currently displayed as Pending:

This is because we have not yet imported the cluster. In this step, we will import the cluster and establish a connection between Rancher Server and the K3S cluster.
On the K3S master node (in general, the first node is the master controller, also called the Server node), execute the command to import the cluster:
curl --insecure -sfL https://52.152.236.147/v3/import/jr42wvdhk4w94htxxtf5hv424rsjjz6hzq9vl2lj8q9dnb8dgcwgzn.yaml | kubectl apply -f -
Note: The import commands of each cluster are different, please do not copy the import commands in the tutorial directly!
In my case, the first attempt returned "error: no objects passed to apply"; fetching the YAML with curl alone showed that the manifest was being served correctly, and re-running the piped command then applied it successfully, as shown below:
root@K3S-1:~# curl --insecure -sfL https://52.152.236.147/v3/import/6tx4vblm9464jc4wnvj5kx87qsxxkqxcrmn575msq55j6j2bvdzcvk.yaml | kubectl apply -f -
error: no objects passed to apply
root@K3S-1:~# curl --insecure -sfL https://52.152.236.147/v3/import/6tx4vblm9464jc4wnvj5kx87qsxxkqxcrmn575msq55j6j2bvdzcvk.yaml

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: proxy-clusterrole-kubeapiserver
rules:
- apiGroups: [""]
  resources:
  - nodes/metrics
  - nodes/proxy
  - nodes/stats
  - nodes/log
  - nodes/spec
...(Omitted)...
      - name: k8s-ssl
        hostPath:
          path: /etc/kubernetes
          type: DirectoryOrCreate
      - name: var-run
        hostPath:
          path: /var/run
          type: DirectoryOrCreate
      - name: run
        hostPath:
          path: /run
          type: DirectoryOrCreate
      - name: cattle-credentials
        secret:
          secretName: cattle-credentials-26fff6d
          defaultMode: 320
      - hostPath:
          path: /etc/docker/certs.d
          type: DirectoryOrCreate
        name: docker-certs
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
root@K3S-1:~# curl --insecure -sfL https://52.152.236.147/v3/import/6tx4vblm9464jc4wnvj5kx87qsxxkqxcrmn575msq55j6j2bvdzcvk.yaml | kubectl apply -f -
clusterrole.rbac.authorization.k8s.io/proxy-clusterrole-kubeapiserver created
clusterrolebinding.rbac.authorization.k8s.io/proxy-role-binding-kubernetes-master created
namespace/cattle-system created
serviceaccount/cattle created
clusterrolebinding.rbac.authorization.k8s.io/cattle-admin-binding created
secret/cattle-credentials-26fff6d created
clusterrole.rbac.authorization.k8s.io/cattle-admin created
deployment.apps/cattle-cluster-agent created
daemonset.apps/cattle-node-agent created
root@K3S-1:~#



Go back to the Rancher interface and wait a few tens of seconds; the Pending state changes to Waiting:

This status means that Rancher has received the registration request from K3S and is completing the registration of the cluster. After a short wait the import finishes and the status becomes Active:

At this point, we have successfully connected Rancher 2.x and K3S, and can operate the K3S cluster just like a regular K8S cluster.
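If the cluster stays stuck in Waiting, a useful check (my own suggestion) is to look at the Rancher agent pods that the import manifest created in the cattle-system namespace on the K3S node:

k3s kubectl get pods -n cattle-system

Once the cattle-cluster-agent and cattle-node-agent pods are Running, the cluster should turn Active in the Rancher UI.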
