Installing K3s on a Raspberry Pi and a Banana Pi

Here is the hardware that will be used in this article:

Master: Raspberry Pi 3 B v2
Worker1: Banana Pi Pro

On the Raspberry Pi:
The OS on the Raspberry Pi is Raspbian.
We start by updating the OS.

# apt update && apt upgrade -y
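One common gotcha on Raspbian: K3s needs memory cgroups, which Raspbian does not enable by default. If the k3s service fails to start later, two flags must be appended to the kernel command line, followed by a reboot. A minimal sketch (enable_memory_cgroups is our own helper, not part of K3s; run it as root against /boot/cmdline.txt):

```shell
# Append the cgroup flags K3s needs to the (single-line) boot cmdline,
# but only if they are not already there. Pass the cmdline file explicitly.
enable_memory_cgroups() {
  f="$1"
  grep -q 'cgroup_memory=1' "$f" || \
    sed -i '$ s/$/ cgroup_memory=1 cgroup_enable=memory/' "$f"
}
# Usage on the Pi (as root), then reboot:
#   enable_memory_cgroups /boot/cmdline.txt && reboot
```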

Downloading and installing K3s:

root@raspberrypi:~# curl -sfL https://get.k3s.io | sh -
[INFO] Finding release for channel stable
[INFO] Using v1.18.2+k3s1 as release
[INFO] Downloading hash https://github.com/rancher/k3s/releases/download/v1.18.2+k3s1/sha256sum-arm.txt
[INFO] Downloading binary https://github.com/rancher/k3s/releases/download/v1.18.2+k3s1/k3s-armhf
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Creating /usr/local/bin/ctr symlink to k3s
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s.service
[INFO] systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO] systemd: Starting k3s
root@raspberrypi:~# kubectl get nodes
NAME          STATUS   ROLES    AGE   VERSION
raspberrypi   Ready    master   18s   v1.18.2+k3s1
root@raspberrypi:~# cat /var/lib/rancher/k3s/server/node-token
K10e6ec227ed414c252ad9a0b54f41236713dd02d639ab9acfe59ca50ade3587821::server:0b14d13efa8cbd31e6c0c40128a8cc4c
root@raspberrypi:~# cat /etc/rancher/k3s/k3s.yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJXRENCL3FBREFnRUNBZ0VBTUFvR0NDcUdTTTQ5QkFNQ01DTXhJVEFmQmdOVkJBTU1HR3N6Y3kxelpYSjIKWlhJdFkyRkFNVFU0T1RRMk5qWXhOakFlRncweU1EQTFNVFF4TkRNd01UWmFGdzB6TURBMU1USXhORE13TVRaYQpNQ014SVRBZkJnTlZCQU1NR0dzemN5MXpaWEoyWlhJdFkyRkFNVFU0T1RRMk5qWXhOakJaTUJNR0J5cUdTTTQ5CkFnRUdDQ3FHU000OUF3RUhBMElBQkgrNjRqamZVYnVrZzcrcC9oQzRwUkFTYkovNDFuTkNIdG14d3grYVVwcE0KMUNBYXJKb1V1MVZJYk94eUNKRzdyKzZwS0RlQ0FQN3BSTnJxd1JuVGdHS2pJekFoTUE0R0ExVWREd0VCL3dRRQpBd0lDcERBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUFvR0NDcUdTTTQ5QkFNQ0Ewa0FNRVlDSVFDdEIwSmphK2FxCk5VYWVSQW0xWFI2TTVNN0twajE0SFBoTGFlcjRUVEJSL0FJaEFOcnlHczVZKytrVDZLYUpKSm9QV09LZitpbnIKYis1R2x3cDdWVldLTWltdAotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    password: xxxxxxxxxxxxxxxxxxxxxxxxxxxx
    username: admin

On the Banana Pi (worker 1):

root@bananapipro:~# curl -sfL https://get.k3s.io | K3S_URL=https://192.168.1.97:6443 K3S_TOKEN=K10e6ec227ed414c252ad9a0b54f41236713dd02d639ab9acfe59ca50ade3587821::server:0b14d13efa8cbd31e6c0c40128a8cc4c sh -
[INFO] Finding release for channel stable
[INFO] Using v1.18.2+k3s1 as release
[INFO] Downloading hash https://github.com/rancher/k3s/releases/download/v1.18.2+k3s1/sha256sum-arm.txt
[INFO] Downloading binary https://github.com/rancher/k3s/releases/download/v1.18.2+k3s1/k3s-armhf
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Creating /usr/local/bin/ctr symlink to k3s
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-agent-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s-agent.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s-agent.service
[INFO] systemd: Enabling k3s-agent unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s-agent.service → /etc/systemd/system/k3s-agent.service.
[INFO] systemd: Starting k3s-agent

On your workstation (in our case running Ubuntu), we will install kubectl:

ruben@thinserv1:~$ mkdir ~/.kube
mkdir: cannot create directory ‘/home/ruben/.kube’: File exists
ruben@thinserv1:~$ cd .kube/
ruben@thinserv1:~/.kube$ ls
cache http-cache
ruben@thinserv1:~/.kube$ vi config
(paste in the contents of /etc/rancher/k3s/k3s.yaml shown above, changing 127.0.0.1 in the server: line to the master's IP, here 192.168.1.97)

ruben@thinserv1:~/.kube$ chmod 600 ~/.kube/config
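Pasting the file by hand works, but the two manual steps (copy k3s.yaml from the master, change server: from 127.0.0.1 to the master's IP) can be scripted. A sketch, assuming root SSH access to the master at 192.168.1.97 (patch_kubeconfig is a hypothetical helper, not a kubectl command):

```shell
# Fetch the master's kubeconfig, then point it at the master's LAN address
# instead of 127.0.0.1, and lock down its permissions:
#   scp root@192.168.1.97:/etc/rancher/k3s/k3s.yaml ~/.kube/config
patch_kubeconfig() {
  config="$1"   # path to the copied kubeconfig
  master="$2"   # LAN IP of the K3s server
  sed -i "s/127\.0\.0\.1/${master}/" "$config"
  chmod 600 "$config"
}
# patch_kubeconfig ~/.kube/config 192.168.1.97
```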

ruben@thinserv1:~/.kube$ sudo apt update && sudo apt install -y apt-transport-https
................
Need to get 1,712 B of archives.
After this operation, 157 kB of additional disk space will be used.
Get:1 http://fr.archive.ubuntu.com/ubuntu eoan-updates/universe amd64 apt-transport-https all 1.9.4ubuntu0.1 [1,712 B]
Fetched 1,712 B in 0s (142 kB/s)
Selecting previously unselected package apt-transport-https.
(Reading database ... 266411 files and directories currently installed.)
Preparing to unpack .../apt-transport-https_1.9.4ubuntu0.1_all.deb ...
Unpacking apt-transport-https (1.9.4ubuntu0.1) ...
Setting up apt-transport-https (1.9.4ubuntu0.1) ...
ruben@thinserv1:~/.kube$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
OK
ruben@thinserv1:~/.kube$ echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | \
> sudo tee -a /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
ruben@thinserv1:~/.kube$ sudo apt update && sudo apt install kubectl
Hit:2 http://fr.archive.ubuntu.com/ubuntu eoan InRelease
Hit:3 http://fr.archive.ubuntu.com/ubuntu eoan-updates InRelease
Hit:4 http://archive.canonical.com/ubuntu eoan InRelease
Hit:5 http://dl.google.com/linux/chrome/deb stable InRelease
Hit:6 http://ppa.launchpad.net/notepadqq-team/notepadqq/ubuntu bionic InRelease
Hit:7 http://ppa.launchpad.net/yannubuntu/boot-repair/ubuntu eoan InRelease
Ign:1 https://pkg.jenkins.io/debian-stable binary/ InRelease
Hit:8 http://fr.archive.ubuntu.com/ubuntu eoan-backports InRelease
Hit:9 https://download.docker.com/linux/ubuntu disco InRelease
Hit:10 https://pkg.jenkins.io/debian-stable binary/ Release
Hit:11 https://packages.microsoft.com/repos/ms-teams stable InRelease
Hit:12 http://security.ubuntu.com/ubuntu eoan-security InRelease
Get:13 https://packages.cloud.google.com/apt kubernetes-xenial InRelease [8,993 B]
Get:15 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 Packages [35.5 kB]
Fetched 44.5 kB in 2s (19.5 kB/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
14 packages can be upgraded. Run 'apt list --upgradable' to see them.
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  kubectl
0 upgraded, 1 newly installed, 0 to remove and 14 not upgraded.
Need to get 8,825 kB of archives.
After this operation, 44.0 MB of additional disk space will be used.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubectl amd64 1.18.2-00 [8,825 kB]
Fetched 8,825 kB in 1s (15.5 MB/s)
Selecting previously unselected package kubectl.
(Reading database ... 266415 files and directories currently installed.)
Preparing to unpack .../kubectl_1.18.2-00_amd64.deb ...
Unpacking kubectl (1.18.2-00) ...
Setting up kubectl (1.18.2-00) ...
ruben@thinserv1:~/.kube$ kubectl get nodes
NAME          STATUS   ROLES    AGE    VERSION
bananapipro   Ready    <none>   5m3s   v1.18.2+k3s1
raspberrypi   Ready    master   10m    v1.18.2+k3s1
ruben@thinserv1:~/.kube$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                     READY   STATUS      RESTARTS   AGE
kube-system   local-path-provisioner-6d59f47c7-9ksnl   1/1     Running     0          10m
kube-system   metrics-server-7566d596c8-cvw9l          1/1     Running     0          10m
kube-system   coredns-8655855d6-2cdn5                  1/1     Running     0          10m
kube-system   helm-install-traefik-mckq7               0/1     Completed   1          10m
kube-system   svclb-traefik-r99dg                      2/2     Running     0          9m10s
kube-system   traefik-758cd5fc85-gqjt9                 1/1     Running     0          9m11s
kube-system   svclb-traefik-flt7m                      2/2     Running     0          5m12s

ruben@thinserv1:~/.kube$ kubectl get ns
NAME              STATUS   AGE
default           Active   17m
kube-system       Active   17m
kube-public       Active   17m
kube-node-lease   Active   17m
ruben@thinserv1:~/.kube$ kubectl get pods -n kube-system
NAME                                     READY   STATUS      RESTARTS   AGE
local-path-provisioner-6d59f47c7-9ksnl   1/1     Running     0          17m
metrics-server-7566d596c8-cvw9l          1/1     Running     0          17m
coredns-8655855d6-2cdn5                  1/1     Running     0          17m
helm-install-traefik-mckq7               0/1     Completed   1          17m
svclb-traefik-r99dg                      2/2     Running     0          15m
traefik-758cd5fc85-gqjt9                 1/1     Running     0          15m
svclb-traefik-flt7m                      2/2     Running     0          11m

Installing the dashboard:

ruben@thinserv1:~/.kube$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
ruben@thinserv1$ kubectl proxy
(leave kubectl proxy running; it serves the cluster API on http://localhost:8001, so run the following commands in a second terminal)
ruben@thinserv1:~/.kube$ kubectl get services --all-namespaces
NAMESPACE              NAME                        TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                      AGE
default                kubernetes                  ClusterIP      10.43.0.1       <none>         443/TCP                      25m
kube-system            kube-dns                    ClusterIP      10.43.0.10      <none>         53/UDP,53/TCP,9153/TCP       25m
kube-system            metrics-server              ClusterIP      10.43.181.53    <none>         443/TCP                      25m
kube-system            traefik-prometheus          ClusterIP      10.43.173.255   <none>         9100/TCP                     23m
kube-system            traefik                     LoadBalancer   10.43.235.22    192.168.1.97   80:31586/TCP,443:32440/TCP   23m
kubernetes-dashboard   kubernetes-dashboard        ClusterIP      10.43.93.105    <none>         443/TCP                      2m22s
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP      10.43.64.89     <none>         8000/TCP                     2m21s
ruben@thinserv1:~/.kube$ kubectl get pods --all-namespaces
NAMESPACE              NAME                                         READY   STATUS      RESTARTS   AGE
kube-system            local-path-provisioner-6d59f47c7-9ksnl       1/1     Running     0          25m
kube-system            metrics-server-7566d596c8-cvw9l              1/1     Running     0          25m
kube-system            coredns-8655855d6-2cdn5                      1/1     Running     0          25m
kube-system            helm-install-traefik-mckq7                   0/1     Completed   1          25m
kube-system            svclb-traefik-r99dg                          2/2     Running     0          23m
kube-system            traefik-758cd5fc85-gqjt9                     1/1     Running     0          23m
kube-system            svclb-traefik-flt7m                          2/2     Running     0          19m
kubernetes-dashboard   dashboard-metrics-scraper-6b4884c9d5-5tt7t   1/1     Running     0          2m43s
kubernetes-dashboard   kubernetes-dashboard-7b544877d5-cmjdl        1/1     Running     0          2m43s
ruben@thinserv1:~/.kube$
ruben@thinserv1:~/.kube$ kubectl logs -f dashboard-metrics-scraper-6b4884c9d5-5tt7t --namespace=kubernetes-dashboard
{"level":"info","msg":"Kubernetes host: https://10.43.0.1:443","time":"2020-05-14T14:55:01Z"}
10.42.1.1 - - [14/May/2020:14:55:36 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.18"
10.42.1.1 - - [14/May/2020:14:55:51 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.18"
{"level":"info","msg":"Database updated: 2 nodes, 7 pods","time":"2020-05-14T14:56:02Z"}
10.42.0.0 - - [14/May/2020:14:56:07 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/v2.0.0"
[... kube-probe and dashboard health checks repeat every ~10 seconds ...]
{"level":"info","msg":"Database updated: 2 nodes, 8 pods","time":"2020-05-14T14:57:02Z"}

ruben@thinserv1:~/.kube$ kubectl logs -f kubernetes-dashboard-7b544877d5-cmjdl --namespace=kubernetes-dashboard
2020/05/14 14:56:02 Starting overwatch
2020/05/14 14:56:02 Using namespace: kubernetes-dashboard
2020/05/14 14:56:02 Using in-cluster config to connect to apiserver
2020/05/14 14:56:02 Using secret token for csrf signing
2020/05/14 14:56:02 Initializing csrf token from kubernetes-dashboard-csrf secret
2020/05/14 14:56:02 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2020/05/14 14:56:02 Successful initial request to the apiserver, version: v1.18.2+k3s1
2020/05/14 14:56:02 Generating JWE encryption key
2020/05/14 14:56:02 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2020/05/14 14:56:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2020/05/14 14:56:07 Initializing JWE encryption key from synchronized object
2020/05/14 14:56:07 Creating in-cluster Sidecar client
2020/05/14 14:56:07 Auto-generating certificates
2020/05/14 14:56:07 Successful request to sidecar
2020/05/14 14:56:07 Successfully created certificates
2020/05/14 14:56:07 Serving securely on HTTPS port: 8443
2020/05/14 15:00:03 [2020-05-14T15:00:03Z] Incoming HTTP/2.0 GET /api/v1/settings/global request from 10.42.0.0:44400:
2020/05/14 15:00:03 [2020-05-14T15:00:03Z] Incoming HTTP/2.0 GET /api/v1/settings/pinner request from 10.42.0.0:44400:
2020/05/14 15:00:03 Getting application global configuration
2020/05/14 15:00:03 Application configuration {"serverTime":1589468403442}
2020/05/14 15:00:03 [2020-05-14T15:00:03Z] Incoming HTTP/2.0 GET /api/v1/plugin/config request from 10.42.0.0:44400:
2020/05/14 15:00:03 [2020-05-14T15:00:03Z] Outcoming response to 10.42.0.0:44400 with 200 status code
2020/05/14 15:00:03 [2020-05-14T15:00:03Z] Outcoming response to 10.42.0.0:44400 with 200 status code
2020/05/14 15:00:03 [2020-05-14T15:00:03Z] Incoming HTTP/2.0 GET /api/v1/login/status request from 10.42.0.0:44400:
2020/05/14 15:00:03 [2020-05-14T15:00:03Z] Outcoming response to 10.42.0.0:44400 with 200 status code
2020/05/14 15:00:03 [2020-05-14T15:00:03Z] Outcoming response to 10.42.0.0:44400 with 200 status code
2020/05/14 15:00:03 [2020-05-14T15:00:03Z] Incoming HTTP/2.0 GET /api/v1/settings/global request from 10.42.0.0:44400:
2020/05/14 15:00:03 [2020-05-14T15:00:03Z] Outcoming response to 10.42.0.0:44400 with 200 status code
[... further /api/v1/login/* requests, each answered with a 200 status code ...]

Creating an admin user to access the dashboard:

ruben@thinserv1:~/.kube$ kubectl create serviceaccount dashboard -n default
serviceaccount/dashboard created
ruben@thinserv1:~/.kube$ kubectl create clusterrolebinding dashboard-admin -n default --clusterrole=cluster-admin --serviceaccount=default:dashboard
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
ruben@thinserv1:~/.kube$ kubectl get secret $(kubectl get serviceaccount dashboard -o jsonpath="{.secrets[0].name}") -o jsonpath="{.data.token}" | base64 --decode
eyJhbGciOiJSUzI1NiIsImtpZCI6IkVuVEVKcUFjYUtQZ0VyMnhXLUUwV294WTY3bVZoQy1jRHduazZlTm5WUE0ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRhc2hib2FyZC10b2tlbi04bnBwbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJkYXNoYm9hcmQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJjMGRkZTRlZi03MjAwLTQwZTMtYTkyMC04YWZiNjkyNTI2MjYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDpkYXNoYm9hcmQifQ.l10jxaK6CCcCEuA1CiROcabYO5E4USFpghKFHf8dsd6fhu1av6p8vSAKmXF_z0B-6jAOKPsinBTuOBalESbHSGxkifGZpqekTCALHPjxrIyB9p3R0vjwdhHTe2axmSV6Nn9wxULsPtCh_GdSAmJSdo4MnuCGQZvFHZoSXkEP31eP54FZsZDaVO_ttOBfa7L0Yfdx-QULgabgRg_j5S9TK-5r8zdjwFjj8RAw_xnLjWMrPalcHlVCz3qenk3aAY7jB7pUpDKsTs0WfRdqokx0mRNS-TR0kdhWlJAHjQhb-dY068sXAprlsSkQVInrkU4MOlnLezNia2VWVIfwuXZfxg
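The long string printed above is a JWT: three base64url segments separated by dots, the middle one being JSON that names the service account the token belongs to. To sanity-check a token before using it, the payload can be decoded locally (a sketch; peek_jwt_payload is our own helper and does not verify the signature):

```shell
# Decode the payload (second dot-separated segment) of a JWT.
# base64url uses '-' and '_' and drops padding, so restore both first.
peek_jwt_payload() {
  seg=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')
  while [ $(( ${#seg} % 4 )) -ne 0 ]; do seg="${seg}="; done
  printf '%s' "$seg" | base64 -d
}
# peek_jwt_payload "$TOKEN"
```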

Wait a good five minutes or so before trying to access the dashboard at:

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login
Paste the token obtained above into the login form:

(screenshot: 2020-05-14 17-18-57)
(screenshot: 2020-05-14 17-35-41)


In upcoming articles we will deploy services on K3s.
See you soon.