Raspberry Pi Kubernetes Cluster - Part 3
This is part 3 of a series: Part 1, Part 2.
We left off part 2 of the series with the hardware assembled, installed into the case, the case mounted into the rack, and the base OS image booted up.
I decided to install k3s, as it seemed ideally suited to the nature of this cluster. The instructions in the Quick Start Guide worked well; however, I ran into an issue that required me to modify the /boot/cmdline.txt file on each Pi and add the following arguments:
cgroup_memory=1 cgroup_enable=memory
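A small sketch of that edit, assuming the stock Raspberry Pi OS path. It runs against a scratch copy by default, so nothing is touched until you point CMDLINE at the real file and rerun with sudo:

```shell
# Append the cgroup flags to the single-line kernel command line.
# CMDLINE defaults to a scratch copy here; on a real Pi, set
# CMDLINE=/boot/cmdline.txt and run with sudo, then reboot.
CMDLINE="${CMDLINE:-$(mktemp)}"

# Seed the scratch copy with a typical stock cmdline (one line only).
[ -s "$CMDLINE" ] || printf 'console=serial0,115200 root=/dev/mmcblk0p2 rootfstype=ext4 rootwait\n' > "$CMDLINE"

# Append the flags only if they are not already present.
grep -q 'cgroup_enable=memory' "$CMDLINE" || \
  sed -i '$ s/$/ cgroup_memory=1 cgroup_enable=memory/' "$CMDLINE"

cat "$CMDLINE"
```

Everything in cmdline.txt has to stay on one line, which is why the sed appends to the existing line instead of adding a new one.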
I then rebooted the Pis and ran the k3s install. I installed with the default configuration, which includes Traefik. I haven't used that ingress controller before, so time will tell whether I keep it or replace it with something more familiar like the NGINX Ingress Controller.
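For reference, the Quick Start install boils down to one command on the first Pi and one on each of the others, with the join token read off the server; the server hostname and token below are placeholders:

```shell
# On the first Pi (the server/control-plane node):
curl -sfL https://get.k3s.io | sh -

# Read the join token off the server node:
sudo cat /var/lib/rancher/k3s/server/node-token

# On each remaining Pi (agent node):
curl -sfL https://get.k3s.io | K3S_URL=https://pi-server:6443 K3S_TOKEN=<token> sh -
```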
Now that the cluster is up, I wanted to deploy a private Docker registry. I have a Synology NAS, so I deployed the registry container with a PV/PVC attached to it, backed by an NFS volume coming off the NAS. The registry is secured with basic auth and exposed via the Traefik Ingress.
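A sketch of how the NFS-backed storage hangs together; the NAS address, export path, and size are placeholders for my environment, and the static PV is pinned to the claim via volumeName:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: registry-pv
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.10      # Synology NAS address (placeholder)
    path: /volume1/registry   # NFS export on the NAS (placeholder)
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registry-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""        # skip the default StorageClass
  volumeName: registry-pv     # bind to the static PV above
  resources:
    requests:
      storage: 50Gi
```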
Next, I needed to bootstrap a GitHub Actions runner. I wanted to do this because I have been very successful configuring fleets of k8s clusters via a runner inside each cluster. https://github.com/SanderKnape/github-runner/ is a good starting point for building the container image. I built and pushed an image to my registry server, created a GitHub PAT and deployed it as a secret into the cluster, deployed an image pull secret, then deployed a runner into the cluster.
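The image pull secret step can be sketched like this; assembling the .dockerconfigjson payload by hand shows what `kubectl create secret docker-registry` produces. The registry host and credentials are placeholders, not my real values:

```shell
# Assemble the .dockerconfigjson payload that backs an image pull secret.
# Registry host and credentials below are placeholders.
REGISTRY=registry.example.com
REG_USER=ci
REG_PASS=s3cret

# Docker-style registry auth is just base64("user:password").
AUTH=$(printf '%s:%s' "$REG_USER" "$REG_PASS" | base64)

cat > dockerconfig.json <<EOF
{"auths":{"$REGISTRY":{"auth":"$AUTH"}}}
EOF
cat dockerconfig.json
```

In practice, `kubectl create secret docker-registry` builds this payload in one step, and the PAT goes into a plain generic secret that the runner deployment reads.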
I then built a GitHub Actions workflow for configuring the rest of the cluster. As an example:
```yaml
name: k8s cluster config
on:
  push:
    branches:
      - main
    paths:
      - '.github/workflows/k8s-cluster.yaml'
  repository_dispatch:
    types: [k8s-cluster-config]
jobs:
  k8s-cluster-config:
    name: k8s-cluster-config
    runs-on: [self-hosted, k8s]
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: prepare
        env:
          KUBECONFIG: ${{ secrets.KUBECONFIG }}
        run: |
          mkdir -p $RUNNER_TEMP/helm/{cache,config,data}
          echo "HELM_CACHE_HOME=$RUNNER_TEMP/helm/cache" >> $GITHUB_ENV
          echo "HELM_CONFIG_HOME=$RUNNER_TEMP/helm/config" >> $GITHUB_ENV
          echo "HELM_DATA_HOME=$RUNNER_TEMP/helm/data" >> $GITHUB_ENV
          echo "$KUBECONFIG" | base64 --decode > $RUNNER_TEMP/kubeconfig.yaml
          echo "KUBECONFIG=$RUNNER_TEMP/kubeconfig.yaml" >> $GITHUB_ENV
      - name: helm repo update
        run: |
          helm repo add hashicorp https://helm.releases.hashicorp.com
          helm repo add longhorn https://charts.longhorn.io
          helm repo update
      - name: test
        run: kubectl get all
```
Next steps include:
- Deploying Prometheus
- Deploying Grafana
- Deploying the metrics server (so I can play with HPA)