Installing GitHub Action Runner in Kubernetes with Terraform

December 10, 2024

As part of my homelab journey, I wanted to build a CI/CD pipeline for my apps, and I wanted to use GitHub self-hosted runners.

There are basically two installation options: as a stand-alone system, or inside a Kubernetes cluster. I eventually decided to deploy it in the Kubernetes cluster I already had running (more specifically, a K3s cluster), so the runners would spin up on demand and there would be no need for a dedicated VM (I didn’t like the idea of installing it in a VM shared with other services either).

Installing the self-hosted runners in the Kubernetes cluster is in principle quite easy: as described in the official documentation, we just need to install two Helm charts, one with the Actions Runner Controller (ARC) and another one with the runner scale set.

Bringing this into Terraform is quite straightforward using the helm provider:

resource "helm_release" "gha_runner_arc" {
  name = "gha-runner-arc"
  repository = "oci://ghcr.io/actions/actions-runner-controller-charts"
  chart = "gha-runner-scale-set-controller"
  namespace = "github"
  create_namespace = true
  recreate_pods = true
}
resource "helm_release" "gha_runner_arc_runner_set" {
  depends_on = [
    helm_release.gha_runner_arc
  ]

  name = "lab-gha-runner"
  namespace = "github"
  repository = "oci://ghcr.io/actions/actions-runner-controller-charts"
  chart = "gha-runner-scale-set"
  recreate_pods = true

  set {
    name = "githubConfigUrl"
    value = "https://github.com/dagi3d/dagi3d.net"
  }

  set {
    name = "githubConfigSecret.github_token"
    value = var.gha_token
  }
}
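With both releases applied, workflows can target the runners by using the runner scale set name (which, unless overridden, is the Helm release name, lab-gha-runner here) as the runs-on label. A minimal workflow sketch, with placeholder job contents:

name: ci
on: push
jobs:
  build:
    runs-on: lab-gha-runner # the runner scale set name, i.e. the Helm release name
    steps:
      - uses: actions/checkout@v4
      - run: echo "Hello from the self-hosted runner"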

Unfortunately, it wasn’t that simple for a few reasons:
- The default installation uses what they call the “kubernetes” container mode. This worked for my first tests, but it turns out it doesn’t include the docker command.
- To be able to use Docker inside the GitHub Actions jobs, we need to set the containerMode.type=dind value instead (Docker-in-Docker).

In Terraform, that’s another set block inside the runner set release:

  set {
    name  = "containerMode.type"
    value = "dind"
  }

Now the GitHub Action started building the Docker image as expected, but it got stuck every time it tried to download some dependencies 😟

Apparently, the Docker installation might use an MTU for its bridge network that is different from the one used in your own network, which can cause all sorts of connectivity problems. At least this issue has been acknowledged and people have found workarounds for it: https://github.com/actions/actions-runner-controller/discussions/2993

The idea is to use our own pod spec template so we can provide our own Docker configuration with the desired MTU value.
One of the suggested approaches is to inject the Docker config through a mounted volume, with its contents defined in a ConfigMap, roughly like the sketch below.
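For reference, that approach boils down to something like the following ConfigMap (a sketch based on the linked discussion; the ConfigMap name is made up, and 1450 is just a placeholder for whatever MTU your network actually uses), whose daemon.json would then be mounted into the dind container under /etc/docker:

apiVersion: v1
kind: ConfigMap
metadata:
  name: docker-daemon-config
  namespace: github
data:
  daemon.json: |
    {
      "mtu": 1450
    }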

It turns out we can also pass the --mtu=<VALUE> flag directly to the dockerd command in the dind container, and that works too, so we won’t need to mount the volume 🙂

The Terraform config now looks like this:

resource "helm_release" "gha_runner_arc_runner_set" {
  depends_on = [ helm_release.gha_runner_arc ]

  name = "lab-gha-runner"
  namespace = "github"
  repository = "oci://ghcr.io/actions/actions-runner-controller-charts"
  chart = "gha-runner-scale-set"
  create_namespace = true
  recreate_pods = true

  set {
    name = "githubConfigUrl"
    value = "https://github.com/dagi3d/dagi3d.net"
  }

  set {
    name = "githubConfigSecret.github_token"
    value = var.gha_token
  }

  # Important: when using a custom spec template do not set the containerMode.type param

  values = [
    file("${path.module}/values.yaml")
  ]
}
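Since the runner set reads the token from var.gha_token, it’s worth declaring that variable as sensitive so Terraform redacts its value in plan and apply output. A minimal sketch of the declaration:

variable "gha_token" {
  description = "GitHub token used by ARC to register the runners"
  type        = string
  sensitive   = true
}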

The referenced values.yaml file looks like this:

template:
  spec:
    initContainers:
    - name: init-dind-externals
      image: ghcr.io/actions/actions-runner:latest
      command: ["cp", "-r", "-v", "/home/runner/externals/.", "/home/runner/tmpDir/"]
      volumeMounts:
        - name: dind-externals
          mountPath: /home/runner/tmpDir
    containers:
    - name: runner
      image: ghcr.io/actions/actions-runner:latest
      command: ["/home/runner/run.sh"]
      env:
        - name: DOCKER_HOST
          value: unix:///var/run/docker.sock
      volumeMounts:
        - name: work
          mountPath: /home/runner/_work
        - name: dind-sock
          mountPath: /var/run
    - name: dind
      image: docker:dind
      args:
        - dockerd
        - --mtu=1450 # match the MTU of your underlying network
        - --host=unix:///var/run/docker.sock
        - --group=$(DOCKER_GROUP_GID)
      env:
        - name: DOCKER_GROUP_GID
          value: "123"
      securityContext:
        privileged: true
      volumeMounts:
        - name: work
          mountPath: /home/runner/_work
        - name: dind-sock
          mountPath: /var/run
        - name: dind-externals
          mountPath: /home/runner/externals
    volumes:
    - name: work
      emptyDir: {}
    - name: dind-sock
      emptyDir: {}
    - name: dind-externals
      emptyDir: {}

After this change, builds started working like a charm! 🚀