Working with multiple node architectures in K3s
Edited: Saturday 3 May 2025

I am in the process of rebuilding my home lab after neglecting it for the last few years. Originally it consisted of an old laptop and an SBC running a Kubernetes cluster on K3s; it was more for learning than for actually hosting anything. But I recently got a new mini PC that makes the prospect of self-hosting a few apps more viable. Now my cluster consists of three nodes:

  • TRIGKEY Ryzen 7 Mini PC (x86_64)
  • Toshiba Satellite C855S5308 (x86_64)
  • Odroid XU4 A15 (armv7l)

I figured the best place to start rebuilding my home lab would be to get some sort of observability tool in place, to make troubleshooting future deployments less of a pain. I decided to go with an ELK stack, as I have a lot of experience with it from my day job.

Now you’ll notice the Odroid uses a different architecture than the other two nodes. This can occasionally cause issues when an image is not available for the armv7l architecture, and I ran into an example of exactly that while setting up the ELK stack.
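If you’re not sure what architecture a node reports, `uname -m` on the node itself will tell you; cluster-wide, the same value is exposed as the `kubernetes.io/arch` node label:

```shell
# Check the architecture a node reports; this is what the container
# runtime uses to pick an entry from a multi-arch manifest list.
arch="$(uname -m)"
echo "$arch"

# Cluster-wide, the same information is exposed as a node label:
#   kubectl get nodes -L kubernetes.io/arch
```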

The Setup

To get the ELK stack stood up I used the Helm charts provided by Elastic. Overall it’s a dead simple procedure:

  1. Add the repo

helm repo add elastic https://helm.elastic.co
helm repo update

  2. Create your values files

helm show values elastic/filebeat > filebeats/values.yml
helm show values elastic/logstash > logstash/values.yml
helm show values elastic/elasticsearch > elastic/values.yml
helm show values elastic/kibana > kibana/values.yml

# Edit the various values files as you see fit.

  3. Install the charts

helm install elasticsearch elastic/elasticsearch -n elk-stack --create-namespace -f elastic/values.yml
helm install filebeat elastic/filebeat -f filebeats/values.yml -n elk-stack
helm install logstash elastic/logstash -f logstash/values.yml -n elk-stack
helm install kibana elastic/kibana -f kibana/values.yml -n elk-stack

I configured my values so that each node in the cluster would have a filebeat pod watching the logs of every pod running on it. These filebeat pods feed into a logstash pod running on the TrigKey node. Logstash does a bit of formatting and then sends everything along to Elasticsearch. For that part I set up a two-node Elasticsearch cluster, with a pod on both the TrigKey node and the Toshiba node. The Odroid does not have much storage attached to it at the moment, so I left it out of the ES cluster. Finally, to be able to read and visualize the logs, I set up a Kibana pod on the TrigKey node. Once the installs complete I should have a fully functioning ELK stack, set up to automatically capture the logs of any pod I spin up on my cluster.
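For reference, the filebeat side of that routing boils down to a few lines of the chart’s values. The sketch below is a hypothetical excerpt, not my exact file: the `filebeatConfig` key comes from Elastic’s chart, but the input paths and the logstash service name are assumptions you would adjust for your own cluster.

```yaml
# Hypothetical excerpt of filebeats/values.yml: run filebeat on every node
# (the chart's default DaemonSet) and send output to logstash rather than
# straight to elasticsearch.
filebeatConfig:
  filebeat.yml: |
    filebeat.inputs:
      - type: container
        paths:
          - /var/log/containers/*.log
    output.logstash:
      hosts: ["logstash-logstash:5044"]
```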

kubectl -n elk-stack get pods
NAME                            READY   STATUS             RESTARTS         AGE
elasticsearch-master-0          1/1     Running            16 (36h ago)     96d
elasticsearch-master-1          1/1     Running            2 (10d ago)      10d
filebeat-filebeat-929z2         0/1     ImagePullBackOff   0                75s
filebeat-filebeat-nlhn7         1/1     Running            747 (10d ago)    98d
filebeat-filebeat-q4g8r         1/1     Running            5 (10d ago)      98d
kibana-kibana-555ddb75f-sknz7   1/1     Running            7 (10d ago)      98d
logstash-logstash-0             1/1     Running            2390 (21m ago)   98d

Welp.

ImagePullBackOff Loop

As the table above shows, one of my filebeat pods is in an ImagePullBackOff state. To put that in human terms, the node this particular pod is scheduled on is having trouble pulling the pod’s image. In this case the image in question is docker.elastic.co/beats/filebeat:8.5.1. By using the wide output option of get pods (kubectl get pods -o wide) I can trace this pod to the Odroid node. To pull this pod out of its backoff loop I’ll need to figure out why that node can’t pull the filebeat image. I’ve found the best place to start with these issues is to try pulling the image manually and see what breaks.
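Concretely, the tracing looks something like this (pod name taken from the listing above; `describe` surfaces the pull error in the pod’s events):

```shell
# Find which node the failing pod landed on (NODE column).
kubectl -n elk-stack get pods -o wide | grep filebeat

# The pod's events spell out the pull failure itself.
kubectl -n elk-stack describe pod filebeat-filebeat-929z2 | tail -n 5
```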

# from the odroid node's command line
$ docker image pull docker.elastic.co/beats/filebeat:8.5.1
8.5.1: Pulling from beats/filebeat
no matching manifest for linux/arm/v7 in the manifest list entries

That explains it. Since the Odroid uses the arm v7 architecture, it tries to pull an image built for that architecture, but in this case no such image exists. Which honestly is not surprising; I can’t imagine there is a lot of demand for this type of image, and if you take a look at Elastic’s support matrix page you’ll see they no longer support 32-bit platforms starting with v8.
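You can also see which architectures an image does ship for without attempting a pull, by inspecting its manifest list (this assumes a docker CLI recent enough to have the manifest subcommand enabled):

```shell
# List the platforms published for the 8.5.1 filebeat image;
# arm/v7 is absent from the list.
docker manifest inspect docker.elastic.co/beats/filebeat:8.5.1 \
  | grep -A3 '"platform"'
```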

The Fix

So we have a straightforward problem: there is no image available for the architecture our node runs on. The simplest solution would be to pick an older version that still ships an arm v7 image, but that would mean a downgrade for the other two nodes, which doesn’t sit right with me. Instead I went with a more involved solution:

  1. Clone the git repo for filebeat and check out the v8.5.1 tag
  2. Build filebeat for the Odroid’s architecture
  3. Write a Dockerfile that uses the newly built binary
  4. Add the resulting image to the Odroid’s local image store

Clone the repo and build the binary

The repo for filebeat lives on GitHub, so you can clone it as you would any other git repo. Here I’m checking out the v8.5.1 tag to make sure the version of the source matches what’s running on my other nodes.

# from the odroid node's command line
git clone --branch v8.5.1 https://github.com/elastic/beats.git

Now that we have the source code we can go ahead and build it. Filebeat is written in Go, so building a binary for a specific architecture or OS is simple with the GOARCH and GOOS environment variables. Fortunately we don’t have to worry about any of that, as Elastic has set up a simple build process using mage. The steps are documented in their contribution guide here, but I’ve included the commands below:

cd beats
make mage
cd filebeat
mage build
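For the curious, what mage does here is roughly a plain Go build; a hand-rolled equivalent might look something like the sketch below (an assumption on my part, shown only to demystify the build; run from the beats/filebeat directory, with GOARM=7 selecting the armv7 variant):

```shell
# Rough manual equivalent of `mage build` for a 32-bit ARM target.
# Not needed if mage works for you.
GOOS=linux GOARCH=arm GOARM=7 go build -o filebeat .
```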

Creating an image with our binary

Creating the image file is a bit trickier, since we can’t just copy Elastic’s Dockerfile. But it would seem I am not the first person to run into this issue, as I was able to find this Medium post walking through a similar solution for the ppc64le architecture. The article uses an older version of filebeat, but we can rework it to fit our needs. First we copy a few files, including our new binary, into a directory so they can be easily copied into the docker image:

# while still in the beats/filebeat directory
mkdir -p filebeat-odroid/data filebeat-odroid/logs
cp -p filebeat ./filebeat-odroid
cp -p filebeat.docker.yml ./filebeat-odroid/filebeat.yml
cp -p filebeat.reference.yml ./filebeat-odroid
cp -p README.md ./filebeat-odroid
cp -pR module ./filebeat-odroid
cp -pR modules.d ./filebeat-odroid

Then we can set up our Dockerfile like so:

FROM ubuntu:latest
RUN apt update
RUN apt upgrade -y
RUN apt install -y ca-certificates curl gawk libcap2-bin xz-utils
RUN apt clean
ENV PATH="/usr/share/filebeat:${PATH}"
COPY ./filebeat-odroid /usr/share/filebeat
WORKDIR /usr/share/filebeat
ENTRYPOINT ["filebeat"]

Deploying the image locally

Now this step will largely depend on which runtime your cluster uses for its containers. I am using containerd, which is the default with K3s, so I have the extra step of importing the image into containerd’s image store.

# if you are using docker as your runtime this should be enough
docker build -t docker.elastic.co/beats/filebeat:8.5.1 .

# if you use containerd you'll have to import the image into its store
docker save docker.elastic.co/beats/filebeat:8.5.1 > image-tag.tar
k3s ctr images import image-tag.tar
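Before waiting on the pod, it’s worth sanity-checking both steps: that the image was actually built for the right architecture (it was built on the Odroid, so it should report arm rather than amd64), and that containerd now has it:

```shell
# The freshly built image should report linux/arm.
docker image inspect docker.elastic.co/beats/filebeat:8.5.1 \
  --format '{{.Os}}/{{.Architecture}}'

# And containerd on the node should now list the imported image.
k3s ctr images ls | grep filebeat
```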

Once the image is imported successfully we just need to wait for the pod to restart and we should be good to go:

NAME                            READY   STATUS    RESTARTS         AGE
elasticsearch-master-0          1/1     Running   16 (38h ago)     96d
elasticsearch-master-1          1/1     Running   2 (10d ago)      10d
filebeat-filebeat-929z2         1/1     Running   0                115m
filebeat-filebeat-nlhn7         1/1     Running   747 (10d ago)    98d
filebeat-filebeat-q4g8r         1/1     Running   5 (10d ago)      98d
kibana-kibana-555ddb75f-sknz7   1/1     Running   7 (10d ago)      98d
logstash-logstash-0             1/1     Running   2395 (19m ago)   98d

k3s ELK filebeats armv7 kubernetes homelab