Tags: Kubernetes · k3s · Raspberry Pi · Edge · Performance
Hosted Tixin on k8s on 2x Raspberry Pi 5s
// How I hosted a production-grade stack on two Raspberry Pi 5s with k3s — ~1000 concurrent connections and 16k req/sec during load testing.
TL;DR: I deployed the entire Tixin platform onto a tiny ARM edge cluster (2× Raspberry Pi 5) using k3s and tuned it to serve ~1000 concurrent connections while sustaining ~16k req/sec under load tests.
Why I tried this
Edge-first infra is underrated. I wanted a real-world test: can a tiny, cheap cluster run production-like traffic while staying stable and observable?
Architecture & setup
- Hardware: 2 × Raspberry Pi 5 (8GB)
- OS: Ubuntu Server 22.04 LTS (ARM64)
- Kubernetes: k3s (lightweight Kubernetes)
- Storage: NVMe SSDs via PCIe HATs for fast local persistence
- Ingress / load testing: a lightweight HAProxy ingress in front of the services; k6 for load generation
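Bootstrapping a two-node k3s cluster is a one-liner per Pi. A sketch of the flow, assuming one Pi as the server and one as an agent (the server address and token are placeholders, and disabling the bundled Traefik is my assumption given HAProxy handles ingress here; the post doesn't show its exact install flags):

```shell
# Server node (first Pi). Skip the bundled Traefik since HAProxy fronts the services.
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik" sh -

# Read the join token from the server node.
sudo cat /var/lib/rancher/k3s/server/node-token

# Agent node (second Pi). Join using the server's address and the token above.
curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<node-token> sh -
```

After the agent joins, `kubectl get nodes` on the server should list both Pis as Ready.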
Key metrics
- Concurrent connections: ~1,000
- Peak throughput: ~16,000 req/sec (observed in targeted k6 tests)
- p95 latency: < 50ms in steady-state tests
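The ~1,000-connection figure maps naturally onto k6 virtual users, and the p95 target can be encoded as a threshold so the test fails if latency regresses. A minimal sketch of that kind of scenario, run with the k6 CLI (the target URL, stage durations, and endpoint are assumptions, not the post's actual script):

```javascript
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  stages: [
    { duration: '2m', target: 1000 }, // ramp up to ~1,000 VUs
    { duration: '5m', target: 1000 }, // hold steady state
    { duration: '1m', target: 0 },    // ramp down
  ],
  thresholds: {
    http_req_duration: ['p(95)<50'], // fail the run if p95 exceeds 50ms
  },
};

export default function () {
  // Hypothetical endpoint behind the HAProxy ingress.
  const res = http.get('http://pi-cluster.local/api/health');
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(0.1);
}
```

Run with `k6 run script.js`; throughput per VU depends on response time, which is how ~1,000 VUs can drive well over 10k req/sec against fast endpoints.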
Optimizations & learnings
- Use small, stateless services and push heavy state into read-optimized stores.
- Tune CFS quotas and CPU pinning on ARM to reduce noisy-neighbor effects.
- Local SSDs with writeback caches significantly reduce tail latency for disk-heavy services.
- Use a service mesh lightly: on tiny clusters, prefer observability over heavy traffic shaping.
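The CPU-pinning point relies on the kubelet's static CPU manager: a pod whose integer CPU request equals its limit gets Guaranteed QoS and exclusive cores, which is what tames noisy neighbors. A hypothetical manifest fragment illustrating the shape (image name, replica count, and sizes are placeholders, not from the post):

```yaml
# Enable pinning per node first, e.g. k3s with --kubelet-arg=cpu-manager-policy=static.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tixin-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: tixin-api
  template:
    metadata:
      labels:
        app: tixin-api
    spec:
      containers:
        - name: api
          image: ghcr.io/example/tixin-api:latest  # placeholder image
          resources:
            requests:
              cpu: "1"          # integer CPUs, requests == limits
              memory: 512Mi     # => Guaranteed QoS, eligible for pinning
            limits:
              cpu: "1"
              memory: 512Mi
```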
Stack
Kubernetes (k3s) · Node.js · Docker · k6 · Prometheus · Grafana
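On the observability side, Prometheus plus Grafana is what makes a cluster like this debuggable under load. A minimal scrape-config sketch (job names, ports, hostnames, and the annotation convention are assumptions, not the post's actual config):

```yaml
scrape_configs:
  # Node-level metrics from each Pi (assumes node_exporter on the default port).
  - job_name: 'node-exporter'
    static_configs:
      - targets: ['pi-node-1:9100', 'pi-node-2:9100']  # hypothetical hostnames

  # Service metrics discovered via Kubernetes, keeping only annotated pods.
  - job_name: 'tixin-services'
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
```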
Repo & references
- Repo / code samples: https://github.com/Adityaadpandey (see relevant infra & k8s manifests).
END OF LOG