# High Memory Usage
## Prevention

- Set memory limits on your pods
- Use `kubectl describe node` to see memory pressure
- Consider enabling memory-based pod eviction policies
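The first prevention item can be sketched as a pod spec fragment. All names, images, and sizes below are illustrative assumptions, not values from this runbook:

```yaml
# Hypothetical pod spec fragment; names and sizes are examples only.
apiVersion: v1
kind: Pod
metadata:
  name: example-app            # hypothetical name
spec:
  containers:
    - name: app
      image: example/app:latest   # hypothetical image
      resources:
        requests:
          memory: "256Mi"      # what the scheduler reserves on the node
        limits:
          memory: "512Mi"      # hard cap; the container is OOM-killed above this
```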
## Example

The Postgres archiver was using 16% of memory because WAL archiving was not working.
```
USER  PID      %CPU  %MEM  VSZ      RSS      TTY  STAT  START  TIME      COMMAND
root  4146498  29.2  34.5  4347184  2739396  ?    Ssl   Jun18  35510:07  /usr/local/bin/k3s server
999   13428    0.0   16.1  1507764  1283196  ?    Ss    2024   15:38     postgres: archiver archiving 000000010000000000000001
```

## Runbook

### Check memory usage

```sh
free -h
top -o %MEM
```
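The same check can be scripted straight from `/proc/meminfo`, for example to flag pressure below a threshold (the 10% threshold here is an arbitrary assumption, not from this runbook):

```sh
# Percent of RAM still available, computed from /proc/meminfo
avail_pct=$(awk '/^MemTotal/ {t=$2} /^MemAvailable/ {a=$2} END {printf "%d", a*100/t}' /proc/meminfo)
echo "available: ${avail_pct}%"

# 10% is an example threshold only
if [ "$avail_pct" -lt 10 ]; then
  echo "WARNING: node under memory pressure"
fi
```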
### Check Container Pod Memory Usage

With `crictl`:
```sh
sudo crictl stats --output json |
  jq -r '.stats[] | "\(.attributes.labels."io.kubernetes.pod.name") \(.memory.workingSetBytes.value) \(.attributes.id)"' |
  sort -k2 -nr | head -10 | column -t
```
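The `workingSetBytes` column in that pipeline is raw bytes; one extra `awk` step makes it human-readable. The input below is a canned sample mimicking the pipeline's `pod bytes container-id` output, not real `crictl` output:

```sh
# Canned sample rows: pod working-set-bytes container-id
printf 'postgres-0 1283196000 abc123\nk3s-server 2739396000 def456\n' |
  awk '{printf "%s  %.0f MiB  %s\n", $1, $2/1048576, $3}'
# → postgres-0  1224 MiB  abc123
#   k3s-server  2612 MiB  def456
```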
### Kill processes by memory usage

```sh
ps aux --sort=-%mem | head -10
kill -9 <high-memory-process-pids>
```
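On minimal node images where `ps` is unavailable, a similar ranking can be derived from `/proc` directly; a read-only sketch (review the output before killing anything):

```sh
# Rank processes by resident memory (VmRSS, in kB) reading /proc directly.
# Kernel threads have no VmRSS and are skipped.
for d in /proc/[0-9]*; do
  rss=$(awk '/^VmRSS/ {print $2}' "$d/status" 2>/dev/null)
  if [ -n "$rss" ]; then
    printf '%s kB\tpid %s\t%s\n' "$rss" "${d#/proc/}" \
      "$(tr '\0' ' ' < "$d/cmdline" 2>/dev/null | cut -c1-60)"
  fi
done | sort -nr | head -5
```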
### Use systemd to restart kubelet/docker

```sh
systemctl restart k3s
systemctl restart containerd
```
### Last resort (reboot the node)

```sh
sudo reboot
```