Minikube version: 0.12.2
Environment:
What happened:
I'm running a fluentd pod to collect logs from /var/log/containers and ship them to splunk, I'm using the following image: https://github.com/Falkonry/docker-fluent-splunk
fluentd is running as root, so I expect it to be able to read the logs, but I'm getting these warnings:
2016-12-01T22:41:01.814337960Z 2016-12-01 22:41:01 +0000 [warn]: /var/log/containers/kubernetes-dashboard-qcmyh_kube-system_kubernetes-dashboard-c98d8d1296b691493ac2a8ce7f30a42dbe8107d2e425cd12f681021144f1895f.log unreadable. It is excluded and would be examined next time.
2016-12-01T22:41:01.814351093Z 2016-12-01 22:41:01 +0000 [warn]: /var/log/containers/kube-dns-v20-zr41g_kube-system_POD-22aaa1f29033d57e6f1a40a2c87c6d3624815824adf9e5192d4ec611e1ab1909.log unreadable. It is excluded and would be examined next time.
2016-12-01T22:41:01.814364345Z 2016-12-01 22:41:01 +0000 [warn]: /var/log/containers/kubernetes-dashboard-qcmyh_kube-system_POD-465a601853d4a418984da54c1a5d7868368b6f221ba936988930cd22b87d4685.log unreadable. It is excluded and would be examined next time.
2016-12-01T22:41:01.814374217Z 2016-12-01 22:41:01 +0000 [warn]: /var/log/containers/kube-addon-manager-minikube_kube-system_kube-addon-manager-0813bd6a91f83b7d5171b8f291a1a353ce3c4672f5494319c61ba0d78c13f046.log unreadable. It is excluded and would be examined next time.
2016-12-01T22:41:01.814381784Z 2016-12-01 22:41:01 +0000 [warn]: /var/log/containers/kube-addon-manager-minikube_kube-system_POD-a894a06883d52c19b5e0f62bff14465f70525363d26bc66652cc31c36b266b67.log unreadable. It is excluded and would be examined next time.
What you expected to happen:
fluentd should be able to read the log files.
How to reproduce it:
Deploy the falkonry/fluent-splunk image and monitor its logs; here's the YAML I'm using:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd-splunk
  namespace: kube-system
  labels:
    name: fluentd-logging
spec:
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging
    spec:
      containers:
      - name: fluentd-splunk
        image: docker.io/falkonry/fluent-splunk:latest
        resources:
          limits:
            cpu: 100m
            memory: 200Mi
        env:
        - name: "SPLUNK_SERVER"
          value: "localhost:8089"
        - name: "SPLUNK_AUTH"
          value: "admin:changeme"
        - name: "SPLUNK_INDEX"
          value: "main"
        - name: "FLUENTD_ARGS"
          value: "-q"
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: containers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: containers
        hostPath:
          path: /var/lib/docker/containers
Minikube boots into a tmpfs; however, there are some paths that we want persisted across starts and stops. For these, we symlink to persistent storage.
docker@minikube:~$ ls -la /var/log/containers/
total 40
drwxr-xr-x 2 root root 240 Dec 2 00:00 ./
drwxrwxr-x 3 root staff 220 Dec 2 00:00 ../
lrwxrwxrwx 1 root root 174 Dec 2 00:00 fluentd-splunk-phifx_kube-system_POD-fd74f4d3a5c45f355a43749be44006555d7f8aa5cac95a74de09d8470e3f973f.log -> /mnt/sda1/var/lib/docker/containers/fd74f4d3a5c45f355a43749be44006555d7f8aa5cac95a74de09d8470e3f973f/fd74f4d3a5c45f355a43749be44006555d7f8aa5cac95a74de09d8470e3f973f-json.log
lrwxrwxrwx 1 root root 174 Dec 2 00:00 fluentd-splunk-phifx_kube-system_fluentd-splunk-f29fe69453664a4b0cda062f31ac5d2d1ca9b801aa393e400dedb2d5d4c2e16e.log -> /mnt/sda1/var/lib/docker/containers/f29fe69453664a4b0cda062f31ac5d2d1ca9b801aa393e400dedb2d5d4c2e16e/f29fe69453664a4b0cda062f31ac5d2d1ca9b801aa393e400dedb2d5d4c2e16e-json.log
lrwxrwxrwx 1 root root 174 Dec 1 23:50 kube-addon-manager-minikube_kube-system_POD-080968157d80b61884ca7eed8d370230dc744614e9354a2a48ec9b571c703101.log -> /mnt/sda1/var/lib/docker/containers/080968157d80b61884ca7eed8d370230dc744614e9354a2a48ec9b571c703101/080968157d80b61884ca7eed8d370230dc744614e9354a2a48ec9b571c703101-json.log
lrwxrwxrwx 1 root root 174 Dec 1 23:50 kube-addon-manager-minikube_kube-system_kube-addon-manager-7d62ee5a532fbc3f087898fc121f30746488ec2cf018c82090426fabe531aa16.log -> /mnt/sda1/var/lib/docker/containers/7d62ee5a532fbc3f087898fc121f30746488ec2cf018c82090426fabe531aa16/7d62ee5a532fbc3f087898fc121f30746488ec2cf018c82090426fabe531aa16-json.log
lrwxrwxrwx 1 root root 174 Dec 1 23:50 kube-dns-v20-czx1r_kube-system_POD-cc8881165fe2fb240ea2648ad004753381774e1dfd3d5b123e97d2b4b406e726.log -> /mnt/sda1/var/lib/docker/containers/cc8881165fe2fb240ea2648ad004753381774e1dfd3d5b123e97d2b4b406e726/cc8881165fe2fb240ea2648ad004753381774e1dfd3d5b123e97d2b4b406e726-json.log
lrwxrwxrwx 1 root root 174 Dec 1 23:51 kube-dns-v20-czx1r_kube-system_dnsmasq-d527d387ce09cf07295c1c9ee9942054ee435dc508353425005887c1eca27d20.log -> /mnt/sda1/var/lib/docker/containers/d527d387ce09cf07295c1c9ee9942054ee435dc508353425005887c1eca27d20/d527d387ce09cf07295c1c9ee9942054ee435dc508353425005887c1eca27d20-json.log
lrwxrwxrwx 1 root root 174 Dec 1 23:51 kube-dns-v20-czx1r_kube-system_healthz-b7479db92f2926050b39a52d24858317e075d7891325f5044eeb38dcb783850b.log -> /mnt/sda1/var/lib/docker/containers/b7479db92f2926050b39a52d24858317e075d7891325f5044eeb38dcb783850b/b7479db92f2926050b39a52d24858317e075d7891325f5044eeb38dcb783850b-json.log
lrwxrwxrwx 1 root root 174 Dec 1 23:51 kube-dns-v20-czx1r_kube-system_kubedns-81c6783d543e047c0244fc96d84d73101f98a754888ae065e66186a70b783122.log -> /mnt/sda1/var/lib/docker/containers/81c6783d543e047c0244fc96d84d73101f98a754888ae065e66186a70b783122/81c6783d543e047c0244fc96d84d73101f98a754888ae065e66186a70b783122-json.log
lrwxrwxrwx 1 root root 174 Dec 1 23:50 kubernetes-dashboard-wx69s_kube-system_POD-836e96521bc5e57fcc0904bd12f6d85ecd436896e0e8c673b033cf46f9a3b214.log -> /mnt/sda1/var/lib/docker/containers/836e96521bc5e57fcc0904bd12f6d85ecd436896e0e8c673b033cf46f9a3b214/836e96521bc5e57fcc0904bd12f6d85ecd436896e0e8c673b033cf46f9a3b214-json.log
lrwxrwxrwx 1 root root 174 Dec 1 23:51 kubernetes-dashboard-wx69s_kube-system_kubernetes-dashboard-0b93c1737922f506ce895d442168945cd7c05c2d912c8e6384371fcfabd49566.log -> /mnt/sda1/var/lib/docker/containers/0b93c1737922f506ce895d442168945cd7c05c2d912c8e6384371fcfabd49566/0b93c1737922f506ce895d442168945cd7c05c2d912c8e6384371fcfabd49566-json.log
So really these log files are stored at /mnt/sda1/var/lib/docker/containers/...
I was able to fix this by changing the hostPath mount for the containers volume to /mnt/sda1/var/lib/docker/containers.
This should temporarily fix the issue, but I'll think of a better way to handle this.
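Concretely, that's just the hostPath of the containers volume; a minimal sketch against your manifest (untested):

volumes:
- name: containers
  hostPath:
    # on the minikube VM, Docker's data root lives on persistent storage
    path: /mnt/sda1/var/lib/docker/containers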
Thank you @r2d4 !
I did as you suggested and updated the hostPath to /mnt/sda1/var/lib/docker/containers, but I'm still getting the same error.
Here's my updated YAML; any idea what I'm doing wrong?
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd-splunk
  namespace: kube-system
  labels:
    name: fluentd-logging
spec:
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging
    spec:
      containers:
      - name: fluentd-splunk
        image: docker.io/falkonry/fluent-splunk:latest
        resources:
          limits:
            cpu: 100m
            memory: 200Mi
        env:
        - name: "SPLUNK_SERVER"
          value: "localhost:8089"
        - name: "SPLUNK_AUTH"
          value: "admin:changeme"
        - name: "SPLUNK_INDEX"
          value: "main"
        - name: "FLUENTD_ARGS"
          value: "-q"
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: containers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: containers
        hostPath:
          path: /mnt/sda1/var/lib/docker/containers
Ok, I figured it out.
I also had to change the mountPath to the same path: the symlinks in /var/log/containers point to absolute paths under /mnt/sda1/var/lib/docker/containers, so fluentd can only resolve them if that directory exists at the same absolute path inside the container. This configuration works:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd-splunk
  namespace: kube-system
  labels:
    name: fluentd-logging
spec:
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging
    spec:
      containers:
      - name: fluentd-splunk
        image: docker.io/falkonry/fluent-splunk:latest
        resources:
          limits:
            cpu: 100m
            memory: 200Mi
        env:
        - name: "SPLUNK_SERVER"
          value: "localhost:8089"
        - name: "SPLUNK_AUTH"
          value: "admin:changeme"
        - name: "SPLUNK_INDEX"
          value: "main"
        - name: "FLUENTD_ARGS"
          value: "-q"
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: containers
          mountPath: /mnt/sda1/var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: containers
        hostPath:
          path: /mnt/sda1/var/lib/docker/containers
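In case it helps others: since the symlinks embed absolute host paths, a variant that should also work on nodes where Docker keeps its data directly under /var/lib/docker is to mount both locations. This is only an untested sketch, and the containers-persistent volume name is made up:

spec:
  containers:
  - name: fluentd-splunk
    # image, resources, and env unchanged from the config above
    volumeMounts:
    - name: varlog
      mountPath: /var/log
    - name: containers
      mountPath: /var/lib/docker/containers
      readOnly: true
    - name: containers-persistent        # extra mount so minikube's symlink targets resolve
      mountPath: /mnt/sda1/var/lib/docker/containers
      readOnly: true
  volumes:
  - name: varlog
    hostPath:
      path: /var/log
  - name: containers
    hostPath:
      path: /var/lib/docker/containers
  - name: containers-persistent
    hostPath:
      path: /mnt/sda1/var/lib/docker/containers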
Hey, it looks like you figured this out. Closing this; feel free to reopen if you still have issues. Thanks!
I think kubelet creates symbolic links in /var/log/containers (just links, not the real files), so you must mount both the links and the real files, or mount only the real files with the right fluentd.conf.
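For reference, the tail source in fluentd.conf typically looks roughly like the sketch below. It is shown as a ConfigMap purely for illustration; the falkonry image presumably ships its own baked-in config, and the ConfigMap name and key here are made up. The path points at the symlinks, which is why their targets also have to be visible inside the container at the same path:

# Hypothetical ConfigMap, illustration only -- not taken from falkonry/fluent-splunk.
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
  namespace: kube-system
data:
  fluent.conf: |
    # Tail the kubelet-created symlinks under /var/log/containers.
    # Their targets (on minikube, /mnt/sda1/var/lib/docker/containers/...)
    # must be mounted in the pod at the same absolute path, otherwise
    # in_tail reports the files as unreadable.
    <source>
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/fluentd-containers.log.pos
      tag kubernetes.*
      format json
      read_from_head true
    </source>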