ingress-nginx controller pod doesn't show logs after configuring filebeat

Created on 19 Dec 2018 · 14 comments · Source: kubernetes/ingress-nginx

We are using nginx-ingress-controller v0.20.0, and our Kubernetes version is 1.10.11. When we deploy ingress-nginx, we can read the pod's logs with "kubectl logs <pod name>".
After that, to persist the ingress logs, we configured a filebeat container alongside the ingress controller to ship the logs to Kibana. In the Kibana dashboard we see both the error log and the access log of the ingress controller pod, but "kubectl logs" on the nginx pod now shows only the error log, not the access log.
The Dockerfile of the ingress controller also includes these two lines:

RUN ln -sf /dev/stdout /var/log/nginx/access.log
RUN ln -sf /dev/stderr /var/log/nginx/error.log

Below are the volume mounts and log path for filebeat.

      volumeMounts:
      - name: logs
        mountPath: /var/log/nginx
      env:
        - name: LOGS_PATH
          value: /var/log/nginx/*.log
        - name: ELASTICSEARCH_INDEX
          value: "nginx-svc-logs"

Below is the log of ingress controller after configuration of filebeat:

ubuntu@:~$ kubectl logs nginx-ingress-controller-6fc4f7-65 -n ingress-nginx -c nginx-ingress-controller

NGINX Ingress controller
Release: 0.20.0
Build: git-e8d8103

Repository: https://github.com/kubernetes/ingress-nginx.git

nginx version: nginx/1.15.5
W1218 13:10:00.031668 8 client_config.go:552] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I1218 13:10:00.031889 8 main.go:196] Creating API client for https://100.64.0.1:443
I1218 13:10:00.040245 8 main.go:240] Running in Kubernetes cluster version v1.10 (v1.10.11) - git (clean) commit 637c7e288581ee40ab4ca210618a89a555b6e7e9 - platform linux/amd64
I1218 13:10:00.041556 8 main.go:101] Validated ingress-nginx/default-http-backend as the default backend.
I1218 13:10:00.143977 8 nginx.go:256] Starting NGINX Ingress controller
I1218 13:10:00.161437 8 event.go:221] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"60305c55-fe17-11e8-b4a2-0219198c6576", APIVersion:"v1", ResourceVersion:"22133789", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
I1218 13:10:41.691568 8 status.go:197] new leader elected: nginx-ingress-controller-68f9fc4f7-65nj7

All 14 comments

Looks like there is a bug in the ingress controller.

Why do you assume that?

Are you changing the permissions on /var/log/nginx/ in the mount? Keep in mind the ingress controller runs as the www-data user.

@aledbf We are not changing any permissions on /var/log/nginx/, and I am using the default ingress-nginx controller image (0.20.0). There were no changes inside the container afterwards either.

I tested with v0.14.0 too, but the same issue persists.

@tsahoo if you remove the volumeMount, you see the logs, right?

@aledbf Yes. If I remove filebeat and the volume mount, I see the logs.

Yes. If I remove filebeat and the volume mount, I see the logs.

Then the volume mount is changing the permissions, preventing nginx from writing the files.
Please edit the ingress-nginx deployment, adding fsGroup to the securityContext section.
Like:

      securityContext:
        fsGroup: 33

or add an initContainer to change the permissions to 777.
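A minimal sketch of that initContainer (my own example, not from the thread; it assumes the shared log volume is named logs as in the filebeat manifest above, and that www-data is uid/gid 33 as in the controller image):

```yaml
      initContainers:
      - name: fix-log-permissions
        image: busybox:1.30
        # www-data is uid/gid 33 inside the ingress controller image
        command: ["sh", "-c", "chown -R 33:33 /var/log/nginx && chmod -R 775 /var/log/nginx"]
        volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
```

The initContainer runs to completion before the controller container starts, so nginx finds the directory writable on startup.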

@aledbf Thanks for your update. Will let you know if it works.

@tsahoo another option is to use fluentbit to scrape the kubernetes logs instead of the nginx log files.
By default the controller outputs access logs to stdout.
https://akomljen.com/get-kubernetes-logs-with-efk-stack-in-5-minutes/

@aledbf

We're able to read the logs (from filebeat) and push them upstream (to Elasticsearch). We're unable to tail the pod access logs with kubectl logs.

As I understand it, both the access and error log files are just symbolic links to the process's standard streams.

access.log -> /dev/stdout -> /proc/self/fd/1
error.log -> /dev/stderr -> /proc/self/fd/2

So kubectl logs, which effectively tails stdout and stderr, should show the logs, correct?
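That redirect can be checked in any Linux shell, outside Kubernetes (a sketch; the /tmp path is just for illustration):

```shell
# Writing through a symlink to /dev/stdout reaches the process's
# standard output instead of creating a regular file
ln -sf /dev/stdout /tmp/access.log
echo "GET / HTTP/1.1 200" > /tmp/access.log   # line appears on stdout
rm /tmp/access.log
```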

For some reason, mounting a volume on the path /var/log/nginx is destroying the symlinks.

ls -l /var/log/nginx
-rw-r--r--. 1 root www-data  183 Dec 20 09:30 error.log
-rw-r--r--. 1 root www-data 117K Dec 20 10:15 access.log

(I assume) if the only source for the access/error logs is stdout/stderr, how are the files getting created in the first place?

How are the files getting created in the first place?

There are no files; we just use symlinks to redirect to stdout and stderr. You can use https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#access-log-path to change the path.
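For example, a ConfigMap fragment using those keys (the ConfigMap name, namespace, and log path here are assumptions for illustration; use whatever your controller's --configmap flag points at, and a path writable by www-data):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  # relocate nginx's log files away from /var/log/nginx
  access-log-path: "/my/custom/path/access.log"
  error-log-path: "/my/custom/path/error.log"
```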

@aledbf

We just use symlinks to redirect stdout and stderr.

OK, if I understand nginx logging correctly, it logs directly to the symlinks, and those are redirected to stdout/stderr. Now, because we are destroying the symlinks by mounting a volume on /var/log/nginx, nothing will be available on stdout/stderr to see. Correct?

Yes. You can change the path to the mounted volume. Just make sure the www-data user (in the container) has permission to write to that location.
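A sketch of what that could look like (container names and the /my/custom/path location are illustrative, not from this thread's manifests): share an emptyDir between the controller and filebeat at the custom log path, leaving /var/log/nginx and its symlinks untouched:

```yaml
      containers:
      - name: nginx-ingress-controller
        # access-log-path / error-log-path in the ConfigMap point here
        volumeMounts:
        - name: logs
          mountPath: /my/custom/path
      - name: filebeat
        volumeMounts:
        - name: logs
          mountPath: /my/custom/path
          readOnly: true
      volumes:
      - name: logs
        emptyDir: {}
```

Note the tradeoff discussed below: nginx then writes regular files at the custom path for filebeat to ship, so those logs no longer appear in kubectl logs.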

Anyway, the access/error logs are symlinks to stdout/stderr, right? So how does changing the default path help?

I mean instead of

/var/log/nginx/access.log -> /dev/stdout
/var/log/nginx/error.log -> /dev/stderr

I might get this

/my/custom/path/access.log -> /dev/stdout
/my/custom/path/error.log -> /dev/stderr

So, even if I mount an emptyDir volume on /my/custom/path/, I still won't be able to see the stdout/stderr logs, right?

But why are the symlinks destroyed after the volume mount? As I understand it, the container is started after the volumes are mounted.

But why are the symlinks destroyed after the volume mount? As I understand it, the container is started after the volumes are mounted.

Right, mounting a volume over an existing path hides any files in that path.

In this case that means there is no redirect to stdout/stderr.

Closing. This works as expected: mounting a volume on the log directory removes the redirect to stdout/stderr.
