Moby: Send logs to multiple log drivers

Created on 11 Nov 2015  ·  61 Comments  ·  Source: moby/moby

I'd like to be able to demux logs to multiple log drivers. The use case for this is to log to disk with the json-log driver so that docker logs (live streaming) works, but also to send the logs to an archive via e.g. the fluentd log driver for longer-term viewing/searching/filtering/etc.

area/logging kind/feature

Most helpful comment

@cpuguy83 Is this planned for a Docker CE release soon, or will it remain an EE-only feature?

All 61 comments

Perhaps viewing the fluentd logs is really the best option here (although supporting docker logs for drivers other than json-file and journald would be nice to have).

Using the json-file driver is discouraged for production use, and it can be quite resource-hungry for high-volume logging.

Agreed. Fluentd doesn't do any storage itself, just routing to various storage/processing backends, so it's not really an option unless the backend supports log streaming. In my case (and in the Kubernetes examples) that's normally Elasticsearch, which doesn't support streaming.

I didn't know that docker logs worked with journald - that would be a much better way to go & have rsyslog or something configured to route logs on from there perhaps.

Yup, it was added in https://github.com/docker/docker/pull/13707. Forgot which release that was (sorry :smile:)

Would it be better to just log to something like syslog, then have all the tools built around that forward to other locations? It's a lot of overhead in the daemon to do something like this.

I agree with @crosbymichael here; I'm not sure we want to add the extra complexity in the daemon for this.

Agreed with @crosbymichael, more portable as any tool can tail the logfile based on one or many container IDs.

Can the syslog driver still work with docker logs? I was under the impression that was only json-log & journald? I agree that your approach generally makes more sense though.

@jimmidyson not at the moment, but you can still do tail -f /var/log/syslog | grep --line-buffered <container-id> (or /var/log/messages on Fedora/CentOS etc.).

docker logs could wrap that for us to fully embrace logging strategies, but this would only be sugar on top of your existing system habits.

The underlying issue I'm trying to solve is moving logs off of nodes as quickly as possible for archiving/filtering/searching/etc while retaining docker logs functionality for live viewing of logs. I guess some configuration around journald should do that for me but it would be good to support docker logs for all log drivers (fluentd comes to mind as one to support somehow for that).

@jimmidyson rsyslog is your friend too :-)

@jimmidyson supporting docker logs on a logging backend is highly dependent on the service being called.
Logging to syslog, for instance, has no defined way to actually go and read those logs.

We have the same requirements: currently logs are pushed to Logstash using the gelf driver, but for quick debugging (for example kube-ui / docker logs) we would like to keep the functionality of the json-log driver. The log-opts for json-log could keep 1-2 days of logs while Logstash provides us with a long-term archiving solution.

Agreed with @djsly. I want to use the gelf driver. I can set up Logstash to log to a local file and use tail -f for quick debugging, but there will be JSON-formatted logs there, containing extra information like 'container_id' and 'image_name', so it will be harder to read than the docker logs format.

+1 for the same reasons as multiple people before me

+1

I hate leaving a me-too comment, but it's exactly the same problem here for AWS CloudWatch Logs: not being able to use docker logs or the kube-ui.

Aren't there better tools for reading/parsing logs than docker logs?
On a prod system I would expect to be able to read all my logs in one place.

@cpuguy83 For archiving logs there are loads of great tools. The problem is live streaming logs which, even in a single place, ultimately usually stream straight from docker logs. Both archiving & live streaming are important for prod systems.

@jimmidyson For live streaming you can do docker attach --no-stdin

What about an "aggregate" logging driver whose sole purpose is to delegate to multiple other logging drivers? That way we could log to json-log to retain docker logs functionality (and configure it to have a small max file size for live streaming) and also use something like GELF.

Is this a terrible idea?

+1 for an aggregate driver delegating to multiple log drivers. That would give the most flexibility. Did not know about "attach" for live streaming. Will try that. Thanks.

Thanks for pointing that out, @cpuguy83, that's really all that's needed for me. Logs to Logstash via gelf, docker attach --no-stdin for development and debugging. Beautiful!

+1

+1

+1

+1

+1

Stop spamming this bug with +1 comments, that's what the 👍 button is for: https://github.com/blog/2119-add-reactions-to-pull-requests-issues-and-comments

For what it's worth, it looks like docker logs now supports the journald logging driver. You can then use fluentd to stream from journald to wherever (e.g. Elasticsearch), while still maintaining the benefits of docker logs.
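
For illustration, a minimal setup on a systemd host might look like this (the container name, image, and journalctl filter are just examples):

```
$ docker run -d --name web --log-driver journald nginx
$ docker logs -f web                  # works, since docker logs can read from journald
$ journalctl -f CONTAINER_NAME=web    # the same stream, which fluentd can also consume
```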

@razic FWIW you could do that fluentd trick with json-file too :)

Except JSON driver is not recommended in production.

@razic It's "pretty good" now if you don't mind only having the logs on the local daemon. Also need to make sure to do rotations/max file sizes.

@cpuguy83 good to know. what has changed in the JSON driver since that recommendation? Also, I was under the impression that the docker daemon would take care of cleaning out the log directory as needed. I think we're making the switch to journald but let me know if you think there is any benefit of one over the other.

@razic Fixed memory leaks, json marshalling is optimized, support for file rotation.

Docker does not clean out the log directory until you remove the container unless you set max size and max number of files.

If you have journald available, might as well use it especially if you are using it for other services.
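
For example, size-based rotation for the json-file driver can be set per container (the same options can also go in daemon.json as daemon-wide defaults; the values here are illustrative):

```
$ docker run -d --name app \
    --log-driver json-file \
    --log-opt max-size=10m \
    --log-opt max-file=3 \
    nginx
```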

I am not sure whether it is just me being clumsy, but when I tried the praised docker attach --no-stdin "solution" and pressed CTRL+C after checking the logs (like with docker logs), the container exited. I terminated 5 containers before realizing this side-effect. That's definitely not nice, even if it is described here: https://docs.docker.com/engine/reference/commandline/attach/ --> I should have pressed ctrl+p then ctrl+q.
Anyway, it's also mentioned there that

> Because of this, it is not recommended to run performance critical applications that generate a lot of output in the foreground over a slow client connection. Instead, users should use the docker logs command to get access to the logs.

@szakasz Add --sig-proxy=false to avoid passing signals (e.g. ctrl c) to the container.

https://docs.docker.com/engine/reference/commandline/attach/
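
For example (the container name is illustrative):

```
# Ctrl+C now detaches the client instead of signalling the container's process
$ docker attach --no-stdin --sig-proxy=false mycontainer
```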

> For what it's worth, it looks like docker logs now supports the journald logging driver. You can then use fluentd to stream from journald to wherever (e.g. Elasticsearch), while still maintaining the benefits of docker logs.

Tried that. Fluentd cannot read journald reliably yet.

Are multiple drivers really needed, or just the ability to call docker logs when another driver is used?

By the way, logging plugins were just merged; a multi-logger could be implemented there. I do not think we will add support for multiple drivers in the core, though we can look at a solution to enabling docker logs for all drivers.

@cpuguy83 retaining the ability to use docker logs would solve my use case. Still being able to configure how many logs are retained locally would be crucial though, so you don't run the server out of space, which is one of the main reasons for using a different logging driver in the first place.

@erindru I hope running out of space isn't the main case since we have rotation support 😄
Getting logs off an ephemeral machine though is pretty important.

Sorry what I meant was, in order for docker logs to work the logs need to be local, right? So retaining the ability to configure the local log rotation in addition to configuring the specified logging driver would be crucial

Multiple drivers would open up a lot of use cases and flexibility. I know we've talked about how complex it would be compared to a ring buffer or similar, but I think it's the "better" option long-term. Either would solve our specific use case though.

Short term, all I need is to do docker logs when my logs are still shipping elsewhere!

Exactly. Having logs shipped off-box is super important, but having docker logs available to quickly debug would be great.

+1 the ability to use local docker logs when another driver is used solves my use case, especially if true multiple drivers could be handled via a plugin

@shane-axiom Even if it takes some time to get the feature into Docker, a plugin can declare that it supports reading logs, so this can be wholly handled by a plugin.
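
For reference, the logging-plugin HTTP API includes a LogDriver.Capabilities endpoint through which a plugin advertises read support. A rough sketch of probing it over the plugin socket (the socket path is a placeholder, and the exact response shape may vary by version):

```
$ curl --unix-socket /run/docker/plugins/myplugin.sock \
    -X POST http://localhost/LogDriver.Capabilities -d '{}'
{"Cap":{"ReadLogs":true}}
```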

I'm also looking for the ability to use docker logs with another plugin being used. Multiple plugin support or a meta logging plugin that sends to other logging plugins would be a nice to have, but not required for the environments I'm supporting right now.

+1 for the ability to use docker logs with another plugin being used.

+1

+1 for the ability to use docker logs with another plugin being used.

+1 for docker logs along with a logging driver

+1. This would be extremely useful.

+1. It's very difficult to debug without having docker logs available.

This issue is cascading into downstream projects - for instance, since Docker is powering some Nomad tasks, there is no way to multiplex Nomad log streams with different drivers when using Docker-driven tasks. It'd be great if this feature got more attention.

Sending logs to a remote server using one of the logging drivers while still retaining the local logs would be a great option to have

This is available in Docker EE, btw.

@cpuguy83 Is this planned for a Docker CE release soon, or will it remain an EE-only feature?

@sudo-bmitch I can't answer that.

The upcoming Docker 20.10 release will come with the feature described above ("dual logging"), which uses the local logging driver as a ring buffer, making docker logs work when using a logging driver that does not have "read" support (for example, logging drivers that send logs to a remote logging aggregator).
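
As an illustration, with dual logging a container can use a remote-only driver and still be readable locally (the gelf address is a placeholder; the cache-max-size option shown is one of the dual-logging cache tunables):

```
$ docker run -d --name app \
    --log-driver gelf \
    --log-opt gelf-address=udp://graylog.example.com:12201 \
    --log-opt cache-max-size=20m \
    nginx

$ docker logs app    # served from the local ring buffer, even though gelf has no read support
```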

This feature was previously available in Docker Enterprise, but has been open-sourced in https://github.com/moby/moby/pull/40543

I think that addresses the essence of this issue (being able to use docker logs, regardless of the logging driver that's used for a container).

For clarity: there are currently no plans to allow specifying multiple arbitrary logging drivers per container. I don't think we want to add that complexity, and it is orthogonal to the feature request reported here.

I'm closing this ticket as I think this is resolved by the above.
