Containers-roadmap: [EKS] Cloudwatch Logs for Containers

Created on 14 Dec 2018 · 19 comments · Source: aws/containers-roadmap

Tell us about your request
I would like to have a feature where I can enable a "Send all container logs to CloudWatch Logs" flag. And then forget about managing the tooling to push the logs to CloudWatch Logs.

Which service(s) is this request for?
EKS

Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?
I would like to have an aggregated, searchable, managed logging service for the logs of my Kubernetes Pods, so that I can, for example, search for a specific error through all my K8s Pod logs. Currently this can be achieved with a Helm chart: https://github.com/helm/charts/tree/master/incubator/fluentd-cloudwatch
But I would rather have a fully managed experience where I don't have to worry about Helm or IAM, for example.

Are you currently working around this issue?
Not yet, but I would probably start using the fluentd-cloudwatch Helm chart, roughly as sketched below.
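A minimal sketch of that workaround, assuming Helm 3 and the archived incubator chart repository; the --set keys mirror the chart's values.yaml as I recall it and may have changed, so verify them against the chart before relying on them:

    # Add the (archived) incubator chart repository and install fluentd-cloudwatch.
    # awsRegion, logGroupName and rbac.create are assumed chart values; check the
    # chart's values.yaml for your chart version. Names below are placeholders.
    helm repo add incubator https://charts.helm.sh/incubator
    helm install fluentd-cloudwatch incubator/fluentd-cloudwatch \
      --namespace logging --create-namespace \
      --set awsRegion=eu-west-1 \
      --set logGroupName=/eks/my-cluster/containers \
      --set rbac.create=true

Even then, the worker nodes (or a dedicated role) still need IAM permissions to write to CloudWatch Logs, which is exactly the kind of wiring I would like EKS to manage for me.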

Additional context
For comparison, Google Kubernetes Engine has a similar feature: check the box marked "Enable Stackdriver Logging service" and your logs start flowing into their hosted logging service. This request differs from https://github.com/aws/containers-roadmap/issues/26, where the control plane logs are requested, not the container logs.

EKS Proposed

Most helpful comment

Two requests:

CloudWatch Log Groups

We would like to see this implemented in a way that can automatically create a CloudWatch Logs LogGroup per Kubernetes Deployment.

The FluentD-CloudWatch plugin sends all logs to a single LogGroup and each Pod is represented as a Log Stream within that group. This is a poor experience when attempting to find Pod Logs.

Log Contents

The log agent used should be able to properly handle nested escaped JSON.
When log lines are emitted from a Container as JSON, the Docker JSONFile log driver adds escaping. Especially with CloudWatch Logs Insights it's important that these are properly parsed back out to valid JSON so they can be searched and aggregated.

As vincentheet noted, the experience on other managed providers is extremely streamlined; it would be great to have something similar available on EKS.

All 19 comments

Feedback: currently there is no "How-To" that explains the steps for sending pod log files to CloudWatch. There is only a "How-To" for sending CloudWatch logs to Elasticsearch.
I think this missing "How-To" could be a good starting point.
However, I support the idea of an even better integration that requires less configuration effort.

If nothing else, it would at least be beneficial if there were documentation on how to deploy Fluentd to push all container logs to CloudWatch Logs. Although that is theoretically an option on its own, having a Fluentd DaemonSet and updated IAM roles with CloudWatch permissions included in the samples would be preferable (the IAM side could look roughly like the sketch below).
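On the IAM side, one common way to grant those permissions is to attach the AWS-managed CloudWatchAgentServerPolicy to the worker node instance role; a minimal sketch, where the role name is a placeholder for your cluster's node role:

    # Attach the AWS-managed CloudWatch policy (logs:CreateLogGroup, CreateLogStream,
    # PutLogEvents, DescribeLogStreams, ...) to the worker node instance role so a
    # log-shipping DaemonSet on the nodes can write to CloudWatch Logs.
    # "my-eks-node-role" is a placeholder for your node instance role name.
    aws iam attach-role-policy \
      --role-name my-eks-node-role \
      --policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy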

@mfacenet check this out:
https://eksworkshop.com/logging

I see at least 3 options here:
1) Logging via the CloudWatch Agent -> AWS Elasticsearch
2) Logging via Logstash -> AWS Elasticsearch
3) Logging via Fluentd -> AWS Elasticsearch

At least option 1 could be simplified and documented by EKS.
One additional idea for option 1:
Maybe the EKS AMI could optionally ship with the CloudWatch Agent pre-installed?

Two requests:

CloudWatch Log Groups

We would like to see this implemented in a way that can automatically create a CloudWatch Logs LogGroup per Kubernetes Deployment.

The FluentD-CloudWatch plugin sends all logs to a single LogGroup and each Pod is represented as a Log Stream within that group. This is a poor experience when attempting to find Pod Logs.

Log Contents

The log agent used should be able to properly handle nested escaped JSON.
When log lines are emitted from a Container as JSON, the Docker JSONFile log driver adds escaping. Especially with CloudWatch Logs Insights it's important that these are properly parsed back out to valid JSON so they can be searched and aggregated.
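To illustrate: when a container prints one JSON object per line, the json-file driver stores it double-encoded on the node, so the inner object reaches CloudWatch as an escaped string unless the shipping agent re-parses it. A rough sketch of what the node-local file looks like (paths and values are illustrative):

    # The container writes a structured line such as:
    #   {"level":"error","msg":"payment failed","order_id":42}
    # The Docker json-file driver wraps and escapes it on the node:
    cat /var/lib/docker/containers/<container-id>/<container-id>-json.log
    # {"log":"{\"level\":\"error\",\"msg\":\"payment failed\",\"order_id\":42}\n","stream":"stdout","time":"2019-01-01T00:00:00.000000000Z"}
    # Without re-parsing the escaped "log" field, CloudWatch Logs Insights only
    # sees a single opaque string instead of queryable fields.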

As vincentheet noted, the experience on other managed providers is extremely streamlined; it would be great to have something similar available on EKS.

For folks who are still waiting:

We (Outcold Solutions) just released support in Collectord (a container-native log-forwarding tool) for CloudWatch Logs and S3/Glue: https://collectord.io/blog/2019-03-13-aws-centralized-logging-for-kubernetes/

It is very easy to install, lets you keep the logs long-term on S3 (compressed, formatted, and partitioned in a Glue/Hive-compatible format) and analyze them with Athena and QuickSight, while also keeping the logs in CloudWatch Logs.

But just to be clear: we aren't part of AWS, and we aren't an open-source company. Collectord is proprietary software that we distribute, including through the AWS Marketplace. We charge the minimum possible amount for licensing. At the bottom of the blog post I shared you can find a calculator that estimates the monthly cost of the AWS services and of our license.

This is the only related issue I saw for sending EKS container logs to CloudWatch, so it seems worth mentioning these docs: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Container-Insights-setup-logs.html

Container Insights for EKS seems to be in developer preview so maybe this issue should move under that column?

@vincentheet would you consider CloudWatch Container Insights as a solution to this request? If not, what could we add to fulfill the need?

@tabern does CloudWatch Container Insights support shipping the raw container logs to CloudWatch logs?

@vincentheet would you consider CloudWatch Container Insights as a solution to this request? If not, what could we add to fulfill the need?

Although this might work, it's not what I intended with my feature request. I currently have Fluentd running in my EKS clusters, but it involves some manual intervention and maintenance.

But I would rather have a fully managed experience where I don't have to worry about Helm or IAM for example.

For comparison, Google Kubernetes Engine has a similar feature. Check the box marked 'Enable Stackdriver Logging service' and your logs start flowing into their hosted logging service.

Unfortunately, looking at the documentation, it seems to be nowhere near the simplicity I would like: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Container-Insights-setup-logs.html

You can now use Fluent Bit to send logs from K8s to CloudWatch, see the following:

Any updates on the native solution?
Or is Fluent Bit the recommended way?
How would it work with Fargate, where DaemonSets are not available?

Personally, I've found the Quick Start setup very user-friendly: one line to set up and send logging & monitoring data to CloudWatch, roughly as sketched below.
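For reference, a sketch of what that one-liner looks like; the manifest path below is the Container Insights quick start location as I recall it and may have moved, so verify it against the current Container Insights documentation before running:

    # Fetch the combined CloudWatch agent + Fluentd quick start manifest, fill in
    # the cluster name and region, and apply it. The raw.githubusercontent.com
    # path is assumed from the Container Insights docs and may have changed.
    ClusterName=my-cluster
    RegionName=eu-west-1
    curl -s https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/quickstart/cwagent-fluentd-quickstart.yaml \
      | sed "s/{{cluster_name}}/${ClusterName}/;s/{{region_name}}/${RegionName}/" \
      | kubectl apply -f -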

The log agent used should be able to properly handle nested escaped JSON.
When log lines are emitted from a Container as JSON, the Docker JSONFile log driver adds escaping. Especially with CloudWatch Logs Insights it's important that these are properly parsed back out to valid JSON so they can be searched and aggregated.

You can achieve that by adding a parser filter to the Fluentd configuration. This allows you to query nested JSON fields from your log output in CloudWatch Logs Insights.

    # Fluentd filter that re-parses the escaped JSON written by the Docker
    # json-file driver back into structured data:
    #  - key_name log: parse the "log" field of each record
    #  - reserve_time / reserve_data: keep the original time and the other fields
    #  - emit_invalid_record_to_error false: don't route non-JSON lines to the error stream
    #  - hash_value_field parsed_log: nest the parsed fields under "parsed_log"
    <filter **>
        @type parser
        format json
        key_name log
        reserve_time true
        reserve_data true
        emit_invalid_record_to_error false
        hash_value_field parsed_log
    </filter>

See https://medium.com/@k5trismegistus/use-cloudwatch-to-monitor-eks-cluster-and-gather-logs-from-rails-app-on-kubernetes-d86f4d9439f7
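Once the nested fields land under parsed_log, they can be queried from CloudWatch Logs Insights, for example via the CLI; a minimal sketch, where the log group name is a placeholder for whatever your Fluentd output writes to:

    # Query the parsed JSON fields with CloudWatch Logs Insights from the CLI.
    # "/eks/my-cluster/containers" is a placeholder for the log group your
    # Fluentd output plugin writes to; field names depend on your log format.
    aws logs start-query \
      --log-group-name /eks/my-cluster/containers \
      --start-time "$(date -d '1 hour ago' +%s)" \
      --end-time "$(date +%s)" \
      --query-string 'fields @timestamp, parsed_log.level, parsed_log.msg
                      | filter parsed_log.level = "error"
                      | sort @timestamp desc
                      | limit 20'
    # The command returns a queryId; fetch results with:
    #   aws logs get-query-results --query-id <queryId>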

Unfortunately Container Insights is not available in GovCloud.

Is this issue going to cover Fargate as well?

At the moment the solution is to run logrotate and Fluentd containers alongside each application pod. This increases cost because of the additional logging-related containers; for a small application container, the logging setup costs more than the application itself. Is there any better solution available?

@pallabganai this is what we are working on for Fargate. The intention is to hide the sidecar ("hidecar") so that you don't have to deal with it.

hidecar - learnt something new today!

It looks like the native Fargate logging solution has been out since the beginning of the month. This is a really big deal, as it was a huge con when it came to switching more of our services to Fargate!
