Containers-roadmap: [EKS] [request]: FireLens on EKS Fargate

Created on 15 Jan 2020 · 19 comments · Source: aws/containers-roadmap

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Tell us about your request

Fargate for ECS has FireLens, a managed observability experience built around the open source projects Fluentd and Fluent Bit. AWS for Fluent Bit provides a lightweight solution for AWS customers to process and ship telemetry data to many destinations.

We are evaluating ways in which we can make it easy, simple, and reliable to use AWS for Fluent Bit on EKS Fargate.

Which service(s) is this request for?
EKS Fargate

This solves #618

EKS Fargate FireLens

All 19 comments

@PettitWesley can we use Fluentd with EKS Fargate to send data to AWS Elasticsearch, since the Fluentd DaemonSet implementation is not supported yet?

@farooqdevops At the moment, you can, but only if you make your app container write logs to a volume and have Fluentd read the files on the volume.
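
For what it's worth, the same file-tailing approach works with the Fluent Bit sidecar shown further down this thread; a minimal sketch of an Elasticsearch output section, where the domain endpoint, index, and region are placeholders rather than values from this thread:

[OUTPUT]
    Name        es
    Match       *
    # placeholder Amazon Elasticsearch domain endpoint
    Host        my-domain.us-west-2.es.amazonaws.com
    Port        443
    Index       app-logs
    AWS_Auth    On
    AWS_Region  us-west-2
    tls         On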

@PettitWesley Hi, sorry for being a noob, but is it possible that you can give some more details on how to do this? I would be forever grateful. What kind of volume?

@cahman we are working on a blog post that will walk users through how to do this. The YAML below is an example of how you'd configure a pod running on Fargate to do that. There will be more info in the blog. This example logs to CloudWatch but you can change the Fluent Bit configuration to log to other backends.

apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentbit-config
data:
  # Configuration files: server, input, filters and output
  # ======================================================
  fluent-bit.conf: |
    # Tail the files the app writes to the shared /var/log volume
    [INPUT]
        Name              tail
        Tag               *.logs
        Path              /var/log/*.log
        DB                /var/log/logs.db
        Mem_Buf_Limit     5MB
        Skip_Long_Lines   On
        Refresh_Interval  10
    # Send everything to CloudWatch Logs
    [OUTPUT]
        Name              cloudwatch
        Match             *
        region            us-west-2
        log_group_name    eks-fargate-logs
        log_stream_prefix fargate-
        auto_create_group true
---
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  serviceAccountName: fargate
  containers:
  - name: count
    image: busybox
    args:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$i: $(date) this is an app log" >> /var/log/app.log;
        echo "$(date) $(uname -r) $i" >> /var/log/system.log;
        i=$((i+1));
        sleep 1;
      done
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: count-agent
    image: amazon/aws-for-fluent-bit:latest
    imagePullPolicy: Always
    ports:
      - containerPort: 2020
    env:
    # not referenced by the fluent-bit.conf above
    - name: FLUENTD_HOST
      value: "fluentd"
    - name: FLUENTD_PORT
      value: "24224"
    volumeMounts:
    - name: varlog
      mountPath: /var/log
    - name: fluentbit-config
      mountPath: /fluent-bit/etc/
  terminationGracePeriodSeconds: 10
  volumes:
  # emptyDir shared between the app container and the Fluent Bit sidecar
  - name: varlog
    emptyDir: {}
  - name: fluentbit-config
    configMap:
      name: fluentbit-config

Thanks for this configuration sample @mreferre! Will this issue be updated when the blog post is published?

@jwvanhollebeke yes this issue is the public facing tracking element of this feature and will be updated as it progresses.

Whilst this seems reasonable on the surface, sending your application logs to a file is one thing, but when using services developed by others (take ALB-ingress-controller for example) it's less trivial to make these changes.
Rather than just pulling the image as part of the k8s deployment, I'd have to pull the image in a build pipeline, modify how/where it writes logs, and then host the image myself - and then maintain that for the foreseeable future too.

In addition, in circumstances where the application encounters an error before (or outside of) initiating a log driver, those logs would not be exported either?

@dalgibbard what you are pointing out is correct for the short-term bypass we were suggesting (i.e. writing to a log file). We want FireLens to be able to get logs from stdout so those apps do not need to be changed.

@mreferre awesome, thanks for confirming!

@mreferre

We are also planning to use the configuration that you shared in this thread to send logs from our K8s jobs (jobs that run to completion) running on EKS Fargate, and we will start writing logs to files to achieve this.
I have a question: since the pods backing K8s jobs complete as soon as the application container exits with a success exit code, do we need to add any extra configuration to make sure the Fluent Bit container does not miss shipping any logs to CloudWatch during the pod's transition from running to completed?

My question might be a little obvious, and please excuse me for that :)

Hi

Thanks for sharing this. Can you please let me know which service account serviceAccountName: fargate refers to here?

@vanagarwal it's referring to the service account created for the pod running the logging sidecar. But here is the post (it was just published earlier today): https://aws.amazon.com/blogs/containers/how-to-capture-application-logs-when-using-amazon-eks-on-aws-fargate/
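
For anyone wiring this up themselves: the pod's service account is what carries the IAM permissions for the chosen log destination. A minimal sketch using IAM Roles for Service Accounts, where the role ARN is a placeholder and the role would need CloudWatch Logs permissions (logs:CreateLogGroup, logs:CreateLogStream, logs:DescribeLogStreams, logs:PutLogEvents) for the configuration above:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: fargate
  namespace: default
  annotations:
    # Placeholder role ARN; the role must allow the API calls made by
    # the Fluent Bit output plugin (CloudWatch Logs in this example)
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/eks-fargate-logging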

this solves #618

To define the log destination of your choice you use Fluent Bit's configuration language. You can choose between CloudWatch, Elasticsearch, Kinesis Firehose and Kinesis Streams as outputs.

From the blog.
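
As an illustration of swapping destinations, a Kinesis Data Streams output in the same style; this is a sketch, the stream name is a placeholder, and the plugin name is the one shipped in aws-for-fluent-bit images:

[OUTPUT]
    Name    kinesis_streams
    Match   *
    region  us-west-2
    # placeholder Kinesis data stream name
    stream  my-kinesis-stream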

Is there going to be support for datadog as output on EKS Fargate? I tried it like so in my ConfigMap:

data:
  output.conf: |
    [OUTPUT]
        Name        datadog
        Match       *
...

I get:

Error from server: error when creating "config-dd-logging.yaml": admission webhook "0500-amazon-eks-fargate-configmaps-admission.amazonaws.com" denied the request: datadog is not a supported output plugin. Please fix the logging configmap

Also interested in what's the right approach for Sumologic?

@sandan @amagnus Right now the recommended approach for sending to partner destinations is to use Kinesis Firehose or CloudWatch as an intermediate destination. Firehose can send to Datadog.

With the initial launch complete, we are working on a number of improvements to EKS Fargate logging, and I think eventually we will be able to enable the datadog output plugin and have a secrets integration to safely pass it API tokens.
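
Concretely, the Fluent Bit side of that Firehose route would look something like this sketch; the delivery stream name is a placeholder, and the forwarding to Datadog is configured on the delivery stream itself, not in Fluent Bit:

[OUTPUT]
    Name             kinesis_firehose
    Match            *
    region           us-west-2
    # placeholder Firehose delivery stream (configured to forward to Datadog)
    delivery_stream  my-datadog-delivery-stream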

@PettitWesley In the case of Sumo Logic, the Fluent Bit configuration is a simple HTTP output. Why not allow it out of the box?
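
For reference, a plain HTTP output pointed at a Sumo Logic hosted collector might look like the sketch below; the host and URI are placeholders for a real collector endpoint, and this works in a sidecar even though the managed Fargate log router only admits its supported plugins:

[OUTPUT]
    Name    http
    Match   *
    # placeholder Sumo Logic HTTP source endpoint and token
    Host    collectors.sumologic.com
    Port    443
    URI     /receiver/v1/http/YOUR_TOKEN_HERE
    Format  json
    tls     On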
