Minikube: Elastic Stack (ELK) with Minikube

Created on 14 Jul 2016 · 18 comments · Source: kubernetes/minikube

Maybe I'm missing something, but how is it possible to set minikube to use

KUBE_LOGGING_DESTINATION=elasticsearch

As highlighted in this document:

http://kubernetes.io/docs/getting-started-guides/logging-elasticsearch/
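
For reference, that guide sets the variable when bringing a cluster up with kube-up.sh, roughly like this (kube-up.sh does not apply to minikube, so this is just for context):

KUBE_ENABLE_NODE_LOGGING=true KUBE_LOGGING_DESTINATION=elasticsearch cluster/kube-up.sh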

kind/feature

All 18 comments

Looks like this runs as an addon. We should add a feature to minikube that allows for configurable addons. Perhaps minikube addons list/enable/disable, and potentially store this state in the config file so that it is preserved.
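
Roughly something like this (a sketch of the proposed interface, none of it exists yet):

minikube addons list                    # show available addons and whether they are enabled
minikube addons enable <addon-name>     # mark the addon enabled and persist that in the config
minikube addons disable <addon-name>    # turn it back off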

Until that point, is there a best practice for combining logs using minikube?

May I ask why you want to do that? It seems like unnecessary overhead for a single-node install.

You may. I have a lot of pods running, and I would like a unified log view for developers that would match what we have in production as closely as possible.

Hey,

I think you can just take the yaml files from here and run them in your cluster:
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch

to get the fluentd/Elasticsearch addon running.

To add them to your cluster, you have a couple of options (a rough sketch of both follows the list):

  1. Remove the kubernetes.io/cluster-service: "true" line from every yaml file, then kubectl create -f for all the yamls.
  2. Drop the yaml files into the /etc/kubernetes/addons/ directory inside the VM. The pods will then get created automatically by the addon manager.
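
For option 1, something like this should work (assuming GNU sed and that the addon manifests were downloaded into a local fluentd-elasticsearch/ directory); option 2 only needs the files copied into the VM:

# Option 1: drop the cluster-service label, then create the resources directly
sed -i '/kubernetes.io\/cluster-service: "true"/d' fluentd-elasticsearch/*.yaml
kubectl create -f fluentd-elasticsearch/

# Option 2: copy the same yamls into /etc/kubernetes/addons/ inside the VM
# (for example over minikube ssh) and let the addon manager create the pods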

@keithballdotnet Did that work for you? Just wondering if you hit https://github.com/kubernetes/kubernetes/issues/13313 ?

@jimmidyson so in the end I have used the following yaml...

https://gist.github.com/keithballdotnet/2fca9dd542ea2f244ed8571af34ab28f

This allows me to catch all the logging from all the containers; however, it does mean I lose the nice context logging when I am dumping JSON to stdout/stderr.

Example entry:

{
  "_index": "logstash-2016.07.26",
  "_type": "fluentd",
  "_id": "AVYmmL24mmDiGo1LKjNg",
  "_score": null,
  "_source": {
    "log": "{\"exec\":\"./davadmin\",\"host\":\"davadmin-3899838813-sxbk3\",\"level\":\"debug\",\"msg\":\"Connection to MySQL OK\",\"source\":\"service.go:41\",\"time\":\"2016-07-26T09:45:37Z\"}\n",
    "stream": "stderr",
    "tag": "docker.var.lib.docker.containers.7fd39efe3f653e36092adf73e0e1fe1881eca1d6b2da336b74da7f16c4791e1f.7fd39efe3f653e36092adf73e0e1fe1881eca1d6b2da336b74da7f16c4791e1f-json.log",
    "@timestamp": "2016-07-26T09:45:37+00:00"
  },
  "fields": {
    "@timestamp": [
      1469526337000
    ]
  },
  "sort": [
    1469526337000
  ]
}

As you can see, I have lost the ability to parse metadata from the log entries, as I end up with JSON inside JSON, so I am not sure I am happy with this solution. I might use a direct connection to Elasticsearch where I can, but my concern is what happens when Elasticsearch is down for some reason.

It appears, though, that best practice for logging in k8s is still an ongoing discussion: #24677

Any tips or advice would be warmly received.

I lose the nice context logging when I am dumping JSON to stdout/stderr.

The fluentd Kubernetes metadata filter plugin (which I created, and which is used in the specified image) can actually handle that by merging JSON log fields into the fluentd record (the merge_json_log config option in https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter). I'm surprised that doesn't work out of the box, but perhaps they're disabling it for some reason.
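
If the image's config has it disabled, the relevant filter stanza would look roughly like this (the tag pattern depends on how the sources tag container logs):

<filter kubernetes.**>
  @type kubernetes_metadata
  merge_json_log true    # parse JSON log lines and merge their fields into the record
</filter>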

my concern is that what happens when elasticsearch is down for some reason.

fluentd has built-in output buffering and retries.
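
A buffered Elasticsearch output with retries would look roughly like this (host, port and the limits here are only illustrative):

<match **>
  @type elasticsearch
  host elasticsearch-logging
  port 9200
  logstash_format true
  # buffer chunks in memory and keep retrying while Elasticsearch is unreachable
  buffer_type memory
  buffer_chunk_limit 2M
  buffer_queue_limit 8
  flush_interval 5s
  retry_wait 1s
  max_retry_wait 30s
  disable_retry_limit true
</match>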

@jimmidyson Note that I am using the basic elasticsearch image. Do I also need to be running this one: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch/es-image to get the filter working?

Closed as dupe of #432

FYI: I got @keithballdotnet's approach working in minikube v0.9.0, but had to make some tweaks:

  • used image gcr.io/google_containers/fluentd-elasticsearch:1.11
  • changed the volume paths, since minikube uses symlinked mounts, as follows (note the /mnt/sda1 prefixes):
...
  spec:
    containers:
    - name: fluentd-elasticsearch
      image: gcr.io/google_containers/fluentd-elasticsearch:1.11
      volumeMounts:
      - name: varlibdockercontainers
        mountPath: /mnt/sda1/var/lib/docker/containers
      - name: varlibboot2docker
        mountPath: /var/lib/boot2docker
      - name: varlog
        mountPath: /var/log
    volumes:
    - hostPath:
        path: /mnt/sda1/var/lib/boot2docker
      name: varlibboot2docker
    - hostPath:
        path: /mnt/sda1/var/lib/docker/containers
      name: varlibdockercontainers
    - hostPath:
        path: /var/log
      name: varlog
...

The metadata filter mentioned by jimmidyson works fine for unfolding the JSON in version 1.11 of the image, but it seems this commit removed it from version 1.19 onwards.

If anyone else tries to get the ELK stack working on Minikube and is seeing pods just instantly terminating: it seems that the default 1024 MB of memory is not enough to get Elasticsearch going, and you need to beef up your Minikube VM.

Related #587.

@Starefossen

minikube start --memory=8192

Can help there.
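
Note that the flag only takes effect when the VM is created, so an existing minikube VM has to be recreated first (this wipes the cluster):

minikube delete
minikube start --memory=8192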

Followed @woldan's approach. Minor note: the 1.19 image actually has the plugin installed; @aledbf had just commented out the filter's entry in the fluentd configuration. I mounted a new config volume with the filter uncommented, then overrode the td-agent command to start with the modified config, and voilà, the metadata was processed properly. Still no idea why they commented out the filter setting (but left the plugin).
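
Roughly, assuming a ConfigMap (say fluentd-custom-config) holding a copy of the image's td-agent.conf with the filter block uncommented, the changes to the DaemonSet pod spec would look like this (paths and names are illustrative):

      containers:
      - name: fluentd-elasticsearch
        image: gcr.io/google_containers/fluentd-elasticsearch:1.19
        command: ["td-agent", "-c", "/etc/td-agent-custom/td-agent.conf"]
        volumeMounts:
        - name: custom-config
          mountPath: /etc/td-agent-custom
      volumes:
      - name: custom-config
        configMap:
          name: fluentd-custom-config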

@diakula That was re-enabled in this commit. Here's my final yaml, for posterity.

A bit new here. I followed @keithharvey's approach.
Everything looks like it is properly up, however it seems that Kibana cannot communicate with Logstash.
I can see an "Elasticsearch plugin is red" error.
Any idea what I am missing?


@woldan god bless you, you just ended two hours of debugging

In the end I got it working too; you need to carefully read through this whole post. I used @diakula's solution with only one small change: I forced Kibana to use tag 5.4.3 instead of latest. It seems that the latest Kibana image wants Elasticsearch 5.5, which is not yet available on Docker Hub, so I simply rolled back (I need a working POC by tomorrow, and this was a quick win). A more elegant solution would have been to use the Elastic registry with Elasticsearch 5.5; I tried it, but it didn't work out of the box (the image kept crashing) and I didn't have time to investigate.
