We currently have documentation on how to run Beats on Kubernetes. We can expand it to offer an easy path to use Beats together with Elastic Cloud on Kubernetes (ECK).
The end goal would be to have a very straightforward way to deploy Beats automatically configured to output to an Elasticsearch + Kibana cluster managed by ECK.
The steps would probably look like this:
Configuring Beats output will require reading from ECK-generated secrets to retrieve credentials and certificates to access Elasticsearch or Kibana.
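For illustration, assuming an ECK-managed cluster named `quickstart` (placeholder name), ECK already creates secrets such as `quickstart-es-elastic-user` (the `elastic` user's password) and `quickstart-es-http-certs-public` (the HTTP certificate/CA), and a Beat manifest could reference them roughly like this (a sketch of DaemonSet fragments only, not a full manifest):

```yaml
# Sketch: fragments of a Beat DaemonSet, assuming an ECK cluster named "quickstart"
# in the same namespace.
# In the Beat container:
env:
  - name: ELASTICSEARCH_PASSWORD
    valueFrom:
      secretKeyRef:
        name: quickstart-es-elastic-user   # created by ECK; the key "elastic" holds the password
        key: elastic
volumeMounts:
  - name: es-certs
    mountPath: /mnt/elastic/tls.crt        # arbitrary mount path, referenced from the Beat config
    subPath: tls.crt
    readOnly: true
# In the Pod spec:
volumes:
  - name: es-certs
    secret:
      secretName: quickstart-es-http-certs-public   # public HTTP certs published by ECK
```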
It would be awesome to end up with separate steps but also an all-in-one manifest that deploys everything with a single command.
I would focus this issue on Metricbeat + Filebeat; others can follow later.
cc @sorantis @agup006 not sure if there is any effort started around this, would love your input here
Thanks for starting this @exekias - @adamquan has a blog on some of these topics and adding @pebrc @sebgl @idanmo in case folks have related content.
@exekias Sounds like a good idea. I wonder if it makes sense to ship the mentioned manifest as a native citizen in every ECK deployment, for self-monitoring?
Hi,
I installed minikube & ECK on Windows 10. I'd love to contribute as well, because the ECK quick start commands are written for Linux ;)
I also tried installing Filebeat against ECK and enabling SSL without verification, but this didn't work.
Here is the part I changed in the filebeat.yml config:
output.elasticsearch:
  hosts: ['https://${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
  username: ${ELASTICSEARCH_USERNAME}
  password: ${ELASTICSEARCH_PASSWORD}
  protocol: https
  ssl.enabled: true
  ssl.verification_mode: none
I saw the document from @adamquan, which is great! Is that the basis for a blog post?
I am using a combination of
output.elasticsearch:
  hosts: ['https://${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
  username: ${ELASTICSEARCH_USERNAME}
  password: ${ELASTICSEARCH_PASSWORD}
  ssl.certificate_authorities:
    - /mnt/elastic/tls.crt
with this DaemonSet, assuming an ECK-managed Elasticsearch cluster named hello-eck exists in the same namespace:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: default
  labels:
    k8s-app: filebeat
spec:
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.3.0
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: hello-eck-es-http
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          valueFrom:
            secretKeyRef:
              key: elastic
              name: hello-eck-es-elastic-user
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: varlog
          mountPath: /var/log
          readOnly: true
        - name: es-certs
          mountPath: /mnt/elastic/tls.crt
          readOnly: true
          subPath: tls.crt
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: varlog
        hostPath:
          path: /var/log
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
      - name: es-certs
        secret:
          secretName: hello-eck-es-http-certs-public
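For completeness (not part of the comment above): the `filebeat-config` ConfigMap referenced by this DaemonSet is not shown here. A minimal sketch, assuming the inputs from the stock filebeat-kubernetes.yaml (Kubernetes autodiscover with hints plus add_cloud_metadata) combined with the output section from above, could look like this:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: default
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    # Input taken from the stock manifest: discover containers and use hints to configure modules
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          hints.enabled: true
    processors:
      - add_cloud_metadata: ~
    # Output to the ECK-managed cluster, trusting its CA mounted from the certs secret
    output.elasticsearch:
      hosts: ['https://${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
      ssl.certificate_authorities:
        - /mnt/elastic/tls.crt
```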
Thank you @blookot! It seems the blog post went out already: https://www.elastic.co/blog/getting-started-with-elastic-cloud-on-kubernetes-data-ingestion, which is pretty close to what we want to document.
Thanks for sharing @pebrc, that looks perfect! I like that there is an easy way to reuse existing secrets to grab certs and passwords. It sounds like we have all the ingredients to make a really cool case.
I would love to see an example where the Beats container authenticates as a reduced-privilege user. I have something similar to the above working, but giving Beats containers superuser rights isn't right. The user I've created through Kibana doesn't seem to work; Beats gets a 401 error from Elasticsearch.
@wfhartford we have a known issue in ECK 1.0.0-beta1 where the native realm is accidentally disabled. I am wondering if you are hitting that https://www.elastic.co/guide/en/cloud-on-k8s/current/release-highlights-1.0.0-beta1.html#k8s_native_realm_disabled
This will be resolved from:
@pebrc Thanks for pointing me in the right direction. I added `xpack.security.authc.realms.native.native0.order: 0` to my Elasticsearch deployment descriptor to enable native authentication and I'm on the right track. In case it helps anyone, here is my full YAML file:
apiVersion: elasticsearch.k8s.elastic.co/v1beta1
kind: Elasticsearch
metadata:
  namespace: juicy
  name: juicy-beats
spec:
  version: 7.4.0
  nodeSets:
    - name: default
      count: 1
      config:
        node.master: true
        node.data: true
        node.ingest: true
        node.store.allow_mmap: false
        xpack.security.authc.realms.native.native0.order: 0
      volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 5Gi
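On the reduced-privilege user mentioned above: once a dedicated user with the required Beats publishing privileges exists (created through Kibana or the Elasticsearch security API), its credentials can live in a plain Kubernetes Secret instead of reusing the `elastic` superuser secret. A sketch, where the secret name, user name, and namespace are assumptions:

```yaml
apiVersion: v1
kind: Secret
metadata:
  namespace: juicy              # namespace assumed; use the one your Beats run in
  name: filebeat-user           # name assumed
type: Opaque
stringData:
  username: filebeat-writer     # a user created beforehand in Kibana / the security API
  password: changeme            # replace with the real password
```

The Filebeat DaemonSet's ELASTICSEARCH_USERNAME and ELASTICSEARCH_PASSWORD env vars can then point at this secret via secretKeyRef, the same way the hello-eck-es-elastic-user secret is referenced earlier in this thread.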
I added Metricbeat's Elasticsearch monitoring to the ConfigMap (above system.yml) in metricbeat-kubernetes.yaml:
monitoring.yml: |-
  - module: elasticsearch
    metricsets:
      - ccr
      - cluster_stats
      - enrich
      - index
      - index_recovery
      - index_summary
      - ml_job
      - node_stats
      - shard
    period: 10s
    hosts: ['https://${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
    username: ${ELASTICSEARCH_USERNAME}
    password: ${ELASTICSEARCH_PASSWORD}
    ssl.verification_mode: none
    xpack.enabled: true
EDIT: the above actually does NOT work, because it monitors the load-balanced service address rather than each Elasticsearch node. I assume we are still waiting for an in-pod Beat.
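One possible workaround, not from this thread and untested, would be to let Metricbeat discover the Elasticsearch pods directly via Kubernetes autodiscover, so every node is scraped on its pod IP instead of going through the service. The container-name condition and the reuse of the credential env vars are assumptions; with xpack.enabled: true the module picks its metricsets automatically:

```yaml
# Untested sketch: per-pod Elasticsearch monitoring via Kubernetes autodiscover,
# instead of going through the load-balanced <cluster>-es-http service.
metricbeat.autodiscover:
  providers:
    - type: kubernetes
      templates:
        # Assumption: ECK runs Elasticsearch in a container named "elasticsearch"
        - condition:
            equals:
              kubernetes.container.name: elasticsearch
          config:
            - module: elasticsearch
              xpack.enabled: true
              period: 10s
              # ${data.host} resolves to the discovered pod's address
              hosts: ['https://${data.host}:9200']
              username: ${ELASTICSEARCH_USERNAME}
              password: ${ELASTICSEARCH_PASSWORD}
              ssl.verification_mode: none
```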
I think we can close it since it is now tracked at elastic/cloud-on-k8s#2143.
Hi, I fired up the ECK stack and tried installing Filebeat; however, it fails:
`ERROR pipeline/output.go:100 Failed to connect to backoff(elasticsearch(http://elastic-es-http:9200))`
`kubectl port-forward -n=elastic-stack service/elastic-es-http 9200` does work fine from my local machine.
All resources are in the same namespace (_elastic-stack_).
Hey @Berndinox!
This stuff is now getting tracked on https://github.com/elastic/cloud-on-k8s/issues/2143.
Please have a look at the manifests that already exist at https://github.com/elastic/cloud-on-k8s/tree/master/config/recipes/beats.
Thanks for the links!
https://www.elastic.co/guide/en/beats/filebeat/current/running-on-kubernetes.html
is pointing to https://raw.githubusercontent.com/elastic/beats/7.6/deploy/kubernetes/filebeat-kubernetes.yaml,
and the scheme (_https_) is missing there.
The manifest that is expected to work along with ECK is not missing it though. See https://github.com/elastic/cloud-on-k8s/blob/master/config/recipes/beats/2_filebeat-kubernetes.yaml#L26
Yes, but the official documentation (elastic.co) is pointing to https://raw.githubusercontent.com/elastic/beats/7.6/deploy/kubernetes/filebeat-kubernetes.yaml, where it's definitely not in place. Thanks for your fast response! BR
Well, the official documentation points to a general-purpose manifest, which indeed cannot fit all cases, so it only includes a very basic configuration (with no HTTPS).
Making the manifests work out of the box with ECK is a special case, and it will be documented in the ECK repository (elastic/cloud-on-k8s#2143). So since you are working with ECK, the manifests to use for now are the ones in the recipes directory linked above.
Well, I just followed the guide on elastic.co exactly.