Charts: [stable/elasticsearch] How to use the keystore with a secret

Created on 1 Jul 2019 · 8 comments · Source: helm/charts

_(This is a question/support request, not an issue)_

Hello,

I would like to configure ES backups to S3. I installed the repository-s3 plugin, which went fine, but the tricky part comes when configuring the credentials to access the bucket. I need to add two secure values to the keystore:

  • s3.client.default.access_key
  • s3.client.default.secret_key

According to the README, we can use the cluster.keystoreSecret setting to specify the name of the secret holding the secure settings. But when I look at how this secret is used, it seems to be mounted directly at:
/usr/share/elasticsearch/config/elasticsearch.keystore
(this seems to be the PR that added that functionality: https://github.com/helm/charts/pull/7477/files)

If I exec into one of the pods and look at that file, it's a binary file that I cannot read directly.
I tried creating a secret like this:

apiVersion: v1
kind: Secret
metadata:
  name: eskeystore
data:
  s3.client.default.access_key: xxx
  s3.client.default.secret_key: xxx

and set cluster.keystoreSecret to eskeystore, but it doesn't work.

So my question is: how can I generate the keystore with the two values listed above?

Thanks a lot!

All 8 comments

You actually need to add those settings to an ES keystore and store the keystore as the secret with something like:

kubectl create secret generic eskeystore --from-file=./elasticsearch.keystore

Then, just make sure cluster.keystoreSecret matches the secret name and restart your ES nodes for the change to take effect. I had to do this with a cluster I had already deployed, so I just added the S3 settings to the keystore on one of the nodes in the cluster, used kubectl to copy the keystore to my local machine, and then ran the above command to deploy it in my ES cluster's namespace.
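
If you prefer not to touch a running node, you can also build the keystore locally with the official Elasticsearch image. A rough sketch, assuming ACCESS_KEY and SECRET_KEY are exported in your shell and the image tag matches your chart's ES version:

docker run --rm \
  -e ACCESS_KEY -e SECRET_KEY \
  -v "$PWD:/out" \
  docker.elastic.co/elasticsearch/elasticsearch:6.8.1 \
  bash -c '
    # create the keystore if the image does not already ship one
    [ -f config/elasticsearch.keystore ] || bin/elasticsearch-keystore create
    echo "$ACCESS_KEY" | bin/elasticsearch-keystore add --stdin s3.client.default.access_key
    echo "$SECRET_KEY" | bin/elasticsearch-keystore add --stdin s3.client.default.secret_key
    cp config/elasticsearch.keystore /out/
  '

This leaves elasticsearch.keystore in the current directory, ready for the kubectl create secret command above.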

Thanks a lot @macgyver603! I did as you said: I manually added the values into the keystore and used kubectl cp to copy the file from the container to my local machine. Then I created the secret containing the keystore and configured the ES backup repository with something like this:

curl -H 'Content-Type: application/json' -X PUT \
-d '{"type": "s3", "settings": { "bucket": "my.awesome.bucket", "base_path": "elasticsearch/backup", "compress": "true", "storage_class": "standard" }}' \
http://elasticsearch-client.elastic-stack.svc:9200/_snapshot/backup
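
To confirm the repository was registered correctly, you can call the snapshot verify API against the same service:

curl -X POST http://elasticsearch-client.elastic-stack.svc:9200/_snapshot/backup/_verify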

Then in my elasticsearch curator configuration, I configured the snapshot:

configMaps:
  action_file_yml: |-
    ---
    actions:
      1:
        action: snapshot
        description: "Create snapshot"
        options:
          repository: backup
          continue_if_exception: False
          wait_for_completion: True
          disable_action: False
        filters:
          - filtertype: pattern
            kind: regex
            value: ".*$"
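
If you also want retention, a delete_snapshots action can sit alongside it. A sketch, assuming a 30-day retention window:

      2:
        action: delete_snapshots
        description: "Delete snapshots older than 30 days"
        options:
          repository: backup
          disable_action: False
        filters:
          - filtertype: age
            source: creation_date
            direction: older
            unit: days
            unit_count: 30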

I needed to fix the policy as well but I just followed the plugin documentation.

And now it's working, I get snapshots uploaded to S3! :)

That's a fairly good solution. I would have hoped to avoid those manual steps for the keystore; it's a bit dodgy, but it's going to be good enough for now!

@macgyver603 @maximerenou50 could you please provide an example of the file used to create the ./elasticsearch.keystore secret?

kubectl create secret generic eskeystore --from-file=./elasticsearch.keystore

Should it be

s3.client.default.access_key some-key
s3.client.default.secret_key some-secret

or

s3.client.default.access_key: some-key
s3.client.default.secret_key: some-secret

Thanks for your help

@rivetmichael
Log on to one of your pods:

kubectl exec -it <pod> -n <namespace> -- sh

Add your secrets to the keystore:

echo ACCESS_KEY | bin/elasticsearch-keystore add --stdin --force s3.client.default.access_key
echo SECRET_KEY | bin/elasticsearch-keystore add --stdin --force s3.client.default.secret_key

Use kubectl cp to copy the keystore file out of the pod:

kubectl cp NAMESPACE/POD:/usr/share/elasticsearch/config/elasticsearch.keystore .

Create the secret from it:

kubectl create secret generic eskeystore --from-file=./elasticsearch.keystore
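
After the nodes restart, a quick sanity check that the keystore was picked up (pod and namespace are placeholders):

kubectl exec -it <pod> -n <namespace> -- bin/elasticsearch-keystore list

It should list the two s3.client.default.* settings.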

@dunkelbunt1 thank you for your help.
I was wondering if it is possible to use an initContainer rather than performing that manual operation first?

In the chart values, you need to specify the keystore with a secretName, like:

keystore:
  - secretName: eskeystore

and manually define the eskeystore secret as in the issue description.

The secret will then be mounted into the pod, and the initContainer uses it to add the keys. See the sketch below.
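
For illustration, the secret for that setup could look like this, with the key names becoming the keystore entries. The values are placeholders, and whether each key is added individually by the initContainer depends on your chart version, so treat this as a sketch:

apiVersion: v1
kind: Secret
metadata:
  name: eskeystore
stringData:
  s3.client.default.access_key: REPLACE_WITH_ACCESS_KEY
  s3.client.default.secret_key: REPLACE_WITH_SECRET_KEY

Unlike data, stringData accepts plain values, so you don't have to base64-encode them yourself.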

I did the exact same setup but ran into another issue:

OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
Exception in thread "main" org.elasticsearch.bootstrap.BootstrapException: java.io.IOException: Is a directory: SimpleFSIndexInput(path="/usr/share/elasticsearch/config/elasticsearch.keystore")
Likely root cause: java.io.IOException: Is a directory
    at java.base/sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at java.base/sun.nio.ch.FileDispatcherImpl.read(FileDispatcherImpl.java:48)
    at java.base/sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:276)
    at java.base/sun.nio.ch.IOUtil.read(IOUtil.java:245)
    at java.base/sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:223)
    at org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput.readInternal(SimpleFSDirectory.java:178)
    at org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:342)
    at org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:54)
    at org.apache.lucene.store.BufferedChecksumIndexInput.readByte(BufferedChecksumIndexInput.java:41)
    at org.apache.lucene.store.DataInput.readInt(DataInput.java:101)
    at org.apache.lucene.codecs.CodecUtil.checkHeader(CodecUtil.java:194)
    at org.elasticsearch.common.settings.KeyStoreWrapper.load(KeyStoreWrapper.java:208)
    at org.elasticsearch.bootstrap.Bootstrap.loadSecureSettings(Bootstrap.java:230)
    at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:295)
    at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159)
    at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:150)
    at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86)
    at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124)
    at org.elasticsearch.cli.Command.main(Command.java:90)
    at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:116)
    at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:93)
Refer to the log for complete error details.

I set cluster.keystoreSecret to eskeystore and hit the issue above.
The secret named eskeystore was already created:

$ kubectl get secrets
NAME                                                              TYPE                                  DATA   AGE
default-token-flvx6                                               kubernetes.io/service-account-token   3      57d
eskeystore                                                        Opaque                                1      8m34s

If I comment out `cluster.keystoreSecret`, it works fine. Am I making a mistake here?

I set cluster.keystoreSecret to eskeystore and hit the issue above.

The file name in the command:

kubectl create secret generic eskeystore --from-file=./elasticsearch.keystore

must be exactly elasticsearch.keystore. The file name becomes the key in the secret, and the chart mounts the keystore under that name.
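
You can double-check the key name stored in the secret; it should show a single elasticsearch.keystore entry under Data:

kubectl describe secret eskeystore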
