Charts: [incubator/druid] Middlemanager tries to access S3 despite `druid.storage.type=local`

Created on 18 Feb 2019 · 3 Comments · Source: helm/charts

Is this a request for help?: No


Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

Version of Helm and Kubernetes:

$ helm version
Client: &version.Version{SemVer:"v2.12.2", GitCommit:"7d2b0c73d734f6586ed222a567c5d103fed435be", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.2", GitCommit:"7d2b0c73d734f6586ed222a567c5d103fed435be", GitTreeState:"clean"}

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"archive", BuildDate:"2018-12-14T20:49:34Z", GoVersion:"go1.11.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:00:57Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}

Which chart:

incubator/druid

Chart version: 0.1.0 (currently the only version)
Using image: maver1ckpl/druid-docker:0.12.3-2

What happened:

I installed incubator/druid with default values.
I tried to ingest a Kafka topic by posting a proper spec to `druid/indexer/v1/supervisor`.

The ingestion task fails, and the logs in /var/druid/indexing-logs say:

2019-02-16T05:50:37,570 ERROR [main] io.druid.cli.CliPeon - Error when starting up.  Failing.
com.google.inject.ProvisionException: Unable to provision, see the following errors:

1) Error in custom provider, com.amazonaws.AmazonClientException: Unable to load AWS credentials from any provider in the chain
  at io.druid.storage.s3.S3StorageDruidModule.getRestS3Service(S3StorageDruidModule.java:107) (via modules: com.google.inject.util.Modules$OverrideModule -> io.druid.storage.s3.S3StorageDruidModule)
  at io.druid.storage.s3.S3StorageDruidModule.getRestS3Service(S3StorageDruidModule.java:107) (via modules: com.google.inject.util.Modules$OverrideModule -> io.druid.storage.s3.S3StorageDruidModule)
  while locating org.jets3t.service.impl.rest.httpclient.RestS3Service
    for the 1st parameter of io.druid.storage.s3.S3DataSegmentKiller.<init>(S3DataSegmentKiller.java:46)
  while locating io.druid.storage.s3.S3DataSegmentKiller
  at io.druid.storage.s3.S3StorageDruidModule.configure(S3StorageDruidModule.java:87) (via modules: com.google.inject.util.Modules$OverrideModule -> io.druid.storage.s3.S3StorageDruidModule)
  while locating io.druid.segment.loading.DataSegmentKiller annotated with @com.google.inject.multibindings.Element(setName=,uniqueId=148, type=MAPBINDER, keyType=java.lang.String)
  at io.druid.guice.Binders.dataSegmentKillerBinder(Binders.java:46) (via modules: com.google.inject.util.Modules$OverrideModule -> io.druid.storage.s3.S3StorageDruidModule -> com.google.inject.multibindings.MapBinder$RealMapBinder)
  while locating java.util.Map<java.lang.String, io.druid.segment.loading.DataSegmentKiller>
    for the 1st parameter of io.druid.segment.loading.OmniDataSegmentKiller.<init>(OmniDataSegmentKiller.java:39)
  while locating io.druid.segment.loading.OmniDataSegmentKiller
  at io.druid.cli.CliPeon$1.configure(CliPeon.java:176) (via modules: com.google.inject.util.Modules$OverrideModule -> com.google.inject.util.Modules$OverrideModule -> io.druid.cli.CliPeon$1)
  while locating io.druid.segment.loading.DataSegmentKiller
    for the 5th parameter of io.druid.indexing.common.TaskToolboxFactory.<init>(TaskToolboxFactory.java:108)
  at io.druid.cli.CliPeon$1.configure(CliPeon.java:165) (via modules: com.google.inject.util.Modules$OverrideModule -> com.google.inject.util.Modules$OverrideModule -> io.druid.cli.CliPeon$1)
  while locating io.druid.indexing.common.TaskToolboxFactory
    for the 1st parameter of io.druid.indexing.overlord.ThreadPoolTaskRunner.<init>(ThreadPoolTaskRunner.java:100)
  at io.druid.cli.CliPeon$1.configure(CliPeon.java:192) (via modules: com.google.inject.util.Modules$OverrideModule -> com.google.inject.util.Modules$OverrideModule -> io.druid.cli.CliPeon$1)
  while locating io.druid.indexing.overlord.ThreadPoolTaskRunner
  while locating io.druid.indexing.overlord.TaskRunner
    for the 4th parameter of io.druid.indexing.worker.executor.ExecutorLifecycle.<init>(ExecutorLifecycle.java:78)
  at io.druid.cli.CliPeon$1.configure(CliPeon.java:182) (via modules: com.google.inject.util.Modules$OverrideModule -> com.google.inject.util.Modules$OverrideModule -> io.druid.cli.CliPeon$1)
  while locating io.druid.indexing.worker.executor.ExecutorLifecycle

In /opt/druid/conf/druid/_common/common.runtime.properties, `druid.storage.type=local` is set, so I would assume that no S3 credentials are needed.
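For reference, the relevant deep-storage settings in common.runtime.properties look roughly like this (the storage directory path is an assumption for illustration; the chart's actual default may differ):

```properties
# Deep storage is local -- no S3 credentials should be required
druid.storage.type=local
druid.storage.storageDirectory=/var/druid/segments
```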

If you really want to support S3, you should probably ship a jets3t.properties file and make it configurable. (Or switch to the new 0.13.0 release, which uses the aws-sdk instead of jets3t.)
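A minimal jets3t.properties for a custom endpoint might look like the sketch below; the endpoint and port values are assumptions for illustration, and the key names come from the standard JetS3t configuration:

```properties
# Point JetS3t at an S3-compatible, non-AWS endpoint (illustrative values)
s3service.s3-endpoint=s3.example.internal
s3service.s3-endpoint-http-port=80
s3service.s3-endpoint-https-port=443
s3service.https-only=false
s3service.disable-dns-buckets=true
```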

What you expected to happen:

A simple `helm install incubator/druid` should work out of the box (and probably just use local storage instead of S3).

Using S3 should be possible by specifying all S3 parameters (including a non-AWS endpoint) as values.
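The chart does not expose such values today; a purely hypothetical sketch of what they could look like (every key below is made up for illustration):

```yaml
# Hypothetical values -- none of these keys exist in chart version 0.1.0
storage:
  type: s3                          # "local" (default) or "s3"
  s3:
    bucket: my-druid-segments
    baseKey: druid/segments
    endpoint: s3.example.internal   # allow non-AWS endpoints
    accessKey: ""
    secretKey: ""
```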

How to reproduce it (as minimally and precisely as possible):

  • Install using `helm install incubator/druid`
  • Post a proper spec to `druid/indexer/v1/supervisor`
  • Watch /var/druid/indexing-logs on the middlemanager pod (see the sketch after this list)
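Roughly, assuming the release exposes the overlord on its default port 8090 and the supervisor spec sits in supervisor-spec.json (service and pod names below are placeholders):

```shell
# Install the chart with default values
helm install incubator/druid

# Reach the overlord API (service name depends on the release name)
kubectl port-forward svc/<release>-druid-overlord 8090:8090 &

# Submit the Kafka supervisor spec
curl -X POST -H 'Content-Type: application/json' \
  -d @supervisor-spec.json \
  http://localhost:8090/druid/indexer/v1/supervisor

# Inspect the task logs on the middlemanager pod
kubectl exec <middlemanager-pod> -- ls /var/druid/indexing-logs
```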

Anything else we need to know:

All 3 comments

Looks like simply removing `druid-s3-extensions` from `druid.extensions.loadList` will fix the issue.
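A sketch of that change in common.runtime.properties; the remaining extension in the list is an assumption for illustration (keep whatever the chart actually loads, minus the S3 one):

```properties
# Before: the S3 extension triggers the AWS credentials lookup on startup
druid.extensions.loadList=["druid-kafka-indexing-service","druid-s3-extensions"]

# After: S3 extension removed
druid.extensions.loadList=["druid-kafka-indexing-service"]
```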

Unfortunately, there is currently no chart option to either deactivate S3 completely or activate it fully (i.e. set the endpoint and credentials).

Currently working on a proper pull request. :)

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

This issue is being automatically closed due to inactivity.
