Thanos, Prometheus and Golang version used
thanos v0.1.0rc2
What happened
Thanos-store is consuming 50 GB of memory during startup
What you expected to happen
Thanos-store should not consume so much memory during startup
Full logs to relevant components
store:
level=debug ts=2018-07-27T15:51:21.415788856Z caller=cluster.go:132 component=cluster msg="resolved peers to following addresses" peers=100.96.232.51:10900,100.99.70.149:10900,100.110.182.241:10900,100.126.12.148:10900
level=debug ts=2018-07-27T15:51:21.416254389Z caller=store.go:112 msg="initializing bucket store"
level=warn ts=2018-07-27T15:52:05.28837034Z caller=bucket.go:240 msg="loading block failed" id=01CKE41VDSJMSAJMN6N6K8SABE err="new bucket block: load index cache: download index file: copy object to file: write /var/thanos/store/01CKE41VDSJMSAJMN6N6K8SABE/index: cannot allocate memory"
level=warn ts=2018-07-27T15:52:05.293692332Z caller=bucket.go:240 msg="loading block failed" id=01CKE41VE4XXTN9N55YPCJSPP2 err="new bucket block: load index cache: download index file: copy object to file: write /var/thanos/store/01CKE41VE4XXTN9N55YPCJSPP2/index: cannot allocate memory"
Anything else we need to know
Some time after initialization the RAM usage goes down to normal levels, around 8 GB.
Another thing that's happening is that my thanos-compactor consumes way too much RAM as well; the last time it ran, it used up to 60 GB of memory.
I run store with these args:
containers:
- args:
  - store
  - --log.level=debug
  - --tsdb.path=/var/thanos/store
  - --s3.endpoint=s3.amazonaws.com
  - --s3.access-key=xxx
  - --s3.bucket=xxx
  - --cluster.peers=thanos-peers.monitoring.svc.cluster.local:10900
  - --index-cache-size=2GB
  - --chunk-pool-size=8GB
Environment:
- OS (e.g. from /etc/os-release): Kubernetes running on Debian
Do you have the compactor running? If not, that is expected, as you might have millions of small blocks in your bucket stored in a very inefficient way.
hi @Bplotka
I do actually, it also uses tons of memory (~60Gb) in the last run, is this normal?
it also uses tons of memory (~60Gb) in the last run, is this normal?
Yes, fix is in review: https://github.com/improbable-eng/thanos/pull/529
But the huge mem usage on startup for the store gateway is mainly because of fine-grained blocks - I think your compactor has not compacted everything yet. Might that be the case?
that's a different problem, #529 fixes the downsampling issue #297
@xjewer no:
it also uses tons of memory (~60Gb) in the last run, is this normal?
It = means the compactor in https://github.com/improbable-eng/thanos/issues/448#issuecomment-408502890 (: so https://github.com/improbable-eng/thanos/pull/529 is actually the fix for this
oh, I missed the details, ok then 😀
TL;DR - We are currently seeing thanos-store consuming incredibly large amounts of memory during the initial sync and then being OOM killed. It does not release any memory while performing the initial sync, and there is very likely a memory leak, most probably in https://github.com/improbable-eng/thanos/blob/v0.1.0/pkg/block/index.go#L105-L154
Thanos, Prometheus and Golang version used
thanos-store 0.1.0, Golang 1.11 (built with quay.io/prometheus/golang-builder:1.11-base)
What happened
thanos-store is consuming 32 GB of memory during the initial sync, then being OOM (out of memory) killed
What you expected to happen
thanos-store not to use this much memory on initial sync and to progress past the initial sync
Full logs to relevant components
No logs are emitted whilst the initial sync is occurring, see graphs below
Anything else we need to know
Here is a graph of the total memory usage (cache + rss), rss memory usage and cache memory usage:

We have Kubernetes memory limits on the thanos-store container set to 32 GB, which is why it is eventually killed when it reaches this point.
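For context, here is a minimal sketch of how such a limit is typically expressed on the thanos-store container (the container name and the request value are illustrative; only the 32Gi limit comes from this report):
containers:
- name: thanos-store            # hypothetical container name
  args:
  - store
  resources:
    requests:
      memory: 16Gi              # illustrative request, not taken from this report
    limits:
      memory: 32Gi              # the limit mentioned above; the pod is OOM killed when it hits this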
Our thanos S3 bucket is currently 488.54404481872916G, 15078 objects in size.
We've noticed that thanos-store doesn't progress past the InitialSync function - https://github.com/improbable-eng/thanos/blob/v0.1.0/cmd/thanos/store.go#L113 - and exceeds the memory limits of the container before finishing.
We've modified the goroutine count for how many blocks are processed concurrently. It is currently hardcoded to 20, but by changing it to a much lower number, e.g. 1, we can have thanos-store last longer before being OOM killed, although it does take longer to do the InitialSync - https://github.com/improbable-eng/thanos/blob/v0.1.0/pkg/store/bucket.go#L231
The goroutine count for SyncBlocks should really be a configurable option as well; hardcoding it to 20 is not ideal.
Through some debugging, we've identified the loading of the index cache as the location of the memory leak - https://github.com/improbable-eng/thanos/blob/v0.1.0/pkg/store/bucket.go#L1070
By commenting out that function from the newBucketBlock function, thanos-store is able to progress past the InitialSync (albeit without any index caches) and consumes very little memory.
We then ran some pprof heap analysis on the thanos-store as the memory leak was occurring, and it identified block.ReadIndexCache as consuming a lot of memory; see the image below of the pprof heap graph

The function in question - https://github.com/improbable-eng/thanos/blob/v0.1.0/pkg/block/index.go#L105-L154. The heap graph above suggests that the leak is in the JSON encoding/decoding of the index file and that for some reason memory is not being released.
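For anyone who wants to reproduce this kind of analysis, a rough sketch, assuming the store's HTTP port (10902 by default, as seen in logs later in this thread) exposes Go's standard pprof endpoints:
# capture and inspect a heap profile from a running thanos-store
go tool pprof http://<store-host>:10902/debug/pprof/heap
# or save the raw profile to a file for sharing
curl -s http://<store-host>:10902/debug/pprof/heap > store-heap.pprof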
Any update on this? We have blocks that after compaction but before downsampling are 400 GB, which means we either run a massively expensive AWS instance or just add a massive swap file.
Any update on this? We can't use Thanos at the scale that we want to because of this.
Any update on this? Can we have a timeline for this fix?
Thanks guys for this, especially @awprice for the detailed work. This is interesting as our heap profiles were totally different - suggesting the proper place: fetching bytes into the buffer for the actual series in the query.
Maybe you have bigger indexes and not much traffic on the query side? You said:
Our thanos S3 bucket is currently 488.54404481872916G, 15078 objects in size.
This is a totally reasonable number. Can you check your biggest blocks? How large are they, and notably, what is the index size? (:
Also sorry for delay, I totally missed this issue
@bwplotka Apologies for the late reply, here is some info on our largest blocks/index sizes:
Largest block - 14 GiB
Index size for that block - 6 GiB
Our largest index is 7 GiB
Let's get back to this.
We need a better OOM flow for our store gateway. Some improvements that need to be done:
Memory usage going above chunk pool size + index cache size, which is unexpected. This means a "leak" somewhere else, or byte ranges getting out of the chunk pool's hardcoded ranges. We need to take a look at this as well. Lots of work, so help is wanted (:
In a separate thread we are working on a Querier cache, but that's just hiding the actual problem (:
cc @mjd95 @devnev
Two more info items:
I managed to reclaim half of the 5 GB by adding this:
GODEBUG=madvdontneed=1
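In case it helps others running on Kubernetes, a minimal sketch of setting that variable on the store container (the container name is hypothetical; GODEBUG=madvdontneed=1 is the setting quoted above):
containers:
- name: thanos-store
  args:
  - store
  env:
  - name: GODEBUG
    value: madvdontneed=1       # have the Go runtime release freed memory with MADV_DONTNEED so RSS drops immediately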
Nice, but I would say digging into why Golang itself made that decision would be more useful?
Also https://github.com/golang/go/issues/28466
I'm concerned that this may be confusing to users, but more concerned that this may confuse automated systems that monitor RSS and/or container memory usage, particularly if those systems make decisions based on this.
We need to move Thanos to Go 1.12.5: https://github.com/prometheus/prometheus/issues/5524
I think it would be a lot improved already by just providing guidance on sizing of the chunk pool and index cache. If the provided Grafana dashboards also included enough to figure out what was going on and how close one was to the limits, that would also be helpful.
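As a non-authoritative sketch of what such guidance could look like, reusing the flags already shown earlier in this thread (the values are purely illustrative, not official recommendations):
- args:
  - store
  - --index-cache-size=6GB      # cap on the in-memory index cache; big or heavily queried indexes push toward larger values
  - --chunk-pool-size=8GB       # cap on bytes reserved for chunk fetches; raise it if queries touch many series at once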
Hi All! Let's attack this further. For a detailed overview of ideas, if anyone wants to help, we started this umbrella issue. Feel free to propose improvements and discuss existing ideas (:
:point_up: Deleted the comment as it does not help to resolve this particular issue for the community (:
We're also seeing massive memory consumption by Thanos :( What impact, if any, should we expect from reducing store.grpc.series-max-concurrency? Are there other flags that we can explore?
Any update on this?
FYI: This issue was closed as the major rewrite happened on master, above 0.10.0. It's still experimental, but you can enable it via https://github.com/thanos-io/thanos/blob/master/cmd/thanos/store.go#L78 (--experimental.enable-index-header).
We are still working on various benchmarks especially around query resource usage, but functionally it should work! (:
Please try it out on dev/testing/staging environments and give us feedback! :heart:
Hey @bwplotka, I just tried here and I got:
$ docker run --rm thanosio/thanos:v0.10.1 store --experimental.enable-index-header
Error parsing commandline arguments: unknown long flag '--experimental.enable-index-header'
thanos: error: unknown long flag '--experimental.enable-index-header'
What am I doing wrong? 🤔
@caarlos0 Hi, this feature is not included in v0.10.1 release. You can use the latest master branch docker image to try it.
docker pull quay.io/thanos/thanos:master-2020-01-25-cf4e4500
Oh ok, sorry, I misread 🙏
Got this master-2020-01-25-cf4e4500 running for some time. 50% memory improvement. Great work and thanks to all people involved.

How is the startup latency? I assume you use the experimental flag as well, right?
Can you send me the heap profile? (:
Kind Regards,
Bartek
Yes, the experimental flag is enabled. I won't touch this in the next few days as I'm quite busy with other stuff, but I will try to provide you the profile.
I am getting the below error with thanos store when using the latest master branch docker image (quay.io/thanos/thanos:master-2020-01-25-cf4e4500) and enabling the --experimental.enable-index-header flag. I am using Kubernetes for the Thanos deployment.
kubectl logs thanos-store-gateway-7bc8f64766-7vwxs -f
level=debug ts=2020-02-13T12:52:08.756136456Z caller=main.go:101 msg="maxprocs: Updating GOMAXPROCS=[4]: determined from CPU quota"
level=info ts=2020-02-13T12:52:08.75644186Z caller=main.go:149 msg="Tracing will be disabled"
level=info ts=2020-02-13T12:52:08.756582213Z caller=factory.go:43 msg="loading bucket configuration"
level=info ts=2020-02-13T12:52:08.757133644Z caller=inmemory.go:167 msg="created in-memory index cache" maxItemSizeBytes=131072000 maxSizeBytes=2147483648 maxItems=math.MaxInt64
level=info ts=2020-02-13T12:52:08.757292099Z caller=store.go:223 msg="index-header instead of index-cache.json enabled"
level=info ts=2020-02-13T12:52:08.757417647Z caller=options.go:20 protocol=gRPC msg="disabled TLS, key and cert must be set to enable"
level=info ts=2020-02-13T12:52:08.757656237Z caller=store.go:297 msg="starting store node"
level=info ts=2020-02-13T12:52:08.757767835Z caller=prober.go:127 msg="changing probe status" status=healthy
level=info ts=2020-02-13T12:52:08.757806998Z caller=http.go:53 service=http/server component=store msg="listening for requests and metrics" address=0.0.0.0:10902
level=info ts=2020-02-13T12:52:08.757797309Z caller=store.go:252 msg="initializing bucket store"
level=info ts=2020-02-13T12:52:08.868409206Z caller=prober.go:107 msg="changing probe status" status=ready
level=info ts=2020-02-13T12:52:08.868446453Z caller=http.go:78 service=http/server component=store msg="internal server shutdown" err="bucket store initial sync: sync block: MetaFetcher: iter bucket: Access Denied"
level=info ts=2020-02-13T12:52:08.868478867Z caller=prober.go:137 msg="changing probe status" status=not-healthy reason="bucket store initial sync: sync block: MetaFetcher: iter bucket: Access Denied"
level=warn ts=2020-02-13T12:52:08.86849345Z caller=prober.go:117 msg="changing probe status" status=not-ready reason="bucket store initial sync: sync block: MetaFetcher: iter bucket: Access Denied"
level=info ts=2020-02-13T12:52:08.8685051Z caller=grpc.go:98 service=gRPC/server component=store msg="listening for StoreAPI gRPC" address=0.0.0.0:10901
level=info ts=2020-02-13T12:52:08.868521799Z caller=grpc.go:117 service=gRPC/server component=store msg="gracefully stopping internal server"
level=info ts=2020-02-13T12:52:08.868604202Z caller=grpc.go:129 service=gRPC/server component=store msg="internal server shutdown" err="bucket store initial sync: sync block: MetaFetcher: iter bucket: Access Denied"
level=error ts=2020-02-13T12:52:08.86863483Z caller=main.go:194 msg="running command failed" err="bucket store initial sync: sync block: MetaFetcher: iter bucket: Access Denied"
I was using the same bucket before with the thanos-store docker image improbable/thanos:v0.3.2 and there was no such access denied error, but the initial syncing got stuck and eventually the pod got OOM killed. :(
We see the same access denied error as uvaisibrahim when going to the new release: S3-backed storage, no changes other than updating the image we're running and adding the --experimental.enable-index-header flag. If I revert to v0.10.1 and remove the experimental flag, things start up and run (though they need a lot of memory to do so).
Try just the new release without the flag. This error, which is really the client not being able to talk to S3, does not have anything to do with the experimental feature. (: It might be misconfiguration.
Yes, it's not flag specific, but just changing the image to use the new release causes the error. No other settings are changed. It's a k8s deploy, so the config is identical except for the image change. Maybe there are other changes in the master-2020-01-25-cf4e4500 release that introduce this error, but I figured it's worth mentioning since it seemed to be the identical error that at least one other user on the release was seeing.
FYI, the issue is with the updated Thanos version not working with existing configs.
This Thanos change https://github.com/thanos-io/thanos/pull/2033 updated the minio-go library to v6.0.45, which introduced this bug https://github.com/minio/minio-go/issues/1223, which has a fix merged in minio-go v6.0.47. I created a custom build of the master Thanos release with just the go.mod updated to use "github.com/minio/minio-go/v6 v6.0.47", built a new docker image with that Thanos build, and now with all the same configs everything works as expected.
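For reference, a rough sketch of that go.mod bump (assuming you build from a checkout of the Thanos master branch; the module path and version are the ones named above):
# pin the fixed minio-go version, then rebuild the docker image from the repo
go mod edit -require=github.com/minio/minio-go/v6@v6.0.47
go mod tidy
# resulting go.mod require line:
#   github.com/minio/minio-go/v6 v6.0.47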
With the experimental flag, my store's memory usage is reduced by 27% and so far everything seems to be healthy and functioning as expected.
Thanks for that info @genericgithubuser! Would you like to open a PR to update the minio-go dependency? 😄
In case people still need this, you can now test with the v0.11.0-rc.1 container.
It's working correctly for us on AWS.
We're testing the changes currently in our testing/staging environment in an EKS cluster. The memory has reduced by 60-70%. I'll keep you updated after more tests.
We're using the v0.11.0 image with experimental flag.