Basically, https://github.com/prometheus/prometheus/blob/master/docs/querying/api.md#delete-series is unsafe to run with a Thanos sidecar that uploads blocks to the bucket, because:
It is worth noting that running delete_series on local Prometheus with Thanos backup does not make much sense: most likely all blocks have already been uploaded by the time Prometheus replaces them with rewritten ones, so the deleted series will disappear only locally. Plus, every older block will still contain those series.
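For context, `delete_series` here is Prometheus' TSDB admin API (`POST /api/v1/admin/tsdb/delete_series`), which only writes tombstones; the data is physically removed later by compaction or by the `clean_tombstones` endpoint. A minimal sketch of calling it, assuming Prometheus runs with `--web.enable-admin-api` and is reachable at localhost:9090 (the metric selector is a made-up example):

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
)

func main() {
	q := url.Values{}
	// Select the series to delete; the matcher syntax is regular PromQL.
	q.Set("match[]", `test_metric{env="stage"}`)

	// delete_series only marks matching samples with tombstones; disk space
	// is reclaimed later by compaction or the clean_tombstones endpoint.
	resp, err := http.Post(
		"http://localhost:9090/api/v1/admin/tsdb/delete_series?"+q.Encode(),
		"application/json", nil)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```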
Solution:
Thanks @jean-louis (from Slack) for spotting this!
Great, we found the root cause :)
Even better (in a dream scenario) would be being able to do delete_series in Thanos itself, and have Thanos delete the series in the bucket as well.
But it might be a good start to let the compactor just delete the new/adjusted blocks on each run.
> But it might be a good start to let the compactor just delete the new/adjusted blocks on each run.
What do you mean? I think it is much clearer to block this behaviour and be explicit about it (not sure yet how). Basically, if we do this ("let the compactor just delete the new/adjusted blocks on each run"), we have a few problems: what if the overlap is caused not by delete_series but by some other configuration error (e.g. the same external labels)? All of this makes me think we should do either:
a. Block or explicitly disallow this logic.
b. Block or explicitly disallow this logic, but implement some batch-job code that will delete series from the whole history (from the bucket) - actually, users can implement that code on their own even now (see the sketch after this list).
c. Integrate delete series with the whole stack (sidecar + compactor) - but I think that is way too complex for the actual value.
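On option (b), here is a rough, untested sketch of what such a batch job could look like, using the `prometheus/tsdb` library (its import path has since moved into `prometheus/prometheus`). The idea: download the block directories from the bucket into a local directory, open them as a TSDB, tombstone the offending series, and rewrite the blocks. All paths and matcher values are placeholders:

```go
package main

import (
	"math"

	"github.com/go-kit/kit/log"
	"github.com/prometheus/tsdb"
	"github.com/prometheus/tsdb/labels"
)

func main() {
	// Directory holding block dirs previously downloaded from the bucket
	// (placeholder path).
	db, err := tsdb.Open("/tmp/thanos-rewrite", log.NewNopLogger(), nil, tsdb.DefaultOptions)
	if err != nil {
		panic(err)
	}
	defer db.Close()

	// Tombstone every sample of the unwanted series, across all time
	// (metric name is a made-up example).
	matcher := labels.NewEqualMatcher("__name__", "test_metric")
	if err := db.Delete(math.MinInt64, math.MaxInt64, matcher); err != nil {
		panic(err)
	}

	// Rewrite the blocks without the tombstoned data.
	if err := db.CleanTombstones(); err != nil {
		panic(err)
	}
	// The rewritten blocks (new ULIDs) would then have to be uploaded back
	// and the originals deleted from the bucket.
}
```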
Can we first enumerate what are the use cases for delete series?
I thought the main one is reducing the amount of unnecessary metrics after they are stored in Prometheus, but with Thanos, space is not that much of a problem.
Second, maybe some "right to be forgotten" rule, but I have never seen that needed for metrics.
Any ideas?
The reason I deleted a series was that the client was initially configured incorrectly and sent metrics with wrong tags to Prometheus.
This caused graphs to be broken for that host.
After reconfiguration it was fine, but somehow I need to get rid of the broken samples in the TSDB.
I guess it's a rare case. TSDB blocks are supposed to be immutable, so deleting the series from the storage literally means recalculating the whole data, which could be unwarrantedly expensive.
Bear in mind this is a seldom case. To solve the right-to-be-forgotten problem, it could be a query/store-side decision: somehow preserving/applying rules to the series you want deleted (perhaps limited to the time ranges in which those series existed) and just skipping them at query time.
@xjewer: it's rare indeed, and doesn't happen by default.
But at least it should be documented/handled in Thanos when it does happen :) Or there should be a warning in the docs.
I have an issue where deleting series from S3/GCS is really needed.
When running the compactor I get:

```
level=error ts=2018-08-20T21:55:21.849911888Z caller=main.go:160 msg="running command failed" err="compaction: gather index issues for block /var/lib/thanos-compact/compact/0@{monitor=\"master\",replica=\"1\"}/01CN46CYFR2CQCH567H5Q33007: out-of-order label set {ArchPath=\"MC.BBM.whatever1.cluster.whatever2\",__name__=\"test_metric\",az=\"us-east-1b\",cluster=\"whatever\",env=\"stage\",instance=\"xxx\",job=\"node\",key1=\"0\",key1=\"1\",service=\"xxxx\"} for series 346654"
```
as someone created some funky metrics which break the compactor process. The only reasonable way I can continue using Thanos now is by deleting these metrics from S3; I cannot afford to delete all the blocks.
Small update: the "out-of-order label set" error in the compactor was caused by the `honor_labels: true` option for that specific job. I guess Prometheus label overwriting messes up the way Thanos handles the metrics when compacting.
Cool, have you filed an issue on the Prometheus GitHub?
Let's figure out how to unblock you. I think all of those issues, like the out-of-order label set etc., are nice to know about and super important to solve, but fixing them is generally not necessary for overall Thanos compaction. It seems like the solution here would be to just alert on this (produce a metric, log it) but continue the compaction, as it's critical for system health. Will try to find some time to produce a fix for this.
Not sure if delete series helps here, as you would have something producing broken series continuously, so you would end up deleting these series every 3h (:
> Can we first enumerate what are the use cases for delete series?
> I thought the main one is reducing the amount of unnecessary metrics after they are stored in Prometheus, but with Thanos, space is not that much of a problem.
This is generally true, but there is another dimension to consider: memory. We have recently been experiencing problems with the thanos store processes being OOMkilled. The underlying cause is the number of series we have in the object bucket, which is significant as a result of a long period of time during which we (inadvertently) uploaded metrics with high-cardinality dimensions. Now that we have identified the issue, we'd like to be able to remove those metrics from the storage in order to reduce the memory footprint of the store processes, but alas, I haven't yet found a way to do so that does not involve removing most of the blocks.
Ok, so:
Use cases:
- Reduce cardinality once it has happened and the root cause has been fixed.
- Potentially, a "right to be forgotten" rule?
We have a couple of options:
> Can we first enumerate what are the use cases for delete series?
> I thought the main one is reducing the amount of unnecessary metrics after they are stored in Prometheus, but with Thanos, space is not that much of a problem.
>
> This is generally true, but there is another dimension to consider: memory. We have recently been experiencing problems with the thanos store processes being OOMkilled. […] we'd like to be able to remove those metrics from the storage in order to reduce the memory footprint of the store processes, but alas, I haven't yet found a way to do so that does not involve removing most of the blocks.
I have the same scenario: I found metrics with high cardinality and need to remove them because of that.
AFAIK the only solution would be either scaling up the thanos store resources or deleting data from S3?
Is it possible to just delete entire TSDB blocks?
For me the first blocks had a lot of high-cardinality metrics, and, if deleting those metrics is not possible, I would be happy to be able to delete entire blocks instead.
@caarlos0 you can do that just fine - but better turn the compactor off temporarily for that. The concern is that the compactor might be compacting that very block into a bigger one, in which case the deletion would not help.
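To make the whole-block route concrete, here is a hedged sketch using Thanos' `objstore.Bucket` interface (the import path varies by version - older releases used `github.com/improbable-eng/thanos/pkg/objstore`, and recent versions also ship an `objstore.DeleteDir` helper doing exactly this). It is shown against the in-memory bucket so it runs standalone; with S3/GCS you would build the client from your bucket config instead, and turn the compactor off first as noted above. The block ULID is the one from the compactor log earlier in this thread:

```go
package main

import (
	"bytes"
	"context"
	"fmt"
	"strings"

	"github.com/thanos-io/thanos/pkg/objstore"
)

// deleteDir recursively deletes every object under dir (e.g. "<ULID>/").
func deleteDir(ctx context.Context, bkt objstore.Bucket, dir string) error {
	return bkt.Iter(ctx, dir, func(name string) error {
		if strings.HasSuffix(name, objstore.DirDelim) {
			return deleteDir(ctx, bkt, name) // descend into e.g. chunks/
		}
		return bkt.Delete(ctx, name)
	})
}

func main() {
	ctx := context.Background()
	bkt := objstore.NewInMemBucket()

	// Fake block layout, stand-in for a real block in S3/GCS:
	// meta.json, index and one chunk file.
	for _, name := range []string{
		"01CN46CYFR2CQCH567H5Q33007/meta.json",
		"01CN46CYFR2CQCH567H5Q33007/index",
		"01CN46CYFR2CQCH567H5Q33007/chunks/000001",
	} {
		_ = bkt.Upload(ctx, name, bytes.NewBufferString("..."))
	}

	if err := deleteDir(ctx, bkt, "01CN46CYFR2CQCH567H5Q33007/"); err != nil {
		panic(err)
	}
	fmt.Println("block deleted")
}
```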