Cannot delete measurements.
I accidentally created a bunch of MS SQL Server measurements in my Telegraf db and want to remove them. When I try to remove them with DROP MEASUREMENT "…", the measurement is still there afterwards.
My mistake seems to have created over 1000 measurements, so I suspect the bug/issue is due to the number of measurements.
If I create a new test database I can drop measurements as expected.
__System info:__
Influx 1.5.1 running on Red Hat 4.8.5-11
__Steps to reproduce:__
Create a lot of measurements - I did this by trying to use the MS SQL Server Telegraf input.
__Expected behavior:__
I'd expect the measurement to no longer be returned after running the DROP MEASUREMENT command.
__Actual behavior:__
SHOW MEASUREMENTS continues to return the measurement... and it will not die ;-(
__Additional info:__ [Include gist of relevant config, logs, etc.]
telegraf_measurements.zip
Here is a list of the measurements...quite a lot!
+1 I can't drop anything at all.
Even SERIES are still there :(
Same issue here; no problem when we switch back to version 1.3.
@max3163 thanks for the tips, it seems that 1.4.1 works fine too
I moved from 1.4.3 to 1.5.2, everything seems to work fine for me.
Same issue here in version 1.5.2
In my experience, dropping a series is not always "instant". If a "drop" is not given the correct parameters, it can silently do nothing without an error.
This command has no observability, and what it does (or doesn't) do is hidden from users. This needs to be improved and "drop series" made more transparent in what it does (or at least offer the option of having it behave that way).
A more "verbose" drop command is required, telling you how many series were found to drop, including 0 if none are to be dropped.
Plus include something in the log file - maybe write out to the log file the name of each series dropped and the total number dropped.
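In the meantime, a rough manual substitute for that visibility is to count matching series before and after the drop. This is only a sketch: it assumes a default local instance and reuses the "graphite" database and host tag that appear later in this thread.
```
# Count the series the predicate matches (output includes a couple of header lines):
influx -database graphite -execute "SHOW SERIES WHERE host = 'freenas.lan'" | wc -l
# Run the drop with the same predicate:
influx -database graphite -execute "DROP SERIES WHERE host = 'freenas.lan'"
# Re-count; a large number here means the drop did not (yet) take effect:
influx -database graphite -execute "SHOW SERIES WHERE host = 'freenas.lan'" | wc -l
```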
Same here. Can't delete old measurements I don't want:
root@influx:/mnt/data/influxdb# influx
Connected to http://localhost:8086 version 1.5.2
InfluxDB shell version: 1.5.2
> use graphite
Using database graphite
> show series where host='freenas.lan' limit 10
key
---
aggregation_value,host=freenas.lan,instance=cpu-sum,type=cpu,type_instance=idle
aggregation_value,host=freenas.lan,instance=cpu-sum,type=cpu,type_instance=interrupt
aggregation_value,host=freenas.lan,instance=cpu-sum,type=cpu,type_instance=nice
aggregation_value,host=freenas.lan,instance=cpu-sum,type=cpu,type_instance=system
aggregation_value,host=freenas.lan,instance=cpu-sum,type=cpu,type_instance=user
cpu_temp_value,host=freenas.lan,instance=0,type=temperature
cpu_temp_value,host=freenas.lan,instance=1,type=temperature
cpu_temp_value,host=freenas.lan,instance=2,type=temperature
cpu_temp_value,host=freenas.lan,instance=3,type=temperature
cpu_value,host=freenas.lan,instance=0,type=cpu,type_instance=idle
> drop series where host='freenas.lan'
> show series where host='freenas.lan' limit 10
key
---
aggregation_value,host=freenas.lan,instance=cpu-sum,type=cpu,type_instance=idle
aggregation_value,host=freenas.lan,instance=cpu-sum,type=cpu,type_instance=interrupt
aggregation_value,host=freenas.lan,instance=cpu-sum,type=cpu,type_instance=nice
aggregation_value,host=freenas.lan,instance=cpu-sum,type=cpu,type_instance=system
aggregation_value,host=freenas.lan,instance=cpu-sum,type=cpu,type_instance=user
cpu_temp_value,host=freenas.lan,instance=0,type=temperature
cpu_temp_value,host=freenas.lan,instance=1,type=temperature
cpu_temp_value,host=freenas.lan,instance=2,type=temperature
cpu_temp_value,host=freenas.lan,instance=3,type=temperature
cpu_value,host=freenas.lan,instance=0,type=cpu,type_instance=idle
> select * from aggregation_value where host='freenas.lan' limit 10
name: aggregation_value
time host instance type type_instance value
---- ---- -------- ---- ------------- -----
1500261791902566479 freenas.lan cpu-sum cpu idle 604978350
1500261791902566479 freenas.lan cpu-sum cpu interrupt 348403
1500261791902566479 freenas.lan cpu-sum cpu nice 464942
1500261791902566479 freenas.lan cpu-sum cpu system 15242180
1500261791902566479 freenas.lan cpu-sum cpu user 15781610
1500261801903654932 freenas.lan cpu-sum cpu idle 604983172
1500261801903654932 freenas.lan cpu-sum cpu system 15242230
1500261801903654932 freenas.lan cpu-sum cpu user 15781673
1500261811887399490 freenas.lan cpu-sum cpu idle 604988107
1500261811887399490 freenas.lan cpu-sum cpu interrupt 348405
>
In my case I have now waited a week and restarted the database multiple times, and the measurement is still not dropped.
Is anyone looking into this?
Yes. We are investigating this.
For those experiencing this issue...we are assuming that:
The part where we could use additional hints -- and where we are broadening our tests in an attempt to replicate this -- is regarding the following:
a) did you switch the index-version setting from inmem to tsi1?
b) did you also build/rebuild the index using influx-inspect buildtsi command?
https://docs.influxdata.com/influxdb/v1.5/tools/influx_inspect/#influx-inspect-buildtsi
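For reference, checking and switching the index type looks roughly like this. This is a sketch that assumes the default package-install config and data paths; adjust to your layout.
```
# Check which index the config asks for (the index-version setting lives in the [data] section):
grep index-version /etc/influxdb/influxdb.conf
# To switch to TSI, set:  index-version = "tsi1"
# Then rebuild the on-disk index for existing shards (typically with influxd stopped,
# running as the influxdb user so file ownership stays correct):
influx_inspect buildtsi -datadir /var/lib/influxdb/data -waldir /var/lib/influxdb/wal
```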
Yes, I've upgraded to 1.5.x and changed the index from inmem to tsi1, and also built the index. Although I must say that I didn't build the index right away; as far as I remember I did it a day later.
I was able to reproduce the behavior.
How to reproduce
Started influxdb version 1.5.1 using inmem indexing. I created a database with a shard duration of 1 hour. I then inserted data for 6 hours (I'm not sure it needs to be that long). Near the middle of the 6th hour I switched the indexing to tsi1 and restarted the service. I then inserted data for another 8 hours (again, not sure if it needs to be that long). This created a situation where some shards were created using inmem, some mixed, and some tsi.
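A rough sketch of that setup follows; the database name "repro" is made up and the config path assumes a default package install.
```
influx -execute 'CREATE DATABASE repro WITH SHARD DURATION 1h'
# ...write points into repro for several hours with index-version = "inmem"...
# Edit /etc/influxdb/influxdb.conf: in [data], set index-version = "tsi1"
systemctl restart influxdb
# ...keep writing for several more hours so some shards end up inmem, some mixed, some tsi1...
```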
Findings
Drop measurement requests succeed but the measurement still shows up in the list of measurements. Additionally, although the measurement is still listed, a show series command returns no series for that measurement.
> show measurements
...
measurement97
measurement98
measurement99
> drop measurement measurement99
> show measurements
...
measurement97
measurement98
measurement99
> show series
...
measurement96,96uniq0=uniq,96uniq1=uniq,96uniq2=uniq,96uniq3=uniq,dup0=dup,dup1=dup,dup2=dup,dup3=dup
measurement97,347uniq0=uniq,347uniq1=uniq,347uniq2=uniq,347uniq3=uniq,dup0=dup,dup1=dup,dup2=dup,dup3=dup
measurement97,97uniq0=uniq,97uniq1=uniq,97uniq2=uniq,97uniq3=uniq,dup0=dup,dup1=dup,dup2=dup,dup3=dup
measurement98,348uniq0=uniq,348uniq1=uniq,348uniq2=uniq,348uniq3=uniq,dup0=dup,dup1=dup,dup2=dup,dup3=dup
measurement98,98uniq0=uniq,98uniq1=uniq,98uniq2=uniq,98uniq3=uniq,dup0=dup,dup1=dup,dup2=dup,dup3=dup
>
I'm using dockerized influxdb and found the following workaround:
do DROP MEASUREMENT "xyz"
do SHOW MEASUREMENTS and check that measurement was not removed
stop & restart influxdb instance
do SHOW MEASUREMENTS and see that "xyz" measurement does not exist anymore.
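As commands, the workaround above might look like this; the container name ("influxdb") and the database name ("telegraf") are assumptions, so substitute your own.
```
influx -database telegraf -execute 'DROP MEASUREMENT "xyz"'
influx -database telegraf -execute 'SHOW MEASUREMENTS'   # "xyz" is often still listed here
docker restart influxdb
influx -database telegraf -execute 'SHOW MEASUREMENTS'   # after the restart "xyz" should be gone
```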
Facing a similar issue here. I can drop newly created measurements, but old ones which were populated before the upgrade aren't erased. Any update on this?
@serputko thanks for the workaround. I did the same things, and the "xyz" measurement does not show up in "show measurements" anymore. Yet when I start to write "abc" data into a new measurement also named "xyz", there are old field keys which are not part of my new "abc" data. So I think the "xyz" measurement is still not properly deleted.
Facing this issue with 1.6.1 on a database that was created using 1.6.1, i.e. no migrated data.
Tried restarting the instance as well, but no luck.
```Connected to http://localhost:8086 version 1.6.1
InfluxDB shell version: 1.6.1
use iPerf
Using database iPerf
show measurements
name: measurements
name
iPerfLog
drop measurement "iPerfLog"
show measurements
name: measurements
name
iPerfLog
drop measurement iPerfLog
show measurements
name: measurements
name
iPerfLog
```
I'm seeing this too :
> drop MEASUREMENT procstats;
> SHOW QUERIES
qid query database duration status
--- ----- -------- -------- ------
5 SHOW QUERIES telegraf 70µs running
> show series from procstat
procstat,host=xxx.wle,pattern=.,process_name=vnetd
procstat,host=xxx.wle,pattern=.,process_name=xinetd
....
# service influxdb restart
> show series from procstat
procstat,host=xxx.wle,pattern=.,process_name=vnetd
procstat,host=xxx.wle,pattern=.,process_name=xinetd
....
I'm using influxdb 1.6.1, and I did switch the index-version to tsi1 at some stage.
restarting influxdb does not make procstat series go away.
Facing the same issue with both InfluxDB 1.5.2 and 1.6.3 (tsm, in-memory index).
I really can't understand how such a time series store can be used in production when you can't delete data :(
Are you continuing to feed data into the cluster? If so, the measurement is re-created...as new data points arrive.
Are you continuing to feed data into the cluster?
Yes, but without the "incorrect" fields whose type I want to drop.
Example of what I had:
# http_request measurement
http_calls_count: integer
time_spent_in_http_calls: integer  # incorrect type, I want to delete this field
All points have been deleted, but the schema has not changed (time_spent_in_http_calls: integer still exists).
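A quick way to check whether the stale field type is still in the schema is SHOW FIELD KEYS, which lists each field with its type. This is a sketch; the database name "mydb" is just a placeholder.
```
# A lingering "time_spent_in_http_calls  integer" row means the old schema is still indexed:
influx -database mydb -execute 'SHOW FIELD KEYS FROM http_request'
```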
Using 1.6.3, not an upgraded instance. Restarting the influxdb server caused the changes to be visible.
Hi,
Same here..
drop measurement system
select * from system
after some seconds...
select * from system
name: system
time Temperature Battery 1 host host_1 load1 load15 load5 n_cpus n_users uptime uptime_format
---- --------------------- ---- ------ ----- ------ ----- ------ ------- ------ -------------
1540909380000000000 host_name 0.03 0.1 0.06 24 3 19452 5:24
The point is that "Temperature Battery 1" is NOT defined in the system!! It is an SNMP input that is written correctly to another measurement...
Also, it is written to host_1, which has not been defined by me... normally it should go to "host".
I just cannot drop a measurement. When I drop it, it appears to go away, but when I insert a row all the old field types come back.
Doing the same with another measurement name works, though. Somehow the old measurement's field types are cached. How can I completely wipe out a measurement?
Hi,
Like sada, nothing works... I cannot erase the measurement "system"...
Like him/her, I did everything... everything... Checked that I'm not using "host" as a field anywhere... Ran the drop a hundred times...
As a last test, I stopped telegraf to avoid writes to the DB, dropped the measurement and restarted influxdb. Verified that the measurement "system" was not there... good...
Launched telegraf again __only__ with the [[inputs.system]] input activated and... again... the "system" measurement is recreated with "host", "host_1", etc...
I know that in the past (some days ago) I made a mistake and wrote fields named "host" and others into "system", but how come they get "rewritten" again after I drop the measurement?
Like Sada says, it is as if the field types are cached somewhere and, when you create the measurement again, it re-takes the old values...
To work around:
The problem is cleared, but this bug is very, very annoying and very expensive if the database is large.
I am unsure if influxdb is ready! I have started to switch over to "https://prometheus.io/"
There are too many bugs in influxdb.
I can consistently recreate the problem in a fresh influxdb environment.
Running influxdb version 1.6.4 on my Synology NAS with Docker.
I am writing unit tests for a Python script, and the following unit test consistently recreates the problem:
```
def testWriteBatch(self):
    meas = "TEST_MEASUREMENT_FOR_TESTING_BATCH"
    # start with trying to remove the measurement from influxdb (if it exists), so we can start clean
    helper.removeMeasurementData(meas, self.host, self.database)
    # THESE LISTS MUST BE THE SAME SIZE, ONE ROW IS ONE ENTRY IN INFLUXDB
    tags = []    # list of rows. One row is a dict with tagName/tagValue => tag set
    fields = []  # list of rows. One row is a dict with fieldName/fieldValue => field set
    for i in range(0, 100):
        fields.append({"val": 100})
        tags.append({})
        for t in range(0, random.randrange(1, 6)):  # change 6 to 5, and the problem will not occur
            tags[-1]["tag" + str(t)] = "v" + str(t)  # change str(t) to str(t + (i * 10)) and the problem will not occur
    # this function will add the data to influxdb by:
    # - doing a http post (batch)
    # - retrieve the data by doing a http get query
    # - compare data, and throw if mismatch
    # - delete measurement: query succeeds, but an empty measurement is still present
    helper.addCompareAndRemove(self, self.host, self.database, meas, fields, tags)
```
So the script creates 100 entries in influxdb; every row has a random number of tags (1 to 5, since randrange(1, 6) excludes the upper bound) and 1 field.
I hope this helps.
Thanks, everyone, for describing the problem in such detail so far. After some digging into this, it would be great if anyone could provide answers to these questions:
To collect shard index types, you can either navigate to the shard folder and check if it has an index folder, or run this command:
influx -database <database> -execute 'show stats' | grep indexType | sort | uniq
The only field that's of interest is the index type, so feel free to redact whatever else.
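If you prefer the on-disk check mentioned above, something like the following (assuming the default data directory) lists the shards that have a TSI index directory; shards without one are using the inmem index.
```
find /var/lib/influxdb/data -type d -name index
```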
It seems there may be multiple bugs here, because, for example, some users are reporting that the issue goes away after a restart while it persists for others. A complete set of answers to these questions from anyone having the problem will help get closer to fixing their bug.
Thanks @MitchVL for those instructions, but sadly I was unable to reproduce the issue with just that information. Could you provide more detail on what the helper is and how exactly it performs those methods? The answers to the questions in the previous comment would also help.
Well, I am moving my db to Prometheus...
These kinds of intense, glaring issues will be hard to reproduce and cause resiliency issues in the long run.
There may be a fundamental design issue; I will let you know if I can find a way to reproduce it in the future.
@zeebo I just tried with 1.7.1 to drop some measurements which I previously failed to drop using 1.6.4, and now the operation was successful; I just need to wait for the compaction process to finish.
Hi @zeebo
the answers to your questions:
Is this on a fresh database?
-- No, it has been in use for some months.
What indexes are being used for your shards?
-- None... your command returned "null".
Does the issue persist after a restart?
-- Yes
Are you writing to the measurement at the same time you're dropping it?
-- No. When it happened with the 1.6 version, I dropped the measurement and waited some hours before verifying that the measurement didn't exist, then restarted the telegraf service and the problem came back.
Does the issue still happen on version 1.7.1?
-- No, I haven't seen the problem any more, but only because I haven't tested it... I'll tell you when I test.
Regards.
I have the problem that I can drop the measurement in 1.7.1, but the field types are still stored somewhere. So in my case ( https://github.com/influxdata/telegraf/issues/5055 ) I cannot recreate the measurement with differing data types. Is this by design ( https://github.com/influxdata/influxdb/commit/c14b0e81b7d719acc117c0a83c2ee60a5e2c1641 )? If yes, is that behavior documented somewhere? How can I get rid of the measurement and the old field types?
There's a 1.7.2rc0 release that contains fixes (#10517, #10516, #10509) for all of the issues I found in my local testing. It would be very helpful to try it out and see if the issues are fixed.
@bolek2000 Indeed, there were cases where the fields index would not be cleaned up. Were you running DROP MEASUREMENT or a DELETE query? If it was a DROP MEASUREMENT, I would expect the fields index to be cleaned up, so I would try out the rc0 and if it still doesn't work, I'd love to be able to help debug further.
@zeebo I always used DROP MEASUREMENT. How long do you think it should take until the index is cleared? Is it possible to get a .deb package for testing? I only have this problem on the production machine (Ubuntu 16.04), so I would be glad not to have to install build tools etc. there.
@bolek2000 By the time the DROP MEASUREMENT command returns, everything should be deleted. Here are some links to prebuilt binaries for linux/amd64:
I am still not able to drop measurements, and I have been waiting quite some time for an update that fixes this. How is it even possible that you are releasing versions that are so buggy???
I would be ashamed if I put clients in such position, also totally no support on your community forum.
drop measurement "collectd"
drop series from "collectd"
delete from "collectd"
influxdb-1.7.1-1.x86_64
CentOS Linux release 7.5.1804 (Core)
Linux db1 3.10.0-862.11.6.el7.x86_64 #1 SMP Tue Aug 14 21:49:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
@f1-outsourcing ... I cannot believe some comments I read...
"ashamed", "no support" ...
Well... you get a product that is working (at least for me) at 98% very well... There are some problems, it's true, but you get it __for free__ with help from zeebo and others. I disagree completely with your comment.
Instead of these kinds of comments, try to be proactive: take the code yourself and make the corrections to help others. At least explain the problem, what you tried, etc...
Regards.
I agree with @tinotuno; @f1-outsourcing's comment is totally counterproductive. I also had an issue with influxdb, but the devs were very helpful and after exchanging some information they were able to find and fix the issue I had, and I'm grateful for that.
Some information about the size of the dataset you are trying to drop might be helpful: what retention policies you have, shard sizes, etc. If you have tens of GB of data and you are trying to drop one measurement in a database that has dozens of measurements, the drop process will trigger a compaction which might take time to finish.
Did you try running this to monitor compaction while you execute the drop?
SELECT /.*CompactionsActive/ FROM "_internal"."monitor"."tsm1_engine" where time > now() - 1m group by id
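For reference, the same check run non-interactively might look like this (a sketch, assuming a default local instance):
```
influx -execute 'SELECT /.*CompactionsActive/ FROM "_internal"."monitor"."tsm1_engine" WHERE time > now() - 1m GROUP BY id'
```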
@f1-outsourcing I’m sorry you’re still experiencing this problem. From your post, it looks like you’re running 1.7.1. Have you tried the 1.7.2rc0 release candidate, and if so, did you still experience the problem? Also, how is the bug manifesting? Is it that the series still shows up in queries like “SHOW MEASUREMENTS”, or does it show up when querying for data, or some combination? Having as detailed and specific answers as possible really helps me narrow down where the bugs may exist in the system. Thanks.
Also, thank you everyone for the patience and details. This issue can be very frustrating so it’s important we all respond with empathy. The goal is to get it fixed as robustly and quickly as possible.
InfluxDB 1.7.2 is now available. It contains a number of improvements that have been made to Delete and Drop overall -- much of which was based on the inputs provided here.
We appreciate the community members who flagged this and constructively collaborated with us on this challenging issue. We had existing test cases in place and I personally used DROP MEASUREMENT regularly with previous versions -- and there were no issues. The variety of usage and examples provided was helpful in identifying the specific issues. But, as with all software, there may be additional issues that we'll need to address. We can and we will as they are identified.
We are awaiting feedback in terms of whether all the cases have been addressed and that we can close out this issue. Please let us know what you experience. Thanks...
@f1-outsourcing ... I cannot believe some comments I read...
"ashamed", "no support" ...
Well... you get a product that is working (at least for me) at 98% very well...
So let's see how you or your family react when you drive a car that will only take you to 98% of the destination.
There are some problems, it's true, but you get it for free with help from zeebo and others. I disagree completely with your comment.
The "for free" argument is totally irrelevant. If your kid gets something for free at the butcher, do you also accept that it previously fell on the floor or came from the garbage? It is such an unprofessional, narrow-minded point of view.
Instead of these kinds of comments, try to be proactive: take the code yourself and make the corrections to help others.
At least explain the problem, what you tried, etc...
If everyone has this 'pussy' attitude, you will get into a downward spiral where mistakes are 'ok'. Some things are not 'ok' to have in a production release. If you do have them, you should have a thorough look at your development cycle.
If a basketball team is not performing, do you ever see the coach entering the court and join the team? Besides I (and others) have documented issues quite thoroughly on the community forum.
I agree with @tinotuno; @f1-outsourcing's comment is totally counterproductive. I also had an issue with influxdb, but the devs were very helpful and after exchanging some information they were able to find and fix the issue I had, and I'm grateful for that.
Some information about the size of the dataset you are trying to drop might be helpful: what retention policies you have, shard sizes, etc. If you have tens of GB of data and you are trying to drop one measurement in a database that has dozens of measurements, the drop process will trigger a compaction which might take time to finish.
I have a default setup: 4 servers sending some data, around 4000 series, 15 measurements sent from collectd. I don't have retention policies yet; I was about to apply them with downsampling. The disk usage has been growing constantly, up to 60GB. When I started trying to drop the old measurements of _internal (that were in the collections database) I somehow, somewhere lost 20GB.
I don't think it is a matter of waiting for some process to finish, because CPU load is low and days pass between my attempts at working on this.
Did you try running this to monitor compaction while you execute the drop?
SELECT /.*CompactionsActive/ FROM "_internal"."monitor"."tsm1_engine" where time > now() - 1m group by id
Thanks, I will make a note of this, so I can run it in the future
InfluxDB 1.7.2 is now available. It contains a number of improvements that have been made to Delete and Drop overall -- much of which was based on the inputs provided here.
We appreciate the community members who flagged this and constructively collaborated with us on this challenging issue. We had existing test cases in place and I personally used DROP MEASUREMENT regularly with previous versions -- and there were no issues. The variety of usage and examples provided was helpful in identifying the specific issues. But, as with all software, there may be additional issues that we'll need to address. We can and we will as they are identified.
We are awaiting feedback in terms of whether all the cases have been addressed and that we can close out this issue. Please let us know what you experience. Thanks...
Thanks for releasing 1.7.2. However, it looks like I still have the same issue.
show databases;
precision rfc3339
use "collections"."autogen"
show measurements
drop measurement collectd
show measurements
drop series from "collectd"
show series from "collectd"
drop series from "collectd" where "hostname"='db1'
select * from collectd order by time desc limit 10
name: collectd
time batchesTx batchesTxFail bind bytesRx droppedPointsInvalid hostname pointsParseFail pointsRx pointsTx readFail
---- --------- ------------- ---- ------- -------------------- -------- --------------- -------- -------- --------
2018-01-16T23:59:50Z 2732 0 :25826 868809792 1889925 db1 0 11669872 11668618 0
2018-01-16T23:59:40Z 2731 0 :25826 868506024 1889332 db1 0 11665765 11664594 0
2018-01-16T23:59:30Z 2730 0 :25826 868258469 1888810 db1 0 11662375 11660162 0
2018-01-16T23:59:20Z 2729 0 :25826 868011499 1888327 db1 0 11658974 11656125 0
2018-01-16T23:59:10Z 2728 0 :25826 867722193 1887732 db1 0 11655084 11652582 0
delete from "collectd"
select * from collectd order by time desc limit 2
name: collectd
time batchesTx batchesTxFail bind bytesRx droppedPointsInvalid hostname pointsParseFail pointsRx pointsTx readFail
---- --------- ------------- ---- ------- -------------------- -------- --------------- -------- -------- --------
2018-01-16T23:59:50Z 2732 0 :25826 868809792 1889925 db1 0 11669872 11668618 0
I am verifying the state of the database with this:
[@db1 ~]# influx_inspect verify -dir /var/lib/influxdb
/var/lib/influxdb/data/_internal/monitor/4/000000001-000000001.tsm: healthy
/var/lib/influxdb/data/_internal/monitor/5/000000003-000000002.tsm: healthy
/var/lib/influxdb/data/collections/autogen/14/000000276-000000002.tsm: healthy
/var/lib/influxdb/data/collections/autogen/14/000000276-000000003.tsm: healthy
/var/lib/influxdb/data/collections/autogen/14/000000278-000000001.tsm: healthy
/var/lib/influxdb/data/collections/autogen/20/000000383-000000002.tsm: healthy
/var/lib/influxdb/data/collections/autogen/20/000000383-000000003.tsm: healthy
/var/lib/influxdb/data/collections/autogen/20/000000383-000000004.tsm: healthy
/var/lib/influxdb/data/collections/autogen/27/000000305-000000005.tsm: healthy
/var/lib/influxdb/data/collections/autogen/27/000000305-000000006.tsm: healthy
/var/lib/influxdb/data/collections/autogen/27/000000305-000000007.tsm: healthy
/var/lib/influxdb/data/collections/autogen/273/000000304-000000002.tsm: healthy
/var/lib/influxdb/data/collections/autogen/273/000000304-000000003.tsm: healthy
/var/lib/influxdb/data/collections/autogen/273/000000304-000000004.tsm: healthy
/var/lib/influxdb/data/collections/autogen/450/000000301-000000002.tsm: healthy
/var/lib/influxdb/data/collections/autogen/450/000000301-000000003.tsm: healthy
/var/lib/influxdb/data/collections/autogen/450/000000301-000000004.tsm: healthy
/var/lib/influxdb/data/collections/autogen/5/000000518-000000002.tsm: healthy
/var/lib/influxdb/data/collections/autogen/5/000000518-000000003.tsm: healthy
/var/lib/influxdb/data/collections/autogen/5/000000518-000000004.tsm: healthy
/var/lib/influxdb/data/collections/autogen/627/000000128-000000004.tsm: healthy
/var/lib/influxdb/data/collections/autogen/627/000000160-000000003.tsm: healthy
/var/lib/influxdb/data/collections/autogen/627/000000192-000000003.tsm: healthy
/var/lib/influxdb/data/collections/autogen/627/000000224-000000003.tsm: healthy
/var/lib/influxdb/data/collections/autogen/627/000000232-000000002.tsm: healthy
/var/lib/influxdb/data/collections/autogen/627/000000240-000000002.tsm: healthy
/var/lib/influxdb/data/collections/autogen/627/000000241-000000001.tsm: healthy
/var/lib/influxdb/data/collections/autogen/627/000000242-000000001.tsm: healthy
/var/lib/influxdb/data/collections/autogen/627/000000244-000000001.tsm: healthy
/var/lib/influxdb/data/collections/autogen/66/000000037-000000002.tsm: healthy
/var/lib/influxdb/data/collections/autogen/66/000000039-000000001.tsm: healthy
/var/lib/influxdb/data/collections/autogen/96/000000405-000000002.tsm: healthy
/var/lib/influxdb/data/collections/autogen/96/000000405-000000003.tsm: healthy
/var/lib/influxdb/data/collections/autogen/96/000000405-000000004.tsm: healthy
/var/lib/influxdb/data/collections/autogen/97/000000390-000000002.tsm: healthy
/var/lib/influxdb/data/collections/autogen/97/000000390-000000003.tsm: healthy
/var/lib/influxdb/data/collections/autogen/97/000000390-000000004.tsm: healthy
/var/lib/influxdb/data/test/autogen/26/000000001-000000002.tsm: healthy
/var/lib/influxdb/data/test/autogen/28/000000001-000000002.tsm: healthy
/var/lib/influxdb/data/test/autogen/29/000000001-000000002.tsm: healthy
/var/lib/influxdb/data/test/autogen/30/000000001-000000002.tsm: healthy
/var/lib/influxdb/data/test/autogen/67/000000003-000000003.tsm: healthy
Broken Blocks: 0 / 4754914, in 704.375546668s
I will give it another try tonight.
Still nothing; changing the logging level from warn -> info -> debug also does not show anything (I only see lvl=info).
This is so strange... I've run through exactly the same commands and everything works.
> select count(*) from docker_container_status
name: docker_container_status
time count_exitcode count_finished_at count_oomkilled count_pid count_started_at
---- -------------- ----------------- --------------- --------- ----------------
0 374330 158 374330 374330 374330
> drop series from docker_container_status
> select count(*) from docker_container_status
name: docker_container_status
time count_exitcode count_oomkilled count_pid count_started_at
---- -------------- --------------- --------- ----------------
0 5 5 5 5
I expected there to be some data as the measurement is being populated while I executed the drop.
Another example:
> select count(*) from docker_container_mem
name: docker_container_mem
time count_active_anon count_active_file count_cache count_container_id count_hierarchical_memory_limit count_inactive_anon count_inactive_file count_limit count_mapped_file count_max_usage count_pgfault count_pgmajfault count_pgpgin count_pgpgout count_rss count_rss_huge count_total_active_anon count_total_active_file count_total_cache count_total_inactive_anon count_total_inactive_file count_total_mapped_file count_total_pgfault count_total_pgmajfault count_total_pgpgin count_total_pgpgout count_total_rss count_total_rss_huge count_total_unevictable count_total_writeback count_unevictable count_usage count_usage_percent count_writeback
---- ----------------- ----------------- ----------- ------------------ ------------------------------- ------------------- ------------------- ----------- ----------------- --------------- ------------- ---------------- ------------ ------------- --------- -------------- ----------------------- ----------------------- ----------------- ------------------------- ------------------------- ----------------------- ------------------- ---------------------- ------------------ ------------------- --------------- -------------------- ----------------------- --------------------- ----------------- ----------- ------------------- ---------------
0 374122 374122 374122 374125 374122 374122 374122 374130 374122 374130 374122 374122 374122 374122 374122 374122 374122 374122 374117 374122 374122 374117 374122 374122 374117 374122 374122 374122 374122 374122 374122 374130 374130 374122
> drop series from docker_container_mem where host = 'telegraf-getting-started'
> select count(*) from docker_container_mem
name: docker_container_mem
time count_active_anon count_active_file count_cache count_container_id count_hierarchical_memory_limit count_inactive_anon count_inactive_file count_limit count_mapped_file count_max_usage count_pgfault count_pgmajfault count_pgpgin count_pgpgout count_rss count_rss_huge count_total_active_anon count_total_active_file count_total_cache count_total_inactive_anon count_total_inactive_file count_total_mapped_file count_total_pgfault count_total_pgmajfault count_total_pgpgin count_total_pgpgout count_total_rss count_total_rss_huge count_total_unevictable count_total_writeback count_unevictable count_usage count_usage_percent count_writeback
---- ----------------- ----------------- ----------- ------------------ ------------------------------- ------------------- ------------------- ----------- ----------------- --------------- ------------- ---------------- ------------ ------------- --------- -------------- ----------------------- ----------------------- ----------------- ------------------------- ------------------------- ----------------------- ------------------- ---------------------- ------------------ ------------------- --------------- -------------------- ----------------------- --------------------- ----------------- ----------- ------------------- ---------------
0 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5
@f1-outsourcing -- can you run and provide the output of:
./influx_inspect report /var/lib/influxdb/data
also, can you share the output of:
influx -database <database> -execute 'show stats' | grep indexType | sort | uniq
Just wanted to drop a line and say that I was hit by the same bug and the following steps fixed it.
# do everything under the user running influx or you will end up with bad permissions
su -l influxdb
# convert the TSM shards to TSI format (old -> new format/type)
influx_inspect buildtsi -datadir /location_of_influxdb_data/ -waldir /location_of_influxdb_wal/
# do an influxdb restart to be sure the new shard files are loaded and OK
systemctl restart influxdb.service
# DONE. You should be able to drop whatever you want
DROP MEASUREMENT "godKnowsWhat"
--
Thanks for that report. I think the bugs may have caused the on-disk data to mismatch what's in the index in some cases, so if you're on inmem, see if upgrading to 1.7.2 and TSI fixes it, and if you're already on TSI, try rebuilding the index.
Hi,
2 comments:
@servergeeks : thank you, but if you have no shell for the influxdb user (my case on CentOS) you must run the commands like this:
su -s /bin/bash -c 'influx_inspect buildtsi -datadir /var/lib/influxdb/data/ -waldir /var/lib/influxdb/wal/' influxdb
@zeebo : Thank you. I'm trying to "rebuild" the index, but all I get is: "tsi1 index already exists, skipping". I didn't find any option to erase the index and remake it... Should we erase all the index directories by hand??
When I run influx, for example, the command knows where the db, index, etc. are. Why don't you implement commands like "influxdb rebuild-index", "influxdb delete series...", etc.? It is so complicated having to pass the wal directory, the datadir, the index path... I'm completely lost.
Thank you.
Sorry for the lack of details. I did some checking around and couldn't find any documentation about rebuilding. I did take some time to come up with this script to help move all the index directories into a backup directory:
find "$DATA_DIR" -type d -name index | while read -r index; do
  mkdir -p "$(dirname "backup/$index")";
  mv "$index" "$(dirname "backup/$index")";
done
After running that, you should be able to run buildtsi normally. If all seems well after that, do whatever with the backup directory.
Well, thanks. For helping other people, here are the steps I did:
systemctl stop influxdb
cp -a /var/lib/influxdb /var/lib/influxdb.backup
My data is very important... verify that both copies are OK:
diff <(find /var/lib/influxdb -type f -exec md5sum {} + | sort -k 2 | cut -f1 -d" ") <(find /var/lib/influxdb.backup -type f -exec md5sum {} + | sort -k 2 | cut -f1 -d" ")
Erase all the indexes:
find /var/lib/influxdb -type d -name index | while read index; do rm -Rf "$index" ; done
Verify that there is no index left in the directory:
find /var/lib/influxdb/ -type d -name index | wc -l (should be 0)
Build the index:
su -s /bin/bash -c 'influx_inspect buildtsi -datadir /var/lib/influxdb/data/ -waldir /var/lib/influxdb/wal/' influxdb
Restart influxdb:
systemctl restart influxdb
That's all.
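To double-check the result afterwards, something like the following should show whether the offending measurement and its series are really gone. This is a sketch; the database name "mydb" is a placeholder.
```
influx -database mydb -execute 'SHOW MEASUREMENTS'              # dropped measurements should no longer appear
influx -database mydb -execute 'SHOW SERIES EXACT CARDINALITY'  # and the series count should have gone down
```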
Awesome. If anyone else is experiencing this and is willing to send me their tsm and tsi index files, that might help in figuring out what went wrong, if it can be automatically fixed, and if it has been prevented for the future.
Well, thanks. For helping other people. The steps I did:
systemctl stop influxdb
cp -a /var/lib/influxdb /var/lib/influxdb.backup
My data is very important... verify that both copies are OK:
diff <(find /var/lib/influxdb -type f -exec md5sum {} + | sort -k 2 | cut -f1 -d" ") <(find /var/lib/influxdb.backup -type f -exec md5sum {} + | sort -k 2 | cut -f1 -d" ")
Use rsync for this
also, can you share the output of:
influx -database <database> -execute 'show stats' | grep indexType | sort | uniq
tags: database=collections, engine=tsm1, id=14, indexType=tsi1, path=/var/lib/influxdb/data/collections/autogen/14, retentionPolicy=autogen, walPath=/var/lib/influxdb/wal/collections/autogen/14
tags: database=collections, engine=tsm1, id=20, indexType=tsi1, path=/var/lib/influxdb/data/collections/autogen/20, retentionPolicy=autogen, walPath=/var/lib/influxdb/wal/collections/autogen/20
tags: database=collections, engine=tsm1, id=273, indexType=tsi1, path=/var/lib/influxdb/data/collections/autogen/273, retentionPolicy=autogen, walPath=/var/lib/influxdb/wal/collections/autogen/273
tags: database=collections, engine=tsm1, id=27, indexType=tsi1, path=/var/lib/influxdb/data/collections/autogen/27, retentionPolicy=autogen, walPath=/var/lib/influxdb/wal/collections/autogen/27
tags: database=collections, engine=tsm1, id=450, indexType=tsi1, path=/var/lib/influxdb/data/collections/autogen/450, retentionPolicy=autogen, walPath=/var/lib/influxdb/wal/collections/autogen/450
tags: database=collections, engine=tsm1, id=5, indexType=tsi1, path=/var/lib/influxdb/data/collections/autogen/5, retentionPolicy=autogen, walPath=/var/lib/influxdb/wal/collections/autogen/5
tags: database=collections, engine=tsm1, id=627, indexType=tsi1, path=/var/lib/influxdb/data/collections/autogen/627, retentionPolicy=autogen, walPath=/var/lib/influxdb/wal/collections/autogen/627
tags: database=collections, engine=tsm1, id=66, indexType=tsi1, path=/var/lib/influxdb/data/collections/autogen/66, retentionPolicy=autogen, walPath=/var/lib/influxdb/wal/collections/autogen/66
tags: database=collections, engine=tsm1, id=96, indexType=tsi1, path=/var/lib/influxdb/data/collections/autogen/96, retentionPolicy=autogen, walPath=/var/lib/influxdb/wal/collections/autogen/96
tags: database=collections, engine=tsm1, id=97, indexType=tsi1, path=/var/lib/influxdb/data/collections/autogen/97, retentionPolicy=autogen, walPath=/var/lib/influxdb/wal/collections/autogen/97
tags: database=_internal, engine=tsm1, id=4, indexType=inmem, path=/var/lib/influxdb/data/_internal/monitor/4, retentionPolicy=monitor, walPath=/var/lib/influxdb/wal/_internal/monitor/4
tags: database=test, engine=tsm1, id=26, indexType=tsi1, path=/var/lib/influxdb/data/test/autogen/26, retentionPolicy=autogen, walPath=/var/lib/influxdb/wal/test/autogen/26
tags: database=test, engine=tsm1, id=28, indexType=tsi1, path=/var/lib/influxdb/data/test/autogen/28, retentionPolicy=autogen, walPath=/var/lib/influxdb/wal/test/autogen/28
tags: database=test, engine=tsm1, id=29, indexType=tsi1, path=/var/lib/influxdb/data/test/autogen/29, retentionPolicy=autogen, walPath=/var/lib/influxdb/wal/test/autogen/29
tags: database=test, engine=tsm1, id=30, indexType=tsi1, path=/var/lib/influxdb/data/test/autogen/30, retentionPolicy=autogen, walPath=/var/lib/influxdb/wal/test/autogen/30
tags: database=test, engine=tsm1, id=67, indexType=tsi1, path=/var/lib/influxdb/data/test/autogen/67, retentionPolicy=autogen, walPath=/var/lib/influxdb/wal/test/autogen/67
./influx_inspect report /var/lib/influxdb/data
DB RP Shard File Series New (est) Min Time Max Time Load Time
_internal monitor 4 000000001-000000001.tsm 283 282 2018-01-16T15:21:50Z 2018-01-16T23:59:50Z 1.582704ms
collections autogen 5 000000518-000000002.tsm 6122 6116 2018-04-20T17:17:00.89970615Z 2018-10-07T23:59:56.267065474Z 6.84388ms
_internal monitor 5 000000003-000000002.tsm 273 108 2018-03-09T17:23:10Z 2018-03-09T17:25:20Z 205.604µs
collections autogen 5 000000518-000000004.tsm 2981 2985 2018-04-20T17:16:56.728210179Z 2018-10-07T23:59:57.757807494Z 3.697606ms
collections autogen 5 000000518-000000003.tsm 4914 4962 2018-04-20T17:17:06.048491248Z 2018-10-07T23:59:57.73049176Z 6.470641ms
collections autogen 14 000000282-000000002.tsm 7670 7 2018-10-08T00:00:05.657784789Z 2018-10-12T11:05:25.661060562Z 7.473614ms
collections autogen 14 000000282-000000003.tsm 5946 3 2018-10-08T00:00:05.657194272Z 2018-10-12T11:05:25.659709601Z 5.670971ms
collections autogen 20 000000383-000000003.tsm 5323 454 2018-11-12T12:09:39.709188569Z 2018-11-18T23:59:57.697625927Z 6.763005ms
collections autogen 20 000000383-000000002.tsm 5693 232 2018-11-12T12:09:33.644223912Z 2018-11-18T23:59:51.782128089Z 6.962537ms
collections autogen 20 000000383-000000004.tsm 3424 130 2018-11-12T12:09:33.643067145Z 2018-11-18T23:59:58.776103081Z 4.181172ms
test autogen 26 000000001-000000002.tsm 14 14 2015-08-18T00:00:00Z 2015-08-23T23:54:00Z 171.852µs
collections autogen 27 000000305-000000006.tsm 4796 59 2018-06-18T00:00:00.235249692Z 2018-06-24T23:59:59.46297638Z 6.478479ms
collections autogen 27 000000305-000000005.tsm 5406 22 2015-08-24T00:00:00Z 2018-06-24T23:59:59.221588298Z 6.576819ms
collections autogen 27 000000305-000000007.tsm 373 8 2018-06-18T00:00:00.232505126Z 2018-06-24T23:59:59.46297638Z 536.849µs
test autogen 28 000000001-000000002.tsm 14 0 2015-08-31T00:00:00Z 2015-09-06T23:54:00Z 164.798µs
test autogen 29 000000001-000000002.tsm 14 0 2015-09-07T00:00:00Z 2015-09-13T23:54:00Z 133.467µs
test autogen 30 000000001-000000002.tsm 14 0 2015-09-14T00:00:00Z 2015-09-18T21:42:00Z 137.607µs
collections autogen 66 000000043-000000002.tsm 10134 29 2018-07-23T09:43:06.941831662Z 2018-07-23T15:36:18.830666933Z 1.998885ms
test autogen 67 000000003-000000003.tsm 1 1 2018-11-17T23:21:42.81993998Z 2018-11-17T23:21:42.81993998Z 113.771µs
collections autogen 96 000000405-000000004.tsm 4145 57 2018-11-19T00:00:01.579255448Z 2018-11-25T23:59:58.766118111Z 5.015772ms
collections autogen 96 000000405-000000002.tsm 5383 53 2018-11-19T00:00:01.579996535Z 2018-11-25T23:59:51.619516534Z 6.4419ms
collections autogen 96 000000405-000000003.tsm 5095 55 2018-11-19T00:00:01.751702771Z 2018-11-25T23:59:53.999581804Z 6.685757ms
collections autogen 97 000000390-000000002.tsm 5840 26 2018-09-03T00:00:01.350218831Z 2018-09-09T23:59:56.266124017Z 10.367587ms
collections autogen 97 000000390-000000003.tsm 4923 17 2018-09-03T00:00:01.357401072Z 2018-09-09T23:59:57.729283152Z 6.713072ms
collections autogen 97 000000390-000000004.tsm 3671 11 2018-09-03T00:00:01.348966437Z 2018-09-09T23:59:57.752376864Z 4.497502ms
collections autogen 273 000000304-000000004.tsm 602 0 2018-11-26T00:00:01.579279779Z 2018-12-02T23:59:58.764113657Z 870.869µs
collections autogen 273 000000304-000000002.tsm 5056 4 2018-11-26T00:00:01.579944115Z 2018-12-02T23:59:53.995436242Z 6.32782ms
collections autogen 273 000000304-000000003.tsm 4923 10 2018-11-26T00:00:03.95962611Z 2018-12-02T23:59:57.713223482Z 6.287338ms
collections autogen 450 000000301-000000004.tsm 613 0 2018-12-03T00:00:01.579239785Z 2018-12-09T23:59:58.779141748Z 926.888µs
collections autogen 450 000000301-000000002.tsm 5057 0 2018-12-03T00:00:01.579962192Z 2018-12-09T23:59:53.994383204Z 6.392524ms
collections autogen 450 000000301-000000003.tsm 4888 0 2018-12-03T00:00:03.957784577Z 2018-12-09T23:59:57.71713086Z 6.306323ms
collections autogen 627 000000293-000000004.tsm 307 0 2018-12-10T00:00:01.579292958Z 2018-12-16T11:41:37.68327154Z 444.416µs
collections autogen 627 000000293-000000003.tsm 5032 3 2018-12-10T00:00:01.582037876Z 2018-12-16T11:41:37.713413261Z 6.022899ms
collections autogen 627 000000293-000000002.tsm 5224 1 2018-12-10T00:00:01.579931287Z 2018-12-16T11:41:33.988783316Z 6.459025ms
Summary:
Files: 34
Time Range: 2015-08-18T00:00:00Z - 2018-12-16T11:41:37.713413261Z
Duration: 29195h41m37.713413261s
Statistics
Series:
- _internal (est): 390 (2%)
- collections (est): 15244 (97%)
- test (est): 15 (0%)
Total (est): 15661
Completed in 282.074125ms
Just wanted to drop a line and say that I was hit by the same bug and the following steps fixed it.
1. Upgrade to 1.7.2 (apt, yum, whatever)
2. Do TSM to TSI migration
# do everything under the user running influx or you will end up with bad permissions
su -l influxdb
# convert the TSM shards to TSI format (old -> new format/type)
influx_inspect buildtsi -datadir /location_of_influxdb_data/ -waldir /location_of_influxdb_wal/
# do an influxdb restart to be sure the new shard files are loaded and OK
systemctl restart influxdb.service
# DONE. You should be able to drop whatever you want
DROP MEASUREMENT "godKnowsWhat"
--
Did this; got messages like the ones below, but it did not result in being able to drop.
2018-12-21T11:40:18.807789Z info Rebuilding retention policy {"log_id": "0CW8uIJG000", "db_instance": "collections", "db_rp": "autogen"}
2018-12-21T11:40:18.810653Z info Rebuilding shard {"log_id": "0CW8uIJG000", "db_instance": "collections", "db_rp": "autogen", "db_shard_id": 14}
2018-12-21T11:40:18.811422Z info Checking index path {"log_id": "0CW8uIJG000", "db_instance": "collections", "db_rp": "autogen", "db_shard_id": 14, "path": "data/collections/autogen/14/index"}
2018-12-21T11:40:18.812105Z info tsi1 index already exists, skipping {"log_id": "0CW8uIJG000", "db_instance": "collections", "db_rp": "autogen", "db_shard_id": 14, "path": "data/collections/autogen/14/index"}
2018-12-21T11:40:18.812769Z info Rebuilding shard {"log_id": "0CW8uIJG000", "db_instance": "collections", "db_rp": "autogen", "db_shard_id": 20}
2018-12-21T11:40:18.813406Z info Checking index path {"log_id": "0CW8uIJG000", "db_instance": "collections", "db_rp": "autogen", "db_shard_id": 20, "path": "data/collections/autogen/20/index"}
2018-12-21T11:40:18.814084Z info tsi1 index already exists, skipping {"log_id": "0CW8uIJG000", "db_instance": "collections", "db_rp": "autogen", "db_shard_id": 20, "path": "data/collections/autogen/20/index"}
2018-12-21T11:40:18.814774Z info Rebuilding shard {"log_id": "0CW8uIJG000", "db_instance": "collections", "db_rp": "autogen", "db_shard_id": 27}
2018-12-21T11:40:18.814927Z info Checking index path {"log_id": "0CW8uIJG000", "db_instance": "collections", "db_rp": "autogen", "db_shard_id": 27, "path": "data/collections/autogen/27/index"}
2018-12-21T11:40:18.815070Z info tsi1 index already exists, skipping {"log_id": "0CW8uIJG000", "db_instance": "collections", "db_rp": "autogen", "db_shard_id": 27, "path": "data/collections/autogen/27/index"}
Well, thanks. For helping other people, here are the steps I did:
systemctl stop influxdb
cp -a /var/lib/influxdb /var/lib/influxdb.backup
My data is very important... verify that both copies are OK:
diff <(find /var/lib/influxdb -type f -exec md5sum {} + | sort -k 2 | cut -f1 -d" ") <(find /var/lib/influxdb.backup -type f -exec md5sum {} + | sort -k 2 | cut -f1 -d" ")
Erase all the indexes:
find /var/lib/influxdb -type d -name index | while read index; do rm -Rf "$index" ; done
Verify that there is no index left in the directory:
find /var/lib/influxdb/ -type d -name index | wc -l (should be 0)
Build the index:
su -s /bin/bash -c 'influx_inspect buildtsi -datadir /var/lib/influxdb/data/ -waldir /var/lib/influxdb/wal/' influxdb
Restart influxdb:
systemctl restart influxdb
That's all.
I deleted all index directories, like you stated, and then rebuilt them. Now the measurements are gone. It makes me wonder if the data is also gone, because when I ran queries on the now 'disappeared' measurements, they returned data. Or was this all coming from the index?
Awesome. If anyone else is experiencing this and is willing to send me their tsm and tsi index files, that might help in figuring out what went wrong, if it can be automatically fixed, and if it has been prevented for the future.
These are the index directories of my collections database before I erased them
https://rgw.roosit.eu:7480/RoosIT:test/collections-index.tgz
Same issue in 1.7.4
Same issue here in version 1.5.2
Why has this still been an issue for about one year? This is essential for running influxdb in production.
1.5.2 is pretty old. I've seen no problems in this area in the latest stable release, although the occasional restart is still necessary in some cases when dropping series. I'm using inmem indexes.
the occasional restart is still necessary in some cases when dropping series
Correct. Also with 1.7.4. This seems quite strange to me.
Using 1.7.6 with Docker.
Drop measurement doesn't work instantly if there is a lot of data.
I need to restart the server.
Can confirm same issue with 1.7.4. Converted data and wal to tsi1. Had a measurement no longer needed with very high cardinality due to having data as tags that should have instead been fields.
Used a SELECT INTO query to migrate the data to another measurement and convert tags to fields. Cardinality dropped dramatically, improving memory and overall performance. Now I'd like to remove the old measurement.
I tried to run drop measurement, but it appears nothing happens. So instead I ran deletes from that measurement until there was no data left. I restarted influx and then tried to run drop measurement Metrics_test. It just hangs.
Select * from Metrics_test returns nothing.
Select count(*) from Metrics_test returns nothing.
But, show series exact cardinality returns:
name: Metrics_test
count
153288
The problem still exists on 1.7.8 after "stop->start" influxdb.service
nstcc2@nstcloudcc2:~$ influx -database collectd -execute 'show stats' | grep indexType | sort | uniq
tags: database=collectd, engine=tsm1, id=11, indexType=inmem, path=/var/lib/influxdb/data/collectd/ceph_15d/11, retentionPolicy=ceph_15d, walPath=/var/lib/influxdb/wal/collectd/ceph_15d/11
tags: database=collectd, engine=tsm1, id=14, indexType=inmem, path=/var/lib/influxdb/data/collectd/ceph_15d/14, retentionPolicy=ceph_15d, walPath=/var/lib/influxdb/wal/collectd/ceph_15d/14
tags: database=collectd, engine=tsm1, id=17, indexType=inmem, path=/var/lib/influxdb/data/collectd/ceph_15d/17, retentionPolicy=ceph_15d, walPath=/var/lib/influxdb/wal/collectd/ceph_15d/17
tags: database=collectd, engine=tsm1, id=20, indexType=inmem, path=/var/lib/influxdb/data/collectd/ceph_15d/20, retentionPolicy=ceph_15d, walPath=/var/lib/influxdb/wal/collectd/ceph_15d/20
tags: database=collectd, engine=tsm1, id=23, indexType=inmem, path=/var/lib/influxdb/data/collectd/ceph_15d/23, retentionPolicy=ceph_15d, walPath=/var/lib/influxdb/wal/collectd/ceph_15d/23
tags: database=collectd, engine=tsm1, id=26, indexType=inmem, path=/var/lib/influxdb/data/collectd/ceph_15d/26, retentionPolicy=ceph_15d, walPath=/var/lib/influxdb/wal/collectd/ceph_15d/26
tags: database=collectd, engine=tsm1, id=29, indexType=inmem, path=/var/lib/influxdb/data/collectd/ceph_15d/29, retentionPolicy=ceph_15d, walPath=/var/lib/influxdb/wal/collectd/ceph_15d/29
tags: database=collectd, engine=tsm1, id=32, indexType=inmem, path=/var/lib/influxdb/data/collectd/ceph_15d/32, retentionPolicy=ceph_15d, walPath=/var/lib/influxdb/wal/collectd/ceph_15d/32
tags: database=collectd, engine=tsm1, id=35, indexType=inmem, path=/var/lib/influxdb/data/collectd/ceph_15d/35, retentionPolicy=ceph_15d, walPath=/var/lib/influxdb/wal/collectd/ceph_15d/35
tags: database=collectd, engine=tsm1, id=38, indexType=inmem, path=/var/lib/influxdb/data/collectd/ceph_15d/38, retentionPolicy=ceph_15d, walPath=/var/lib/influxdb/wal/collectd/ceph_15d/38
tags: database=collectd, engine=tsm1, id=41, indexType=inmem, path=/var/lib/influxdb/data/collectd/ceph_15d/41, retentionPolicy=ceph_15d, walPath=/var/lib/influxdb/wal/collectd/ceph_15d/41
tags: database=collectd, engine=tsm1, id=5, indexType=inmem, path=/var/lib/influxdb/data/collectd/ceph_15d/5, retentionPolicy=ceph_15d, walPath=/var/lib/influxdb/wal/collectd/ceph_15d/5
tags: database=collectd, engine=tsm1, id=8, indexType=inmem, path=/var/lib/influxdb/data/collectd/ceph_15d/8, retentionPolicy=ceph_15d, walPath=/var/lib/influxdb/wal/collectd/ceph_15d/8
tags: database=_internal, engine=tsm1, id=18, indexType=inmem, path=/var/lib/influxdb/data/_internal/monitor/18, retentionPolicy=monitor, walPath=/var/lib/influxdb/wal/_internal/monitor/18
tags: database=_internal, engine=tsm1, id=21, indexType=inmem, path=/var/lib/influxdb/data/_internal/monitor/21, retentionPolicy=monitor, walPath=/var/lib/influxdb/wal/_internal/monitor/21
tags: database=_internal, engine=tsm1, id=24, indexType=inmem, path=/var/lib/influxdb/data/_internal/monitor/24, retentionPolicy=monitor, walPath=/var/lib/influxdb/wal/_internal/monitor/24
tags: database=_internal, engine=tsm1, id=27, indexType=inmem, path=/var/lib/influxdb/data/_internal/monitor/27, retentionPolicy=monitor, walPath=/var/lib/influxdb/wal/_internal/monitor/27
tags: database=_internal, engine=tsm1, id=30, indexType=inmem, path=/var/lib/influxdb/data/_internal/monitor/30, retentionPolicy=monitor, walPath=/var/lib/influxdb/wal/_internal/monitor/30
tags: database=_internal, engine=tsm1, id=33, indexType=inmem, path=/var/lib/influxdb/data/_internal/monitor/33, retentionPolicy=monitor, walPath=/var/lib/influxdb/wal/_internal/monitor/33
tags: database=_internal, engine=tsm1, id=36, indexType=inmem, path=/var/lib/influxdb/data/_internal/monitor/36, retentionPolicy=monitor, walPath=/var/lib/influxdb/wal/_internal/monitor/36
tags: database=_internal, engine=tsm1, id=39, indexType=inmem, path=/var/lib/influxdb/data/_internal/monitor/39, retentionPolicy=monitor, walPath=/var/lib/influxdb/wal/_internal/monitor/39
nstcc2@nstcloudcc2:~$
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed because it has not had recent activity. Please reopen if this issue is still important to you. Thank you for your contributions.
Does 1.7.9 fix this?
Problem still exists in 1.7.9-1!
show series where "envtype"='dev';
drop series where "envtype"='dev';
show series where "envtype"='dev'; <-- still shows series
It seems that at first it dropped some of the series, but now it is not dropping anything more :(
Just chiming in: the issue still persists in 1.7.10. A DELETE deletes the data, but the measurements and series persist... Is there really no way to fix this?