This is different from DROP SERIES. DROP SERIES can take a set of tags or an ID; DELETE SERIES can additionally take a time range.
@corylanou -- I think this is done, right?
@otoolep No, DROP SERIES is complete, but not DELETE SERIES. The difference, from what I understand, is that DELETE will remove data from a series but not actually drop the metadata, even if the delete empties the entire series. DELETE can also remove a partial set of points from a series, whereas DROP always completely drops the series.
That's right, with DELETE you can specify WHERE clause filters on either field values or the time range. So you can remove just part of a series.
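As a sketch of the distinction described above (statement forms follow the 0.9 query language spec; the measurement and tag names are made up):

```sql
-- Drops the matching series and their metadata entirely
DROP SERIES FROM cpu WHERE region = 'uswest'

-- Intended DELETE behavior: removes only the matching points within
-- the time range, leaving the series metadata intact
DELETE FROM cpu WHERE region = 'uswest' AND time < now() - 7d
```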
Any update on this? I'm on 0.9.2.1 and don't see delete functionality.
This is not currently scheduled for any particular release. I would estimate late this year for this particular functionality unless the community votes it up.
To be clear, the docs about the query language (https://influxdb.com/docs/v0.9/query_language/spec.html) that list the delete_stmt and have an example: DELETE FROM cpu WHERE region = 'uswest' just isn't wired up, and doesn't work?
Thanks for the note, @cschneid. I made https://github.com/influxdb/influxdb.com/issues/191 to address the docs issue.
+1
+1
+1
+1
Need to be able to do stuff like:
DELETE FROM measurement WHERE tagKey = 'tagValue'
DELETE FROM measurement WHERE valueKey >= value
DELETE FROM measurement WHERE time < now() - 30d
+1
+1
+1
Deleting data points is essential for any database
Should be able to do
DELETE FROM measurement WHERE time < now() - 30d
like we could in InfluxDB 0.7
@comcomservices for that use case perhaps retention policies cover what you need.
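For reference, retention-policy-based expiry might look like this (the policy and database names are made up for illustration):

```sql
-- Keep data for 30 days, after which InfluxDB expires it automatically
CREATE RETENTION POLICY "thirty_days" ON "mydb" DURATION 30d REPLICATION 1 DEFAULT
```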
@beckettsean We certainly use them, but a lot of the time we need to remove bad data points from a data set, as our clients are often OCD about these types of abnormalities. Right now we add annotations to our graphs to explain these events, but still, deleting points like we could in 0.7 would be much better. At the moment we are working on a script to export everything, delete the series, and then re-import only the data we want, but we feel that this is very far from ideal.
Jon
P.S. Like the graph below, where the voltage is all askew. It would be nice to just have no data during this period, because now this abnormality will stain all of our data and mis-scale our graphs.

Thanks for the details, @comcomservices. Another workaround is to overwrite bad points with an orthogonal field set. Since you're using Grafana I'll assume most of your points have a single field named value. If you find a bad point or series of bad points, you can overwrite them with a different field set that doesn't include value.
Let's say this is the bad point: my_measurement,tag=foo,anothertag=bar value=12 1234567890. You could write this point to the database: my_measurement,tag=foo,anothertag=bar ignore=True 1234567890 and it will silently overwrite the prior point. (The measurement, tag set, and timestamp are identical, and InfluxDB overwrites points, it does not store duplicates.)
Queries for SELECT value... or SELECT MEAN(value)... or SELECT COUNT(value)... will all ignore that point now, because it has no field named value, just a boolean field named ignore. SELECT *... queries would still return the point, but since * cannot be used inside of functions, that shouldn't affect most graph-able queries.
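To illustrate the effect of the overwrite workaround, assuming the point above has been rewritten with only the ignore field:

```sql
-- Returns nothing for the overwritten timestamp (no field named value)
SELECT value FROM my_measurement WHERE tag = 'foo'

-- Still returns the point, showing only the ignore field
SELECT * FROM my_measurement WHERE tag = 'foo'
```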
+1
+1 on this and also #4404 for reporting an error until this is implemented.
+1
+1
+1
+1
You could write this point to the database:
my_measurement,tag=foo,anothertag=bar ignore=True 1234567890 and it will silently overwrite the prior point.
@beckettsean Is this still true? I just tried this, but now my row has both the original field and its value and the new field and its value. (i.e. value = 12 and ignore = True.)
@jordanbtucker with 0.10 it is no longer possible to overwrite the field set of a point. The database stores the union of the two field sets, with any duplicates taking the value from the most recently written point.
I've just noticed that the "delete by overwrite" workaround doesn't work anymore after upgrade to 0.10.
Is there any other workaround available for removing bad datapoints until we have proper DELETE SERIES implementation?
@jordanbtucker @skazi0
Overwriting a point was never a specific design goal with b1/bz1, just a handy side effect of the on-disk representation. Since it was a side effect it wasn't part of the test suite. The new TSM engine doesn't exhibit that same behavior, so the regression slipped right on by.
We are working to implement functionality that encompasses this and more, which is the DELETE SERIES functionality. DELETE SERIES with a narrow enough time range becomes effectively DELETE POINT.
In the interim there is no way to remove a bad field key. You can overwrite it with a new field value, but you cannot delete the point or the field itself.
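So on TSM the closest available mitigation is overwriting the bad field value in place by writing a point with the same measurement, tag set, and timestamp, for example with a sentinel value (line protocol; the values reuse the made-up example from earlier in the thread):

```
my_measurement,tag=foo,anothertag=bar value=0 1234567890
```

The new value replaces the old one for that field, but the point and the field key itself still remain.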
The fact that this behavior was not by design is clear to me.
Too bad there is no similar workaround for the TSM engine.
I guess we'll have to wait for the real feature then.
Any information about the estimated time for the implementation of this must-have functionality?
+1
+1
+1
@comcomservices for that use case perhaps retention policies cover what you need.
@beckettsean Is it possible to apply a new retention policy to the existing points?
What we finally have come to is keeping a copy of all the raw data before putting it into the database, when we get bad data we can delete the entire series and re-import the cleaned original data.
Sad but until a better workaround is available it's all we can do.
@comcomservices You mean you really write the points to a text file that you keep somewhere? You're not doing an export, changing the file, and then re-importing?
@pmvilaca We used to do that, but now we have a simple binary database that stores the raw binary floats and does memory caching to reduce HDD ops. It's a backup for us as well; we have been plagued by corruption of our InfluxDB databases, and this gives us a second layer of protection that exporting and re-importing would not.
+1
You can try doing it like Cassandra, with tombstones to keep performance.
Could you please consider working on this with high priority for 0.12? Working with a database that has no way to delete specific series by time is really difficult.
@max3163 You can delete an entire series using the DROP SERIES command. This issue is about deleting just some of the data within a series, and it is very high priority.
You provide a time series database; for me, the time clause is important in all queries. I just hope you can consider adding this feature really soon.
Is there any idea of when this will be available?
+1
Great!