ILM has been included in Elasticsearch, which allows us to manage the lifecycle
of an index; however, this lifecycle management does not currently include
periodic snapshots of the index.
In order to provide a full replacement for other periodic cluster-management
tools out there (such as Curator), we should add snapshot management to
Elasticsearch.
Ideally this would fall under the same sort of management that ILM provides; the
difference, however, is that snapshots are multi-index, whereas index lifecycle
policies are applied to a single index (and all actions are executed on a single
index).
We need a way of specifying periodic and/or scheduled snapshots of a given set
of indices using a specific repository, perhaps something like this (all of the
API is made up):
PUT /_slm/policy/snapshot-every-day
{
  // Run this every day at 2:30am
  "schedule": "0 30 2 * * ?",
  // What the snapshot should be named, supporting date-math
  "name": "<production-snap-{now/d}>",
  // Which snapshot repository to use for the snapshot
  "repository": "my-s3-repository",
  // "config" is a map of all the options that the regular snapshot API takes
  "config": {
    "indices": ["foo-*", "important"],
    "ignore_unavailable": true,
    "include_global_state": false
  }
}
Elasticsearch will then manage taking snapshots of the given indices into the
given repository on the schedule specified. The status of the snapshots would
have to be stored somewhere, likely in an index (.tasks perhaps?).
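For illustration only, each run's outcome could be recorded as a document along these lines (the index name and fields here are made up, like the rest of the API):

POST /.slm-history/_doc
{
  // Hypothetical record of one policy invocation
  "policy": "snapshot-every-day",
  "snapshot_name": "production-snap-2019.06.24",
  "repository": "my-s3-repository",
  "start_time": "2019-06-24T02:30:00.000Z",
  "end_time": "2019-06-24T02:31:12.000Z",
  "success": true
}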
Some other things that would be nice (but not required) to support:

- "max_count": 10, meaning to keep at most 10 snapshots
- "max_age": "7d", meaning to keep a week's worth of snapshots (see the sketch after the list below)

Work items:
- API for CRUD of policies at /_slm/policy (currently GET|PUT|DELETE /_ilm/snapshot/<policy-id>) (@dakrone) #41320
- _meta in CreateSnapshotRequest (@gwbrown) #41281
- _meta associating each snapshot with the policy that created it (@gwbrown) #43132
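To make the retention options concrete, they might hang off the policy like this (again, entirely made-up syntax, mirroring the example above):

PUT /_slm/policy/snapshot-every-day
{
  "schedule": "0 30 2 * * ?",
  "name": "<production-snap-{now/d}>",
  "repository": "my-s3-repository",
  // Hypothetical retention block using the options above
  "retention": {
    "max_count": 10,
    "max_age": "7d"
  },
  "config": { ... }
}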
@dakrone asked the cloud team to discuss our requirements, with a view to potentially replacing our snapshot logic down the road. I'm going to talk about how it currently works and then very quickly summarize in a list of (high level) requirements at the bottom:
The Cloud requirements I infer:
cc @nordbergm / @paulcoghlan - feel free to add/correct/amend anything you think is useful. Don't reply, just edit the comment directly.
One of the major things for me is the snapshot resiliency work we've been doing for the past 6 months or more. This effectively boils down to the challenges S3 eventual consistency has caused us with corrupt snapshots, and the cool down periods we've had to introduce as a result.
These are very specific Cloud/S3 challenges. I don't believe GCP necessarily has the same issues because GCS is way more consistent. Still, if we don't consider it I worry we'll suffer more snapshot corruption again.
Another area I've been thinking about is access control. Snapshots in Cloud today are controlled by cloud admins, and can't easily be meddled with by the cluster admin. They can reduce the retention to a minimum of 2 snapshots, and they can disable snapshots if they go through support and understand the risks, etc.
With ILM/SLM it would be good to understand what kind of access the cluster admins would have to configuration and how we could restrict access. In case of disaster we want to be sure the cluster has snapshots and the cluster admin hasn't broken the configuration by accident.
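One way this could be modeled (a sketch only; the privilege name is hypothetical, though the role API shown is the existing security API) is a dedicated cluster privilege that Cloud grants or withholds:

POST /_security/role/slm_admin
{
  // Hypothetical dedicated privilege gating SLM policy changes
  "cluster": ["manage_slm"]
}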
@dakrone Would it be possible to add some metadata argument to the create policy API that would help the user figure out which policy created the snapshot? Two options come to mind:
@yaronp68 interesting suggestion, I do think that would be useful.
@original-brownbear what do you think about us adding something like that to the CreateSnapshotRequest? I'm not sure exactly what the backwards compatibility issues could be (I assume nothing too difficult to work around).
@dakrone @yaronp68
technically speaking there is no reason not to add a metadata field to the snapshot in the cluster state and store it in the repository, as far as I can see.
The question is whether it's worth the added complexity, I guess :) I'm not against it, but if we can do it without adding more complexity to the cluster state and repository, that may be better.
=> my question: If you're already planning to "Persist a history of successful/failed snapshots in an ES index", why not just add the metadata for each snapshot to the history in that index?
@dakrone maybe it's possible to persist to a metadata file in the repository and not in cluster state to avoid changes to cluster state
> my question: If you're already planning to "Persist a history of successful/failed snapshots in an ES index", why not just add the metadata for each snapshot to the history in that index?
@original-brownbear I believe the idea is that when listing snapshots for a repository, you could then tell which snapshot came from what (manually triggered, triggered via policyA, policyB, etc). We will have something on the other side (for a policy, what's the last snapshot taken), I think the desire was for something the other direction.
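For example (an assumed response shape, not a committed API), a snapshot listing could then carry the originating policy alongside the usual fields:

GET /_snapshot/my-s3-repository/_all

{
  "snapshots": [
    {
      "snapshot": "production-snap-2019.06.24",
      "indices": ["foo-1", "important"],
      "state": "SUCCESS",
      // Hypothetical field populated from the policy that triggered it
      "metadata": { "policy": "policyA" }
    }
  ]
}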
> @dakrone maybe it's possible to persist to a metadata file in the repository and not in cluster state to avoid changes to cluster state
@yaronp68 we need to persist at least one end state in the cluster state, because in the event of a snapshot failure, we wouldn't be able to persist it in a metadata file in the repo, because the snapshot failed :) so we have to have a place to record something like "your snapshot failed because of XYZ" for users to see.
@yaronp68
> maybe it's possible to persist to a metadata file in the repository and not in cluster state to avoid changes to cluster state
I would rather we not do this, sorry. The repository is currently undergoing some redesign to resolve issues like https://github.com/elastic/elasticsearch/issues/38941.
If we start putting custom blobs in the repo, that's gonna be one more thing to worry about when we make changes there. Plus, the eventually consistent nature of some blob stores like S3 will also create problems for a metadata file/blob that would be read and updated, I would assume?
-> the private index for the snapshot history seems like the safest bet to me still. If that's not an option for some reason the cluster state is still the better option compared to a custom repository blob.
> the private index for the snapshot history seems like the safest bet to me still. If that's not an option for some reason the cluster state is still the better option compared to a custom repository blob.
We are planning to store the latest success and failure in the cluster state (only one of each), and store the result for every snapshot invocation into an index for history/alerting purposes.
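As a sketch (all field names here are placeholders), a policy's status surfaced from the cluster state might then look like:

GET /_slm/policy/snapshot-every-day

{
  "snapshot-every-day": {
    "policy": { ... },
    // Only the most recent result of each kind lives in the cluster state
    "last_success": {
      "snapshot_name": "production-snap-2019.06.24",
      "time": "2019-06-24T02:31:12.000Z"
    },
    "last_failure": {
      "snapshot_name": "production-snap-2019.06.20",
      "details": "your snapshot failed because of XYZ",
      "time": "2019-06-20T02:30:04.000Z"
    }
  }
}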
> I believe the idea is that when listing snapshots for a repository, you could then tell which snapshot came from what (manually triggered, triggered via policyA, policyB, etc). We will have something on the other side (for a policy, what's the last snapshot taken), I think the desire was for something the other direction.
I see. In that case I think adding this information to the cluster state (and then as a result to the snapshot metadata we store in the repository) may be an option. In the end, the repository is the only place we can store that metadata to if we want to be able to use it with the snapshot list.
> I think adding this information to the cluster state (and then as a result to the snapshot metadata we store in the repository) may be an option.
This is unclear to me; I think the original desire was for something in CreateSnapshotRequest (perhaps an origin String) so when SLM issued the request it could specify the policy name, which is then stored with the snapshot's metadata (just like the list of indices, start time, end time, etc). How does that involve the cluster state?
@dakrone
> How does that involve the cluster state?
Sorry that was needlessly confusing :) Just by virtue of how this is implemented we'd have to add that information to the ephemeral cluster state. It's not important for the feasibility though :) -> I'm fine with adding this to the request and then storing it to the snapshot meta in the repo. That we should be able to do in a BwC manner.
> I'm fine with adding this to the request and then storing it to the snapshot meta in the repo. That we should be able to do in a BwC manner.
Great! I'll open a separate issue for that so we can track it.
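For illustration (the field name and placement here are assumptions, to be settled in that separate issue), the request SLM issues might carry its origin like so, with the value echoed back when listing snapshots:

PUT /_snapshot/my-s3-repository/production-snap-2019.06.24
{
  "indices": "foo-*,important",
  // Hypothetical: filled in by SLM with the originating policy name
  "metadata": { "policy": "snapshot-every-day" }
}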
Going to close this as SLM has been merged to master and 7.x and will be in the 7.4 release.
Further work on retention can be found at https://github.com/elastic/elasticsearch/issues/43663