User Story
As an operator, I would like to set my CAPI management cluster to a Paused or Running state so that I can safely back up or restore my CAPI cluster.
Detailed Description
In order to safely back up, restore, or duplicate a cluster and leave it in a 'stopped' state, the following clusterctl commands are needed:
```
clusterctl state stop
clusterctl state start
clusterctl status
# example status output:
#   CAPI processes are stopped
#   CAPI processes are running
```
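For context, a sketch of how these proposed commands might be used in a backup flow (none of these subcommands exist yet, and velero is just an example backup tool):

```sh
clusterctl state stop                    # quiesce all CAPI controllers
clusterctl status                        # verify: "CAPI processes are stopped"
velero backup create capi-mgmt-backup    # back up the management cluster (any backup tool would do)
clusterctl state start                   # resume reconciliation
clusterctl status                        # verify: "CAPI processes are running"
```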
Anything else you would like to add:
/kind feature
/help
/priority backlog
Today, clusters have a `spec.paused` field you can use to pause reconciliation, but you'd have to patch every cluster you have in your management cluster. Alternatively, another potential and safe solution is to scale the deployments down to `replicas = 0`, back up the cluster, and scale them back up.
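A rough sketch of both workarounds (the namespaces and deployment names below assume a default `clusterctl init` install with the kubeadm providers; adjust for your infrastructure provider):

```sh
# Option 1: pause reconciliation per cluster by setting spec.paused
for c in $(kubectl get clusters -A -o jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name}{"\n"}{end}'); do
  kubectl -n "${c%%/*}" patch cluster "${c##*/}" --type merge -p '{"spec":{"paused":true}}'
done

# Option 2: stop the controllers entirely by scaling them to zero
# (deployment names assume a default clusterctl init install)
kubectl -n capi-system scale deployment capi-controller-manager --replicas=0
kubectl -n capi-kubeadm-bootstrap-system scale deployment capi-kubeadm-bootstrap-controller-manager --replicas=0
kubectl -n capi-kubeadm-control-plane-system scale deployment capi-kubeadm-control-plane-controller-manager --replicas=0

# ... back up the management cluster ...

# then scale back to 1 (or patch spec.paused back to false) to resume
```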
@vincepri:
This request has been marked as needing help from a contributor.
Please ensure the request meets the requirements listed here.
If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.
/milestone Next
/priority awaiting-more-evidence
Will build this as an external CLI tool and reach into mover.go to pause or resume the app.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
@casibbald is this still a problem or can we close the issue?
I came up with a very ugly workaround; however, it would be good to have this available from the clusterctl binary itself.
/remove-lifecycle rotten
/remove-priority awaiting-more-evidence
/area clusterctl
Could we potentially leverage the management cluster operator to allow something like this? @wfernandes