Enhancements: Support Out-of-Tree CloudStack Cloud Provider

Created on 2 Jan 2019 · 25 Comments · Source: kubernetes/enhancements

Enhancement Description

  • One-line enhancement description (can be used as a release note):
    Support Out-of-Tree CloudStack Cloud Provider by running the cloud-controller-manager
  • Primary contact (assignee):
    @andrewsykim
  • Responsible SIGs:
    SIG Cloud Provider
  • Reviewer(s) - (for LGTM) recommend having 2+ reviewers (at least one from code-area OWNERS file) agreed to review. Reviewers from multiple companies preferred:
    @andrewsykim
  • Approver (likely from SIG/area to which enhancement belongs):
    @andrewsykim
  • Enhancement target (which target equals to which milestone):

    • Alpha release target (x.y)

    • Beta release target (x.y)

    • Stable release target (x.y)
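For context, the out-of-tree pattern this enhancement describes generally means starting each kubelet with the external cloud provider flag and deploying the cloud-controller-manager separately. A minimal sketch follows; the binary name, flag values, and file paths are assumptions for illustration, not taken from this issue:

```shell
# 1. Start every kubelet with the external cloud provider flag, so the
#    kubelet skips in-tree cloud initialization and defers node
#    initialization to an out-of-tree cloud-controller-manager:
kubelet --cloud-provider=external   # ...plus the usual kubelet flags

# 2. Run the CloudStack cloud-controller-manager in the cluster
#    (commonly as a Deployment or DaemonSet), pointing it at the
#    CloudStack API via a cloud-config file (paths are hypothetical):
cloudstack-ccm \
  --cloud-config=/etc/kubernetes/cloudstack.ini \
  --kubeconfig=/etc/kubernetes/ccm.kubeconfig
```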

Ref https://github.com/kubernetes/enhancements/issues/88

/sig cloud-provider


All 25 comments

@ngtuna @sebgoa @svanharmelen are any of you available to take this?

@andrewsykim I left the company that used both Kubernetes and CloudStack about 8 months ago and currently don't use either of them on a regular basis anymore. So unfortunately I will not be able to help out here.

same here. This needs to be picked up by an active community member like @rhtyd

Don't think this issue will be alpha in v1.14.

/assign @andrewsykim

/assign @rhtyd

@justaugustus: GitHub didn't allow me to assign the following users: rhtyd.

Note that only kubernetes members and repo collaborators can be assigned and that issues/PRs can only have 10 assignees at the same time.
For more information please see the contributor guide

In response to this:

/assign @rhtyd

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Hi @andrew, I'm a 1.16 Enhancement Shadow. Is this feature going to be graduating alpha/beta/stable stages in 1.16? Please let me know so it can be added to the 1.16 Tracking Spreadsheet.

Once coding begins or if it already has, please list all relevant k/k PRs in this issue so they can be tracked properly.

I notice there is no KEP for this Enhancement. As a reminder, every enhancement requires a KEP in an implementable state with Graduation Criteria explaining each alpha/beta/stable stages requirements.

Milestone dates are Enhancement Freeze 7/30 and Code Freeze 8/29.

Thank you.

We're aware of at least two CloudStack controller managers (ccms) developed by CloudStack users/contributors:
https://github.com/swisstxt/cloudstack-cloud-controller-manager (cc @onitake @joschi36)
https://github.com/tsuru/custom-cloudstack-ccm (cc @andrestc @cezarsa)

It's possible we will integrate built-in Kubernetes support into CloudStack (via kubeadm, a custom systemvmtemplate, etc.) in future versions, in which case the controller can be part of the CloudStack codebase, and its setup and orchestration will be managed by CloudStack as part of k8s cluster setup and lifecycle management.

@rhtyd We'll gladly transfer ownership of swisstxt/cloudstack-cloud-controller-manager to the Apache/CloudStack project if you decide to accept it.

We still have a few open issues that weren't anticipated when the CCM was factored out of the KCM, and those should definitely be fixed to make the controller production-ready for most users. Other than that, we're using it successfully in production in-house, so I think it would be a good addition.

Thanks @onitake I'll start a discussion thread on dev@ ML and mention your intent.

FYI I've started a discussion on the CloudStack dev mailing list and requested this repository: https://github.com/apache/cloudstack-kubernetes-provider

Hey there @rhtyd -- 1.17 Enhancements shadow here. I wanted to check in and see if you think this will be happening in 1.17?

The current release schedule is:

  • Monday, September 23 - Release Cycle Begins
  • Tuesday, October 15, EOD PST - Enhancements Freeze
  • Thursday, November 14, EOD PST - Code Freeze
  • Tuesday, November 19 - Docs must be completed and reviewed
  • Monday, December 9 - Kubernetes 1.17.0 Released

If you do, I'll add it to the 1.17 tracking sheet (https://bit.ly/k8s117-enhancements). Once coding begins, please list all relevant k/k PRs in this issue so they can be tracked properly. 👍

To be accepted in the release, all enhancements MUST have a merged KEP in an implementable state, with both graduation criteria and a test plan.

Thanks!

Thanks for fixing the docs in https://github.com/kubernetes/website/pull/16129 - I closed the other issue + pull request.

@jeremyrickard @rhtyd and I would be happy to contribute, but I have no idea what exactly is expected right now. Neither of us is currently a member of the sig/cloud-provider team, and while I've read some of the Kubernetes contribution process documents, I don't understand what's currently going on with the cloud providers and how external cloud providers should be handled in the future.

https://github.com/apache/cloudstack-kubernetes-provider is merged and currently only needs ready-to-use container images on a public registry. This will be done very soon. The README on the GitHub repository documents how to use the cloud provider, and the documentation on the k8s website now points to the repository. As far as we're concerned, it's ready to be used and can even serve as a drop-in replacement for the old built-in cloud provider.

What can we do to help the issue move forward?
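For anyone evaluating the provider mentioned above: like the old built-in CloudStack provider, the external CCM is configured through an INI-style cloud config. A minimal sketch is below; the key names follow the in-tree provider's documented format and should be verified against the repository's README before use:

```ini
[Global]
api-url    = https://cloudstack.example.com/client/api
api-key    = <your CloudStack API key>
secret-key = <your CloudStack secret key>
# Optional keys, per the in-tree provider's format:
# project-id    = <CloudStack project UUID>
# ssl-no-verify = false
```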

Hey @onitake, thanks for raising your hand! The best place to start is probably by reaching out to sig-cloud-provider and looping in @andrewsykim and @cheftako. Best way to do that, if you haven't already, is probably via the SIG's slack channel: #sig-cloud-provider or the mailing list.

Generally, there needs to be a KEP for enhancements, and the KEP has some requirements:

  • is in the implementable state
  • defines a test plan
  • defines graduation criteria

Seems like the work for this has largely been accomplished, so it might be as simple as that, but I think it's probably best for @andrewsykim and @cheftako to chime in and we can make the determination of how to handle this situation?

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

/remove-lifecycle stale

Hey there @onitake and @rhtyd -- 1.18 Enhancements shadow here. I wanted to check in and see if you think this Enhancement will be graduating to alpha in 1.18? Do you still need help getting started? The first step would be to write a KEP for this - you could use one of the other cloud provider KEPs as an example.

The current release schedule is:
Monday, January 6th - Release Cycle Begins
Tuesday, January 28th EOD PST - Enhancements Freeze
Thursday, March 5th, EOD PST - Code Freeze
Monday, March 16th - Docs must be completed and reviewed
Tuesday, March 24th - Kubernetes 1.18.0 Released

To be included in the release, this enhancement must have a merged KEP in the implementable status. The KEP must also have graduation criteria and a Test Plan defined.

If you would like to include this enhancement, once coding begins please list all relevant k/k PRs in this issue so they can be tracked properly. 👍

We'll be tracking enhancements here: http://bit.ly/k8s-1-18-enhancements
Thanks!

@johnbelamaric @andrewsykim I will submit a KEP, similar to what's already in https://github.com/kubernetes/enhancements/tree/master/keps/sig-cloud-provider .

I'm not sure we still need it though: Right now we're tracking the cloud provider code in https://github.com/apache/cloudstack-kubernetes-provider/ and the documentation was updated to link there: https://v1-17.docs.kubernetes.io/docs/concepts/cluster-administration/cloud-providers/#cloudstack

I think maintaining all the cloud providers in projects that belong to the stakeholders (in this case the Apache foundation) is better than having them in the k8s project. What's the official opinion on this?

I would defer to the @kubernetes/sig-cloud-provider for that opinion. However, I will say that if there is an existing solution that works and is not part of the k8s org, and no one sees a need for this, we should just close the issue.

I would prefer a KEP for anything going into a CP repo (e.g. https://github.com/kubernetes/cloud-provider-aws). On the plus side, having your repo there means that you are less likely to be broken by changes to k/k, as people can check your code. In addition, we can integrate you into things like the getting-started-on-a-cloud documentation.

@cheftako I discussed this with @andrewsykim on @kubernetes/sig-cloud-provider a while back, and the process of getting a new repository into the k8s project and becoming its owner is prohibitive for newcomers such as @rhtyd and me. Neither of us has contributed much to Kubernetes yet.

This is a tricky situation -- our CI won't let you merge or approve PRs unless you're an org member, so membership is actually required, and we can't sponsor you as org members unless you've first contributed to the project in some way.

Having said that, I would gladly be willing to help both of you become org members so we can move this work along. Our SIG backlog is growing and could use some help: https://github.com/kubernetes/cloud-provider/issues

@onitake hope you don't mind if I close this issue for now, we can reopen it once either of you can become OWNERs of the proposed repo.

/close

@andrewsykim: Closing this issue.

In response to this:

@onitake hope you don't mind if I close this issue for now, we can reopen it once either of you can become OWNERs of the proposed repo.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
