Hi,
I quickly tested whether it was possible to run kind inside a docker container with my modded version of DinD (zoobab/dind), and I could successfully run a cluster in there.
I will try to make a first version that runs kind as a docker one-liner, like:
$ docker run --privileged kind
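For now it would probably look something like this (a rough sketch only - it assumes an image that bundles dockerd and the kind binary, and the names and flags below are illustrative):
$ docker run --privileged -d --name kind-dind zoobab/dind   # start the nested docker daemon
$ docker exec kind-dind kind create cluster                 # run kind against that daemon
$ docker exec kind-dind kind delete cluster                 # tear the cluster down again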
We could do this very easily, but it's not clear how useful it is.
Nesting your cluster in another layer of docker makes it harder to work with. We do run kind on Kubernetes clusters, for example, but that setup does not look like this.
Can you elaborate on why we should do this and what the usage looks like?
not sure if I understand this, but wouldn't something like this be faster than go get (which requires go) for a quick (throwaway) cluster?
We have CLI binaries available on the releases page with the intention of avoiding go get, FWIW.
Yeah, but downloading releases is still an extra step compared to just docker run.
Not to mention that binary releases usually have all kinds of problems, like "not available for windows" or "not :latest" (as is currently the case).
too many layers... we do have to end up in a host eventually.
not running kind in a container by default gets my +1.
So how we ship and version things is something I've put a lot of thought into getting right. It is important for testing usage that we ship reliable, stable releases with clear upgrade paths, particularly for supporting Kubernetes at HEAD, which is a key end goal.
Yeah, but downloading releases is still an extra step compared to just docker run.
we ship via go for that for now - go get sigs.k8s.io/kind && kind ...
shipping via docker run would mean either:
It would also be another thing to maintain and qualify, which we don't really have the capacity for right now.
Not to mention that binary releases usually have all kinds of problems, like "not available for windows" or "not :latest" (as is currently the case).
That's because we're not shipping for windows until we can actually ensure it works correctly. That release in fact does not work correctly on windows. See #181, which we are working on.
Similarly, there is no release for :latest because the branch is under development and needs more testing on the platforms we ship to. At this point in time I don't wish to ship artifacts that we cannot guarantee the quality of.
Adventurous users _can_ build it themselves with just go get -u sigs.k8s.io/kind, but "lazy" / CI users have access to a stable release that won't change underneath them and which we can predictably support.
I don't think minikube etc. offer anything like this officially. They do, however, offer things like homebrew packages instead, which I think we are more likely to do for the 1.0 release. See #88 for tracking on homebrew; I would assume we'd also look at chocolatey and other similar tools when we're ready.
We _might_ officially do something like this someday, but I don't think we can do it well right now.
kubectl is similarly a go binary you will likely want to install to actually do anything with kind - I think our one binary and one "node" image is the most widely usable and maintainable target for now.
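For comparison, the flow we support today is roughly this (a sketch - exact kubeconfig handling varies a bit between kind versions, so follow whatever kind create cluster prints):
$ go get -u sigs.k8s.io/kind   # build and install the kind binary (requires go)
$ kind create cluster          # prints how to point kubectl at the new cluster
$ kubectl cluster-info         # after setting KUBECONFIG as instructed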
Agreed. Once binary downloads are automated and always available (including for master), I don't see any reason to maintain docker images.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
See bug #451 for a use case for such a container.
CircleCI, CloudBuilder and many other CI/CD systems use containers to set up the test environment/services. Having kind containers optimized for CircleCI, CB, and maybe docker compose would be great - it's more than having a binary downloaded; sometimes mounting volumes, uid mappings and the like can help.
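As an illustration only (the image name and mounts here are made up; it assumes the image ships a docker client plus the kind binary and reuses the host daemon via its socket):
$ docker run --rm \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v "$PWD":/workspace -w /workspace \
    some-kind-ci-image kind create cluster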
For networking purposes it's best to run your workload alongside kind in the same namespace etc.; that image is going to be docker in docker + the kind binary, plus whatever your tests need.
Our image publishing is hampered by https://github.com/kubernetes/k8s.io/issues/158; we've been flying low with a semi-official docker hub registry as a stopgap, but I'd rather not increase dependency on it.
Similarly, setting up other CI on this repo is painful (I can't just enable it myself), so I've not used Circle etc. yet. In the future we'll probably obtain an unofficial repo to test these. Between Google and Kubernetes, it will probably be a while before that happens.
For reference, regarding Travis and CircleCI, there are already several projects using kind, including github.com/istio/installer (CircleCI and BuildKite), and we are looking forward to also supporting GCB.
You can also create a fork and enable CircleCI with the free open source plan - kind seems to work fine with that.
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
https://github.com/kind-ci/examples is hosting various additional CI going forward, ensuring kind works there, and has examples (thanks @munnerz for setting this up!). This solved the process issues, but it still needs work; hoping to ramp up more people to help...
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.