During today's DNS attack on github.com and other internet endpoints, we were not able to deploy new K8s clusters. We need to add the capability to self-host Docker containers and the other components that get downloaded. For instance, fetching channels fails:
"https://raw.githubusercontent.com/kubernetes/kops/master/channels/stable": error fetching
Larger companies will want to self-host binaries, Docker images, and metadata.
This should be a global discussion about all dependency management in the project. We could/should offer a way to easily override some of these parameters...
My gut makes me think YAML with Viper....
Agreed - this was really just an error/shortcut I made - I didn't think it through.
I'd say a "base directory" for all our resources would be helpful. And we should pull from there:
(kops upgrade is really just a shortcut over existing edit functionality.) And then you could repoint your base directory to your private builds / whatever.
We already can preload docker images over HTTP and then docker load them - we use that for e2e. So we're close.
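That e2e-style preload flow looks roughly like this (a sketch: the host and tarball path below are placeholders, not real kops settings):

```shell
# Fetch an image tarball over HTTP, then load it into the local Docker
# daemon instead of pulling from a registry.
# ASSET_HOST and the tarball path are placeholder examples.
ASSET_HOST="https://assets.example.com/kops"

curl -fsSL "${ASSET_HOST}/images/protokube.tar" -o /tmp/protokube.tar
docker load -i /tmp/protokube.tar
```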
I think it's mostly "just" a matter of improving our build process. I saw @mikedanese 's super cool work on getting bazel into the core, and I think it would be great to leverage that once it's in (though we will still want an easy "make" for building kops the CLI tool itself)
I would say that we can define a Docker registry, and an HTTP / S3 repo, for all artifacts like nodeup.
https://github.com/kubernetes/kops/pull/730 another one
I1028 16:17:44.574437 78106 channel.go:68] Loading channel from "https://raw.githubusercontent.com/kubernetes/kops/master/channels/stable"
Will start logging these as I notice them
Thanks ... nodeup as well. All of the Docker images :)
@robertojrojas this is what I was talking about. This is a tip-of-the-iceberg problem. You interested in assisting?
@chrislovecnm sure! So, there are deps needed at the time kops is executing and deps needed within the cloud provider (with or without internet access), right?
We have
What is the best way to communicate this to you?
Oh, and thanks. This is a huge need for the community, btw. For example, DNS attacks have stopped deployments, which is not good.
We should support K8s internal containers such as pause-amd64; for that we should pass the flag --pod-infra-container-image to the kubelet.
https://github.com/kubernetes/kubernetes/issues/4896
External dependencies
Can be specified using environment variable CNI_VERSION_URL
The current source is storage.googleapis.com https://github.com/kubernetes/kops/blob/789bfcf07b54bdfd852883fde79177950064a099/upup/pkg/fi/cloudup/networking.go#L72
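So a self-hosted copy of the CNI tarball can be pointed at like this (the mirror URL and tarball name below are placeholder examples):

```shell
# Override the CNI download source before running kops, so nodes pull
# the CNI tarball from a self-hosted mirror instead of
# storage.googleapis.com. The URL is a placeholder example.
export CNI_VERSION_URL="https://mirror.example.com/cni/cni-amd64.tar.gz"
kops update cluster "${CLUSTER_NAME}" --yes
```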
Can be specified on the command line via --channel
The current source is github.com https://github.com/kubernetes/kops/blob/968366d444d056ffcfc978d5449f53df68b1ac4c/pkg/apis/kops/channel.go#L29
The base url can be changed via KOPS_BASE_URL
The specific urls can be changed via NODEUP_URL and PROTOKUBE_IMAGE
The current source is s3
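Putting those three overrides together, a self-hosted setup might look like this (all URLs are placeholder examples for a private mirror):

```shell
# Override the nodeup / protokube sources with self-hosted copies.
# KOPS_BASE_URL moves the whole base; NODEUP_URL and PROTOKUBE_IMAGE
# override the specific artifacts. All URLs are placeholders.
export KOPS_BASE_URL="https://mirror.example.com/kops/release"
export NODEUP_URL="${KOPS_BASE_URL}/linux/amd64/nodeup"
export PROTOKUBE_IMAGE="${KOPS_BASE_URL}/images/protokube.tar.gz"
```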
If c.Cluster.Spec.KubernetesVersion is a url the following images are loaded from that url.
imagePath := baseURL + "/bin/linux/amd64/" + component + ".tar"
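To illustrate the path construction, here is the same concatenation in shell with placeholder values (the real baseURL comes from c.Cluster.Spec.KubernetesVersion):

```shell
# Reproduce the imagePath construction from the Go snippet above.
# baseURL and component are placeholder example values.
baseURL="https://example.com/kubernetes/release/v1.7.0"
component="kube-apiserver"
imagePath="${baseURL}/bin/linux/amd64/${component}.tar"
echo "${imagePath}"
# prints https://example.com/kubernetes/release/v1.7.0/bin/linux/amd64/kube-apiserver.tar
```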
Individual images can be specified for each of the above items via the config - https://github.com/kubernetes/kops/blob/97afdf9f97f56ab5a369b444d2c39621e8e6ba73/pkg/apis/kops/v1alpha2/componentconfig.go
Can be specified on the kubelet via --pod-infra-container-image
The current source is gcr
gcr.io/google_containers/hyperkube-amd64
gcr.io/google_containers/etcd:2.2.1
gcr.io/google_containers/exechealthz-amd64:1.2
gcr.io/google_containers/cluster-proportional-autoscaler-{{Arch}}:1.0.0
gcr.io/google_containers/kubedns-{{Arch}}:1.9
gcr.io/google_containers/kube-dnsmasq-{{Arch}}:1.4
gcr.io/google_containers/dnsmasq-metrics-{{Arch}}:1.0
gcr.io/google_containers/exechealthz-{{Arch}}:1.2
Currently these containers depend on gcr.io and cannot be pre-loaded.
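Until pre-loading is supported, one workaround is to mirror these images into a private registry by hand. A sketch (the registry host and the image tag are placeholder examples):

```shell
# Mirror a gcr.io image into a private registry so nodes never need to
# reach gcr.io directly. Registry host and tag are placeholders.
PRIVATE_REG="registry.example.com"
IMAGE="google_containers/pause-amd64:3.0"

docker pull "gcr.io/${IMAGE}"
docker tag "gcr.io/${IMAGE}" "${PRIVATE_REG}/${IMAGE}"
docker push "${PRIVATE_REG}/${IMAGE}"

# Then point the kubelet at the mirrored pause image:
#   --pod-infra-container-image=registry.example.com/google_containers/pause-amd64:3.0
```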
Networking providers such as Weave or Calico as well ...
List above looks quite comprehensive. But I don't see kubelet in the list. Where does that come from?
Implementation
kops toolbox bill-of-materials - which will generate a list of items that are installed with kops.

Extended list
@sstarcher has a great list, but here are a few more.
https://github.com/kubernetes/kops/pull/2419 provides a list of inventory items.
https://github.com/kubernetes/kops/issues/2571 will provide a tool to stage those items.
The final PR will be the implementation of API values that allow dynamic setting of the staging area. The staging area for assets will be a Docker repo and a VFS path.
Larger companies will want to self-host binaries, Docker images, and metadata.
And also the security paranoid. In our case, it would potentially simplify some things.
I'm already using my own nodeup bucket, and freeze/promote nodeup testing environment with a small s3sync, but many things just pull from gcr.io without my direct control.
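That freeze/promote flow can be as simple as a sync between buckets (a sketch; the bucket names and paths are placeholders):

```shell
# Promote a tested nodeup build from a staging bucket to the bucket the
# clusters actually pull from, then point kops at it.
# Bucket names and paths are placeholder examples.
aws s3 sync s3://my-kops-staging/nodeup/ s3://my-kops-prod/nodeup/
export NODEUP_URL="https://my-kops-prod.s3.amazonaws.com/nodeup/linux/amd64/nodeup"
```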
Still a work in progress
/assign
/close
as this is implemented
@chrislovecnm, done? Where can I read the documentation on how to use this?
Is there a document for us to refer?
@chrislovecnm Done? Is there a link on how to get started?
I would also like to see documentation on this as well. Our use case: we would like to push a Docker config to all of our nodes requiring that all images come from our private registry and be signed, which would obviously break cluster components without this.