Many Kubernetes clusters support a local registry to make pushing images directly to the cluster fast. But every cluster documents it with its own manual shell scripts:
Because every cluster is slightly different, this makes it hard for tools to automate this work for users.
Clusters should use annotations on the kube-system namespace to document how to configure a registry.
I propose three new "standard" annotations:
- `x-k8s.io/registry`: the host of the registry (hostname and port), as seen from outside the cluster
- `x-k8s.io/registry-from-cluster`: the host of the registry (hostname and port), as seen from inside the cluster
- `x-k8s.io/registry-help`: the URL of documentation where users can read more about how to set a registry up
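For illustration, a kube-system namespace annotated per this proposal might look like the following sketch (the values are hypothetical examples, not part of the proposal):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: kube-system
  annotations:
    # Hypothetical example values for the proposed annotations.
    x-k8s.io/registry: "localhost:5000"              # as reachable from the developer's machine
    x-k8s.io/registry-from-cluster: "registry:5000"  # as reachable from pods inside the cluster
    x-k8s.io/registry-help: "https://kind.sigs.k8s.io/docs/user/local-registry/"
```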
Tools that would benefit from local registries should read these annotations and display them to the user.
All Kind clusters would have a registry-help annotation pointing users to https://kind.sigs.k8s.io/docs/user/local-registry/
We'd also update the example script at https://kind.sigs.k8s.io/docs/user/local-registry/ to add the x-k8s.io/registry annotation during setup.
This proposal is based on what we've been recommending to people in Tilt, with a lot of success: https://docs.tilt.dev/choosing_clusters.html#custom-clusters
update 5/4 - changed the proposal to use the kube-system namespace instead of node annotations
I also filed this feature request here https://github.com/rancher/k3d/issues/234 and here https://github.com/ubuntu/microk8s/issues/1173, and would love other ideas on places I should post about this
FWIW, x-k8s.io is currently typically used by community projects under something like `<project>.x-k8s.io`. I think there's guidance about this somewhere, but I haven't needed to look in a while.
I'm not sure if there's something this would neatly fall under.
I like the general idea though; if not a standard annotation, maybe something like checking for `.*/registry-help`, etc.?
Sending a poke to cluster-lifecycle, which seems like the appropriate group for this, given that it's sort of a cluster-addon detail.
this is an interesting topic for the SIG cluster-lifecycle Zoom meeting:
https://github.com/kubernetes/community/tree/master/sig-cluster-lifecycle
see: Regular SIG Meeting...
and potentially going through a K8s KEP process if there is interest.
initial take:
@nicks it's interesting that you picked node annotations to store this data. are you expecting that "cluster registry" may differ per node? (edit: looks like @neolit123 touched on this same question)
I would recommend setting the expectation that if registry configuration exists, it applies to all nodes (i.e. the entire cluster). Given that, it's worth considering alternative places for storage:
@neolit123 thanks for the pointer! is it ok if I just add it to the meeting agenda for tomorrow?
re: some of the specific technical points that @neolit123 and @cppforlife made:
I really like the suggestion to set it per-namespace! I'll update the proposal at the top of the post
For our purposes, "local" means any insecure registry that's co-located with the cluster and eligible to be exposed directly to the developer's machine. The term is trying to evoke more how the developer should expect to interact with it than how it's implemented.
The role of this annotation is to communicate the current state to tools outside Kubernetes, rather than to communicate desired state inside Kubernetes (the latter is more what I see ConfigMaps being for).
I like the idea that long-term, all these local registry solutions should converge towards a shared ClusterRegistry CR. But I'm not totally sure what that looks like yet, and would want to try building a POC before I proposed anything. This is mostly suggesting a first step towards documenting the solutions that already exist.
oh - and would also love better suggestions on the annotation "hostname"; I also had trouble finding a good recommendation on this
> is it ok if I just add it to the meeting agenda for tomorrow?
sure, please join the mailing list (google group), to be able to edit the document.
Strikes me that this is similar to what we can find via the ComponentConfig API.
We had a long discussion about this in the sig-cluster-lifecycle meeting today. Thanks @neolit123 for facilitating. The group proposed a few alternative implementations that I want to think on. I'm going to start a KEP in sig-cluster-lifecycle to kick the tires on this proposal
@bboreham I'm reading up on ComponentConfig right now -- ya, I see the resemblance, and the resemblance to the Kind config (https://godoc.org/sigs.k8s.io/kind/pkg/internal/apis/config#Cluster). There's no way to access those from outside the cluster, right?
kubeadm stores its component config in a ConfigMap, which you can access via the Kubernetes API.
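For reference, the shape is roughly like this (contents trimmed and illustrative only; the exact data keys and API version vary by kubeadm release):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubeadm-config
  namespace: kube-system
data:
  ClusterConfiguration: |
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    # ... cluster-wide settings live here ...
```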
Er and kind's config is modeled after component config but ... without pulling in all the heavy k8s API machinery.
I just found this issue and it is somewhat related to what I'm doing:
My goal is to test a system deployed on Kubernetes in KinD. We use GCR, and CI builds and uploads images to gcr.io. As a result, the system we are deploying references these images via their gcr.io/.... names. Since we are mutating these images, rebuilding them, and testing them in a KinD-based environment, it's very cumbersome at the start of a CI run to build the images and get the cluster to pull them. That's not even mentioning the problems surrounding authorization, where tokens for gcr.io expire frequently.
To solve this problem, when I change any images, I build and tag them locally with their full names, including the gcr.io part. Then I can use kind load docker-image to load the images into KinD, so Kubernetes pods can "pull" these locally built "gcr.io" images. However, this is very slow with a large number of images, or with larger images. So I introduced a registry via a simple StatefulSet deployed into KinD, along with the following patch:
```yaml
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"]
    endpoint = ["http://localhost:31500"]
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 31500 # Docker registry
    hostPort: 31500
    listenAddress: "127.0.0.1"
```
With this, I can build, tag, and push the images to localhost:31500/..., and Kubernetes will correctly pull images named with the gcr.io prefix from localhost:31500. This behaviour works in KinD 0.7.0, but appears to be broken in 0.8.1, as Kubernetes now gets stuck pulling images.
I'm commenting here for two reasons:
registry-statefulset.yaml:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: registry
---
apiVersion: v1
kind: Service
metadata:
  name: registry-int
  namespace: registry
  labels:
    app: registry
spec:
  ports:
  - name: registry
    port: 5000
  clusterIP: None
  selector:
    app: registry
---
apiVersion: v1
kind: Service
metadata:
  name: registry-ext
  namespace: registry
  labels:
    app: registry
spec:
  ports:
  - name: registry
    nodePort: 31500
    port: 5000
    targetPort: 5000
  type: NodePort
  selector:
    app: registry
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: registry
  namespace: registry
spec:
  serviceName: registry-int
  selector:
    matchLabels:
      app: registry
  replicas: 1
  template:
    metadata:
      labels:
        app: registry
    spec:
      containers:
      - name: registry
        image: "registry:2"
        ports:
        - containerPort: 5000
        volumeMounts:
        - name: registry-data
          mountPath: /var/lib/registry
  volumeClaimTemplates:
  - metadata:
      name: registry-data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 50Gi
```
Hi @shuhaowu, this is not related to this issue.
have you considered using the official guide https://kind.sigs.k8s.io/docs/user/local-registry/?
we also intend to ship a first class local registry soon, and there are already tracking issues for this.
Thanks, I'll open a new issue. The local-registry instructions run the registry on the host alongside the KinD cluster, which doesn't quite work for my use case. It's best if the local registry can be built into the KinD cluster and also act as an alias of gcr.io.
Issue created here: https://github.com/kubernetes-sigs/kind/issues/1580
KEP PR: https://github.com/kubernetes/enhancements/pull/1757
KEP tracking issue: https://github.com/kubernetes/enhancements/pull/1755
@BenTheElder one question that came up on another thread is if it's worth advertising other image-loading methods as part of this proposal. e.g.:
```yaml
imageLoadCommand: "kind load docker-image $IMAGE_REF"
```
The answer might be that we're moving away from that, or that it's worth punting to a future KEP, but it's worth considering the question.
I intend to continue to support it since it can do arbitrary tags without re-tagging, but I'm not sure if it should be in the KEP either ...
@nicks one thing that came up ... do people run tilt push/pull inside a pod? (or possibly similar apps?)
in that case you might need a third value like:
or we might need to come up with some clever workaround on our end.
I strongly dislike the proxy service hack most seem to be using currently :/
@BenTheElder I don't think we have any Tilt users that are trying to run Tilt in a pod in the same cluster that they're developing in. But I understand what the person is trying to do in that thread and think that's a legit use-case.
I like the distinction you're proposing (i.e., having a separate host for the container runtime vs pods) but I'm not totally sure what we would call it. Have you seen this enough that it's worth folding into the KEP?
This is the first I've seen it with kind and a local registry, but I think building and pushing images from within a cluster is unfortunately pretty common.
Technically many setups will be reachable via the same endpoint both via the container runtime on the nodes and from within a pod or similar, but anything leveraging the runtime mirror config won't be, and that seems fairly common, IIRC k3d does this for example.
I'm not sure what to name these fields either.
at least in the kind case (and k3d I think), the registry is reachable:
naming the second and third might be difficult to avoid confusion, but they are distinct and I think not totally uncommon.
Ya, I think the most precise way to specify this might be something like
- `HostFromContainerRuntime` - for things that pull images through containerd
- `HostFromClusterNetwork` - for things that push/pull images from inside the cluster
That feels like you have to know a lot about how this is implemented to use them correctly. But I guess the long-term goal of the KEP is to stop exposing these implementation distinctions to users, so maybe that's ok.
yay, thanks for bringing that up, I would have had a major facepalm moment if we had to update it after.
Last call for any major objections or things I overlooked before we merge the initial spec!
https://github.com/kubernetes/enhancements/pull/1757
https://github.com/kubernetes/enhancements/blob/c5b6b632811c21ababa9e3565766b2d70614feec/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry/README.md#design-details
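As a rough sketch of the design at that link: the proposal moved from namespace annotations to a ConfigMap published in kube-public. Something along these lines, where the host values are hypothetical examples for a kind-style setup:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-registry-hosting
  namespace: kube-public
data:
  localRegistryHosting.v1: |
    host: "localhost:5000"
    hostFromClusterNetwork: "kind-registry:5000"
    hostFromContainerRuntime: "kind-registry:5000"
    help: "https://kind.sigs.k8s.io/docs/user/local-registry/"
```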
Sorry I didn't get back to this yet, dropping a note that googlers are supposed to be out for the weekend now (mental health day + labor day...). I'm going to try to more or less take that time and get back to this Tuesday 😅
Enjoy the long weekend! We're aiming for June 4th to merge the KEP PR, so plenty of time. :blush:
/assign
/priority important-soon
this is in the registry guide but not in anything integrated yet.
it's always something ... right now I'm investigating performance regressions in Kubernetes startup. Made some improvements, but we've taken a bad hit since kind was started.