Linkerd2: Refactor `install` to allow more flexible configuration

Created on 19 Dec 2018 · 10 comments · Source: linkerd/linkerd2

What problem are you trying to solve?

There are a couple different ways that templates are showing up (and being rendered) in the cli. In addition, there are some issues around using helm to install the control plane (#452) and addons (#1891). It would be nice to aggregate how templating works, provide a chart for those wishing to use helm and unify the addons.

How should the problem be solved?

Change the backend for install from raw gotmpl to helm. linkerd install behavior should not change in any way; only how the templates are stored and rendered would change.

I'm wondering if we could also use helm for inject, but I have a suspicion that won't work.

area/cli area/controller priority/P0 rfc


All 10 comments

@siggy what do you think? Feels like a nice improvement, but I'm not sure when (or if) it is the right time to do it.

@grampelberg Agree Helm template integration is a good thing. Prior to refactoring linkerd install, I'd like to see a working end-to-end implementation of Helm integration using a template. Once we see that working, it should provide better insight into a refactor of linkerd install.

Re: linkerd inject, if we can support Helm, that's great. We should be mindful that the autoinject code and cli inject code are currently different codepaths. We should probably resolve that prior to Helm integration for inject.

I updated the title a little bit to remove some of the implementation details. I'm wondering if https://github.com/kubernetes-sigs/kustomize might be a better solution. It doesn't sound reasonable to support all the configuration flags an end user would potentially want. There should be some sane defaults for most. For the rest, it makes sense to allow patching on top.

@ihcsim I'm particularly interested what the implications here are for the data plane and auto-injection. Ideally, I'd like to define a patch to the pod spec that gets applied at auto-injection time.

My first thought is that there are a very limited number of places where the patch can come from during auto-injection time. (I assume the patch is user configurations like resource limits, skipped ports etc.) The two that come to mind are config map and annotation. Config map can't provide the per-resource granular control some users might want. Annotation that takes JSON patch as its value might work, but can get messy and error prone. Also fwiw, we don't have much control over the request that the API Server sends to the webhook.

That's a really good point I wasn't thinking about, seems like we've got a couple potential options:

  • Create a new CRD that uses selectors to target patch/overlay to specific resources. This feels like the best solution overall, I'm just worried about CRD sprawl. There's the added problem that CRDs require more RBAC than most namespace admins have.
  • Use the ServiceProfile as a way to contain the patch/overlay. This feels like overloading the ServiceProfile and taking it in the wrong direction entirely.
  • Use a ConfigMap that is basically the same thing as a CRD (uses selectors). Not awful, just missing out on schema validation and a little error prone.
  • Shove it all into an annotation at the namespace/deployment/pod level. I have an extreme dislike of nesting JSON/YAML in annotations. It is impossible to read, reason about, and check.

Thinking about this at the pod level, it starts to sound super similar to #1999. @klingerf have you thought much about the implementation of that?

@adleong I'd love your thoughts as well.

Ok, I'll take a swing at this :)

I'm not familiar with helm, but I talked this issue over with @grampelberg, and it seems like there are a few different stages of configuration captured here.

The first proposal is to reuse the existing linkerd install templates to produce templates that can be used in helm charts. I agree that it would be a drag to have to maintain separate CLI install templates and helm templates, and since our CLI installs are already templated, it seems like we could tweak them so that they could also be used in helm charts. 👍

There's also some discussion of inject configuration. In my mind, there are two types of inject configuration: settings that apply directly to the processes running in the linkerd-init and linkerd-proxy containers (e.g. skip inbound/outbound ports, set log level, enable tls, etc.); and settings that apply to the kubernetes configurations that are used to run those containers (e.g. resource requests, readiness/liveness probes, image pull policy, etc.).

For the first type of inject configuration (process config), I recommend that we continue to use command line flags for setting these when running linkerd inject from the CLI, and that we add support for overriding them via annotations on pod specs. That will allow CLI users to set them via either the command line or the pod spec, and it will allow auto-inject users to set them via the pod spec.
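The override precedence described here (pod-spec annotation wins over the CLI flag or default) could be as simple as the following sketch. The annotation key is hypothetical, chosen only to illustrate the lookup:

```go
package main

import "fmt"

// proxyLogLevel returns the proxy log level for a workload: an
// annotation on the pod spec, if present, overrides the value
// supplied by a CLI flag (or its default). The annotation key is
// a made-up example, not a real linkerd annotation.
func proxyLogLevel(annotations map[string]string, flagValue string) string {
	if v, ok := annotations["linkerd.io/proxy-log-level"]; ok {
		return v
	}
	return flagValue
}

func main() {
	anns := map[string]string{"linkerd.io/proxy-log-level": "debug"}
	fmt.Println(proxyLogLevel(anns, "info")) // debug
	fmt.Println(proxyLogLevel(nil, "info"))  // info
}
```

Because both the CLI injector and the auto-inject webhook see the pod spec, the same lookup works on either codepath, which is what makes annotations attractive for process config.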

For the second type of inject configuration (pod/container config), @grampelberg mentioned that tools such as kustomize exist for patching YAML configs, and they could help us here. Rather than adding CLI flags for every pod/container configuration, maybe we could provide sane defaults and a recommended way to override them via kustomize or something similar. We would need to figure out how that works with auto-inject, however.

And as @grampelberg mentions, we could also consider a similar approach for install -- command line flags for process configuration, and kustomize for pod/container configuration. We could even explore publishing kustomization files for popular configurations, such as HA installs.

A few additional notes: there's also the issue of using helm for inject, but I'm not familiar enough with how that would work to comment. And I think we should consider the issue of how the inject defaults are discovered by the CLI and the auto-injector (#1999) to be a separate issue entirely.

Does this seem like the right division of configuration issues? Maybe we could split off distinct pieces into separate issues, so as to keep this one focused on unifying our CLI install template with templates that can be used in helm charts?

I am not sure how we can tweak the CLI install template to make it helm-compatible. If the intent is to turn install.Template into a helm template, I suspect the template.Parse() in the current CLI code might fail with any helm-specific template functions, pipelines and flow controls. We might end up needing some kind of helm interpreter in the CLI. If possible, I prefer to insulate the LD2 code from any templating system, turning that into a build time dependency.

What about the idea of rolling an official LD2 helm chart, containing both the control plane and data plane (with sane defaults using config map or named template, if possible) YAML, and inject the output of helm template <ld2-chart> into the CLI during build time with something like go build -ldflags "-X install.Template=<chart-yaml>". (I think -ldflags -X might work with multi-line YAML.)
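The build-time injection idea above would look roughly like this on the Go side. The variable name and build command are illustrative only, and as noted, shell quoting of multi-line YAML through -ldflags -X is untested here:

```go
package main

import "fmt"

// Template holds the rendered chart YAML. The proposal above is to
// overwrite it at build time, e.g. (names hypothetical):
//
//   helm template <ld2-chart> > chart.yaml
//   go build -ldflags "-X 'main.Template=$(cat chart.yaml)'"
//
// Note that -X can only set uninitialized-or-string package-level
// string variables, so the default below is just a fallback for
// builds that skip the injection step.
var Template = "# default: no chart compiled in"

func main() {
	fmt.Println(Template)
}
```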

If we publish an official LD2 chart to a helm repo (maybe even the helm stable repo), then Helm users will not need to roll their own LD2 charts. (I like to think most people won't do linkerd install | kubectl apply -f - for production.) Then installing LD2 for these users will be either helm install linkerd2, helm template linkerd2 | kubectl apply -f -, or even including the LD2 chart as a subchart in their helm charts.

Many of the defaults can also be moved out of the code into the LD2 chart values.yaml. Some of the CLI install unit tests will likely be simpler too.

I like @klingerf's proposal on how to handle process config and pod/container config. During auto-injection, I don't think we need to worry about pod configs because our users will have control over the pre-inject YAML. On the other hand, the proxy container config might be something our users want to change. Then again, looking at the container API, I feel like any modifications to these container properties should apply to all proxies. IOW, I just don't see why one would want to modify any of these properties (resource limits, volume mounts, ports, env vars) for just one instance of the proxy in the entire cluster.

For example, if I want to modify the proxy resource limit, I will imagine using kustomize to alter the data plane YAML generated from the LD2 helm chart (using helm template), and pass the modified YAML to either kubectl apply or helm install. The result is that all my proxies will use the same resource limit.

@ihcsim I was actually thinking of using helm as a library. They have a renderutil package that does everything we'd want. All you need to do is pass the chart object in (which there is also a loader for).
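For context, the templating system the CLI already has is Go's standard text/template, which is also the engine helm builds on; helm's renderutil layers chart loading, values files, and extra template functions on top of it. A minimal stand-in for what the CLI does today:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// render executes a Go text/template against a set of values, which
// is essentially what the CLI's install templating does today.
// helm's renderutil wraps this same engine, adding chart/values
// loading and a richer function set.
func render(tmpl string, values interface{}) (string, error) {
	t, err := template.New("install").Parse(tmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, values); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, _ := render("namespace: {{.Namespace}}",
		struct{ Namespace string }{"linkerd"})
	fmt.Println(out) // namespace: linkerd
}
```

Switching the backend is therefore less a rewrite than swapping which wrapper around text/template the CLI calls.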

We've already got a templating system in the CLI (since we need it to render configuration), the helm one just comes with more features.

re. inject configuration, I'm thinking about what will happen when we start to introduce more secure concepts around mTLS. There are definitely use cases where I'd like the default to be mandatory but disable on some apps. Basically, I think it is important that we architect the system so that advanced users are able to "break glass in case of emergency" and do something that might not be 100% supported.

Ok, I opened #2129 to track the discussion from this issue about kustomize. Let's use this issue specifically for the work that we're doing to switch the install command to use helm templates and the helm renderer, which is being implemented in #2098.

Closed by #2098.
