Argo: Can I create pod in another cluster?

Created on 19 Jul 2020 · 12 comments · Source: argoproj/argo

Summary

Our k8s setup has several clusters. Ideally, we would install Argo Workflows into only one cluster but allow it to create Workflows/Pods in the other clusters. Is it possible to achieve that?

The closest field I find is clusterName: https://argoproj.github.io/argo/fields/#fields_57

The name of the cluster which the object belongs to. This is used to distinguish resources with same name and namespace in different clusters. This field is not set anywhere right now and apiserver is going to ignore it if set in create or update request.

So it seems this field will be ignored if I use it to create a new Workflow, am I right?

Motivation

Why do you need to know this, any examples or use cases you could include?

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-
  labels:
    workflows.argoproj.io/archive-strategy: "false"
  namespace: argo
  clusterName: another-cluster-i-want-to-run-this-workflow
spec:
  entrypoint: whalesay
  templates:
  - name: whalesay
    serviceAccountName: argo
    container:
      image: docker/whalesay
      imagePullPolicy: Always
      resources:
        requests:
          cpu: "100m"
          memory: "100Mi"
        limits:
          cpu: "500m"
          memory: "500Mi"
      command: [cowsay]
      args: ["hello world"]

^^ This is an example of a Workflow I want to run in another cluster.

question wontfix

Most helpful comment

@jessesuen for my case, I need at least 3 clusters for development/staging/production, so it would be ideal to put Argo Workflow in the fourth cluster, then manage other 3 clusters... something like that

All 12 comments

This is not possible. Argo Workflows is single cluster only.

That's unfortunate. I noticed Argo CD supports this; do we have any plan for it? It makes sense for the whole company to use the same Argo Workflows installation but create pods in different clusters and namespaces.

I've created a ticket for the suggestion.

@jessesuen has stated that you can run the controller in one cluster and manage another cluster, but only one cluster.

@jessesuen for my case, I need at least 3 clusters for development/staging/production, so it would be ideal to put Argo Workflow in the fourth cluster, then manage other 3 clusters... something like that

@luozhaoyu Cool! We have the same scenario. We don't want to install Argo pods in the staging or production clusters; all we need is to install the Argo components (server + controller) in the other cluster only, and dispatch some steps of our workflows to the staging/production clusters.

Drive by idea (have not tried this):

Argo already has the possibility of sharding via instanceID, and I believe Argo can also be configured to use a separate kubeconfig or alternate credentials that could potentially point at an arbitrary cluster, not necessarily the one that you're in.

You could potentially do something like have N argo controllers with separate instance ids, each pointing to a separate cluster. You can then use the Workflow of Workflows pattern in order to assign a workflow to a specific controller which would then create pods in a specific cluster. Your overall workflow could be comprised of multiple sub-workflows, each running on a separate cluster.

Sounds like a fun experiment, going to give it a try and maybe write a blog post on it :)
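To make the sharding idea above concrete: a Workflow is assigned to a particular controller via the `workflows.argoproj.io/controller-instanceid` label, matched against the instance ID the controller was started with. This is a hypothetical sketch of one sub-workflow in the Workflow of Workflows pattern; the instance ID `staging` and the assumption that a controller with that ID is pointed (via its kubeconfig/credentials) at the staging cluster are made up for illustration.

```yaml
# Sketch (untested): this Workflow would only be picked up by the controller
# whose instance ID is "staging", which is assumed to be configured with
# credentials for the staging cluster. "staging" is an example value.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: staging-step-
  labels:
    # Routes this Workflow to one specific sharded controller
    workflows.argoproj.io/controller-instanceid: staging
spec:
  entrypoint: whalesay
  templates:
  - name: whalesay
    container:
      image: docker/whalesay
      command: [cowsay]
      args: ["hello from the staging shard"]
```

The parent workflow would then submit one such sub-workflow per target cluster, each carrying the label of the controller shard responsible for that cluster.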


Seems like if I have N clusters and M namespaces, then I need N*M Argo controllers. For me it does not matter where I put these controllers, but if we had an option to consolidate the N*M controllers down to N or even 1, that would greatly reduce operational/administrative overhead.

That depends on if you are using a cluster wide or namespaced install. If it's cluster wide, you would only need N controllers, one for each cluster. If it's namespaced, you're correct, it'd be N*M.


Cluster wide is great. But in practice that raises a security concern, since the controller can then create pods in arbitrary namespaces. Is there a way to limit a cluster-wide installation to a whitelist of namespaces?

It looks like Argo only supports passing a single namespace when using the namespaced install path. But I wonder what would happen if you used the cluster-wide install type and omitted the ClusterRoleBinding. Then, in each namespace that you want to manage, you would add an explicit RoleBinding that references the ClusterRole and grants permissions to the service account Argo runs as, in whatever namespace it's running in.
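A RoleBinding can indeed reference a ClusterRole while only granting its permissions within the RoleBinding's own namespace, so the suggestion above is at least structurally valid RBAC. This is an untested sketch; the ClusterRole name `argo-cluster-role`, the service account name `argo`, and the `argo` namespace are assumptions that depend on your install manifests.

```yaml
# Sketch (untested): grant the controller's service account the cluster-wide
# ClusterRole, but scoped to this one namespace. Repeat per whitelisted
# namespace. Names below are illustrative, not from the Argo manifests.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: argo-workflow-controller
  namespace: staging              # a namespace you want Argo to manage
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: argo-cluster-role         # assumed name of the cluster-wide role
subjects:
- kind: ServiceAccount
  name: argo                      # assumed controller service account
  namespace: argo                 # namespace the controller runs in
```

Because a RoleBinding only confers permissions inside its own namespace, omitting the ClusterRoleBinding and adding one of these per namespace effectively whitelists which namespaces the controller can act in.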

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
