Is your feature request related to a problem? Please describe.
A number of users would like something a bit more than app-of-apps. This would probably be best served by auto-discovery of apps within a repo.
Describe the solution you'd like
Point Argo CD at a repo and let it fly.
Related issue: https://github.com/argoproj/argo-cd/issues/1431
argoproj.io/AppsSource - implements application auto-discovery. Instead of forcing the user to create an Application for each directory, the argoproj.io/AppsSource should scan the repo and create apps (and delete obsolete apps) automatically.
How would an AppsSource determine the corresponding AppProject for autodiscovered apps? A user might want to be able to specify that certain directories belong to a permissive project that allows certain cluster-scoped resources, while other directories belong to a far more restrictive project.
Here is my current workaround for auto-generating apps-in-app from a Kubernetes manifest monorepo. ./components is symlinked to openshift/templates/components. The general idea here is that the Helm template for the parent app loops through components/* and creates an Argo app for each folder on the fly. Then I manually create a parent app for each deployment environment (e.g. integration, staging, production), each of which sources a unique values-$ENVIRONMENT.yaml file from the parent Helm template folder.
{{ range $path, $bytes := .Files.Glob "components/*/configuration.json" }}
{{- $app := base (dir $path) }}
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: {{ $app }}
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    namespace: {{ $.Values.namespace }}
    server: {{ $.Values.server | default "https://kubernetes.default.svc" }}
  project: {{ $.Values.project | default "default" }}
  source:
    path: openshift/templates/components/{{ $app }}/argotest
    repoURL: [email protected]:ops-stuff/kube-templates
    targetRevision: argocd
  syncPolicy:
{{ end }}
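As described above, each parent app sources a per-environment values file. A hypothetical sketch of such a file, matching the $.Values keys the template above consumes (the values themselves are illustrative):

```yaml
# Hypothetical values-staging.yaml for the parent app's Helm chart.
# The keys mirror the $.Values references in the template above.
namespace: app-staging
server: https://kubernetes.default.svc
project: staging
```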
I would imagine some folks keep their Argo app YAML/k8s manifests in the same repo as their application code. Auto-discovery of a configurable filename pattern under a certain GitHub org would probably work well (like the Jenkins GitHub Organization Folder plugin).
@wmedlar I was thinking that the user might add a .argocd-app file into the app directory and specify app-specific settings like project, and optionally Kustomize, ksonnet, etc. properties. The AppsSource might have a default project value.
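Purely as an illustration of that idea, a .argocd-app file might look like this (the filename and every key here are hypothetical; no such schema exists in Argo CD):

```yaml
# Hypothetical .argocd-app file dropped into an app directory.
# Keys are illustrative only, not an actual Argo CD schema.
project: team-a
kustomize:
  namePrefix: team-a-
```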
It might also be nice if a list of folders could be provided to the AppsSource that decides the order the applications are applied in, since order is important, especially for infrastructure. Then it wouldn't try to auto-detect anything; it would just use that list, avoiding a bunch of Application files.
@alexmt how is a .argocd-app file different from an Application YAML?
Not quite auto-discovery, but I think it's related enough to share. This is our current App-of-Apps approach:
1. kubectl apply to create a single Application CRD pointing at a base repo for that class of cluster.
2. That base repo contains Application CRDs, each of which represents a team (or some other collection of related services) and points at a repo that team owns.
3. Each team repo contains Application CRDs that point to their service repos, which contain the manifests.

 Cluster           Base repo for      Team's               Team's
 bootstrap repo    cluster class      Application repos    service repos

                                      +---+                +---+
                                 +--->| A +--------------->| M |
 +---+             +---+         |    +---+                +---+
 |   |             | A +---------+
 | A +------------>| . |
 |   |             | A +---------+    +---+                +---+
 +---+             +---+         +--->| A +--------------->| M |
                                      +---+                +---+
Every repo is kustomized, and we use overlays to patch the spec.source.path to the correct overlay for the env, e.g. kustomize/overlays/<env> all the way down the tree.
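A minimal sketch of that overlay patching, assuming a hypothetical base/overlay layout and a child Application named my-service (using kustomize's inline JSON6902 patch syntax):

```yaml
# kustomize/overlays/staging/kustomization.yaml (hypothetical layout)
# Rewrites the child Application's spec.source.path to the staging overlay.
resources:
  - ../../base
patches:
  - target:
      group: argoproj.io
      kind: Application
      name: my-service
    patch: |-
      - op: replace
        path: /spec/source/path
        value: kustomize/overlays/staging
```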
This gives us a nice logical separation where:
- The Application in the cluster bootstrap repo rarely has to change
- Applications in the base repo change infrequently, e.g. if we add/rename a team
- Applications in the team's repos can change frequently, but they own these repos.

At the moment, we're just missing:

- Application CRDs in any namespace
- Applications, and a display of this in the UI similar to the diagram above.

Argo is looking awesome so far, thanks for all your work!
Describe the solution you'd like
But what is the real issue we are solving?
I think it needs to work this way:
So the things we can discuss are:
Personally, I feel the path-based approach makes it easy and is a common solution. No reason to make it more complex in the code. It also looks like a maintenance trap for users. But maybe somebody can show a good working example from other software in that context?
If you feel you are repeating yourself in your Application of Applications, make a Helm chart from it.
The only issue I have is: when creating a new Application, I copy a manifest and sometimes forget to change the Application name or namespace :/ That could be a disaster in production. If you could prevent it by detecting the possible mistake and pausing for confirmation or something like that, it would be an advantage, but it is not critical.
Going further, maybe people need a good best-practice Helm chart example for this purpose instead of these changes?
Oh OK, I see a second thing to improve:
I have to run my Application of Applications manually when starting a new cluster, after manually deploying Argo CD with:
kubectl --context=[cluster-name] apply -k k8s/argo-cd/production
kubectl --context=[cluster-name] apply -f manually/argo-cd-clusters-bootstraping/production.yaml
Generally, I decided to set it up separately from the Argo CD Application, because when Applications are syncing, the Bootstrap Application also changes status, so I want to keep it alone to avoid confusion about the Argo CD Application status.
So when I want to make a change in the Bootstrap Application manifest, I have to apply it manually. I would like to keep it as part of GitOps, but I keep the Bootstrap Application separate because of this status confusion.
So maybe some kind of self-upgrading without a loop? Like, from the Bootstrap Application I could point to the Bootstrap Application path in the repo without creating an Application that syncs itself (a loop)? It is hard to predict what issues looping it might bring, so I prefer to do it manually, because these are very rare changes.
Visualisation of the issue:
I have to do manually update for Bootstrap Application:
Bootstrap Application -> App1, App2, App3
GitOps, but loop:
Bootstrap Application -> Bootstrap Application, App1, App2, App3
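For reference, one common way to close that loop is to commit the bootstrap Application's own manifest into the path it syncs, so that after the first manual apply, Git changes to it are picked up by its own sync. A hypothetical sketch (the repo URL is made up; the path mirrors the one used in the commands above):

```yaml
# Hypothetical self-managing bootstrap Application. This manifest itself
# lives under the path it syncs, so edits to it in Git are applied on the
# next sync rather than by a manual kubectl apply.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: bootstrap-production
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/ops/cluster-config.git  # hypothetical
    path: manually/argo-cd-clusters-bootstraping/production
    targetRevision: HEAD
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
```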
I currently use my app-of-apps setup to:
1) define logical groupings of related Applications (e.g. I might have 'infrastructure', 'front-end', 'back-end', etc or team-based groupings)
2) define common parameters to apply to a set of Applications (e.g. defining destination cluster/namespace, Helm params or Kustomize overlays, setting targetRevision, which project they belong to, etc.). There are also multiple cases of overriding things... for instance,
a) some apps need to override ignoreDifferences to allow for HPA, others won't.
b) another odd case: I've got N services within my project. Some of them use Helm, the rest use Kustomize. For Helm, I'd want to set an env-specific values override; for Kustomize, I'd want to set an env-specific overlay.
3) In my setup, I've got a 'nonprod' Argo CD deployment. It manages multiple 'test' and 'staging' environments. I deploy the same microservices to many environments from a single Argo CD, so my app-of-apps setup also needs to create unique names for all the AppProject and Application CRDs.
Where do those use cases fit within application auto-discovery?
Hi,
This is the workaround we are using for argocd application auto-discovery.
First, a new plugin is created in argocd called "app-monitor":
#!/usr/bin/env bash
# Argo CD plugin that prints the "argoapp.yaml" manifests found in the Git repository, targeting the same branch.
# The name of the Argo CD app using this plugin (e.g. project-xx-prod) encodes the "env name", which determines the branch (dev, stage, prod, master).
BRANCH=${ARGOCD_APP_NAME##*-}
for app in $(find . -name argoapp.yaml); do
  APP_BRANCH=$(yq r "$app" spec.source.targetRevision)
  if [ "$BRANCH" == "$APP_BRANCH" ]; then
    cat "$app"
    echo "---"
  fi
done
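For completeness: in Argo CD 1.x, a custom tool like this was registered under configManagementPlugins in the argocd-cm ConfigMap (the script path below is a hypothetical mount location; newer Argo CD versions use sidecar-based CMP plugins instead):

```yaml
# Registering the "app-monitor" plugin in argocd-cm (legacy 1.x mechanism).
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  configManagementPlugins: |
    - name: app-monitor
      generate:
        command: ["/usr/local/bin/app-monitor.sh"]  # hypothetical path
```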
Then, we have multiple "GitOps Git repos" which store the application definitions and the related argoapp.yaml files to deploy them. At that stage, we could just kubectl apply -f argoapp.yaml for each application in the repository to declare the App in Argo CD.
This is what the app-monitor is actually doing for us. Each "GitOps Git repo" is associated with/monitored by a "repo-app-monitor" Argo CD App (using the "app-monitor" plugin). When a new argoapp.yaml file is pushed to a "GitOps Git repo", the monitor app associated with that repo detects the new Argo CD manifest and auto-registers it.
For instance:
When/if Argo CD is reinstalled from scratch, the configuration sets up the Repositories + Monitors. All the applications are automatically discovered and redeployed. There is no state/config to back up/restore.
Note: I ignore the branch question in the description, for readability.
Snippet:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: user-projects-monitor-master
  namespace: argocd
spec:
  destination:
    namespace: argocd
    server: https://kubernetes.default.svc
  project: default
  source:
    path: /
    plugin:
      name: app-monitor
    repoURL: https://****/um/user-projects.git
    targetRevision: master
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
I've pinned this issue as it is interesting.
Not quite auto-discovery, but I think it's related enough to share. This is our current App-of-Apps approach
I wanted to implement much the same approach as @milesarmstrong in https://github.com/argoproj/argo-cd/issues/1766#issuecomment-503074509. Here is my concept for applications-of-applications: https://github.com/tschonnie/argocd-pipeline
However, as @milesarmstrong mentioned, as long as every dev team needs access to the argocd namespace and there is no project inheritance, each dev team can also create evil project definitions and point their team applications at the evil project definition (e.g. destination namespace = "*").
@jannfis out of interest, how would the ApplicationSet practically solve this?
@TPX01 ApplicationSet is going to provide various generators. For example, the Git file generator can scan a repository and create an app for each file that matches some pattern.
@alexmt I looked into it last night and indeed this concept could be very powerful! Thanks for pointing it out.
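To make the generator idea concrete, here is a sketch of an ApplicationSet using the Git file generator; the repo URL and file pattern are illustrative, and the exact template parameters should be checked against the ApplicationSet documentation:

```yaml
# Illustrative ApplicationSet: the Git "files" generator creates one
# Application per config.json found in the repo; template parameters like
# {{path}} and {{path.basename}} are filled in by the generator.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: auto-discovered-apps
  namespace: argocd
spec:
  generators:
    - git:
        repoURL: https://example.com/org/deployments.git  # hypothetical
        revision: HEAD
        files:
          - path: "apps/**/config.json"
  template:
    metadata:
      name: '{{path.basename}}'
    spec:
      project: default
      source:
        repoURL: https://example.com/org/deployments.git  # hypothetical
        targetRevision: HEAD
        path: '{{path}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{path.basename}}'
```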