Airflow doesn't sync the git repo I configured under gitSync in values.yaml.
Here is the dags section of my values.yaml:
dags:
  ##
  ## mount path for persistent volume.
  ## Note that this location is referred to in airflow.cfg, so if you change it, you must update airflow.cfg accordingly.
  path: /usr/local/airflow/dags
  ##
  ## Set to True to prevent pickling DAGs from scheduler to workers
  doNotPickle: false
  ##
  ## Configure Git repository to fetch DAGs
  git:
    ##
    ## url to clone the git repository
    url: https://gitlab.com/selin-gungor/airflow-kubernetes-helm
    ##
    ## branch name, tag or sha1 to reset to
    ref: master
    ## pre-created secret with key, key.pub and known_hosts file for private repos
    secret: ""
    ## The host of the repo so for example if a github repo put github.com (Only need if using ssh not https git sync)
    repoHost: ""
    ## The name of the private key in your git sync secret (Only need if using ssh not https git sync)
    privateKeyName: ""
    gitSync:
      ## Turns on the side car container
      enabled: true
      ## Image for the side car container
      image:
        ## docker-airflow image
        repository: alpine/git
        ## image tag
        tag: 1.0.7
        ## Image pull policy
        ## values: Always or IfNotPresent
        pullPolicy: IfNotPresent
      ## The amount of time in seconds to git pull dags
      refreshTime: 3000
Version of Helm and Kubernetes:
Helm version:
version.BuildInfo{Version:"v3.0.2", GitCommit:"19e47ee3283ae98139d98460de796c1be1e3975f", GitTreeState:"clean", GoVersion:"go1.13.5"}
Kubernetes version:
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-14T04:24:34Z", GoVersion:"go1.12.13", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.10-gke.17", GitCommit:"bdceba0734835c6cb1acbd1c447caf17d8613b44", GitTreeState:"clean", BuildDate:"2020-01-17T23:10:13Z", GoVersion:"go1.12.12b4", Compiler:"gc", Platform:"linux/amd64"}
Which chart:
stable/airflow
What happened:
I installed the chart with my values.yaml using the command below, to pull the Airflow DAGs from a public repository:
helm install -f values.yaml "airflow" stable/airflow
The install fails with the following error:
Error: Deployment.apps "airflow-web" is invalid: spec.template.spec.containers[0].volumeMounts[0].name: Not found: "git-clone"
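As a quick check (not part of the original report), you can render the chart locally with the same values and look for the dangling volume reference; this sketch assumes the stable repo is already added:
# render the manifests without contacting the cluster
helm template airflow stable/airflow -f values.yaml > rendered.yaml
# the web Deployment mounts a volume named "git-clone" that is never defined in its volumes list
grep -n "git-clone" rendered.yaml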
What you expected to happen:
The chart deploys Airflow to Kubernetes, clones the DAGs from the repository, and mounts them into the Airflow containers.
How to reproduce it (as minimally and precisely as possible):
Make sure you have a connection to a Kubernetes cluster.
Copy the values.yaml file below:
# Duplicate this file and put your customization here
##
## common settings and setting for the webserver
airflow:
extraConfigmapMounts: []
# - name: extra-metadata
# mountPath: /opt/metadata
# configMap: airflow-metadata
# readOnly: true
#
# Example of configmap mount with subPath
# - name: extra-metadata
# mountPath: /opt/metadata/file.yaml
# configMap: airflow-metadata
# readOnly: true
# subPath: file.yaml
##
## Extra environment variables to mount in the web, scheduler, and worker pods:
extraEnv:
# - name: AIRFLOW__CORE__FERNET_KEY
# valueFrom:
# secretKeyRef:
# name: airflow
# key: fernet_key
# - name: AIRFLOW__LDAP__BIND_PASSWORD
# valueFrom:
# secretKeyRef:
# name: ldap
# key: password
##
## You will need to define your fernet key:
## Generate fernetKey with:
## python -c "from cryptography.fernet import Fernet; FERNET_KEY = Fernet.generate_key().decode(); print(FERNET_KEY)"
## fernetKey: ABCDABCDABCDABCDABCDABCDABCDABCDABCDABCD
fernetKey: ""
service:
annotations: {}
sessionAffinity: "None"
sessionAffinityConfig: {}
type: LoadBalancer
externalPort: 8080
nodePort:
http:
##
## The executor to use.
##
executor: Celery
##
## set the max number of retries during container initialization
initRetryLoop:
##
## base image for webserver/scheduler/workers
## Note: If you want to use airflow HEAD (2.0dev), use the following image:
# image
# repository: stibbons31/docker-airflow-dev
# tag: 2.0dev
## Airflow 2.0 allows changing the values ingress.web.path and ingress.flower.path (see below).
## In version < 2.0, changing these paths won't have any effect.
image:
##
## docker-airflow image
repository: puckel/docker-airflow
##
## image tag
tag: 1.10.7
##
## Image pull policy
## values: Always or IfNotPresent
pullPolicy: IfNotPresent
##
## image pull secret for private images
pullSecret:
##
## Set schedulerNumRuns to control how the scheduler behaves:
## -1 will let it loop indefinitely, but it will never update the DAGs
## 1 will have the scheduler quit after each refresh, but Kubernetes will restart it.
##
## A long running scheduler process, at least with the CeleryExecutor, ends up not scheduling
## some tasks. We still don't know the exact cause, unfortunately. Airflow has a built-in
## workaround in the form of the `num_runs` flag.
## Airflow runs with num_runs set to 5.
##
## If set to a value != -1, you will see your scheduler regularly restart. This is its normal
## behavior under these conditions.
schedulerNumRuns: "-1"
##
## Set schedulerDoPickle to toggle whether to have the scheduler
## attempt to pickle the DAG object to send over to the workers,
## instead of letting workers run their version of the code.
## See the Airflow documentation for the --do_pickle argument: https://airflow.apache.org/cli.html#scheduler
schedulerDoPickle: true
##
## Number of replicas for web server.
webReplicas: 1
##
## Custom airflow configuration environment variables
## Use this to override any airflow setting by defining environment variables in the
## following form: AIRFLOW__<section>__<key>.
## See the Airflow documentation: https://airflow.readthedocs.io/en/stable/howto/set-config.html?highlight=setting-configuration
## Example:
## config:
## AIRFLOW__CORE__EXPOSE_CONFIG: "True"
## HTTP_PROXY: "http://proxy.mycompany.com:123"
config: {}
##
## Configure pod disruption budget for the scheduler
podDisruptionBudgetEnabled: true
podDisruptionBudget:
maxUnavailable: 1
## Add custom connections
## Use this to add Airflow connections for operators you use
## For each connection - the id and type have to be defined.
## All the other parameters are optional
## Connections will be created with a script that is stored
## in a K8s secret and mounted into the scheduler container
## Example:
## connections:
## - id: my_aws
## type: aws
## extra: '{"aws_access_key_id": "**********", "aws_secret_access_key": "***", "region_name":"eu-central-1"}'
connections: []
## Add airflow variables
## This should be a json string with your variables in it
## Examples:
## variables: '{ "environment": "dev" }'
variables: {}
## Add airflow pools
## This should be a json string with your pools in it
## Examples:
## pools: '{ "example": { "description": "This is an example of a pool", "slots": 2 } }'
pools: {}
##
## Annotations for the Scheduler, Worker and Web pods
podAnnotations: {}
## Example:
## iam.amazonaws.com/role: airflow-Role
extraInitContainers: []
## Additional init containers to run before the Scheduler pods.
## This could, for example, be used to run a container that chowns the logs storage.
# - name: volume-mount-hack
# image: busybox
# command: ["sh", "-c", "chown -R 1000:1000 logs"]
# volumeMounts:
# - mountPath: /usr/local/airflow/logs
# name: logs-data
extraContainers: []
## Additional containers to run alongside the Scheduler, Worker and Web pods
## This could, for example, be used to run a sidecar that syncs DAGs from object storage.
# - name: s3-sync
# image: my-user/s3sync:latest
# volumeMounts:
# - name: synchronised-dags
# mountPath: /dags
extraVolumeMounts: []
## Additional volumeMounts to the main containers in the Scheduler, Worker and Web pods.
# - name: synchronised-dags
# mountPath: /usr/local/airflow/dags
extraVolumes: []
## Additional volumes for the Scheduler, Worker and Web pods.
# - name: synchronised-dags
# emptyDir: {}
##
## Run initdb when the scheduler starts.
initdb: true
scheduler:
resources: {}
# limits:
# cpu: "1000m"
# memory: "1Gi"
# requests:
# cpu: "500m"
# memory: "512Mi"
##
## Labels for the scheduler deployment
labels: {}
## Pod Annotations for the web deployment
podAnnotations: {}
##
## Annotations for the scheduler deployment
annotations: {}
## Support Node, affinity and tolerations for scheduler pod assignment
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
nodeSelector: {}
affinity: {}
tolerations: []
flower:
enabled: true
##
## Set AIRFLOW__CELERY__FLOWER_URL_PREFIX
## Prefix should match Ingress configuration ingress.flower.path
urlPrefix: ""
resources: {}
# limits:
# cpu: "100m"
# memory: "128Mi"
# requests:
# cpu: "100m"
# memory: "128Mi"
##
## Labels for the flower deployment
labels: {}
##
## Annotations for the flower deployment
annotations: {}
service:
annotations: {}
type: ClusterIP
externalPort: 5555
## Support Node, affinity and tolerations for flower pod assignment
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
nodeSelector: {}
affinity: {}
tolerations: []
extraConfigmapMounts: []
# Use-case example: adding a public certificate used for connection to a remote TLS endpoint
# - name: extra-cert
# mountPath: /etc/ssl/certs/extra-cert.pem
# configMap: extra-certificates
# readOnly: true
# subPath: extra-cert.pem
web:
##
## Set AIRFLOW__WEBSERVER__BASE_URL
## Path should match Ingress configuration
baseUrl: "http://localhost:8080"
resources: {}
# limits:
# cpu: "300m"
# memory: "1Gi"
# requests:
# cpu: "100m"
# memory: "512Mi"
##
## Labels for the web deployment
labels: {}
##
## Annotations for the web deployment
annotations: {}
## Pod Annotations for the web deployment
podAnnotations: {}
initialStartupDelay: "60"
initialDelaySeconds: "360"
minReadySeconds: 120
readinessProbe:
periodSeconds: 60
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 5
livenessProbe:
periodSeconds: 60
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 5
## Support Node, affinity and tolerations for web pod assignment
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
nodeSelector: {}
affinity: {}
tolerations: []
##
## Directory in which to mount secrets on webserver nodes.
secretsDir: /var/airflow/secrets
##
## Secrets which will be mounted as a file at `secretsDir/<secret name>`.
secrets: []
##
## Workers configuration
workers:
enabled: true
##
## Number of workers pod to launch
replicas: 1
##
## Graceful termination period
terminationPeriod: 30
##
## Custom resource configuration
resources: {}
# limits:
# cpu: "1"
# memory: "2G"
# requests:
# cpu: "0.5"
# memory: "512Mi"
##
## Labels for the Worker statefulSet
labels: {}
##
## Annotations for the Worker statefulSet
annotations: {}
##
## Annotations for the Worker pods
podAnnotations: {}
## Example:
## iam.amazonaws.com/role: airflow-Role
##
## Celery worker configuration
celery:
##
## number of parallel celery tasks per worker
instances: 1
##
## Graceful termination of workers
## Wait for the worker to finish the current task within the graceful period
gracefullTermination: false
##
## Directory in which to mount secrets on worker nodes.
secretsDir: /var/airflow/secrets
##
## Secrets which will be mounted as a file at `secretsDir/<secret name>`.
secrets: []
## Support Node, affinity and tolerations for worker pod assignment
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
nodeSelector: {}
affinity: {}
tolerations: []
##
## Ingress configuration
ingress:
##
## enable ingress
## Note: If you want to change url prefix for web ui or flower even if you do not use ingress,
## you can change web.baseUrl and flower.urlPrefix
enabled: false
##
## Configure the Ingress webserver endpoint
web:
## NOTE: This requires an airflow version > 1.9.x
## For the moment (March 2018) this is **not** available on official package, you will have
## to use an image where airflow has been updated to its current HEAD.
## You can use the following one:
## stibbons31/docker-airflow-dev:2.0dev
##
## if path is '/airflow':
## - UI will be accessible at 'http://mycompany.com/airflow/admin'
## - Healthcheck is at 'http://mycompany.com/airflow/health'
## - api is at 'http://mycompany.com/airflow/api'
## NOTE: do NOT keep trailing slash. For root configuration, set an empty string
path: ""
##
## Ingress hostname for the webserver
host: ""
##
## Annotations for the webserver
## The Airflow webserver handles relative paths completely; just have your load balancer pass the HTTP
## headers with the requested URL (no special configuration needed)
annotations: {}
##
## Example for Traefik:
# traefik.frontend.rule.type: PathPrefix
# kubernetes.io/ingress.class: traefik
##
## Configure the web liveness path.
## Defaults to the templated value `{{ ingress.web.path }}/health`
livenessPath:
tls:
## Set to "true" to enable TLS termination at the ingress
enabled: false
## If enabled, set "secretName" to the secret containing the TLS private key and certificate
## Example:
## secretName: example-com-crt
precedingPaths:
## Different http paths to add to the ingress before the default path
## Example:
## - path: "/*"
## serviceName: "ssl-redirect"
## servicePort: "use-annotation"
succeedingPaths:
## Different http paths to add to the ingress after the default path
## Example:
## - path: "/*"
## serviceName: "ssl-redirect"
## servicePort: "use-annotation"
##
## Configure the flower Ingress endpoint
flower:
##
## If flower is '/airflow/flower':
## - Flower UI is at 'http://mycompany.com/airflow/flower'
## NOTE: you need a reverse proxy/load balancer able to do URL rewriting in order to have
## flower mounted on a path other than root. Flower only does half the job of url prefixing: it
## only generates the right URLs/relative paths in the **returned HTML files**, but expects the
## request to have been made at the root.
## That's why we need a reverse proxy/load balancer that is able to strip the path
## NOTE: do NOT keep trailing slash. For root configuration, set an empty string
path: ""
##
## Configure the liveness path. Keep it at "/" for Flower >= Jan 2018.
## For previous versions, enter the same path as in the 'path' key
## NOTE: keep the trailing slash.
livenessPath: /
##
## hostname for flower
host: ""
##
## Annotation for the Flower endpoint
##
## ==== SKIP THE FOLLOWING BLOCK IF YOU HAVE FLOWER > JANUARY 2018 =============================
## Please note there is a small difference between the way the Airflow web server and Flower handle
## URL prefixes in HTTP requests:
## Flower wants the HTTP headers to behave as if there were no URL prefix, but still generates
## the right URLs in HTML pages thanks to its `--url-prefix` parameter
##
## Extracted from the Flower documentation:
## (https://github.com/mher/flower/blob/master/docs/config.rst#url_prefix)
##
## To access Flower on http://example.com/flower run it with:
## flower --url-prefix=/flower
##
## Use the following nginx configuration:
## server {
## listen 80;
## server_name example.com;
##
## location /flower/ {
## rewrite ^/flower/(.*)$ /$1 break;
## proxy_pass http://example.com:5555;
## proxy_set_header Host $host;
## }
## }
## ==== IF YOU HAVE FLOWER > JANUARY 2018, NO MORE NEED TO STRIP THE PREFIX ====================
annotations: {}
##
## NOTE: it is important here to have your reverse proxy strip the path/rewrite the URL
## Example for Traefik:
# traefik.frontend.rule.type: PathPrefix ## Flower >= Jan 2018
# traefik.frontend.rule.type: PathPrefixStrip ## Flower < Jan 2018
# kubernetes.io/ingress.class: traefik
tls:
## Set to "true" to enable TLS termination at the ingress
enabled: false
## If enabled, set "secretName" to the secret containing the TLS private key and certificate
## Example:
## secretName: example-com-crt
##
## Storage configuration for DAGs
persistence:
##
## enable persistent storage
enabled: false
##
## Existing claim to use
# existingClaim: nil
## Existing claim's subPath to use, e.g. "dags" (optional)
# subPath: ""
##
## Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
# storageClass: default
accessMode: ReadWriteOnce
##
## Persistent storage size request
size: 1Gi
##
## Storage configuration for logs
logsPersistence:
##
## enable persistent storage
enabled: false
##
## Existing claim to use
# existingClaim: nil
## Existing claim's subPath to use, e.g. "logs" (optional)
# subPath: ""
##
## Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
## A configuration for shared log storage requires a `storageClass` that
## supports the `ReadWriteMany` accessMode, such as NFS or AWS EFS.
# storageClass: default
accessMode: ReadWriteOnce
##
## Persistent storage size request
size: 1Gi
##
## Configure DAGs deployment and update
dags:
##
## mount path for persistent volume.
## Note that this location is referred to in airflow.cfg, so if you change it, you must update airflow.cfg accordingly.
path: /usr/local/airflow/dags
##
## Set to True to prevent pickling DAGs from scheduler to workers
doNotPickle: false
##
## Configure Git repository to fetch DAGs
git:
##
## url to clone the git repository
url: https://gitlab.com/selin-gungor/airflow-kubernetes-helm
##
## branch name, tag or sha1 to reset to
ref: master
## pre-created secret with key, key.pub and known_hosts file for private repos
secret: ""
## The host of the repo so for example if a github repo put github.com (Only need if using ssh not https git sync)
repoHost: ""
## The name of the private key in your git sync secret (Only need if using ssh not https git sync)
privateKeyName: ""
gitSync:
## Turns on the side car container
enabled: true
## Image for the side car container
image:
## docker-airflow image
repository: alpine/git
## image tag
tag: 1.0.7
## Image pull policy
## values: Always or IfNotPresent
pullPolicy: IfNotPresent
## The amount of time in seconds to git pull dags
refreshTime: 3000
initContainer:
## Fetch the source code when the pods starts
enabled: false
## Image for the init container (any image with git will do)
image:
## docker-airflow image
repository: alpine/git
## image tag
tag: 1.0.7
## Image pull policy
## values: Always or IfNotPresent
pullPolicy: IfNotPresent
## install requirements.txt dependencies automatically
installRequirements: true
##
## Configure logs
logs:
path: /usr/local/airflow/logs
##
## Enable RBAC
rbac:
##
## Specifies whether RBAC resources should be created
create: false
##
## Create or use ServiceAccount
serviceAccount:
##
## Specifies whether a ServiceAccount should be created
create: true
## The name of the ServiceAccount to use.
## If not set and create is true, a name is generated using the fullname template
name:
##
## Configuration values for the postgresql dependency.
## ref: https://github.com/kubernetes/charts/blob/master/stable/postgresql/README.md
postgresql:
##
## Use the PostgreSQL chart dependency.
## Set to false if bringing your own PostgreSQL.
enabled: true
##
## The name of an existing secret that contains the postgres password.
existingSecret:
## Name of the key containing the secret.
existingSecretKey: postgresql-password
##
## If you are bringing your own PostgreSQL, you should set postgresHost and
## also probably service.port, postgresqlUsername, postgresqlPassword, and postgresqlDatabase
## postgresHost:
##
## PostgreSQL port
service:
port: 5432
## PostgreSQL User to create.
postgresqlUsername: postgres
##
## PostgreSQL Password for the new user.
## If not set, a random 10-character password will be used.
postgresqlPassword: airflow
##
## PostgreSQL Database to create.
postgresqlDatabase: airflow
##
## Persistent Volume Storage configuration.
## ref: https://kubernetes.io/docs/user-guide/persistent-volumes
persistence:
##
## Enable PostgreSQL persistence using Persistent Volume Claims.
enabled: true
##
## Persistent class
# storageClass: classname
##
## Access modes:
accessModes:
- ReadWriteOnce
## Configuration values for the Redis dependency.
## ref: https://github.com/kubernetes/charts/blob/master/stable/redis/README.md
redis:
##
## Use the redis chart dependency.
## Set to false if bringing your own redis.
enabled: true
##
## The name of an existing secret that contains the redis password.
existingSecret:
## Name of the key containing the secret.
existingSecretKey: redis-password
##
## If you are bringing your own redis, you can set the host in redisHost.
## redisHost:
##
## Redis password
##
password: airflow
##
## Master configuration
master:
##
## Image configuration
# image:
##
## docker registry secret names (list)
# pullSecrets: nil
##
## Configure persistence
persistence:
##
## Use a PVC to persist data.
enabled: false
##
## Persistent class
# storageClass: classname
##
## Access mode:
accessModes:
- ReadWriteOnce
##
## Disable cluster management by default.
cluster:
enabled: false
# Enable this if you're using https://github.com/coreos/prometheus-operator
# Don't forget you need to install something like https://github.com/epoch8/airflow-exporter in your airflow docker container
serviceMonitor:
enabled: false
interval: "30s"
path: /admin/metrics
## [Kube Prometheus Selector Label](https://github.com/helm/charts/tree/master/stable/prometheus-operator#exporters)
selector:
prometheus: kube-prometheus
# Enable this if you're using https://github.com/coreos/prometheus-operator
prometheusRule:
enabled: false
## Namespace in which the prometheus rule is created
# namespace: monitoring
## Define individual alerting rules as required
## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#rulegroup
## https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/
groups: {}
## Used to pass Labels that are used by the Prometheus installed in your cluster to select Prometheus Rules to work with
## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec
additionalLabels: {}
extraManifests: []
##
## Example:
## - apiVersion: cloud.google.com/v1beta1
## kind: BackendConfig
## metadata:
## name: "{{ .Release.Name }}-test"
## spec:
## securityPolicy:
## name: "gcp-cloud-armor-policy-test"
Execute helm install -f values.yaml "airflow" stable/airflow
Anything else we need to know:
I ran into this same issue, and traced it to the following:
https://github.com/helm/charts/blob/258cd28472554d84fc206243074bbc11da3b2316/stable/airflow/templates/deployments-web.yaml#L244
The volume "git-clone" is only created when "initContainer" is set to true. So having gitSync enabled but not initContainer results in a volumeMount for a volume that doesn't exist.
I was able to get this working by enabling both gitSync and initContainer; a sketch of the resulting dags section is below.
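For reference, a minimal sketch of the dags section with that workaround applied, based on the values posted above (only the keys relevant to git syncing are shown; nesting follows the chart's values.yaml, with gitSync under git and initContainer under dags):
dags:
  path: /usr/local/airflow/dags
  git:
    url: https://gitlab.com/selin-gungor/airflow-kubernetes-helm
    ref: master
    gitSync:
      ## side car container that keeps pulling the repo
      enabled: true
      image:
        repository: alpine/git
        tag: 1.0.7
        pullPolicy: IfNotPresent
      ## pull interval in seconds
      refreshTime: 3000
  initContainer:
    ## clones the repo when the pod starts, which also creates the "git-clone" volume
    enabled: true
    image:
      repository: alpine/git
      tag: 1.0.7
      pullPolicy: IfNotPresent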
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
I am also having this issue. I don't see anywhere in the docs that initContainer would also need to be enabled.
Should be fixed by #20715
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
This issue is being automatically closed due to inactivity.