Here is a reduced test case and README: https://github.com/gobengo/kustomize-issue-7f53131c-0253-45a9-87c4-c6ab2f4d55ea
If you run kustomize build . in the root directory, you get an error like:
Error: accumulating resources: recursed merging from path '2cbace6b-c9f0-4f56-aba7-b911c0c85d48': var 'MYSQL_SERVICE' already encountered
The var referenced in the error message is defined in the kustomization.yaml of the (remote) base that both top-level bases use. https://github.com/gobengo/etherpad-lite/blob/master/lib/kubedb-mysql-etherpad-lite/kustomization.yaml#L6
The root directory kustomization simply composes the two directories in here (which are identical and both use etherpad-lite as a base).
kustomize build in either of the base directories works fine and produces a stream of yaml output (try ls -d */ | xargs -L1 kustomize build).
Expected Behavior: I can kustomize build . in the root directory and there is no error and I see the same output as concatenating the outputs of the two bases (joined with '---').
It seems like this should work. My goal is just to have the same remote kustomization running in two different namespaces. I also want the remote kustomization to survive namePrefix so it can be used twice in the same namespace (which is why I'm using vars for the service name). Is this a bug, or am I missing something about how vars should work?
EDIT:
Update 20190807: I tested with kustomize 3.1.0 and the same error happens: kubectl kustomize github.com/gobengo/kustomize-issue-7f53131c-0253-45a9-87c4-c6ab2f4d55ea.
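For reference, the intended layout can be sketched as a root kustomization that composes two overlays of the same remote base (directory names here are illustrative, not necessarily those in the reproduction repo):

```yaml
# ./kustomization.yaml (sketch; directory names are hypothetical)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- instance-a   # overlay: its own namePrefix + namespace, base = etherpad-lite
- instance-b   # overlay: different namePrefix + namespace, same remote base
```

Building either overlay alone works; it is only the composition at the root that trips the duplicate-var check.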
@gobengo Please have a look at: https://github.com/kubernetes-sigs/kustomize/pull/1253
There is still a big issue to address (variables pointing at names), but the behavior seems to be coming along.
When reproducing your setup here, this is the kustomize build output:
apiVersion: v1
kind: Namespace
metadata:
  name: cstmr-72n16kk2an86kc4855ujzk9a9plo293274l
---
apiVersion: v1
kind: Namespace
metadata:
  name: cstmr-zvyjvn35b81rkfkr87fznrpanw5op9x5yo0
---
apiVersion: v1
data:
  settings.json: |
    {
      "skinName":"colibris",
      "title":"Etherpad on Kubernetes w/ MySQL",
      "dbType": "${ETHERPAD_DB_TYPE:mysql}",
      "dbSettings": {
        "database": "${ETHERPAD_DB_DATABASE}",
        "host": "${ETHERPAD_DB_HOST}",
        "password": "${ETHERPAD_DB_PASSWORD}",
        "user": "${ETHERPAD_DB_USER}"
      }
    }
kind: ConfigMap
metadata:
  labels:
    k8s.permanent.cloud/appInstallation.id: add961a2-b5c7-4ccd-b3c7-66f7c03c9c6e
  name: ai-zv58kz2nbox64fkrqptr94nurvqoxz88o52etherpad
  namespace: cstmr-72n16kk2an86kc4855ujzk9a9plo293274l
---
apiVersion: v1
data:
  init.sql: |
    create database `etherpad_lite_db`;
    use `etherpad_lite_db`;
    CREATE TABLE `store` (
      `key` varchar(100) COLLATE utf8mb4_bin NOT NULL DEFAULT '',
      `value` longtext COLLATE utf8mb4_bin NOT NULL,
      PRIMARY KEY (`key`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin;
kind: ConfigMap
metadata:
  labels:
    k8s.permanent.cloud/appInstallation.id: add961a2-b5c7-4ccd-b3c7-66f7c03c9c6e
  name: ai-zv58kz2nbox64fkrqptr94nurvqoxz88o52etherpad-mysql-init
  namespace: cstmr-72n16kk2an86kc4855ujzk9a9plo293274l
---
apiVersion: v1
data:
  settings.json: |
    {
      "skinName":"colibris",
      "title":"Etherpad on Kubernetes w/ MySQL",
      "dbType": "${ETHERPAD_DB_TYPE:mysql}",
      "dbSettings": {
        "database": "${ETHERPAD_DB_DATABASE}",
        "host": "${ETHERPAD_DB_HOST}",
        "password": "${ETHERPAD_DB_PASSWORD}",
        "user": "${ETHERPAD_DB_USER}"
      }
    }
kind: ConfigMap
metadata:
  labels:
    k8s.permanent.cloud/appInstallation.id: 45170c85-ec8b-4008-9d57-4524aa16f93f
  name: ai-w613mmojuo0qqir4pvc1l4rsr96mm6110ymetherpad
  namespace: cstmr-zvyjvn35b81rkfkr87fznrpanw5op9x5yo0
---
apiVersion: v1
data:
  init.sql: |
    create database `etherpad_lite_db`;
    use `etherpad_lite_db`;
    CREATE TABLE `store` (
      `key` varchar(100) COLLATE utf8mb4_bin NOT NULL DEFAULT '',
      `value` longtext COLLATE utf8mb4_bin NOT NULL,
      PRIMARY KEY (`key`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin;
kind: ConfigMap
metadata:
  labels:
    k8s.permanent.cloud/appInstallation.id: 45170c85-ec8b-4008-9d57-4524aa16f93f
  name: ai-w613mmojuo0qqir4pvc1l4rsr96mm6110ymetherpad-mysql-init
  namespace: cstmr-zvyjvn35b81rkfkr87fznrpanw5op9x5yo0
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s.permanent.cloud/appInstallation.id: add961a2-b5c7-4ccd-b3c7-66f7c03c9c6e
  name: ai-zv58kz2nbox64fkrqptr94nurvqoxz88o52etherpad
  namespace: cstmr-72n16kk2an86kc4855ujzk9a9plo293274l
spec:
  ports:
  - name: web
    port: 80
    targetPort: web
  selector:
    app: etherpad
    k8s.permanent.cloud/appInstallation.id: add961a2-b5c7-4ccd-b3c7-66f7c03c9c6e
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s.permanent.cloud/appInstallation.id: 45170c85-ec8b-4008-9d57-4524aa16f93f
  name: ai-w613mmojuo0qqir4pvc1l4rsr96mm6110ymetherpad
  namespace: cstmr-zvyjvn35b81rkfkr87fznrpanw5op9x5yo0
spec:
  ports:
  - name: web
    port: 80
    targetPort: web
  selector:
    app: etherpad
    k8s.permanent.cloud/appInstallation.id: 45170c85-ec8b-4008-9d57-4524aa16f93f
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s.permanent.cloud/appInstallation.id: add961a2-b5c7-4ccd-b3c7-66f7c03c9c6e
  name: ai-zv58kz2nbox64fkrqptr94nurvqoxz88o52etherpad
  namespace: cstmr-72n16kk2an86kc4855ujzk9a9plo293274l
spec:
  replicas: 1
  selector:
    matchLabels:
      app: etherpad
      k8s.permanent.cloud/appInstallation.id: add961a2-b5c7-4ccd-b3c7-66f7c03c9c6e
  template:
    metadata:
      labels:
        app: etherpad
        k8s.permanent.cloud/appInstallation.id: add961a2-b5c7-4ccd-b3c7-66f7c03c9c6e
    spec:
      containers:
      - env:
        - name: ETHERPAD_DB_TYPE
          value: mysql
        - name: ETHERPAD_DB_HOST
          value: ai-w613mmojuo0qqir4pvc1l4rsr96mm6110ymetherpad-mysql
        - name: ETHERPAD_DB_DATABASE
          value: etherpad_lite_db
        - name: ETHERPAD_DB_USER
          valueFrom:
            secretKeyRef:
              key: username
              name: etherpad-mysql-auth
        - name: ETHERPAD_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              key: password
              name: etherpad-mysql-auth
        image: etherpad/etherpad:latest
        name: etherpad
        ports:
        - containerPort: 9001
          name: web
        volumeMounts:
        - mountPath: /opt/etherpad-lite/settings.json
          name: config
          subPath: settings.json
        - mountPath: /opt/etherpad/settings.json
          name: config
          subPath: settings.json
      volumes:
      - configMap:
          name: ai-zv58kz2nbox64fkrqptr94nurvqoxz88o52etherpad
        name: config
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s.permanent.cloud/appInstallation.id: 45170c85-ec8b-4008-9d57-4524aa16f93f
  name: ai-w613mmojuo0qqir4pvc1l4rsr96mm6110ymetherpad
  namespace: cstmr-zvyjvn35b81rkfkr87fznrpanw5op9x5yo0
spec:
  replicas: 1
  selector:
    matchLabels:
      app: etherpad
      k8s.permanent.cloud/appInstallation.id: 45170c85-ec8b-4008-9d57-4524aa16f93f
  template:
    metadata:
      labels:
        app: etherpad
        k8s.permanent.cloud/appInstallation.id: 45170c85-ec8b-4008-9d57-4524aa16f93f
    spec:
      containers:
      - env:
        - name: ETHERPAD_DB_TYPE
          value: mysql
        - name: ETHERPAD_DB_HOST
          value: ai-w613mmojuo0qqir4pvc1l4rsr96mm6110ymetherpad-mysql
        - name: ETHERPAD_DB_DATABASE
          value: etherpad_lite_db
        - name: ETHERPAD_DB_USER
          valueFrom:
            secretKeyRef:
              key: username
              name: etherpad-mysql-auth
        - name: ETHERPAD_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              key: password
              name: etherpad-mysql-auth
        image: etherpad/etherpad:latest
        name: etherpad
        ports:
        - containerPort: 9001
          name: web
        volumeMounts:
        - mountPath: /opt/etherpad-lite/settings.json
          name: config
          subPath: settings.json
        - mountPath: /opt/etherpad/settings.json
          name: config
          subPath: settings.json
      volumes:
      - configMap:
          name: ai-w613mmojuo0qqir4pvc1l4rsr96mm6110ymetherpad
        name: config
---
apiVersion: kubedb.com/v1alpha1
kind: MySQL
metadata:
  labels:
    k8s.permanent.cloud/appInstallation.id: add961a2-b5c7-4ccd-b3c7-66f7c03c9c6e
  name: ai-zv58kz2nbox64fkrqptr94nurvqoxz88o52etherpad-mysql
  namespace: cstmr-72n16kk2an86kc4855ujzk9a9plo293274l
spec:
  init:
    scriptSource:
      configMap:
        name: etherpad-mysql-init
  storage:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
    storageClassName: default
  storageType: Durable
  terminationPolicy: WipeOut
  version: 5.7.25
---
apiVersion: kubedb.com/v1alpha1
kind: MySQL
metadata:
  labels:
    k8s.permanent.cloud/appInstallation.id: 45170c85-ec8b-4008-9d57-4524aa16f93f
  name: ai-w613mmojuo0qqir4pvc1l4rsr96mm6110ymetherpad-mysql
  namespace: cstmr-zvyjvn35b81rkfkr87fznrpanw5op9x5yo0
spec:
  init:
    scriptSource:
      configMap:
        name: etherpad-mysql-init
  storage:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
    storageClassName: default
  storageType: Durable
  terminationPolicy: WipeOut
  version: 5.7.25
@jbrette Awesome! Thanks for working on this and helping me.
I see something slightly off in that output. Tell me if this seems right.
In the output there is a Deployment with metadata.name = ai-zv58kz2nbox64fkrqptr94nurvqoxz88o52etherpad.
It has a container with an env variable like
- name: ETHERPAD_DB_HOST
  value: ai-w613mmojuo0qqir4pvc1l4rsr96mm6110ymetherpad-mysql
I would expect the value of this env variable (which was from the kustomize variable that used to error) to have the namePrefix ai-zv58kz2nbox64fkrqptr94nurvqoxz88o52 (the same as the Deployment that contains it). The namePrefix comes from here
The etherpad-lite kustomization is used twice, as a base in two kustomizations each with a different namePrefix. But in the output above, the namePrefix of one of those was applied in all (both) interpolations of the MYSQL_SERVICE kustomization variable. I'd expect each interpolation to use the namePrefix of the kustomization it's contained in.
Hope that makes sense or you can help me understand if I authored my sample kustomizations wrong.
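Concretely, the expectation described above is that each Deployment's interpolated value carries its own overlay's prefix. A sketch of the expected (not the actual) env entry for the first Deployment:

```yaml
# Expected (hypothetical) env entry in the ai-zv58kz2nbox... Deployment,
# i.e. the same prefix as the Deployment that contains it:
- name: ETHERPAD_DB_HOST
  value: ai-zv58kz2nbox64fkrqptr94nurvqoxz88o52etherpad-mysql
```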
I would very much like to be able to build multiple overlays simultaneously from a single file - here's a slightly more minimized example:
> kustomize build https://github.com/ediphy-azorab/var_clash_example/base
2019/09/11 16:46:47 well-defined vars that were never replaced: MY_ENV
apiVersion: v1
data:
  MY_ENV: foo
kind: ConfigMap
metadata:
  name: example-g9bf2mfm2t
> kustomize build https://github.com/ediphy-azorab/var_clash_example/overlay1
apiVersion: v1
data:
  MY_ENV: bar
kind: ConfigMap
metadata:
  annotations: {}
  labels: {}
  name: example-overlay1-d6httb926d
  namespace: overlay1
2019/09/11 16:46:54 well-defined vars that were never replaced: MY_ENV
> kustomize build https://github.com/ediphy-azorab/var_clash_example/overlay2
apiVersion: v1
data:
  MY_ENV: baz
kind: ConfigMap
metadata:
  annotations: {}
  labels: {}
  name: example-overlay2-fm7c465842
  namespace: overlay2
2019/09/11 16:46:57 well-defined vars that were never replaced: MY_ENV
> kustomize build https://github.com/ediphy-azorab/var_clash_example/
Error: accumulating resources: recursed merging from path './overlay2': var 'MY_ENV' already encountered
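For context, this clash pattern arises when a base declares a var and two overlays both include that base. A hedged sketch of what the var_clash_example base presumably looks like (repo contents inferred from the error, not verified; field names follow the kustomize vars syntax):

```yaml
# base/kustomization.yaml (sketch; contents presumed)
configMapGenerator:
- name: example
  literals:
  - MY_ENV=foo
vars:
- name: MY_ENV
  objref:
    apiVersion: v1
    kind: ConfigMap
    name: example
  fieldref:
    fieldpath: data.MY_ENV
```

Each overlay lists `../base` in its resources, so a root kustomization composing both overlays accumulates the `MY_ENV` var twice and errors.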
@ediphy-azorab Have a look at that test environment.
The following PR seems to be working. kustomize build produces:
apiVersion: v1
data:
  MY_ENV: bar
kind: ConfigMap
metadata:
  annotations: {}
  labels: {}
  name: example-overlay1-d6httb926d
  namespace: overlay1
---
apiVersion: v1
data:
  MY_ENV: baz
kind: ConfigMap
metadata:
  annotations: {}
  labels: {}
  name: example-overlay2-fm7c465842
  namespace: overlay2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dep-overlay1
  namespace: overlay1
spec:
  template:
    spec:
      containers:
      - env:
        - name: SOME_ENV
          value: bar
        name: dep
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dep-overlay2
  namespace: overlay2
spec:
  template:
    spec:
      containers:
      - env:
        - name: SOME_ENV
          value: baz
        name: dep
What is the best workaround?
There isn't one. gh-1620 contains the fix, but we can't merge it unless we ignore the other issue it creates.
Our project needs that feature. Also, the PR had been left to rot for four months; like a lot of things, we have to maintain the fork until a feature matching that need is actually implemented in kustomize.
So if you check here you will see that it actually works.
To gain access to the feature, just clone the allinone branch and run "make install".
@jbrette I'm still unable to get the use-case from gh-1600 running using your fork. Would you expect that to work? At the risk of being repetitive, here it is again (it fails with the allinone branch @ b56479f34c670fe5d8658b6df68443397930b892):
mkdir test
cd test
mkdir -p projects/foo/manifests projects/bar/manifests environment
printf domain.com > environment/domain
printf dev > environment/name
printf -- -branch > environment/branch
cat <<EOF > environment/kustomization.yml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configMapGenerator:
- name: environment
  files:
  - name
  - domain
  - branch
vars:
- name: ENV
  objref:
    apiVersion: v1
    kind: ConfigMap
    name: environment
  fieldref:
    fieldpath: data.name
- name: DOMAIN
  objref:
    apiVersion: v1
    kind: ConfigMap
    name: environment
  fieldref:
    fieldpath: data.domain
- name: BRANCH
  objref:
    apiVersion: v1
    kind: ConfigMap
    name: environment
  fieldref:
    fieldpath: data.branch
generatorOptions:
  disableNameSuffixHash: true
EOF
cat <<EOF > projects/foo/kustomization.yml
namespace: foo
resources:
- ../../environment
- manifests/ingress.yml
EOF
cat <<EOF > projects/bar/kustomization.yml
namespace: bar
resources:
- ../../environment
- manifests/ingress.yml
EOF
cat <<'EOF' > projects/bar/manifests/ingress.yml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: bar
spec:
  rules:
  - host: bar$(BRANCH).$(ENV).$(DOMAIN)
    http:
      paths:
      - backend:
          serviceName: bar
          servicePort: http
EOF
cat <<'EOF' > projects/foo/manifests/ingress.yml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo
spec:
  rules:
  - host: foo$(BRANCH).$(ENV).$(DOMAIN)
    http:
      paths:
      - backend:
          serviceName: foo
          servicePort: http
EOF
cat <<EOF > kustomization.yml
resources:
- projects/foo
- projects/bar
EOF
kustomize build .
I'm currently out of ideas given that gh-1620 isn't a viable solution. I could probably dive in and find some way to rectify the issue, but at this point I'm more inclined to find some way to dynamically generate kustomization.yml files using a scripting language.
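Until a fix lands, one workaround consistent with the observation earlier in the thread (each overlay builds fine on its own) is to build the overlays separately and join their outputs with a YAML document separator. A minimal sketch, assuming the `projects/foo` and `projects/bar` layout from the script above and `kustomize` on PATH:

```shell
# build_all: run a build command once per directory and join the
# outputs with a '---' YAML document separator between documents.
build_all() {
  builder="$1"; shift    # e.g. "kustomize build"
  first=1
  for dir in "$@"; do
    [ "$first" -eq 1 ] || echo '---'
    first=0
    $builder "$dir"
  done
}

# Usage (hypothetical paths):
#   build_all "kustomize build" projects/foo projects/bar > all.yaml
```

This sidesteps the root-level var accumulation entirely, since each `kustomize build` runs in its own process.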
This is solved here
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle rotten
This also happens when composing multiple different kustomizations (not just multiple identical bases) which use the same variable name. Specifically, when trying to apply kustomizations generated by https://github.com/kubeflow/kfctl by listing them as bases in a top-level kustomization.yaml, multiple of them include a variable called clusterDomain. Variables created in "sibling" kustomizations shouldn't interfere with each other.
@jbrette It looks like your "fix" https://github.com/keleustes/kustomize/tree/allinone/examples/issues/issue_1251_g only works in your custom forked version of kustomize, right?
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.