Description
Unable to update configs when stack is redeployed
Steps to reproduce the issue:
Contents of stack.yml
version: "3.3"
services:
  test:
    image: effectivetrainings/runner
    configs:
      - source: config.yml
        target: /my-config.yml
configs:
  config.yml:
    file: ./config.yml
Describe the results you received:
Error response from daemon: rpc error: code = InvalidArgument desc = only updates to Labels are allowed
Describe the results you expected:
Configs should be updated.
Additional information you deem important (e.g. issue happens only occasionally):
Output of docker version:
Client:
Version: 17.06.2-ce
API version: 1.30
Go version: go1.8.3
Git commit: cec0b72
Built: Tue Sep 5 20:12:06 2017
OS/Arch: darwin/amd64
Server:
Version: 17.09.0-ce
API version: 1.32 (minimum version 1.12)
Go version: go1.8.3
Git commit: afdb6d4
Built: Tue Sep 26 22:40:56 2017
OS/Arch: linux/amd64
Experimental: true
Output of docker info:
Containers: 34
Running: 4
Paused: 0
Stopped: 30
Images: 8
Server Version: 17.09.0-ce
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: active
NodeID: osdigy0m5vmld2txqjhse4uk7
Is Manager: true
ClusterID: laqdnfslcq3ecfbq66q3xmuwa
Managers: 1
Nodes: 3
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 3
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Force Rotate: 0
Root Rotation In Progress: false
Node Address: 192.168.33.49
Manager Addresses:
192.168.33.49:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 06b9cb35161009dcb7123345749fef02f7cea8e0
runc version: 3f2f8b84a77f73d38244dd690525642a72156c64
init version: 949e6fa
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.4.0-93-generic
Operating System: Ubuntu 16.04.3 LTS
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 992.1MiB
Name: worker-1
ID: GLS7:AHRQ:M3IM:FSLB:TE7N:DBVQ:5P7Y:IG4P:HSFF:G73W:VWUH:WVIY
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: true
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Additional environment details (AWS, VirtualBox, physical, etc.):
https://github.com/moby/moby/issues/33808
But the argumentation for secrets does not apply to configs, in my mind.
cc @thaJeztah @dnephin
I guess configs are immutable as well? This seems like it was by design.
But the question is: should they really be? It is impractical to follow the suggested process of renaming a config to be able to update it. I guess it's technically immutable by default, since it's stored in the raft log?
Separating out the discussion of whether configs should be immutable or not (I personally see value both in immutability and in having an _easier way_ to update configurations; I know an early design for secrets had a built-in versioning mechanism, similar to how services keep a history when they are updated).
I'm trying to reproduce the error you received:
Error response from daemon: rpc error: code = InvalidArgument desc = only updates to Labels are allowed
But I'm not able to reproduce this; these are the steps I used to try to reproduce:
First, set up a test-directory and create config and compose file:
$ mkdir repro-35048 && cd repro-35048
$ cat > config.yml <<EOF
hello: world
EOF
$ cat > docker-compose.yml <<EOF
version: "3.3"
services:
  test:
    image: nginx:alpine
    configs:
      - source: config.yml
        target: /my-config.yml
configs:
  config.yml:
    file: ./config.yml
EOF
Deploy the stack:
$ docker stack deploy -c docker-compose.yml repro-35048
Creating network repro-35048_default
Creating service repro-35048_test
Note: the output does not mention that a config is created; I'll open an issue for that
Check the config is created:
$ docker config ls
ID NAME CREATED UPDATED
7fdu1604z2wtqtpw5neklsn0h repro-35048_config.yml 23 seconds ago 23 seconds ago
Save the service's inspect and config's inspect for later checking:
$ docker service inspect -f '{{json .Spec}}' repro-35048_test | jq . > before.json
$ docker config inspect repro-35048_config.yml | jq . > config-before.json
Now, update the config file;
$ cat > config.yml <<EOF
hello: world-updated
EOF
Re-deploy the stack:
$ docker stack deploy -c docker-compose.yml repro-35048
Updating service repro-35048_test (id: t5yite4z6wk1ej6ewb3gjur9k)
Note: the output suggests the service is updated, but no changes were made. I'll open an issue for that
Save the service's inspect and config's inspect, and compare them with the "before updating" output:
$ docker service inspect -f '{{json .Spec}}' repro-35048_test | jq . > after.json
$ docker config inspect repro-35048_config.yml | jq . > config-after.json
$ diff before.json after.json
(empty)
$ diff config-before.json config-after.json
(empty)
Check that the service did indeed not re-deploy (because no change); in the output, there's no old / "rotated" tasks for the service:
$ docker service ps repro-35048_test
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
aol6aafeei8w repro-35048_test.1 nginx:alpine moby Running Running 2 minutes ago
Verify the configuration didn't change inside the container:
$ docker exec 82caa9a0e80e sh -c 'cat /my-config.yml'
hello: world
Cleanup, and remove the stack:
$ docker stack rm repro-35048
Removing service repro-35048_test
Removing config repro-35048_config.yml
Removing network repro-35048_default
(Note that, unlike the docker stack deploy output, when _removing_ the stack, the config _is_ actually mentioned in the output).
Interestingly, the error _should_ be there, but for some reason I'm not getting it (running Docker 17.09) https://github.com/docker/cli/blob/0d17ea257757ad7f7bc2e99d1080f05391bc9967/cli/command/stack/deploy_composefile.go#L247-L251
Opened https://github.com/docker/cli/pull/593 to add extra messages during docker stack deploy
I just switched to a versioned naming scheme for my configs as a workaround (sites_v1, sites_v2). Unfortunately, that means I have to manually "garbage collect" old configs (using docker config rm).
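That rotation can be sketched in shell; the names below (web, sites_v1, sites_v2, the config file, and the target path) are placeholders, not anything from this thread:

```shell
# Derive the next version name from the current one (e.g. sites_v1 -> sites_v2):
current="sites_v1"
next="sites_v$(( ${current##*_v} + 1 ))"

# Create the new config version and swap it into the service in one update:
docker config create "$next" ./sites.conf
docker service update \
  --config-rm "$current" \
  --config-add "source=${next},target=/etc/nginx/sites.conf" \
  web

# Manual "garbage collection" of the old version:
docker config rm "$current"
```

Doing the remove and add in a single docker service update keeps this to one rolling restart of the service.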
As swarm services are updated from time to time (and so are their configs), I am much in favor of updatable configs (and secrets as well).
They probably don't have to be _mutable_. I don't know the internals, but maybe just create a new config and update the reference to the config id in the service spec?
I brought this issue up a while back, and the initial design/implementations for secrets had an internal versioning (similar to the way services work, i.e., every update gets a new version); this introduced a lot of complexity and it was decided to drop that part in favour of manually having a versioning scheme.
I agree that from a UX perspective it's not ideal, but don't know the exact implications of introducing the internal versioning.
Unfortunately, that means I have to manually "garbage collect" old configs
Perhaps a docker config prune and docker secret prune command would be an enhancement (although I can see that being dangerous as well 😅)
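A crude approximation of such a prune is possible today, because docker config rm refuses to delete a config that is still referenced by a service; a best-effort removal pass therefore only deletes the unused ones. A sketch (run with care):

```shell
# Best-effort prune: "docker config rm" fails for configs still in use,
# so only the unused ones are actually removed.
for cfg in $(docker config ls --format '{{.Name}}'); do
  docker config rm "$cfg" 2>/dev/null || echo "kept (in use): $cfg"
done
```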
@thaJeztah I'm facing the missing error message too. Docker Engine 17.09 does not show any errors at stack deploy when the config exists. Should it be reported as a separate issue?
@pszczekutowicz it should only show an error if the config _exists_ and the content of the config was _modified_. If there's no change to the config file, then the docker stack deploy just is a "no-op" (i.e., docker stack deploy should be idempotent if no changes are made to the stack).
If you _did_ make changes to the config, and those changes are silently ignored, could you open an issue with the exact steps to reproduce? (It's closely related to this issue, but perhaps good to start a clean discussion for your issue)
I'm not sure why we need to track the versioning of those items (apart from whatever the 'current' one is). These should be ephemeral from Docker's perspective, right?
I'm not sure why we need to track the versioning of those items (apart from whatever the 'current' one is). These should be ephemeral from Docker's perspective, right?
Let me try explaining why I think versioning _is_ important;
$ docker config create myconfig ./config.cnf
$ docker service create --config src=myconfig,target=/foo/config.cnf --name myservice myimage
$ docker config update myconfig ./config-new.cnf
$ docker service create --config src=myconfig,target=/foo/config.cnf --name myservice2 myimage
At this point:
- myservice uses the old version of the config
- myservice2 uses the new version of the config
- myservice will fail

Similar to the above:
$ docker service update --force myservice
Will update the service to use the new config; if the config happens to have an error, there's no way to roll back, i.e., attempting to recover the failing service;
$ docker service rollback myservice
Won't resolve the situation.
With _versioning_ or "pinning", something like this could be done:
Similar to how image digests are resolved when redeploying a service, "resolve" / "pin" to the current version of a config;
$ docker service create --config src=myconfig,target=/foo/config.cnf --name myservice myimage
$ docker service inspect myservice --format '{{ json .Spec.TaskTemplate.ContainerSpec.Configs}}' | jq
[
{
"File": {
"Name": "/foo/config.cnf",
"UID": "0",
"GID": "0",
"Mode": 292
},
"ConfigID": "vbc6o4k6xdct0oojky0hdpahw",
"ConfigName": "myconfig",
"Version": {
"Index": 1007
}
}
]
Pinning to a specific version would allow;
Some UX would have to be worked on;

- be able to explicitly specify the version of a config/secret (config@version?)
- update a config/secret to the latest version (just docker service update --force may not be ideal, as it will also update the _image_ that's used)
- not directly related, but have a central command to update all services that use a config / secret, and update them all to the latest version (e.g., a key has been compromised, so rotating the key for all services that use it)

Ok. Makes sense. But don't they get a hash currently? Is the issue then that we don't correlate the hash with the services using them?
There's no hash stored/exposed; initially a hash was exposed, but during review this was removed out of security concerns (that was for secrets, but "configs" use the same mechanisms).
I'm also hitting this bug. Very annoying, as I deploy to a swarm cluster from our automated build servers, and managing configurations manually is a PITA.
I'm not sure I get why the rollback scenario is an issue. Docker swarm already has to keep track of which version of the image it needs to rollback to. Why cant the configuration and secrets associated with that deployed service be attached in the same way?
It's not a bug; it was part of the design to make configs and secrets immutable (i.e., to have a new version, the config/secret needs to get a new name).
I'm not sure I get why the rollback scenario is an issue. Docker swarm already has to keep track of which version of the image it needs to rollback to. Why cant the configuration and secrets associated with that deployed service be attached in the same way?
The service does not store the data from the config/secret itself, it only refers to the config's ID / name. To be able to roll back to a previous version of a service (including all the options that the previous version of the service used), the swarm would then have to keep the old version of the config/secret around (hence, a versioned storage being present).
For an _image_, the registry keeps all versions of an image around, so versioning is handled by the registry.
I get that it's not a bug, per se...however it does fundamentally break a very valid use case for configs in a typical devops lifecycle. Can we explore an enhancement request to track the config versions? Even in secrets, we can track the version without exposing the secret. @thaJeztah
For me, even the suggested config rotation approach does not work because it apparently requires distinct config targets:
Error response from daemon: rpc error: code = InvalidArgument desc = config references 'old.conf' and 'new.conf' have a conflicting target: '/config/current.conf'
So my container would have to check 2 locations for configs in order to make this work? This is impractical.
Also having to choose a different name for the updated config is impractical because my compose file still references the original config name, i.e. when I do docker deploy --compose-file docker-compose.yml after rotation it will fail because the original config will have been removed (except if the config is not external). So I would be forced to rotate once more before being able to use the compose file for updating my stack/service again.
@thaJeztah I'd like to revisit your earlier post:
Some UX would have to be worked on;
be able to explicitly specify the version of a config/secret (config@version?)
update a config/secret to the latest version (just docker service update --force may not be ideal, as it will also update the image that's used)
not directly related, but have a central command to update all services that use a config / secret, and update them all to the latest version (e.g., a key has been compromised, so rotating the key for all services that use it)
Could we not pin the config version to the service to which it is attached? I.E. when a config is created, it has some hash. When it is attached to a service, the service sees service.hash such that, if needed, it can be recaptured (unless the user manually prunes the configs, of course).
docker config update would simply add a new commit hash (similar to how a git commit works). Actually...could even use a concept similar to image where configs can be tagged and the default tag is latest? This way, updating service(s) would pull the latest config by default, but can also use a specific tag. Admittedly, managing SHA hashes is not intuitive, but if there's also the possibility of tagging the configs, it would help. Just thinking out loud...I'm tired of using Ansible and host files for configs. :-)
@ntwrkguru thanks for your constructive feedback
Could we not pin the config version to the service to which it is attached? I.E. when a config is created, it has some hash. When it is attached to a service, the service sees service.hash such that, if needed, it can be recaptured (unless the user manually prunes the configs, of course).
We _must_ pin it to a specific version/hash, otherwise rescheduling tasks would lead to different tasks running with a different configuration (consider a node going down, and docker deploying new instances on a different node, and those use a different configuration than the other tasks).
docker config update would simply add a new commit hash (similar to how a git commit works). Actually...could even use a concept similar to image where configs can be tagged and the default tag is latest?
Yes, this is a bit what I had in mind with:
be able to explicitly specify the version of a config/secret (config@version?)

Swarm doesn't expose the "sha" of secrets (and configs) to prevent possible data leaking through the API; I was thinking of using a revision/version for that. (I'm also thinking out loud here; we'll have to verify if it would work from a technical perspective :smile:).
Admittedly, that wouldn't give you the option to _manually_ set the :tag, but when creating or updating a config, docker would print the revision. (I would personally not be against a :tag option, but should give it some thought)
This way, updating service(s) would pull the latest config by default, but can also use a specific tag. Admittedly, managing SHA hashes is not intuitive, but if there's also the possibility of tagging the configs, it would help. Just thinking out loud.
Using the "latest" revision if @version is omitted; perhaps it's an option, but (sorry, there's a "but":)
those are the parts that need to be thought out. The reason I think the config should not _automatically_ be updated is that it would make it difficult (or: impossible) to update a property of the service without _also_ updating the configuration. You ("the user") should remain in full control over what happens when you (re-)deploy the stack;
Think of a situation where you modified the local configuration file (perhaps you were debugging locally, or in the progress of updating the configuration), and you need to deploy an update to the stack (say: update the memory-limit). You update the compose file, and deploy the stack. Now both the memory-limit _and_ the configuration are updated.
So _even_ in the config@latest situation, this may not be desirable.
Perhaps something similar to --resolve-image=<always | changed | never >, but for configs (and secrets)?
To anyone who is also struggling with the currently suggested config rotation approach:
The reason this approach was not working for me (see my earlier comment) was that I was passing the config ID to the --config-rm option instead of the config name. Using the config name worked for me, though this behavior is odd IMHO.
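To illustrate the working rotation (the service and config names below are placeholders): both flags take the config name, and doing the remove and add in a single docker service update is what lets the new config reuse the old target path:

```shell
# Assumed: service "app" mounts config "app_conf_v1" at /config/current.conf.
docker config create app_conf_v2 ./current.conf

# Remove the old reference and add the new one in ONE update; --config-rm
# must be given the config *name*, not its ID, otherwise the old reference
# stays in place and you get the "conflicting target" error quoted above.
docker service update \
  --config-rm app_conf_v1 \
  --config-add source=app_conf_v2,target=/config/current.conf \
  app
```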
@thaJeztah
Perhaps something similar to --resolve-image=
Yes, that would be ok as well.
Any update on this?
I'm fairly new to Docker but I'm at a loss to the expected workflow here. I have a docker-compose.yml which I'm trying to deploy in CI to a staging server with a docker stack deploy. One of the containers has a config file mapped in, where the file is coming from the source repo. The first time I pushed a change updating the config file, I got the error mentioned by OP. Am I missing something, or is this use case (IMO fairly simple) not supported in an automated stack deploy setup?
@tecknicaltom, you nailed it... not supported in that manner. You'll have to increment the filename to make it work. I have decided to use bind mounts until this is resolved.
@ntwrkguru but with bind mounts, you have to make sure that the file makes it to a known location on the host system, right? That doesn't make things much easier, and adds a requirement during deploying and dependency on the host's state.
Yes, and trust me...I know the pain! Ansible helps, but I really wish I could do it all with native Docker tooling.
I'm really surprised this conceptual problem is still not solved in Docker.
Volumes or bind-mounts aren't swarm-compatible.
Burning configs (forget about secrets) into images is painful workaround.
This is really a serious blocker for us, so we are starting to look at other orchestrators.
@IvanBoyko I can't blame you; so are we. We use Ansible to create a Gluster multi-host volume and bind-mount to that, if it helps.
@IvanBoyko I find using versioned naming for your configs/secrets is at least a better workaround than baking them into your images.
@djmaze , possibly. It's just all workarounds, not solutions.
I don't like the fact that for update in configuration or a secret I have to change stack definition (docker-compose.yml). It feels unnatural.
You wouldn't design a C++ program that you have to recompile each time a user wants to pass new input parameters, would you?
It's just all workarounds, not solutions.
Exactly! That's my biggest gripe.
How about a flag for docker stack deploy... something like --update-configs and --update-secrets? This lack of control over configs/secrets is a real issue for us. Hope this will be tackled soon.
@gaui fixing this is more important for configs than secrets IMO. All of my configs are specified in my docker-compose.yml as file:, referencing files stored in the source tree and version controlled. Ideally, CI/CD will be able to seamlessly update the running stacks whenever a config file changes in version control. All of the secrets are specified as external and are manually pushed to the swarm and not stored in a file. I think (hope?) this is the normal use case.
@tecknicaltom configs are secrets without encryption, so updating one would update the other.
I've just discovered that in a subsequent deployment (update stack), I can pretend that my configs are external. They were created by the initial deployment, but if I change them in the compose file for my update so they are named stackprefix_configname, it works just fine. In my case, that's OK because I'm already editing the compose file to change the image version numbers for the update and I'm doing manual deployments and updates on just a few stacks. At this point. :-)
Just FYI.
I have made a script to be able to update configs automatically, especially when using a CI
If you want to try it:
https://gist.github.com/mastertheif/233edf1b25bee9ca4365434ba6548571
It is a bit crude but it works, at least for me. You could even modify it to handle secrets, I think; it requires bash 4.
Basically, it takes a compose file and suffixes every config name with the hash of the actual file; that way, if a file changes, its name will also change.
Rollback should still work, as the previous config is preserved; on the other hand, the rest of the stack-specific configs are pruned.
Feel free to customize it for your needs
Cheers
I implemented the workaround like @mastertheif but used go and the docker compose parsing from docker/cli project.
https://github.com/InSitu-Software/docker-stack-redeploy
I think it should be possible to integrate this implementation in the docker cli-command, but hadn't had the time to dig deeper into the code. Maybe somebody else feels the urge for a PR ;)
Ok, so there are the following options when dealing with swarm service configuration:
Environment-based configs always require the container to be recreated for the config to take effect:
you can't change either CLI arguments or environment variables for already-running processes.
For file-based configs, there are 3 possibilities for a config to be applied:
So there's only one use case that justifies this complex workflow of swarm config rolling updates:
You want to change the configuration without recreating the container. But even this use case is not possible with swarm configs: the container is always recreated when you use --config-add.
So why exactly don't we have a --config-update option and a docker config update command? Well, with all due respect, the explanation does sound artificial: do we really want rollbacks to revert to a previous config version? Configuration files behave a lot more like volumes than like images. So it's expected that if you run docker config update it will restart all the services using that config, and it would not automatically revert if the config file is incompatible with those versions of the services.
Would it make config update a more dangerous command than service update with --config-add and --config-rm options? Well, yes: if service update with --config-rm fails, the config will still be there, while --config-update permanently rewrites the config file, as there's only one version of the config file stored. But shouldn't it be up to the user to take that risk? And isn't it a path forward for swarm configs to be much more useful and easier to use than they are now?
It looks like kubernetes is a lot more powerful in this regard:
If ConfigMaps change the new files will be pushed to the running pods without needing a restart. So how do they achieve that? Apparently, they ignore automatic rollback scenario. If you changed config and container fails, it's your fault, and you should bring it back manually rather than relying on orchestration tools to save you from downtime. And even if users want to be cautious in kubernetes they still can create a new config and update service to use a new one.
So to be on par with k8s, an easier config update scenario should be implemented. Ideally, the user should also have a choice of whether she wants containers to be recreated when updating configs, or config files to be updated "on the fly".
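For comparison, the Kubernetes flow described above can be sketched like this (names are illustrative, and the exact --dry-run flag syntax varies across kubectl versions):

```shell
# Create the ConfigMap from a file:
kubectl create configmap app-config --from-file=config.yml

# Update it in place later: regenerate the manifest client-side and apply it.
kubectl create configmap app-config --from-file=config.yml \
  --dry-run=client -o yaml | kubectl apply -f -
```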
TL;DR currently there are 4 options:

- embed the config file into the docker image (thus making it truly immutable)
- with swarm config:
  - docker deploy command workflow

As you can see, none of these are as elegant as docker config update. So, here's another bash script similar to the solutions above:
#!/bin/bash
set -e

config_update () {
  config_name=$1
  config_filepath=$2
  # Trying to remove an in-use config fails, and the error message
  # tells us which service is using it.
  service_name=$(docker config rm "$config_name" 2>&1 | grep -oP 'is in use by the following service: \K\w+' || true)
  if [ -z "$service_name" ]; then
    echo "There is no service using config $config_name, use docker deploy"
    exit 0
  fi
  # Swap the service to a temporary config, recreate the original config
  # with the new content, then swap back.
  docker service update --config-rm "${config_name}_temp" "$service_name" || true
  docker config rm "${config_name}_temp" || true
  docker config create "${config_name}_temp" "$config_filepath"
  mount_filepath=$(docker service inspect --format '{{(index .Spec.TaskTemplate.ContainerSpec.Configs 0).File.Name }}' "$service_name")
  docker service update --config-rm "$config_name" --config-add "source=${config_name}_temp,target=${mount_filepath}" "$service_name"
  docker config rm "$config_name"
  docker config create "$config_name" "$config_filepath"
  docker service update --config-rm "${config_name}_temp" --config-add "source=${config_name},target=${mount_filepath}" "$service_name"
}

config_name=$1
config_filepath=$2
if [ -z "$config_name" ] || [ -z "$config_filepath" ]; then
  echo "Usage: $0 <config name> <config file path>"
  exit 1
fi
config_update "$config_name" "$config_filepath"
I'm sure Docker can do better.
I agree with John Laurel, there should be a way to update the configs and Kubernetes proves that it is doable. None of the proposed alternatives are as clean as a proper docker config update mechanism.
Thank you @Vanuan.
I agree 100% with this suggestion.
Just my 2c... this is really annoying for me, primarily because I'm using docker-stack.yml to define my infrastructure. You can't dynamically name configs in docker-stack.yml; they must have fixed names. Which means that when doing docker stack deploy --compose-file docker-stack.yml infrastructure to update my stack, it will fail whenever the nginx config changes, preventing the rest of my stack from being updated as well.
Attaching configs after-the-fact feels messy too. Essentially you're asking me to maintain my own list of configs in use. I can definitely do that, but it feels jarring that this situation is not already handled by Docker.
If there was a way I could inject an environment variable into the docker-stack file, that would solve this issue nicely. Then, I could have something like this:
services:
  nginx:
    configs:
      - source: nginx.conf-${env.CONFIG_VERSION}
        target: /etc/nginx/nginx.conf
configs:
  nginx.conf-${env.CONFIG_VERSION}:
    local: etc/nginx/nginx.conf
CONFIG_VERSION=$(git rev-parse --short HEAD) docker stack deploy --compose-file docker-stack.yml infrastructure
@danpantry you can, if you use the name parameter. Doing so creates the config with the given name (but not prefixed with the stack name);
version: '3.7'
services:
  nginx:
    image: nginx:alpine
    configs:
      - source: nginxconf
        target: /etc/nginx/foobar.conf
configs:
  nginxconf:
    name: nginx.conf-${CONFIG_VERSION:-0}
    file: ./nginx.conf
Without an env-var set (uses the 0 default value, as was specified in the compose file);
$ docker stack deploy -c docker-compose.yml mystack
Creating network mystack_default
Creating config nginx.conf-0
Creating service mystack_nginx
Deploying it with CONFIG_VERSION=1
$ CONFIG_VERSION=1 docker stack deploy -c docker-compose.yml mystack
Creating config nginx.conf-1
Updating service mystack_nginx (id: mjyzcchohvdak671lu9r581ba)
CONFIG_VERSION=2 (etc ...)
$ CONFIG_VERSION=2 docker stack deploy -c docker-compose.yml mystack
Creating config nginx.conf-2
Updating service mystack_nginx (id: mjyzcchohvdak671lu9r581ba)
Note that the configs were created as part of the stack, so will be labeled as being part of it. As a result, removing the stack will remove all versions of the config, not just the latest one that was used;
$ docker stack rm mystack
Removing service mystack_nginx
Removing config nginx.conf-2
Removing config nginx.conf-1
Removing config nginx.conf-0
Removing network mystack_default
oh! TIL! I didn't know you could interpolate like that. thanks!
Please notice that using the name property does not prefix the config name with the stack name.
I faced the same issue; removing the stack didn't help, but changing the new deployment's stack name solved the problem.
Any suggestion how to use the dynamic config name (I'm appending a git hash) without forcing a redeployment of the services defined in a stack?
Currently, I'm building from CI and each time I push a change the generated config name gets updated as it's a new git commit. This forces whatever services I have defined that use configs to get redeployed, even though the contents of their respective config files might not have changed.
I'm thinking the best way in your CI auto-deploy scenario is a config name with a hash, but not a git-commit-based hash (which changes on each commit); rather, a simple date/time eval on the config file. I talk a bit about that and a sample script here:
https://youtu.be/oWrwi1NiViw
@tkgregory
Any suggestion how to use the dynamic config name (I'm appending a git hash) without forcing a redeployment of the services defined in a stack?
In my setup I create the compose file from a template with Ansible. I append a sha-256 sum (first 7 digits only) of the content of the config file to the config name in the compose file.
When the content of the file changes, the sha sum changes as well and the redeployment is triggered.
You have to make sure that there is no timestamp or something like that in the config file.
From time to time old unused configs should be purged.
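The same content-addressed naming works without Ansible. A minimal shell sketch (file and stack names are illustrative), paired with a compose file that uses name: nginx.conf-${CONFIG_VERSION:-0} as shown earlier in this thread:

```shell
# Name the config after the first 7 hex digits of the file's content hash,
# so the name (and hence the service) only changes when the content does.
CONFIG_VERSION=$(sha256sum ./nginx.conf | cut -c1-7) \
  docker stack deploy -c docker-compose.yml mystack
```

As noted above, this only behaves well if the config file contains no timestamp or other always-changing content.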
@tkgregory
Well, if you want to test it, I made a bash script to append the hash to the config name :) :
https://github.com/moby/moby/issues/35048#issuecomment-384315250
I honestly learn more about the inner workings of Docker from reading these issue posts than from using it in real life. Thanks for the help, because I kept getting the error:
failed to update config tick_telegraf-config: Error response from daemon: rpc error:
Removing the telegraf-config fixed my problem, but it's a temporary solution until I get my setup correct.
I have to make several different changes to my config files because of this TICK setup but maybe I am not setting up my configs correctly like @thaJeztah did in his post.
kapacitor:
  image: kapacitor
  networks:
    - tick-net
  configs:
    - source: kapacitor-config
      target: /etc/kapacitor/kapacitor.conf
  deploy:
    restart_policy:
      condition: on-failure
    placement:
      constraints:
        - node.role == manager
  ports:
    - "9092:9092"
  depends_on:
    - influxdb

configs:
  telegraf-config:
    file: $PWD/conf/telegraf/telegraf.conf
  kapacitor-config:
    file: $PWD/conf/kapacitor/kapacitor.conf
  influx-config:
    file: $PWD/conf/influx/influx.conf
Do I need to change how configs: is set up to use ${CONFIG_VERSION:-0}?
I just faced the same issue. I solved it by mixing @thaJeztah's and @BretFisher's comments.
configs:
  settings.yml:
    name: settings-${SETTINGS_TIMESTAMP}.yml
    file: foo.yml
SETTINGS_TIMESTAMP=$(date +%s) docker stack deploy...
I also made a quick tool to replace the docker stack deploy command using the same idea. The tool scans all the referenced configs/secrets defined in the compose file, calculates their hashes, and uses those for the variables that are passed directly to the real docker command; this way you don't need to remember those env variables anymore.
Hope it can be useful to somebody; I use it every day: https://github.com/codestation/docker-deploy
I have a docker-compose.yml which I'm trying to deploy in CI to a staging server with a docker stack deploy. One of the containers has a config file mapped in, where the file is coming from the source repo. The first time I pushed a change updating the config file, I got the error mentioned by OP. Am I missing something, or is this use case (IMO fairly simple) not supported in an automated stack deploy setup?
Do you guys think we are ever going to get this goodie?
It's a very frustrating behavior. There's no point in keeping a docker-compose file if I need to manually update all my services on a stack change (configs included). I'm probably going to fall back to cloud config management and just read configs directly from the application on redeployment.