Salt: Inconsistent pillar data from git_pillar

Created on 15 Feb 2017  ·  49 comments  ·  Source: saltstack/salt

Description of Issue/Question

I am using git_pillar as the only pillar provider, with "dynamic" environments (__env__ instead of hardcoded branch name). The way I understand it, this setup allows you to fork a branch in your Git repo and change values in Pillar so that only minions in the same pillarenv see that change.

Unfortunately, the pillar representation is flapping between different branches. Here is a minion with environment: dev and pillarenv: dev; I added a new key to its pillar, and the value occasionally disappears, whether the key is fetched from the master or from the minion. In the dev branch the key exists with the value you see in the first two attempts; in another branch, production, the key doesn't exist.

(dev) root@node-3:~$ salt-call pillar.item ovs_bridges:br-ex-vdc                                                                                                                        
local:
    ----------
    ovs_bridges:br-ex-vdc:
        vlan430
(dev) root@node-3:~$ salt-call pillar.item ovs_bridges:br-ex-vdc
local:
    ----------
    ovs_bridges:br-ex-vdc:
        vlan430
(dev) root@node-3:~$ salt-call pillar.item ovs_bridges:br-ex-vdc
local:
    ----------
    ovs_bridges:br-ex-vdc:

I suspect this might be caused by a race between simultaneous pillar renderings on the master (since Git pillar data is stored on the master as a single directory in which checkouts are performed), but I have not verified it.

There is no pillar data merging configured, because I'd like to have pillar data between environments completely separated:

...
top_file_merging_strategy: same
pillar_source_merging_strategy: none
...

Maybe it would help if the pillar repository on the Salt master were split into separate directories (one per branch) instead of being checked out inside a single directory. Otherwise, every operation that can potentially perform a git checkout has to obey a lock set by the process that is currently compiling the pillar.
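
For illustration only, here is a minimal sketch of the per-branch layout idea, assuming the cache directory can simply be derived from a hash of the remote definition plus the resolved branch (names and paths below are hypothetical, not Salt's actual code):

```python
import hashlib
import os

CACHE_ROOT = "/var/cache/salt/master/git_pillar"  # hypothetical cache root

def per_branch_cachedir(remote_line, branch):
    """Derive one working directory per (remote, resolved branch), so that a
    'dev' pillar compile can never have its checkout flipped to 'production'
    by another minion's request."""
    digest = hashlib.sha256(
        "{0} {1}".format(remote_line, branch).encode("utf-8")
    ).hexdigest()
    return os.path.join(CACHE_ROOT, digest)

remote = "__env__ ssh://[email protected]/group/salt.git"
print(per_branch_cachedir(remote, "dev"))         # separate dir for dev
print(per_branch_cachedir(remote, "production"))  # separate dir for production
```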

Steps to Reproduce Issue

  • Install Salt 2016.11.2 on both master and minion, configure new-style Git ext_pillar with dynamic __env__ mapping. I'm using pygit2 right now, but the issue was identical while using gitpython:
git_pillar_global_lock: False
git_pillar_provider: pygit2
git_pillar_privkey: /root/.ssh/id_rsa
git_pillar_pubkey: /root/.ssh/id_rsa.pub

ext_pillar:
  - git:
    - __env__ ssh://[email protected]/group/salt.git:
      - root: pillar
  • Configure the minion to use the environment and pillarenv options, both initially set to the default environment base corresponding to the master branch.
  • In the default branch, create a Top file for pillar that looks like this:
{{ saltenv }}:
  node-3:
    - foo
  • In the default branch, create a Pillar file foo.sls containing some data
  • Fork another branch, for example dev, off the default one, and modify foo.sls so that it has a superset of default's data (to rule out merging issues).
  • Spawn another minion, this time with environment and pillarenv set to dev
  • Run salt-call pillar.items on both of them, preferably in a "for" loop to achieve concurrency (a minimal driver sketch follows this list). The results should be inconsistent between calls on the same minion.
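
A minimal driver sketch that hammers pillar.get from a minion and counts wrong answers; the key and expected value below are placeholders for whatever your foo.sls defines:

```python
import json
import subprocess

KEY = "role"           # placeholder: any key whose value differs between branches
EXPECTED = "internal"  # placeholder: the value this minion's branch defines

mismatches = 0
for i in range(20):
    # --out=json makes the salt-call output machine readable
    out = subprocess.check_output(["salt-call", "--out=json", "pillar.get", KEY])
    value = json.loads(out)["local"]
    if value != EXPECTED:
        mismatches += 1
        print("run {0}: got {1!r} instead of {2!r}".format(i, value, EXPECTED))
print("{0}/20 runs returned the wrong pillar data".format(mismatches))
```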

Versions Report

Master:

Salt Version:
           Salt: 2016.11.2

Dependency Versions:
           cffi: 1.5.2
       cherrypy: Not Installed
       dateutil: 2.4.2
          gitdb: 0.6.4
      gitpython: 1.0.1
          ioflo: Not Installed
         Jinja2: 2.8
        libgit2: 0.24.0
        libnacl: Not Installed
       M2Crypto: Not Installed
           Mako: 1.0.3
   msgpack-pure: Not Installed
 msgpack-python: 0.4.6
   mysql-python: 1.3.7
      pycparser: 2.14
       pycrypto: 2.6.1
         pygit2: 0.24.0
         Python: 2.7.12 (default, Nov 19 2016, 06:48:10)
   python-gnupg: Not Installed
         PyYAML: 3.11
          PyZMQ: 15.2.0
           RAET: Not Installed
          smmap: 0.9.0
        timelib: Not Installed
        Tornado: 4.2.1
            ZMQ: 4.1.4

System Versions:
           dist: Ubuntu 16.04 xenial
        machine: x86_64
        release: 4.4.0-24-generic
         system: Linux
        version: Ubuntu 16.04 xenial

Minion:

Salt Version:
           Salt: 2016.11.2

Dependency Versions:
           cffi: Not Installed
       cherrypy: Not Installed
       dateutil: 2.4.2
          gitdb: Not Installed
      gitpython: Not Installed
          ioflo: Not Installed
         Jinja2: 2.8
        libgit2: Not Installed
        libnacl: Not Installed
       M2Crypto: Not Installed
           Mako: 0.9.1
   msgpack-pure: Not Installed
 msgpack-python: 0.4.7
   mysql-python: 1.2.3
      pycparser: Not Installed
       pycrypto: 2.6.1
         pygit2: Not Installed
         Python: 2.7.6 (default, Jun 22 2015, 17:58:13)
   python-gnupg: Not Installed
         PyYAML: 3.11
          PyZMQ: 14.0.1
           RAET: Not Installed
          smmap: Not Installed
        timelib: Not Installed
        Tornado: 4.2.1
            ZMQ: 4.0.4

System Versions:
           dist: Ubuntu 14.04 trusty
        machine: x86_64
        release: 3.19.0-58-generic
         system: Linux
        version: Ubuntu 14.04 trusty
Labels: Bug, Pillar, fixed-pending-your-verification, severity-medium

All 49 comments

If you use

{{pillarenv}}:
  node-3:
    - foo

instead of saltenv, does the problem go away?

I think @terminalmage's eventual goal is to remove environment entirely, and just have saltenv and pillarenv.

Thanks,
Daniel

{{ pillarenv }} is not a thing. Since we use shared code to parse top files, the jinja context var is still called {{ saltenv }}, even in git_pillar.

This may be due to one minion's git_pillar execution trying to check out a different branch while a checkout lock is in place. The locking code is older than the dynamic pillar code in the "new" git_pillar, and now that I think of it there is no code to wait for a lock to be released, because something like this was never necessary before I added the dynamic pillar support.

You should have errors in your master log if this is the case. Can you check the master log?

Actually, it turns out we do indeed have code there to wait for a checkout lock to be removed. There should still be errors in the master log, however, when a checkout lock times out.

There is this error in the master log, but not explicitly tied to gitfs locking - although suspicious enough:

2017-02-15 23:07:46,460 [salt.template    ][ERROR   ][27564] Template does not exist:

The other is a warning, presumably because git_pillar_global_lock is set to False (but setting it to True changes nothing in the behavior):

2017-02-15 23:07:44,920 [salt.utils.gitfs ][WARNING ][27568] git_pillar_global_lock is enabled and checkout lockfile /var/cache/salt/master/git_pillar/b4092923fbdfd694387b13fcfbb90712d13692542d47e70913049848f672b8a6/.git/checkout.lk is present for git_pillar remote '__env__ ssh://[email protected]/group/salt.git'. Process 27585 obtained the lock

Besides that, I ran some tests on two minions:

  • minion api-1 is in environment dev and has the pillar item role: internal set
  • minion api-2 is in environment dev2 and has the pillar item role: foobar set
  • pillar has been synced from master with saltutil.refresh_pillar

These are the results:

```bash
(dev) root@api-1:~$ for i in {1..20}; do salt-call pillar.get role | grep -q internal && echo 'Pillar correct' || echo 'Race lost!'; done
Pillar correct
Pillar correct
Pillar correct
Pillar correct
Pillar correct
Pillar correct
Pillar correct
Race lost!
Pillar correct
Pillar correct
Pillar correct
Pillar correct
Pillar correct
Race lost!
Pillar correct
Pillar correct
Pillar correct
Pillar correct
Pillar correct
Pillar correct
```

```bash
(dev2) root@api-2:~$ for i in {1..20}; do salt-call pillar.get role | grep -q foobar && echo 'Pillar correct' || echo 'Race lost!'; done
Pillar correct
Pillar correct
Pillar correct
Pillar correct
Pillar correct
Race lost!
Race lost!
Race lost!
Pillar correct
Race lost!
Pillar correct
Pillar correct
Pillar correct
Pillar correct
Pillar correct
Pillar correct
Pillar correct
Pillar correct
Pillar correct
Pillar correct
```

So that definitely looks like a race. Now get this:

```bash
root@saltmaster:~$ salt 'api-*' saltutil.refresh_pillar
api-2:
    True
api-1:
    True
root@saltmaster:~$ salt 'api-*' pillar.item role  # this one is correct
api-1:
    ----------
    role:
        internal
api-2:
    ----------
    role:
        foobar
root@saltmaster:~$ salt 'api-*' saltutil.refresh_pillar
api-2:
    True
api-1:
    True
root@saltmaster:~$ salt 'api-*' pillar.item role
api-1:
    ----------
    role:
        foobar
api-2:
    ----------
    role:
        foobar
```

Unfortunately I don't have in-depth knowledge of how the master handles simultaneous pillar compilation for multiple minions, but my best guess is that some stray checkout doesn't adhere to the lock you mentioned. I'll do my best to look into the git_pillar implementation in the next few days and see if we can easily mimic the behavior of gitfs for state files, that is, having a separate directory for each dynamic environment. Thanks for any help until then.

It appears that reverting to legacy git_pillar hot-fixes the issue, possibly because of one-branch-per-directory approach. Just a heads-up for anyone hitting the same bug.

Both legacy and new git_pillar use one branch directory per entry.

That's interesting, because after enabling legacy pillar, a new directory pillar_gitfs appeared in the master's cache directory and it contains one directory per branch (see the mtimes):

(production) root@saltMaster:/var/cache/salt/master$ ls -l
total 52
drwxr-xr-x   3 root root  4096 Feb  8 19:29 file_lists
drwx------   3 root root  4096 Feb  8 19:33 files
drwxr-xr-x   5 root root  4096 Feb 16 01:21 gitfs
drwxr-xr-x   3 root root  4096 Feb  8 22:44 git_pillar
drwxr-xr-x 212 root root  4096 Feb 16 14:39 jobs
drwxr-xr-x 329 root root 12288 Feb 15 18:47 minions
drwxr-xr-x   8 root root  4096 Feb 16 01:40 pillar_gitfs
drwxr-xr-x   2 root root  4096 Feb  8 19:28 proc
drwxr-xr-x   2 root root  4096 Feb  8 19:28 queues
drwxr-xr-x   2 root root  4096 Feb  8 19:28 syndics
drwxr-xr-x   2 root root  4096 Feb  8 19:28 tokens

Each directory inside pillar_gitfs is checked out to a different branch:

(production) root@saltMaster:/var/cache/salt/master$ ls -l pillar_gitfs/
total 24
drwxr-xr-x 5 root root 4096 Feb 16 01:22 2510c900cafb5389f675bf61773795d00857ed32ddf8cb9289a821a5755b883d
drwxr-xr-x 5 root root 4096 Feb 16 01:23 820a11463b15ca8ddfa13f83fe415008db8e822781f60ed74d4ca7f168f4c28f
drwxr-xr-x 5 root root 4096 Feb 16 01:21 8253617aa88a74b76aa031902d436c925b9795c46a6876c627482a3e4436223f
drwxr-xr-x 5 root root 4096 Feb 16 01:22 9691e780da23524193bdc9dec466a1f809e7877e23cf07b8bee8e9a8c5fe841b
drwxr-xr-x 5 root root 4096 Feb 16 01:22 b158980f40ce3f6842cc7f40a19fb05c6912098fd3a0794fbf333ef2985f5b83
drwxr-xr-x 5 root root 4096 Feb 16 01:40 fa75935f8f6780fe3292c572db84c184ce49638db95de7ed51aeefdc8a1e0cf8
(production) root@saltMaster:/var/cache/salt/master$ for i in pillar_gitfs/*; do (cd $i; git branch -v); done
* (HEAD detached at origin/w13) dd91594 some_comment
* (HEAD detached at origin/vdc-ext-ip-range) f53aaa9 some_comment
* (HEAD detached at origin/master) 4223114 some_comment
* (HEAD detached at origin/dev) 3ae9825 some_comment
* (HEAD detached at origin/production) 5b4a465 some_comment
* (HEAD detached at origin/saas) 5926ac6 some_comment

The other directory, git_pillar, should correspond to new-style Git pillar. I've deduced this from the fact that it only contained one directory and after removing it and refreshing minions' pillars, it hasn't appeared again. This is its content before removal:

(production) root@saltMaster:/var/cache/salt/master$ ls -l git_pillar/
total 8
drwxr-xr-x 5 root root 4096 Feb 15 20:05 b4092923fbdfd694387b13fcfbb90712d13692542d47e70913049848f672b8a6
-rw-r--r-- 1 root root  181 Feb  8 22:41 remote_map.txt

Perhaps an "entry" in legacy pillar corresponds to a branch no matter whether a static or a dynamic one; but in new-style pillar the remote_map.txt clearly considers the "virtual" environment a single branch:

(production) root@saltMaster:/var/cache/salt/master$ cat git_pillar/remote_map.txt
# git_pillar_remote map as of 15 Feb 2017 22:54:31.744120
b4092923fbdfd694387b13fcfbb90712d13692542d47e70913049848f672b8a6 = __env__ ssh://[email protected]/group/salt.git

It's possible then that legacy dynamic git_pillar checks out into separate directories; I'd have to look at the code to be sure. In salt.utils.gitfs (the shared codebase used for gitfs and the new git_pillar), the way we decide which directory to use is by hashing the config line (i.e. __env__ ssh://[email protected]/group/salt.git). The legacy code might be hashing based on what __env__ resolves to.

I'll do some digging, thanks for the info.

OK, it looks like legacy git_pillar does map the dynamic branch every time ext_pillar is invoked. The good thing about this is that it ensures that you have a separate directory for each branch. The bad thing though is that it means that the repo must be fetched every time it is checked out, rather than once every ~60 seconds during the maintenance loop.

OK, I'm fairly certain I know what is going on. The checkout happens here, while the Pillar compilation happens later (here). The checkout lock is only preventing a simultaneous attempt at checking out the repo, but once the checkout is complete, the lock is removed. So minion A checks out its branch, but between when minion A checks it out and compiles the pillar SLS, minion B has separately checked out a different branch.

To fix this, the locking must be more intelligent and last until Pillar compilation is complete. This means we need to have a way for minions which need the same branch to be able to keep the lock alive so that we're not having each invocation of git_pillar block. So essentially, if any minions need the master branch at a given time, the lock must remain in place. This would probably entail creating lock files named in the format checkout.<branch_or_tag_name>.<minion_id>.lk instead of checkout.lk. The contextmanager which obtains the checkout lock should also be invoked from within the git_pillar code and not the checkout() func in salt.utils.gitfs.
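
A rough sketch of that lock-file scheme (the file-name format is the one proposed above; everything else, including the lock-acquisition details, is hypothetical):

```python
import contextlib
import glob
import os
import time

@contextlib.contextmanager
def checkout_lock(gitdir, branch, minion_id, timeout=60):
    """Hold a per-(branch, minion) lock for the whole checkout-plus-compile
    cycle. Minions wanting the *same* branch can proceed concurrently; a
    checkout to a different branch must wait until their locks are gone."""
    my_lock = os.path.join(gitdir, "checkout.{0}.{1}.lk".format(branch, minion_id))
    deadline = time.time() + timeout
    while True:
        other_branch_locks = [
            lk for lk in glob.glob(os.path.join(gitdir, "checkout.*.lk"))
            if not os.path.basename(lk).startswith("checkout.{0}.".format(branch))
        ]
        if not other_branch_locks:
            break
        if time.time() > deadline:
            raise RuntimeError(
                "timed out waiting for checkout locks: {0}".format(other_branch_locks)
            )
        time.sleep(0.5)
    # NOTE: a real implementation needs an atomic create-and-recheck here
    # (os.O_CREAT | os.O_EXCL) to close the window between the scan above and
    # creating our own lock file.
    open(my_lock, "w").close()
    try:
        yield  # caller checks out `branch` and compiles the pillar here
    finally:
        os.remove(my_lock)
```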

Nice progress, but I'm a bit concerned about how this will scale. If you're determined to keep git_pillar in a single directory, it's only a matter of time until the number of minions in a single environment reaches a critical threshold and the lock is never released (a simple schedule that triggers pillar compilation is sufficient). And now we're talking schedulers :)

Another thing that crossed my mind, although it is probably several layers under the pillar abstractions, is pulling the pillar files' contents directly from a Git ref - e.g. git show production:init.sls. This doesn't require a checkout and only needs one directory, but it will probably be cumbersome to implement.
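
For example, pulling a single pillar file straight out of a ref, without touching the working tree (shelling out to git here purely to illustrate; Salt itself would go through the Python bindings):

```python
import subprocess

def read_pillar_file(repo_path, ref, relpath):
    """Return the contents of `relpath` as it exists on `ref`, with no
    checkout involved (equivalent to `git show <ref>:<path>`)."""
    return subprocess.check_output(
        ["git", "-C", repo_path, "show", "{0}:{1}".format(ref, relpath)]
    )

# e.g. (the clone path is a placeholder for the cached git_pillar directory):
# data = read_pillar_file("/var/cache/salt/master/git_pillar/<hash>",
#                         "origin/production", "pillar/init.sls")
```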

@peter-slovak we actually do this already for gitfs, using the Python bindings rather than the CLI. The trouble with this is that git_pillar still invokes the salt.pillar.Pillar class (used also for the conventional pillar_roots data) to parse the top file and load the pillar matches. A hybrid option, where the top file is parsed first, and then the pillar matches are extracted directly from the refs, may work. But doing this separately for each minion may cause its own performance issues.

I'll give this some more thought, and discuss with my fellow engineers.

One potential option:

/var/cache/salt/master/git_pillar/<unique_hash>/gitdir - actual clone

/var/cache/salt/master/git_pillar/<unique_hash>/refs/<SHA1>/foo.sls - extracted ref at a specific SHA1

After parsing the top file, we would extract the top.sls and other pillar SLS files into a unique directory underneath that refs dir. This would give us a physical dir we could then feed to salt.pillar.Pillar so that we can compile pillar data.

This option would prevent us from needing to make so many calls to the Python bindings to extract files, since if they already exist under that SHA1's directory we assume they were already extracted.

A separate function could be added to the maintenance loop, or enabled via the scheduler, to purge files with a modified time > N days, in order to keep the cache from growing too large.

Keep in mind those are just placeholder names, I know gitdir typically refers to the .git dir in a repo. We'd probably need to come up with something better.

Also, we don't have to move things down a directory level, we'd _probably_ be safe just putting all of the refs subdirs under /var/cache/salt/master/git_pillar/refs or something like that.
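
A hedged sketch of the purge step mentioned above, assuming the refs/<SHA1> layout from the previous comment and a simple mtime-based expiry:

```python
import os
import shutil
import time

def purge_extracted_refs(refs_root, max_age_days=7):
    """Delete refs/<SHA1> extraction directories whose modification time is
    older than max_age_days, so the git_pillar cache cannot grow unbounded."""
    cutoff = time.time() - max_age_days * 86400
    for name in os.listdir(refs_root):
        path = os.path.join(refs_root, name)
        if os.path.isdir(path) and os.path.getmtime(path) < cutoff:
            shutil.rmtree(path, ignore_errors=True)

# Could be called from the master's maintenance loop or via the scheduler, e.g.:
# purge_extracted_refs("/var/cache/salt/master/git_pillar/refs", max_age_days=7)
```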

That sounds pretty cool. Though I remember reading something about an effort to get git_pillar on par with gitfs, configuration- and logic-wise. Does that still hold? If so, does your idea fit into that puzzle?

I don't mean to question you, I'd just hate to see another good fix reverted in a few releases, with everyone forgetting why it had been implemented in the first place (and oh, did that happen to git_pillar in the past). Running on legacy until the greater plan is completed is also an option. In any case, thanks for the good work :+1:

I don't know what you mean about getting it on par, that was basically why I rewrote git_pillar in the first place, so that it can use a shared codebase for a large portion of it. Both git_pillar and gitfs use classes defined in salt.utils.gitfs.

You could perhaps be remembering comments from a while back.

I doubt it's related, but I was getting a Template does not exist error because pillar/__init__.py was computing the top cache as an empty string while using pillarenv in the minion config. I submitted a fix as #39516. Figured I'd mention it, as this is the only sort-of recent issue with "template does not exist" in it.

Funny that I have the last comment on this issue. I just came back to it because I'm now also encountering the race with pillar_source_merging_strategy: none and a couple of pillar environments.

OK, I'm fairly certain I know what is going on. The checkout happens here, while the Pillar compilation happens later (here). The checkout lock is only preventing a simultaneous attempt at checking out the repo, but once the checkout is complete, the lock is removed. So minion A checks out its branch, but between when minion A checks it out and compiles the pillar SLS, minion B has separately checked out a different branch.

@terminalmage I don't think this is quite right. I did some serious digging for a couple of hours... and finally determined that you're on the right track here, but it can't quite be right, because I'm using this:

```
git_pillar_root: pillar
git_pillar_base: master
pillar_source_merging_strategy: none
ext_pillar:
```

The name: gives me the (advantage?) of having a completely separate checkout for each branch/env, with its own (duplicate) set of refs, and I still get the race. It seems to be completely random results for both pillarenv:base and pillarenv:dev minions. It's really a matter of who gets there first or something? Maybe dictionary key order (which is somewhat random)? I can't figure it out, but it's probably not because one minion checks out before another, since mine are completely separate.

I have a feeling it has something to do with the fileclient.cache_file() tree of code, but it's too much archaeology for me tonight.

Are there any other details I can provide that would help to reproduce it or otherwise figure it out or did you already have a handle on that?

Since I don't know what version you're running, the version of the master and minion would be helpful. In addition, for the minions it would also be helpful to know what the environment and pillarenv are both set to. There have been a few fixes in recent versions involving incorrect behavior when both environment and pillarenv are set, and it would be helpful to be able to rule this out as a possible cause.

OK, sure, that makes sense.

I'm running: https://github.com/jettero/salt/tree/my-deploy exactly, which is not significantly different from 2016.11.1; but has my fix #39516 rolled in.

Salt Version:
           Salt: 2016.11.1-1-g7b416c8

Dependency Versions:
           cffi: 1.9.1
       cherrypy: Not Installed
       dateutil: 2.6.0
          gitdb: Not Installed
      gitpython: Not Installed
          ioflo: 1.6.6
         Jinja2: 2.9.5
        libgit2: 0.24.3
        libnacl: 1.5.0
       M2Crypto: 0.24.0
           Mako: Not Installed
   msgpack-pure: Not Installed
 msgpack-python: 0.4.8
   mysql-python: Not Installed
      pycparser: 2.17
       pycrypto: 2.6.1
         pygit2: 0.24.2
         Python: 2.7.13 (default, Dec 21 2016, 07:16:46)
   python-gnupg: 0.3.8
         PyYAML: 3.12
          PyZMQ: 16.0.2
           RAET: 0.6.6
          smmap: Not Installed
        timelib: 0.2.4
        Tornado: 4.4.2
            ZMQ: 4.2.0

System Versions:
           dist:   
        machine: x86_64
        release: 4.9.1-1-ec2
         system: Linux
        version: Not Installed

All the minions have environment: base / pillarenv: base — or environment: dev / pillarenv: dev

Is there a master setting for that? I'm not sure.

Same issue: random pillar keys/values with a conf like:

ext_pillar:
  - git:
    - __env__ ssh://[email protected]/group/salt.git

@terminalmage do you have a patch for multienv git pillar?

Oh, I forgot about this. I don't really use the dual environments for anything and I forgot how to even do it, much less where the archaeology left off. I wonder if I could reproduce it in the 2018.x trees.

@aarnaud no.

@jettero Yes, I could reproduce it in 2018.x.
I think this happens when I mix pillar_roots and ext_pillar.
I tried all pillar_source_merging_strategy options; no change.

The pillar value is sometimes present and sometimes missing:

pillar_roots:
  __env__:
    - /srv/pillar

ext_pillar:
  - git:
    - __env__ ssh://[email protected]/group/salt.git

In fact, I removed pillar_roots and now only use ext_pillar with git. Since then, I haven't had any issues.

This one and #32245 are somewhat related. Duplicates?

@terminalmage I have forked and created a patch that lets me work around this issue for the use case described by @tkwilliams in issue #32245, by adding a whitelist of branch names to be considered when the environment is set to __env__.
This allows me to create separate git pillars for each of the long-lived branches (thus creating independent git_pillar checkout directories for those) and to use the __env__ pillar cache only for short-lived development branches, which will always be deployed with pillarenv overridden to the development branch name, thus avoiding the race condition in this particular case (except if two development branches get deployed simultaneously, but that should be reasonably rare, I hope).
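
Roughly, the whitelist check boils down to something like this (the helper and its semantics are my reading of the patch, not upstream Salt code):

```python
import fnmatch

def env_branch_allowed(branch, whitelist):
    """Only expose `branch` through the dynamic __env__ mapping if it matches
    the configured whitelist; an empty whitelist keeps the current behaviour
    of exposing every branch."""
    if not whitelist:
        return True
    return any(fnmatch.fnmatch(branch, pattern) for pattern in whitelist)

# Long-lived branches get their own static git_pillar entries (and thus their
# own checkout directories); only short-lived branches go through __env__:
# env_branch_allowed("feature/foo", ["feature/*"])  -> True
# env_branch_allowed("production", ["feature/*"])   -> False
```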

If you would be interested in such a solution, then I could create a pull request for this workaround (My fork is in [email protected]:hakanf/salt.git)...

Why not use git worktrees to solve this problem?

Each branch could be checked out in its own work tree and no conflict would occur.
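
For illustration, a minimal worktree sketch around the plain git CLI (paths are placeholders; depending on how the cached clone tracks remotes, the branch may need to be given as origin/<branch>):

```python
import os
import subprocess

def worktree_for_branch(clone_path, worktrees_root, branch):
    """Give each branch its own working tree backed by the single cached clone,
    so a checkout of 'dev' can never disturb a pillar compile on 'production'."""
    path = os.path.join(worktrees_root, branch.replace("/", "_"))
    if not os.path.isdir(path):
        # --force allows adding a worktree for a branch that is already
        # checked out elsewhere (e.g. in the main clone).
        subprocess.check_call(
            ["git", "-C", clone_path, "worktree", "add", "--force", path, branch]
        )
    return path

# e.g. worktree_for_branch("/var/cache/salt/master/git_pillar/<hash>",
#                          "/var/cache/salt/master/git_pillar/worktrees", "dev")
```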

This issue is really painful for me. I have a salt cron that runs every hour for about 40 minutes. During this time, it's nearly impossible to apply another pillarenv because of the race condition.

Worktrees weren't the way to go. Adding a global lock on the ext_pillar function to make it atomic solves the problem.
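
In spirit, that amounts to serializing the whole checkout-plus-compile sequence under one master-wide lock, e.g. (a sketch of the idea only, not the actual patch):

```python
import contextlib
import fcntl

@contextlib.contextmanager
def ext_pillar_lock(lockfile="/var/cache/salt/master/git_pillar/ext_pillar.lk"):
    """Serialize git_pillar runs on the master: branch checkout and pillar
    compilation happen under one exclusive flock, so no other render can
    switch HEAD underneath us. Trades concurrency for correctness."""
    with open(lockfile, "w") as fh:
        fcntl.flock(fh, fcntl.LOCK_EX)   # blocks until the previous run is done
        try:
            yield
        finally:
            fcntl.flock(fh, fcntl.LOCK_UN)

# Hypothetical usage inside a patched ext_pillar():
# with ext_pillar_lock():
#     checkout_branch(...)   # placeholder
#     compile_pillar(...)    # placeholder
```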

We hit this issue very often because of our env-based workflow (and by that I mean every day, multiple times a day).

This issue both breaks our daily highstate diff report and makes it impossible to predict what a highstate will actually do. At any time we could end up running it with the master pillar instead of our env pillar, breaking prod and creating an outage. There is also a security point to be made, since a server could randomly receive sensitive pillar data it shouldn't.

Because we are still running Salt 2017, we had to dirty-merge @gmsoft-tuxicoman's proposal. But I'm happy to say it works well, and we didn't experience any performance drawback (a highstate test=true on * took 5531 with it and 5529 without, so well within the margin).

I have no idea if this issue is still present in 2019.2, but I don't see why it wouldn't be. I feel like this should be given higher priority, because it makes Salt untrustworthy and forces us to double-check everything behind it. Kind of sad for an automation tool.

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

If this issue is closed prematurely, please leave a comment and we will gladly reopen the issue.

Can we get a Confirmed status on this? Maybe someone from Salt could check on #54097, which has been working flawlessly for us so far?
Having infrastructure breakdowns because the pillars can't be trusted is very, very troublesome...

Thank you for updating this issue. It is no longer marked as stale.

It's been working flawlessly for me as well. I can't use Salt without this patch, since pillar data would be pulled from random branches.

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

If this issue is closed prematurely, please leave a comment and we will gladly reopen the issue.

Is there any way to get eyes on this? Oo
Because the fix adds a function, there is no way to make it a simple module extension, and we need to edit the file by hand. This freezes updates and is not sustainable.
Git pillar is broken and dangerous, and we need this merged!

Thank you for updating this issue. It is no longer marked as stale.

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

If this issue is closed prematurely, please leave a comment and we will gladly reopen the issue.

Thank you for updating this issue. It is no longer marked as stale.

Guess what! I'm here to un-stale it again! I'm starting to believe envs are just not used...

No @poofyteddy you're not alone 😃.

@terminalmage @dwoz Can someone permanently unstale this?

I've created #57540 to fix this (without using a global lock). Please test...

Thank you for working on this @sathieu. Sadly I won't be able to test it, because I am still running 2017 at work, and porting your code to that version will take a while.
We have an upgrade to 2019 planned, so that will be a good time to check how your fix compares to #54097. I'll keep you posted.
Until then, maybe @gmsoft-tuxicoman is in a better place to test this?
We are officially replacing Salt here :'(

@poofyteddy I've ported #57540 to 2019.2.x as #57597.

I am having an issue with "import_yaml" in pillar while running this fix (it can't find the file).
I can't easily stop and restart Salt because others use it, but I'll try to give you a debug log tonight.

There is a PR linked, but that PR still needs some work before it can be merged, so this is being removed from the Magnesium scope. I will remind myself to review this as we start the next release planning cycle in the next few weeks.
