I haven't quite figured out the circumstances that make this break, but hopefully this will provide some insight. It seems to be related to calling grains.get_or_set_hash('a:a') and grains.get_or_set_hash('a:b') in the same run.
First example state file:
test:
  w.p:
    - test1: '{{ salt['grains.get_or_set_hash']('test:test1') }}'
produces the expected result from show_sls:
local:
    ----------
    test:
        ----------
        __env__:
            base
        __sls__:
            test1
        w:
            ----------
            - test1:
                y0m_e0f%
            - p
            ----------
            - order:
                10000
and /etc/salt/grains contains
test:
  test1: y0m_e0f%
Changing the state file to:
test:
  w.p:
    - test1: '{{ salt['grains.get_or_set_hash']('test:test1') }}'
    - test2: '{{ salt['grains.get_or_set_hash']('test:test2') }}'
and show_sls produces:
local:
    - Rendering SLS test1 failed, render error: while parsing a block mapping
    - in "<unicode string>", line 4, column 7:
    - - test2: '{'test2': 'd@nw6dd^'}'
/etc/salt/grains now contains only
test:
  test2: d@nw6dd^
Note that test:test1 is missing, despite being there in the previous run.
This is with the Ubuntu package of 2014.1.10 (2014.1.10+ds-1trusty1).
I believe the problem here is that the quotes confuse the YAML renderer once the Jinja has been interpolated.
See if this works better:
test:
  w.p:
    - test1: {{ salt['grains.get_or_set_hash']('test:test1') }}
    - test2: {{ salt['grains.get_or_set_hash']('test:test2') }}
I see why you would say that, but the quotes only turn incorrect output from the function into badly formatted YAML -- I wouldn't have noticed the issue without them. This is what show_sls produces with your state file:
local:
    ----------
    test:
        ----------
        __env__:
            base
        __sls__:
            test1
        w:
            ----------
            - test1:
                ----------
                test1:
                    $z92hz3x
            ----------
            - test2:
                ----------
                test1:
                    $z92hz3x
            - p
            ----------
            - order:
                10000
Note that test1 and test2 both have the same value, and they are maps rather than strings: {'test1': '$z92hz3x'}.
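To connect this back to the render error quoted earlier, here is a minimal plain-Python illustration (my own sketch of the mechanism, not Salt's code) of what happens when a dict rather than a string gets interpolated into the quoted scalar:

# Hypothetical stand-in for the value get_or_set_hash returned in the failing run
value = {'test2': 'd@nw6dd^'}
line = "    - test2: '{0}'".format(value)
print(line)   # "    - test2: '{'test2': 'd@nw6dd^'}'" -- the nested quotes break the YAML renderer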
Just hit the same thing, upvoting. :+1:
Thanks @mpasternak. If you have another test case that illustrates this problem, additional info is always useful, so don't hesitate to include it. We'll try to get this solved.
Well, I had something like this:
app_name:
  setting1: value1
  setting2: value2
Then I tried to override only setting1 via the command line and found out that it is not possible. I also tried doing something like "extend" and found that this is not possible either.
Oh well. I ended up using simpler configuration structures with prefixes (instead of a nested app-name/setting1 I used an app_name_setting_1 key). This way those settings are easier to override from the command line or to replace with an external pillar.
I am not sure salt actually needs more logic when it comes to pillars, because doing "extend" on configuration may result in overcomplicated files. Perhaps it is better to keep it simple and stupid.
This is probably a duplicate of (or related to) issues I have filed some time ago: #15023 and #14991
As I mentioned earlier in #15023, this is now working as it should with the changes done in #22245.
Thanks for the updates @syphernl!
@djs52 Can you verify that this issue is fixed with the changes that @achernev made in the pull request referenced above?
Since we haven't heard back and we've gotten a confirmation here from @syphernl, I am going to close this out.
If this pops up again, please leave a comment and we will happily re-address this issue. Thanks!
Starting with empty /etc/salt/grains.
master# salt 'minion' grains.get_or_set_hash 'mysql:auth:root' 8
minion# cat /etc/salt/grains
mysql:
  auth:
    root: n+^m*v^i
master# salt 'minion' grains.get_or_set_hash 'mysql:auth:user' 8
minion# cat /etc/salt/grains
mysql:
  auth:
    user: w8dyuro5
Where did the root item go?
master# salt 'minion' grains.get_or_set_hash 'mysql:auth:root' 8
minion# cat /etc/salt/grains
mysql:
  auth:
    root: k75f646f
Now the user item is gone.
After manual editing:
minion# cat /etc/salt/grains
level1: value
mysql:
  nested: value
  auth:
    root: k75f646f
master# salt 'minion' grains.get_or_set_hash 'mysql:auth:user' 8
minion# cat /etc/salt/grains
level1: value
mysql:
  auth:
    user: o!u+bs9t
The nested and root items are gone. It seems to recreate the whole branch affected by the change.
salt-2016.3.3 (both master and minion)
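A guess at how siblings get lost, sketched in plain Python under the assumption that the branch removed by the patch below behaves the way its names suggest (this is an illustration, not Salt's actual code path):

curr = {'auth': {'root': 'n+^m*v^i'}}   # roughly what get('mysql', _infinitedict()) would hold
val = {'auth': {'user': 'w8dyuro5'}}    # roughly what _dict_from_path('auth:user', ...) builds
curr.update(val)                        # shallow update: the whole 'auth' sub-dict is replaced
print(curr)                             # {'auth': {'user': 'w8dyuro5'}} -- 'root' is gone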
Suggest a patch:
--- modules/grains.py 2016-11-01 13:01:36.983000000 +0200
+++ modules/grains.py 2016-11-01 13:49:30.030000000 +0200
@@ -601,15 +601,7 @@
     if ret is None:
         val = ''.join([random.SystemRandom().choice(chars) for _ in range(length)])
-
-        if DEFAULT_TARGET_DELIM in name:
-            root, rest = name.split(DEFAULT_TARGET_DELIM, 1)
-            curr = get(root, _infinitedict())
-            val = _dict_from_path(rest, val)
-            curr.update(val)
-            setval(root, curr)
-        else:
-            setval(name, val)
+        set(name, val)
     return get(name)
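For context on why the one-line replacement can work: grains.set understands the same colon-delimited paths and is expected to touch only the leaf key. A rough plain-Python sketch of that kind of delimiter-aware setter (my own illustration with a hypothetical nested_set helper, not Salt's implementation):

def nested_set(data, path, value, delimiter=':'):
    # walk down the nested dicts, creating levels as needed, and set only the leaf key
    keys = path.split(delimiter)
    node = data
    for key in keys[:-1]:
        node = node.setdefault(key, {})
    node[keys[-1]] = value
    return data

grains = {'mysql': {'auth': {'root': 'n+^m*v^i'}}}
nested_set(grains, 'mysql:auth:user', 'w8dyuro5')
print(grains)   # {'mysql': {'auth': {'root': 'n+^m*v^i', 'user': 'w8dyuro5'}}} -- siblings kept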
Updated to the latest 2016.3.4 (Boron); the result is the same, the problem still exists.
@Z9n2JktHlZDmlhSvqc9X2MmL3BwQG7tk thanks, re-opening now.
tl;dr: not fixed in 2016.11.1
We have strange behaviour on 2016.11.1 using get_or_set_hash in jinja templates:
{{ salt['grains.get_or_set_hash'](name="cron:hourly:minute",chars="012345",length=2) }} * * * * root cd / && run-parts --report /etc/cron.hourly
{{ salt['grains.get_or_set_hash'](name="cron:daily:minute",chars="012345",length=2) }} {{ salt['grains.get_or_set_hash'](name="cron:daily:hour",chars="012345",length=1) }} * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
{{ salt['grains.get_or_set_hash'](name="cron:weekly:minute",chars="012345",length=2) }} {{ salt['grains.get_or_set_hash'](name="cron:weekly:hour",chars="012345",length=1) }} * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
{{ salt['grains.get_or_set_hash'](name="cron:monthly:minute",chars="012345",length=2) }} {{ salt['grains.get_or_set_hash'](name="cron:monthly:hour",chars="012345",length=1) }} 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
Running salt-minion 2016.11.1 (also master on 2016.11.1), we find an /etc/crontab with this after the first state.highstate:
42 * * * * root cd / && run-parts --report /etc/cron.hourly
{} {} * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
{} {} * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
{} {} 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
(in /etc/salt/grains, only one value was set)
Followed by this after another state.highstate...
42 * * * * root cd / && run-parts --report /etc/cron.hourly
11 {} * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
10 {} * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
03 {} 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
And now, perversely, /etc/salt/grains contains:
cron:
  daily:
    hour: {}
    minute: '11'
  hourly:
    minute: '42'
  monthly:
    minute: '03'
  weekly:
    hour: {}
    minute: '10'
If I edit /etc/salt/grains so that all values are set to something sensible, then I get something acceptable out for my /etc/crontab in the end.
Sounds like a race condition somewhere?
Still present in 2016.11.3.
Still present in 2016.11.4.
Still present in 2017.7.3.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
If this issue is closed prematurely, please leave a comment and we will gladly reopen the issue.
Still present in 2019.2.0.
Thank you for updating this issue. It is no longer marked as stale.
... like maznu, I ran into this problem with the current 2019.5.20.
And congratulations, by the way, on this bug already being 5 years old ;)
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
If this issue is closed prematurely, please leave a comment and we will gladly reopen the issue.
Thank you for updating this issue. It is no longer marked as stale.
I've just hit this bug as well; it is present in 3000.1.
If the /etc/salt/grains file is empty or does not exist, only the last grain (alphabetically) is created, i.e.:
test:
  test3: <generated value 3>
If salt is then run again, the other grains are created correctly.
test:
  test1: <generated value 1>
  test2: <generated value 2>
  test3: <generated value 3>
Similarly, if the initial /etc/salt/grains file contains an empty dict, i.e.:
test: {}
Running salt seems to generate all the grains fine:
test:
  test1: <generated value 1>
  test2: <generated value 2>
  test3: <generated value 3>
Is there some sort of race condition when creating the parent key, where the last grain (alphabetically) "wins"? And on further runs the race doesn't occur because the parent key already exists?
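To make that hypothesis concrete, here is a toy model in plain Python. It assumes that every get_or_set_hash call in a single render reads the same pre-run grains and then writes its whole parent key back; fake_get_or_set_hash is a hypothetical stand-in that illustrates the "last one wins" idea and is not Salt's actual code:

stored = {}                               # stands in for /etc/salt/grains on disk

def fake_get_or_set_hash(snapshot, name, value):
    root, leaf = name.split(':', 1)
    branch = dict(snapshot.get(root, {}))     # works from the stale pre-run snapshot
    branch[leaf] = value
    stored[root] = branch                     # overwrites whatever the previous call wrote

snapshot = dict(stored)                       # both calls see the same (empty) snapshot
fake_get_or_set_hash(snapshot, 'test:test1', '<generated value 1>')
fake_get_or_set_hash(snapshot, 'test:test3', '<generated value 3>')
print(stored)   # {'test': {'test3': '<generated value 3>'}} -- test1 is lost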
I have managed to reproduce this issue repeatedly by starting with just the following in /etc/salt/grains:
key1:
  key2: value2
And then calling salt['grains.get_or_set_hash']('key1:key3:', …) in a Jinja template.
The end result is that key2 has vanished:
key1:
  key3: xXxgibberishxXx
Suggest a patch:
--- modules/grains.py 2016-11-01 13:01:36.983000000 +0200
+++ modules/grains.py 2016-11-01 13:49:30.030000000 +0200
@@ -601,15 +601,7 @@
     if ret is None:
         val = ''.join([random.SystemRandom().choice(chars) for _ in range(length)])
-
-        if DEFAULT_TARGET_DELIM in name:
-            root, rest = name.split(DEFAULT_TARGET_DELIM, 1)
-            curr = get(root, _infinitedict())
-            val = _dict_from_path(rest, val)
-            curr.update(val)
-            setval(root, curr)
-        else:
-            setval(name, val)
+        set(name, val)
     return get(name)
That's exactly what I just tried, and it works.
Submitting a PR.