Versions Report
Salt: 2014.7.2
Python: 2.7.9 (default, Apr 2 2015, 15:33:21)
Jinja2: 2.7.3
M2Crypto: 0.21.1
msgpack-python: 0.4.2
msgpack-pure: Not Installed
pycrypto: 2.6.1
libnacl: Not Installed
PyYAML: 3.11
ioflo: Not Installed
PyZMQ: 14.4.1
RAET: Not Installed
ZMQ: 4.0.5
Mako: 1.0.0
Debian source package: 2014.7.2+ds-1utopic2
OS: Ubuntu 15.04, using 14.10 Salt packages
When declaring a requisite like
include:
  - apache.modules
[…]
    - require_in:
      - sls: apache.modules
the state will fail with a message like this:
Cannot extend ID 'apache.modules' in 'base:config'. It is not part of the high state.
This is likely due to a missing include statement or an incorrectly typed ID.
Ensure that a state with an ID of 'apache.modules' is available
in environment 'base' and to SLS 'config'
Using just apache as the include/requisite target, everything works just fine, but as soon as a "sub-SLS" (an SLS addressed with a dot, i.e. one in a subdirectory) is declared, it fails as described above.
EDIT: A minimal working example:
foo/bar.sls
some-test-stuff:
  test.succeed_with_changes
config.sls
include:
  - foo.bar

another-test-state:
  test.succeed_with_changes:
    - require_in:
      - sls: foo.bar
Result:
salt-call state.sls config
[WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate.
[WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate.
[INFO ] Loading fresh modules for state activity
[INFO ] Fetching file from saltenv 'base', ** done ** 'config.sls'
[INFO ] Fetching file from saltenv 'base', ** skipped ** latest already in cache 'salt://foo/bar.sls'
local:
Data failed to compile:
----------
Cannot extend ID 'foo.bar' in 'base:config'. It is not part of the high state.
This is likely due to a missing include statement or an incorrectly typed ID.
Ensure that a state with an ID of 'foo.bar' is available
in environment 'base' and to SLS 'config'
root@moria:~/salt-teststates# cat config.sls
include:
  - foo.bar

another-test-state:
  test.succeed_with_changes:
    - require_in:
      - sls: foo.bar
This seems to happen only with require_in requisites; plain require requisites work fine.
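For comparison, a sketch of the forward requisite that does work here, based on the config.sls/foo.bar example above (only the requisite direction differs):

include:
  - foo.bar

another-test-state:
  test.succeed_with_changes:
    - require:
      - sls: foo.bar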
Interesting. I'm sure it's some interaction with the fact that . is a directory separator for SLS files. The strangest part is that it only affects _in requisites. Anyway, we'll look into this when we get time.
Looks like a dot in the name isn't even required anymore. The situation on 2014.7.5:
Versions Report
Salt: 2014.7.5
Python: 2.7.6 (default, Mar 22 2014, 22:59:56)
Jinja2: 2.7.2
M2Crypto: 0.21.1
msgpack-python: 0.3.0
msgpack-pure: Not Installed
pycrypto: 2.6.1
libnacl: Not Installed
PyYAML: 3.10
ioflo: Not Installed
PyZMQ: 14.0.1
RAET: Not Installed
ZMQ: 4.0.4
Mako: 0.9.1
Debian source package: 2014.7.5+ds-1ubuntu1
hello.sls
hello-teststate:
  test.succeed_with_changes
world.sls
include:
  - hello

world-teststate:
  test.succeed_with_changes:
    - require_in:
      - sls: hello
Output
salt-call state.sls world test=true
[INFO ] Loading fresh modules for state activity
[INFO ] Fetching file from saltenv 'base', ** done ** 'world.sls'
[INFO ] Fetching file from saltenv 'base', ** skipped ** latest already in cache 'salt://hello.sls'
local:
Data failed to compile:
----------
Cannot extend ID 'hello' in 'base:world'. It is not part of the high state.
This is likely due to a missing include statement or an incorrectly typed ID.
Ensure that a state with an ID of 'hello' is available
in environment 'base' and to SLS 'world'
So I just re-read this issue, and I bet it's actually expected behavior: when SLS-wide requisites were coded, the _in requisites were probably not taken into account. The original idea was to require a whole SLS file, not to inject requisites into one, and I'm guessing the SLS expansion simply isn't implemented for requisite injection. Did you ever have this working?
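To spell out what that injection would have to mean with the hello/world example above, here is a rough conceptual sketch (not actual compiler output): the require_in on world-teststate would effectively have to rewrite every state in hello.sls as if it had been declared like this:

hello-teststate:
  test.succeed_with_changes:
    - require:
      - test: world-teststate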
I didn't have this working before; I only started trying to use requisites this way because, while writing complex formulas, I ran into situations where there was no other way to express the requisite when the require_in target SLS was an external one (e.g. a formula for a web application that makes abstracted use of webserver or DB formulas provided through pillars).
Thanks, that confirms my suspicions. We should definitely make this work, but it may take some work.
Any update on this one?
No update yet, unfortunately.
This is breaking things for me too.
I had errors similar to this, but it was working before I corrected what I thought was a typo. The code that was working used 'watch:in:'; I corrected the syntax to 'watch_in:', which broke the state and threw those state errors.
In the end, this is working:
/opt/filedir/file.name:
  file.manged:
    - source: salt://filedir/files/file.name
    - user: user1
    - user: group1
    - watch:in:
      - service: httpd
I assume the file.manged typo in your state was from copying it over? That would definitely cause problems. watch:in: wouldn't throw errors because the state compiler would just ignore it as invalid (this behavior will change in the Carbon release of Salt, if I remember correctly).
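For reference, the corrected form of that snippet would presumably look like the following (a sketch: file.manged read as file.managed, and the second user line assumed to have been meant as group):

/opt/filedir/file.name:
  file.managed:
    - source: salt://filedir/files/file.name
    - user: user1
    - group: group1
    - watch_in:
      - service: httpd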
Anyway, as I stated above, this probably should have been classified as a feature, since I think the _in requisites just weren't taken into account properly when the ability to require an entire SLS was added.
:+1:
I'd just like to throw another vote in here that it'd be really, really useful to be able to require_in
sls files.
I'm working on something today that would be so easy if I could do that, but since I can't I'm having to use a lot of workarounds.
ZD-926
I'd like to raise my hand, too, to help this to get a way into the next release.
I also just ran into that problem.
Like eliasp, that would be the perfect solution to my current problem...
Same situation here. Significantly increases the complexity of my setup with this being unimplemented.
This work is under consideration for the Spring Feature-Release of Salt.
Final decisions regarding what will be included in the Spring release will be made in January.
Is there a reasonable way to work around this until the next release is out?
Not really, this is a complicated addition. You could patch your minion install to use this since it is isolated to just the state engine.
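One fallback in the meantime (a sketch, and admittedly the per-state duplication this issue is about avoiding) is to target individual state IDs in the included SLS rather than the SLS as a whole, which does work with _in requisites. With the hello/world example above:

world-teststate:
  test.succeed_with_changes:
    - require_in:
      - test: hello-teststate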
Closed via #39399
Is it possible to achieve something like this?
some_kubernetes_masterstate.sls:
kubeadm_init:
  cmd.run:
    - name: kubeadm init --pod-network-cidr {{ kubernetes_network.network.cidr }}
    - require:
      - pkg: kubeadm
    - require_in:
      - sls: kubernetes.network
kubernetes/network.sls:
include:
  - kubernetes.network.{{ kubernetes.network.provider }}
The aforementioned example doesn't work, as the included kubernetes.network.{{ kubernetes.network.provider }} is not the same SLS as kubernetes.network (though the include should arguably make it part of the same SLS...).
Currently to make this work I have to duplicate:
kubeadm_init:
  cmd.run:
    - name: kubeadm init --pod-network-cidr {{ kubernetes_network.network.cidr }}
    - require:
      - pkg: kubeadm
    - require_in:
      - sls: kubernetes.network.{{ kubernetes.network.provider }}
Which I don't like.
Do I have some options (or do I do something terribly wrong)?
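One option that at least avoids repeating the literal target string (a sketch; the network_sls variable name is made up, and the master state remains coupled to the provider-specific SLS) would be to build the requisite target once in Jinja:

{% set network_sls = 'kubernetes.network.' ~ kubernetes.network.provider %}

kubeadm_init:
  cmd.run:
    - name: kubeadm init --pod-network-cidr {{ kubernetes_network.network.cidr }}
    - require:
      - pkg: kubeadm
    - require_in:
      - sls: {{ network_sls }}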