Creating an LXC container with a network_profile under LXC 3.0+ results in unusable containers.
_The problem is that Salt doesn't support the new config keys for NIC settings_ (see the sketch after this list):
```
lxc.net.[i].type
lxc.net.[i].flags
lxc.net.[i].link
lxc.net.[i].mtu
lxc.net.[i].name
lxc.net.[i].hwaddr
lxc.net.[i].ipv4.address
lxc.net.[i].ipv4.gateway
lxc.net.[i].ipv6.address
lxc.net.[i].ipv6.gateway
lxc.net.[i].script.up
lxc.net.[i].script.down
```
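For reference, the rename is mechanical; here is a minimal Python sketch of the mapping (the `LEGACY_TO_NEW` table and `translate_key` helper are illustrative, not part of Salt; it assumes the usual LXC 2.1 renames, where the bare ipv4/ipv6 keys gained an `.address` suffix):

```python
# Illustrative mapping from legacy lxc.network.* keys to the
# LXC 2.1+ indexed lxc.net.<i>.* keys (hypothetical, not in Salt).
LEGACY_TO_NEW = {
    "lxc.network.type": "lxc.net.{i}.type",
    "lxc.network.flags": "lxc.net.{i}.flags",
    "lxc.network.link": "lxc.net.{i}.link",
    "lxc.network.mtu": "lxc.net.{i}.mtu",
    "lxc.network.name": "lxc.net.{i}.name",
    "lxc.network.hwaddr": "lxc.net.{i}.hwaddr",
    "lxc.network.ipv4": "lxc.net.{i}.ipv4.address",
    "lxc.network.ipv4.gateway": "lxc.net.{i}.ipv4.gateway",
    "lxc.network.ipv6": "lxc.net.{i}.ipv6.address",
    "lxc.network.ipv6.gateway": "lxc.net.{i}.ipv6.gateway",
    "lxc.network.script.up": "lxc.net.{i}.script.up",
    "lxc.network.script.down": "lxc.net.{i}.script.down",
}


def translate_key(legacy_key, index=0):
    """Return the LXC 2.1+ equivalent of a legacy network key."""
    return LEGACY_TO_NEW[legacy_key].format(i=index)


# translate_key("lxc.network.type") -> "lxc.net.0.type"
```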
salt/lxc.sls:

```yaml
create-lxc-foo:
  lxc.present:
    - name: foo
    - profile: ubuntu1804
    - network_profile: foo_network
```
pillar/profile.sls:

```yaml
lxc.container_profile:
  ubuntu1804:
    template: download
    options:
      dist: ubuntu
      release: bionic
      arch: amd64
```
pillar/network.sls:

```yaml
lxc.network_profile:
  foo_network:
    eth1:
      link: lxcbr0
      type: veth
      flags: up
```
```
Salt Version:
    Salt: 2017.7.8

Dependency Versions:
    cffi: Not Installed
    cherrypy: Not Installed
    dateutil: 2.4.2
    docker-py: Not Installed
    gitdb: 0.6.4
    gitpython: 1.0.1
    ioflo: Not Installed
    Jinja2: 2.8
    libgit2: Not Installed
    libnacl: Not Installed
    M2Crypto: Not Installed
    Mako: 1.0.3
    msgpack-pure: Not Installed
    msgpack-python: 0.4.6
    mysql-python: Not Installed
    pycparser: Not Installed
    pycrypto: 2.6.1
    pycryptodome: Not Installed
    pygit2: Not Installed
    Python: 2.7.12 (default, Nov 12 2018, 14:36:49)
    python-gnupg: Not Installed
    PyYAML: 3.11
    PyZMQ: 15.2.0
    RAET: Not Installed
    smmap: 0.9.0
    timelib: Not Installed
    Tornado: 4.2.1
    ZMQ: 4.1.4

System Versions:
    dist: Ubuntu 16.04 xenial
    locale: UTF-8
    machine: x86_64
    release: 4.4.0-139-generic
    system: Linux
    version: Ubuntu 16.04 xenial
```
After reading the lxc module in the 2018.3.3 and develop branches, I see that the module only supports the old keys.
Looks like we will need to get this added. Did you want to give it a go with a PR?
Do both syntaxes need to be supported for now? The new syntax was only introduced in LXC 2.1, in September 2017: https://discuss.linuxcontainers.org/t/lxc-2-1-has-been-released/487
Yeah, I think it would be good to add some detection to see which LXC version you are on.
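For illustration, a minimal sketch of such detection, assuming `lxc-start --version` prints a bare version string like `3.0.3` (the helper names are hypothetical, not part of any patch):

```python
import subprocess


def lxc_version():
    """Best-effort probe of the installed LXC version.

    Assumes `lxc-start --version` prints something like '3.0.3'.
    """
    out = subprocess.check_output(["lxc-start", "--version"])
    parts = out.decode().strip().split(".")
    return tuple(int(p) for p in parts[:2])


def uses_new_net_keys():
    """The lxc.net.<i>.* syntax was introduced in LXC 2.1."""
    return lxc_version() >= (2, 1)
```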
As a temporary workaround you can set an empty network profile:

```yaml
lxc.network_profile:
  default:
    eth0:
      disable: true
```

and use /etc/lxc/default.conf as the network configurator.
> As a temporary workaround you can set an empty network profile (`lxc.network_profile: default: eth0: disable: true`) and use /etc/lxc/default.conf as the network configurator.
Thanks skob! I will try that today.
As a temporary workaround, you could also (but probably shouldn't) do:

```sh
sed -i 's/lxc.network/lxc.net.0/g' /usr/lib/python*/dist-packages/salt/modules/lxc.py
```
> As a temporary workaround you can set an empty network profile (`lxc.network_profile: default: eth0: disable: true`) and use /etc/lxc/default.conf as the network configurator.
So I finally got around to trying this, and it doesn't seem to work for me?
I have this .sls file in pillar, assigned to the lxc hosts:
```yaml
# Work around Salt's lack of LXC 3.0+ support
# https://github.com/saltstack/salt/issues/50679#issuecomment-458072894
lxc.network_profile:
  default:
    eth0:
      disable: true
```
And

```sh
salt-run lxc.init mc-3015-nfsgateway-1804 host=mc-3015-201 template=salt-image
```

returns the same errors, most relevantly:

```
parse.c: lxc_file_for_each_line_mmap: 142 Failed to parse config file "/tmp/tmppNKAFW" at line "lxc.network.type = veth"
```
Am I missing something?
EDIT: Ok, so all I had to do after this was add `network_profile=default`, which is odd, since I would think 'default' is the default, but perhaps it's just a name?
Ok, if I use `network_profile=default`, the container is created, but it has no eth0, even though /etc/lxc/default.conf does define net.0:

```
lxc.net.0.type = veth
lxc.net.0.link = lxcbr0
lxc.net.0.flags = up
lxc.net.0.hwaddr = 00:16:3e:xx:xx:xx
```
@skob, what are you doing differently?
```
testzone-03:~# cat /etc/lxc/default.conf
lxc.net.0.hwaddr = 00:16:3e:xx:xx:xx
lxc.start.auto = 1
lxc.net.0.type = veth
# lxc.net.0.link = lxcbr0
lxc.net.0.flags = up
```
```
saltstack-01:~# salt testzone-03 lxc.create container1 profile=default network_profile=default template=download options='{dist: ubuntu, release: xenial, arch: amd64}'
testzone-03:
    ----------
    result:
        True
    state:
        ----------
        new:
            stopped
        old:
            None
```
```
saltstack-01:~# salt testzone-03 pillar.get lxc.network_profile
testzone-03:
    ----------
    default:
        ----------
        eth0:
            ----------
            disable:
                True
```
```
testzone-03:~# cat /var/lib/lxc/container1/config
...
lxc.uts.name = container1
# Network configuration
lxc.net.0.hwaddr = 00:16:3e:e4:6a:d5
lxc.net.0.type = veth
# lxc.net.0.link = lxcbr0
lxc.net.0.flags = up
```
```
container1:/# ip a sh dev eth0
103: eth0@if104: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:e4:6a:d5 brd ff:ff:ff:ff:ff:ff link-netnsid 0
```
Hi,
The workaround is fine as long as there is only one bridge on the host (VLANs) and every container has the same network interface and corresponding bridge.
As soon as you want to specify different bridges for different containers, things get ugly, as you have to create a file state to modify the container config. Or is there a better way?
Is there any information on when this will be fixed/enhanced?
Still there in 2019.2.1.
I tried to make it work tonight, quick and dirty. I got something working for my needs; here is the gist of modules/lxc.py: https://gist.github.com/AdrienR/9ec35a4275d458db8ac43bf2f93ddb10 This is from Salt 2019.2.2.
I would like to make it clean and try to get it merged into Salt. Here are my proposed changes:
replace every 'lxc.network...' string with a function returning either lxc.network.* or lxc.net.x.*, checking the LXC version. It doesn't look too complicated for an eventual first contribution. Any advice? (A rough sketch of this idea follows.)
Also, if I want to try: is this a bug, because LXC 3 breaks Salt's lxc module, or a feature, because the lxc module will support LXC 3?
I don't have any other issues using the current module with LXC 3.
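For what it's worth, a rough sketch of that idea (`net_key` and `_lxc_version` are hypothetical names, not the actual gist; it assumes `lxc-start --version` prints a plain version string):

```python
import subprocess


def _lxc_version():
    # Same best-effort probe as sketched earlier in the thread.
    out = subprocess.check_output(["lxc-start", "--version"])
    return tuple(int(p) for p in out.decode().strip().split(".")[:2])


def net_key(setting, index=0):
    """Build a network config key for the installed LXC version:
    'lxc.net.<index>.<setting>' on LXC >= 2.1,
    'lxc.network.<setting>' on older releases.
    """
    if _lxc_version() >= (2, 1):
        return "lxc.net.{0}.{1}".format(index, setting)
    return "lxc.network.{0}".format(setting)


# net_key("type")          -> "lxc.net.0.type" on LXC 3.x
# net_key("link", index=1) -> "lxc.net.1.link" on LXC 3.x
```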
> I would like to make it clean and try to get it merged into Salt. [...] replace every 'lxc.network...' string with a function returning either lxc.network.* or lxc.net.x.*, checking the LXC version. Any advice?
We could change _network_profile_ from a dict to a dict of dicts, and use the first key to satisfy the lxc.net.[virtualised_network_index].* requirement. Or a dict of dicts with integers as the key.
Example:
```yaml
network_profile:
  0:
    lxcbr:
      eth0:
        flags: up
        link: virbr0
        type: veth
        ipv4: 192.168.112.2/24
```
Thoughts?
> Or a dict of dicts with integers as the key.

Or a list?
> Or a dict of dicts with integers as the key.

The key would be redundant with the interface name? Dunno if that's an issue or not.
I would vote for a list of dicts instead.
> The key would be redundant with the interface name? I would vote for a list of dicts instead.
Yeah. I gave it some thought and think the dict approach is required.
The ability to target the vnet arbitrarily is important to many orchestration use cases.
On the question of using the network name as the key, that's not how LXC v3.0.3 seems to be set up:

```
lxc.net.0.name = foo
lxc.net.1.name = foo
```

As far as I know, both of these are valid because they're on separate vnets.
Let me know if you think I'm missing something here. I'm troubleshooting this on the side at work, so I may not be catching everything 100%.
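To make that concrete, here is a small illustration (not proposed code) of how an integer-indexed profile could be flattened into lxc.net.&lt;i&gt;.* lines, with the interface name carried as an ordinary setting so two vnets may legitimately share one name:

```python
# Hypothetical integer-indexed profile; 'name' is just another
# setting, so index 0 and 1 can both use the name "foo".
profile = {
    0: {"type": "veth", "link": "lxcbr0", "flags": "up", "name": "foo"},
    1: {"type": "veth", "link": "virbr0", "flags": "up", "name": "foo"},
}

for index, settings in sorted(profile.items()):
    for setting, value in sorted(settings.items()):
        print("lxc.net.{0}.{1} = {2}".format(index, setting, value))
```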
@adam-codeberg: since you are working on modules/lxc.py, here is a "quick fix" I personally had to do: https://github.com/saltstack/salt/issues/52219#issuecomment-535421741
Created a fork and branch for this issue here: https://github.com/adam-codeberg/salt/blob/fix-issue-50679/salt/modules/lxc.py
I have not included backwards compatibility with older LXC versions. Works with:

```yaml
lxc.network_profile:
  lxcbr:
    eth0:
      flags: up
      link: virbr0
      type: veth
      ipv4:
        address: 192.168.112.1/24
    eth1:
      flags: up
      link: virbr0
      type: veth
      ipv4:
        address: 10.0.0.1/24
```

Edit: Needed to increase the bootstrap_delay to accommodate the extra networks, but it is working well.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
If this issue is closed prematurely, please leave a comment and we will gladly reopen the issue.
Thank you for updating this issue. It is no longer marked as stale.