This is more of a feature request.
In my deployment I use containers as host slaves, and in each container I fill /etc/hosts with the hostnames of the host and of the other containers (so a container can reach the other containers and the host by hostname).
It would be great if this were a container configuration option rather than a manually crafted /etc/hosts.
A kind of DNS, but for containers.
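For concreteness, this is roughly what I maintain by hand today, sketched as NixOS options (the hostnames and addresses are only illustrative examples):
# the same list has to be repeated inside every container's configuration
networking.extraHosts = ''
  192.168.102.1 iron    # the host
  192.168.102.2 d-live  # sibling containers
  192.168.102.3 d-test
'';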
If we're using nspawn and machined, then the nss-mymachines module should do this automatically.
All my containers are declarative. I use
privateNetwork = true;
hostAddress = ...
localAddress = ...
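For reference, one complete declaration along those lines looks roughly like this (the addresses are only an example):
containers.d-test = {
  privateNetwork = true;
  hostAddress  = "192.168.102.1";  # the host's end of the veth pair
  localAddress = "192.168.102.3";  # the container's own address (example value)
  config = { };
};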
Here is a sample session:
danbst@iron ~$
$ nixos-container list
d-live
d-test
dashboard
pvk-live
ttt
danbst@iron ~$
$ sudo nixos-container run d-test -- ping d-live
ping: unknown host d-live
I'd check that your /etc/nsswitch.conf has mymachines in it, and that nscd is running.
If you mean the host: yes, it has mymachines in nsswitch.conf and nscd is running.
The same is true for the containers:
$ sudo nixos-container run d-live -- cat /etc/nsswitch.conf | grep myma
hosts: files dns myhostname mymachines
$ sudo nixos-container run d-live -- systemctl status nscd
● nscd.service - Name Service Cache Daemon
Loaded: loaded (/nix/store/kq1cjmi618g4rc8nniknjk21z05hlcdv-unit-nscd.service/nscd.service; bad; vendor preset: enabled)
Active: active (running) since Wed 2016-07-06 12:58:21 UTC; 3h 14min ago
Main PID: 238 (nscd)
   CGroup: /system.slice/system-container.slice/container@d-live.service/system.slice/nscd.service
└─238 nscd -f /nix/store/jy3ghxvh67r4f7cz5x0rwrda91dsyrk7-nscd.conf
Jul 06 12:58:21 d-live nscd[238]: 238 monitoring directory `/etc` (2)
Jul 06 12:58:21 d-live nscd[238]: 238 monitoring file `/etc/resolv.conf` (5)
Jul 06 12:58:21 d-live nscd[238]: 238 monitoring directory `/etc` (2)
Jul 06 15:25:17 d-live nscd[238]: 238 monitored file `/etc/hosts` was moved into place, adding watch
Jul 06 15:25:17 d-live nscd[238]: 238 monitored file `/etc/group` was moved into place, adding watch
Jul 06 15:25:17 d-live nscd[238]: 238 monitoring file `/etc/group` (7)
Jul 06 15:25:17 d-live nscd[238]: 238 monitoring directory `/etc` (2)
Jul 06 15:25:17 d-live nscd[238]: 238 monitored file `/etc/passwd` was moved into place, adding watch
Jul 06 15:25:17 d-live nscd[238]: 238 monitoring file `/etc/passwd` (8)
Jul 06 15:25:17 d-live nscd[238]: 238 monitoring directory `/etc` (2)
$ sudo nixos-container run d-live -- cat /etc/hosts
127.0.0.1 localhost
192.168.102.1 iron
192.168.102.1 iron_local
192.168.102.2 d-live
These entries in /etc/hosts were added manually.
Interesting; I think our mymachines configuration is wrong, and that it should come before dns. Perhaps try editing that and see if it helps? The idea is that mymachines is a "virtual DNS" that uses dbus to speak to machined and figure out what to resolve the machine name to. It should definitely not be necessary to manually modify /etc/hosts.
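Concretely, that would change the hosts line in /etc/nsswitch.conf from the ordering shown above to something like:
hosts: files mymachines dns myhostname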
For the record, I tried putting mymachines before dns and it didn't help.
@ericsagnes Confirmed
As far as I understand, mymachines is for the host to resolve container names without entries in /etc/hosts. It does nothing for the containers themselves.
(BTW, according to the docs, the current placement of mymachines is slightly off:
It is recommended to place "mymachines" after the "files" or "compat" entry of the
/etc/nsswitch.conf lines to make sure that its mappings are preferred over other
resolvers such as DNS, but so that /etc/hosts, /etc/passwd and /etc/group based mappings
take precedence.
https://www.freedesktop.org/software/systemd/man/nss-mymachines.html
This is a bit related to https://github.com/NixOS/nixpkgs/pull/6004.)
Another approach: according to https://github.com/systemd/systemd/issues/456#issuecomment-119077631, it sounds possible to get bidirectional name resolution using services.resolved.enable with the right configuration.
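In NixOS terms that approach comes down to a couple of options on the host and inside a container; this is only a hedged sketch (whether it is sufficient without systemd's .network files is exactly what is discussed below):
{
  # host side: enable systemd-resolved
  services.resolved.enable = true;
  # container side (d-test is one of the containers from the session above)
  containers.d-test.config = {
    services.resolved.enable = true;
  };
}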
Good find. It seems Lennart is talking about these files: https://github.com/systemd/systemd/tree/master/network,
and they are absent in NixOS.
I think nss-mymachines simply doesn't work on NixOS (and I don't know why). The above workaround with services.resolved.enable is related to nss-resolve.
The actual answer to my question is in https://github.com/systemd/systemd/issues/3308#issuecomment-220606075, plus resolved properly enabled on both sides.
It still needs to be implemented, though. Unfortunately, this requires bypassing most of the container creation logic...
@danbst are you using nscd? The nsswitch.conf stuff for mymachines won't work without it.
Oh never mind, you addressed that further up. Not sure what's wrong, then.
I've made pull request #20869, which seems to be relevant to this issue.
Can this be closed?
@srghma
It is still non-trivial to make containers see each other's names, so no, it should not be closed yet.
To be clear, I've tested the following configuration:
containers.test2 = {
autoStart = true;
privateNetwork = true;
hostAddress = "10.0.5.1";
localAddress = "10.0.5.2";
config = {
};
};
containers.test3 = {
autoStart = true;
privateNetwork = true;
hostAddress = "10.0.5.1";
localAddress = "10.0.5.3";
config = {
};
};
I can ping test2 and test3 from the host machine, and I can ping 10.0.5.2 and 10.0.5.3 from both containers, but I cannot ping test2 and test3 from inside the containers.
To overcome this issue, I've created an abstraction over NixOS containers:
{ pkgs, lib, config, ... }:
with lib;
with builtins;
let
cfg = config.app-containers;
defaultSubnet = "192.168.0";
bridgeName = "br0";
  # Doesn't allow merging values, but does allow (equal) duplicate definitions
customStrType = with import <nixpkgs/lib>; mkOptionType {
name = "str";
check = isString;
merge = mergeEqualOption;
};
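  # One "IP containerName" line per declared app-container; this block is shared
  # by the host and by every container so each can resolve the others by name.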
commonContainerHosts = concatMapStringsSep "\n"
(x: "${x.ip} ${x.containerName}")
(attrValues cfg);
appOptions = { name, config, ... }: {
options = {
containerName = mkOption {
default = name;
};
ip = mkOption {
type = customStrType;
};
deployment = mkOption { default = {}; };
extraConfig = mkOption { default = {}; };
subnet = mkOption {
type = customStrType;
default = defaultSubnet;
};
};
config = {
containerName = mkDefault name;
};
};
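  # Build the nixos-container definition for a single app-container entry:
  # attach it to the shared bridge and inject the common /etc/hosts entries.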
genContainer = parentCfg: {
autoStart = true;
privateNetwork = true;
hostBridge = bridgeName;
localAddress = parentCfg.ip + "/24";
config = lib.mkMerge [
({
imports = [
<nixpkgs/nixos/modules/profiles/headless.nix>
<nixpkgs/nixos/modules/profiles/minimal.nix>
];
networking.defaultGateway = "${parentCfg.subnet}.1";
networking.extraHosts = commonContainerHosts + "\n" + ''
${parentCfg.subnet}.1 ${config.networking.hostName}
'';
})
parentCfg.extraConfig
];
} // parentCfg.deployment;
in {
options = {
    app-containers = mkOption {
      default = { };
type = with types; attrsOf (submodule appOptions);
};
};
config = mkIf (attrNames cfg != []) {
# Each container takes at least 4 inotify file handles. We start MANY of them
boot.kernel.sysctl."fs.inotify.max_user_instances" = 2048;
# Containers should be able to access Internet
networking.nat.enable = true;
networking.nat.internalInterfaces = [ "ve-+" "vb-+" "br0" ];
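    # Refuse to evaluate if two app-containers were given the same IP address.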
assertions =
let allIPs = foldl' (a: b: a // { "${b.ip}" = if hasAttr "${b.ip}" a then "non-unique" else "";}) {} (attrValues cfg);
in [ {
assertion = all (x: x == "") (attrValues allIPs);
message = "You should specify unique IPs! " + concatStringsSep ", " (mapAttrsToList (n: v: "${n} ${v}") allIPs);
} ];
networking.bridges.${bridgeName}.interfaces = [];
networking.interfaces.${bridgeName}.ip4 = [ { address = "${defaultSubnet}.1"; prefixLength = 24; } ];
networking.extraHosts = commonContainerHosts;
containers =
foldl'
(a: x: a // { "${x.containerName}" = genContainer x; })
{ }
(attrValues cfg);
};
}
Which is then used like this:
app-containers.test4 = {
ip = "192.168.0.2";
};
app-containers.test5 = {
ip = "192.168.0.3";
};
The trick here is to populate /etc/hosts in each container, so every newly added container updates the static host entries of all containers:
[root@test4:~]# cat /etc/hosts
127.0.0.1 localhost
::1 localhost
192.168.0.2 test4
192.168.0.3 test5
192.168.0.1 station