Nixpkgs: Support network-namespace based wireguard vpn setup [feature request]

Created on 17 Dec 2018 · 22 comments · Source: NixOS/nixpkgs

At https://www.wireguard.com/netns/, the WireGuard documentation recommends, as the top option, a VPN setup based on a network namespace rather than on routing rules. It would probably require some integration with other services - for example, variants of the wpa_supplicant and dhcpcd services that run inside the network namespace.

All 22 comments

So, to brainstorm what this might consist of

(1) a new systemd service wg0-namespace, which
(a) creates a network namespace named physical
(b) moves eth0 and wlan0 (or equivalent) into that namespace
(c) creates the wireguard interface wg0 inside namespace physical
(d) moves wg0 to the init namespace
wg0-namespace would conflict with wpa_supplicant and dhcpcd, as they would no longer have access to eth0/wlan0.

(2) two new systemd services, wg0-supplicant and wg0-dhcpcd, which are equivalent to the wpa_supplicant and dhcpcd services, respectively, except that they wrap ExecStart with ip netns exec physical ...
these two services depend on wg0-namespace

(3) a new systemd service, wg0-vpn, which (all in the init namespace)
(a) configures the wg0 interface with private key, peer IP and so on
(b) brings up the interface
(c) adds the interface as default route

(4) optionally a CLI utility with the physexec functionality described in the wireguard docs, to allow explicitly launching processes that bypass the vpn
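For item (4), the wireguard netns page sketches a wrapper along these lines. This is a hedged adaptation, not the exact script from the docs; the namespace name physical matches the brainstorm above, and it has to run via sudo since joining a namespace requires root:

```shell
#!/bin/sh
# physexec: run a command inside the "physical" network namespace so that it
# bypasses the VPN. Adapted from https://www.wireguard.com/netns/; the
# namespace name "physical" is the one assumed in step (1) above.
# Re-enter the namespace as root, then drop back to the invoking user.
exec sudo -E ip netns exec physical \
  sudo -E -u "#${SUDO_UID:-$(id -u)}" -g "#${SUDO_GID:-$(id -g)}" -- "$@"
```

Usage would be e.g. `physexec curl ifconfig.me` to see your non-VPN address.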

--

possibly (3) should just be folded into (1).

ideally shutting off the vpn would automatically restore wpa_supplicant and dhcpcd services (in the default root namespace). I'm not sure if this can be expressed explicitly with systemd units - we might just need an explicit systemctl invocation in ExecStopPost
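One way this might be sketched with plain unit options - unit names are hypothetical and I haven't verified the ordering works out in all cases:

```nix
{
  systemd.services.wg0-namespace = {
    # Taking over eth0/wlan0 conflicts with the normal network services,
    # so stop them when this unit starts.
    conflicts = [ "wpa_supplicant.service" "dhcpcd.service" ];
    serviceConfig = {
      Type = "oneshot";
      RemainAfterExit = true;
      # On shutdown, explicitly bring the normal services back.
      ExecStopPost =
        "${pkgs.systemd}/bin/systemctl --no-block start wpa_supplicant.service dhcpcd.service";
    };
  };
}
```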

I have a prototype implementation here https://github.com/anderspapitto/nixos-configuration/commit/ff73f584e717e84ccab11a454d5f4987082e2ee3, with one small tweak to nixpkgs needed here https://github.com/anderspapitto/nixpkgs/commit/3429be383a98a5213ec1b509a5145d3d23a8d7a7

It works great, but some parts are a bit hacky at the moment. What would really help is a clean way to say "restart this systemd service inside this network namespace". I'm not sure if there's any support for this type of thing in systemd itself, or in NixOS. One thought is to maybe use NixOS containers, if they support joining an existing network namespace.

my implementation is improved at https://github.com/anderspapitto/nixos-configuration/blob/master/wireguard-client.nix (no longer requiring the separate tweak to nixpkgs), and discussed at https://reflexivereflection.com/posts/2018-12-18-wireguard-vpn-with-network-namespace-on-nixos.html. That's probably where I'll let things rest, as it's fully functional for my use case.

Thanks for the write-up! I'll probably be stealing this ;)

Just a question, but where does enp0s25 come from? I see that the wireless interface is included at networking.wireless.interfaces but I'm lost on where enp0s25 comes from. I only have a single network interface, wlp59s0.

Also it seems I can't use iwd with this currently, since I read that D-Bus services can't communicate across network namespaces.

Oops, thought enp0s25 was required, forgot that it was an interface for ethernet connections.

I have set up NixOS with multiple network namespaces in the following way. I prefer OpenVPN, but on occasion I have tested this with WireGuard too.

I keep physical network interfaces in the initial network namespace (I call it inet), while launching the desktop and creating virtual interfaces in a new namespace (I call it vnet). I also have nonet with no interfaces other than loopback.

Relevant bits of configuration:

{
  services.nscd.enable = false; # Do not let DNS cross namespaces.
  services.dnsmasq.enable = true; # inet DNS
  services.unbound.enable = true; # vnet DNS
  # Use local DNS.
  environment.etc = {
    "resolv.conf".text = "nameserver 127.0.0.1\n";
  };
  # Launch the desktop in vnet.
  services.xserver.displayManager.sddm.enable = true;
  services.xserver.displayManager.sddm.extraConfig = ''
    [General]
    Namespaces=/run/netns/vnet
  '';

  systemd.services = {
    # Create namespaces and bind them in /run/netns/.
    ns-inet = nsunit "inet" false;
    ns-nonet = nsunit "nonet" true;
    ns-vnet = nsunit "vnet" true;

    # Launch select services in vnet.
    "user@" = VNET;
    nix-daemon = VNET;
    vboxnet0 = VNET;
    cups = VNET;
    quassel = VNET;
    postgresql = VNET;
    unbound = VNET;
  };
}

where

{
  PN = { PrivateNetwork = "yes"; };
  NSVNET = { JoinsNamespaceOf = "ns-vnet.service"; };
  VNET = { serviceConfig = PN; unitConfig = NSVNET; };

  nsunit = ns: new: {
    description = "ns: " + ns;

    after = [ "run-netns.mount" ];
    wants = [ "run-netns.mount" ];
    wantedBy = [ "network.target" ];

    serviceConfig = {
      Type = "oneshot";
      RemainAfterExit = "yes";
    } // (if new then PN else {});

    path = [ pkgs.utillinux ];

    environment = { NS = ns; };
    script = builtins.readFile ./netns.start.sh;
    preStop = builtins.readFile ./netns.stop.sh;
  };
}

netns.start.sh:

test -e /run/netns/"$NS" || touch /run/netns/"$NS"
umount /run/netns/"$NS" || true
mount -o bind /proc/self/ns/net /run/netns/"$NS"

netns.stop.sh:

umount /run/netns/"$NS"
rm /run/netns/"$NS"

SDDM supports this since 0.18.0 (https://github.com/sddm/sddm/pull/798) which is already in Nixpkgs.

I managed to configure the netns after it got exposed to ip netns, but JoinsNamespaceOf and PrivateNetwork still require the use of ip netns exec to actually use it (or maybe the way I'm setting it up is wrong).

ls -la /sys/class/net

fg12-unit-script-netns-test-start[30310]: drwxr-xr-x  2 root root 0 Dec 31 10:51 .
fg12-unit-script-netns-test-start[30310]: drwxr-xr-x 55 root root 0 Dec 31 10:51 ..
fg12-unit-script-netns-test-start[30310]: lrwxrwxrwx  1 root root 0 Dec 31 10:51 docker.
fg12-unit-script-netns-test-start[30310]: lrwxrwxrwx  1 root root 0 Dec 31 10:51 lo
fg12-unit-script-netns-test-start[30310]: lrwxrwxrwx  1 root root 0 Dec 31 10:51 wlp...

ip netns exec physical ls -la /sys/class/net

fg12-unit-script-netns-test-start[30310]: drwxr-xr-x  2 root root 0 Dec 31 10:51 .
fg12-unit-script-netns-test-start[30310]: drwxr-xr-x 55 root root 0 Dec 31 10:51 ..
fg12-unit-script-netns-test-start[30310]: lrwxrwxrwx  1 root root 0 Dec 31 10:51 lo

My configuration is

systemd.services = {
    "netns-${NETNS}" = {
      # Automatically create namespace
      # after = [ "run-netns.mount" ];
      # wants = [ "run-netns.mount" ];
      # wantedBy = [ "network.target" ];

      description = "Wireguard network namespace `${NETNS}`";
      documentation = [ "https://github.com/systemd/systemd/issues/2741#issuecomment-433979748" ];
      path = with pkgs; [ iproute utillinux ];
      serviceConfig = {
        Type = "oneshot";
        PrivateNetwork = true;
        RemainAfterExit = true;
      };

      postStop = ''
        ip netns del ${NETNS}
      '';

      script = ''
        ip netns add ${NETNS}
        umount /run/netns/${NETNS}
        mount --bind /proc/self/ns/net /run/netns/${NETNS}
      '';
    };

    netns-test = {
      # after = [ "netns-${NETNS}" ];
      # bindsTo = [ "netns-${NETNS}" ];
      # requires = [ "netns-${NETNS}" ];

      path = with pkgs; [ iproute ];
      unitConfig.JoinsNamespaceOf = "netns-${NETNS}.service";
      serviceConfig = {
        PrivateNetwork = true;
      };
      script = ''
        _NETNS=$(ip netns identify)

        ip netns list
        echo "CURRENT NETNS"
        echo "netns ''${_NETNS}"
        ip netns identify
        ls -la /sys/class/net
        ip netns exec ${NETNS} ls -la /sys/class/net
        echo "END"
      '';
    };
};

while executing

systemctl start netns-physical
systemctl start netns-test
systemctl status netns-test

ip netns exec does more than switching the network namespace: it also switches the mount namespace and mounts a private /sys.

However, in general, the files in /sys/class/net are not indicative of the network namespace you are in, and most programs do not care about them. You can list the network devices in your network namespace with ip link. This list is affected by PrivateNetwork and JoinsNamespaceOf as expected.
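A quick way to see this without systemd, assuming unprivileged user namespaces are enabled on your kernel (otherwise run it as root): a freshly created network namespace contains only lo, which is exactly what a service under PrivateNetwork=yes sees, regardless of what /sys/class/net shows in the host mount namespace.

```shell
# Enter a new user + network namespace and list its links: only "lo"
# appears, and it is down by default.
unshare -rn ip link show
```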

The problem with ip link for me is that it doesn't output only the wireless interfaces (although wl* happens to match all of mine). I only managed to get this to work using wpa_supplicant, and only got as far as seeing the interface in NetworkManager and iwd while being unable to connect them to the Internet afterwards.

I think it is easier to leave physical interfaces in the initial network namespace: then NetworkManager just works.

Based on the article, it looks like interfaces have to be in the same network namespace in order for all the traffic to go through the WireGuard interface.

The WireGuard interface should be created in the namespace with internet connectivity (e.g. the namespace with the physical interfaces), and then moved into the namespace with no interfaces other than loopback. The netns article explains how to create a new namespace, move the physical interfaces there, reconfigure them (they lose their settings when moved), create a WireGuard interface in that new namespace, and then move it back into the original namespace. The intended benefit of this scheme is that it makes WireGuard available to all applications running in the original namespace, including systemd and NetworkManager, which manage them; the downside is the ensuing difficulty of configuring the physical interfaces after the move. I have shown a NixOS config that avoids this difficulty, but it requires you to list the desktop and other services that you want to run over the VPN.
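The sequence described in the netns article boils down to something like the following; interface and namespace names are examples, and this needs root:

```shell
# 1. Create the namespace that will hold the physical interface.
ip netns add physical
# 2. Move the physical interface there (it loses its IP configuration).
ip link set eth0 netns physical
# 3. Create wg0 inside "physical", so its encrypted UDP socket lives there...
ip -n physical link add wg0 type wireguard
# 4. ...then move wg0 back into the init namespace (PID 1's), where
#    ordinary applications run.
ip -n physical link set wg0 netns 1
# 5. Reconfigure eth0 inside "physical"; wg0 is then configured as usual
#    in the init namespace.
ip -n physical addr add 192.168.1.10/24 dev eth0
ip -n physical link set eth0 up
```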

For people seeking a solution to the Allowed IPs = 0.0.0.0/0 problem, the newer networking.wg-quick.interfaces seems to be working flawlessly.
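For reference, a minimal networking.wg-quick.interfaces client configuration with a catch-all AllowedIPs might look like this; keys, addresses and the endpoint are placeholders:

```nix
{
  networking.wg-quick.interfaces.wg0 = {
    address = [ "10.100.0.2/24" ];
    privateKeyFile = "/root/wireguard-keys/private";
    peers = [{
      publicKey = "<server public key>";
      allowedIPs = [ "0.0.0.0/0" ]; # route everything through the tunnel
      endpoint = "vpn.example.com:51820";
      persistentKeepalive = 25;
    }];
  };
}
```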

FYI: The whole "creating the namespace, and moving the wireguard interface into the namespace" part should be much easier once https://github.com/systemd/systemd/pull/14915 has landed.

see also: https://discourse.nixos.org/t/run-systemd-service-in-network-namespace/

@matthias-t can you please be more explicit about how to use your solution to route all traffic via WireGuard? I tried to follow your blog post, but it seems I cannot connect to the wg service which runs WireGuard.

I don't know if the following part is relevant in this case.

{ ... }: {
  systemd.services.<service> = {
    bindsTo = [ "wg.service" ];
    after = [ "wg.service" ];
    unitConfig.JoinsNamespaceOf = "[email protected]";
    serviceConfig.PrivateNetwork = true;
  };
}

For people seeking a solution to the Allowed IPs = 0.0.0.0/0 problem, the newer networking.wg-quick.interfaces seems to be working flawlessly.

@tpanum do you mean it works on the client-side or the server-side?

@Zhen-hao I'm not sure what you mean by "routing all traffic via Wireguard". The point of my post was to confine specific systemd services, not routing traffic for all processes. The service I called wg configures a wireguard interface in the wg netns. Its purpose is to isolate systemd services with JoinsNamespaceOf. But you can also just sudo ip netns exec wg sudo -u <user> bash for a shell.

@matthias-t sorry for the confusion. I thought you meant your solution could also solve the "Allowed IPs = 0.0.0.0/0" issue mentioned earlier in this thread. That's what I called "routing all traffic"

Also, your blog looks similar to this other blog. Do you think your solution can be used to achieve the same goal?

@Zhen-hao I believe many people get here wanting to use WireGuard as a VPN, basically routing all traffic through some provider that supports WireGuard (like Mullvad) in order not to disclose their traffic to some untrusted NAT.

What I described has been working great for me, as a client.

@tpanum thank you! I had some trouble on the server-side because I forgot to reboot after setting "net.ipv4.ip_forward" = 1;
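For the record, the declarative NixOS way to set that sysctl, which is applied on activation without a reboot as far as I know:

```nix
{
  boot.kernel.sysctl."net.ipv4.ip_forward" = 1;
}
```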
