Podman: Unable to get Podman to run an image, building from source

Created on 17 Oct 2019 · 6 Comments · Source: containers/podman

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

I am unable to get Podman to run an image. Podman is built from source, and I've followed the installation dependencies. The most confusing part is the CNI plugins; however, I have built them and set up the config in /etc/cni/net.d (a sketch of the config follows the log below), yet when I try to run the image it fails with an obscure iptables error:

INFO[0000] About to add CNI network podman (type=bridge) 
DEBU[0000] mounted container "b2492a8b728319e70eae54524e4bfd7062e89b467ab0e2bb2bc494dcc9fbdd4b" at "/home/core/var/lib/containers/storage/overlay/357c856ed0dcb232cb74d59a85f999e18bbac84cccce73891f1ac5ebed29f55b/merged" 
DEBU[0000] Created root filesystem for container b2492a8b728319e70eae54524e4bfd7062e89b467ab0e2bb2bc494dcc9fbdd4b at /mnt/ssd0data/home/core/var/lib/containers/storage/overlay/357c856ed0dcb232cb74d59a85f999e18bbac84cccce73891f1ac5ebed29f55b/merged 
ERRO[0000] Error adding network: running [/usr/sbin/iptables -t nat -A CNI-701a16b5b6a1ae7074a21462 -d 10.88.0.4/16 -j ACCEPT -m comment --comment name: "podman" id: "b2492a8b728319e70eae54524e4bfd7062e89b467ab0e2bb2bc494dcc9fbdd4b" --wait]: exit status 1: iptables: No chain/target/match by that name. 
ERRO[0000] Error while adding pod to CNI network "podman": running [/usr/sbin/iptables -t nat -A CNI-701a16b5b6a1ae7074a21462 -d 10.88.0.4/16 -j ACCEPT -m comment --comment name: "podman" id: "b2492a8b728319e70eae54524e4bfd7062e89b467ab0e2bb2bc494dcc9fbdd4b" --wait]: exit status 1: iptables: No chain/target/match by that name. 
DEBU[0000] unmounted container "b2492a8b728319e70eae54524e4bfd7062e89b467ab0e2bb2bc494dcc9fbdd4b" 
DEBU[0000] Network is already cleaned up, skipping...   
DEBU[0000] Cleaning up container b2492a8b728319e70eae54524e4bfd7062e89b467ab0e2bb2bc494dcc9fbdd4b 
DEBU[0000] Network is already cleaned up, skipping...   
DEBU[0000] Container b2492a8b728319e70eae54524e4bfd7062e89b467ab0e2bb2bc494dcc9fbdd4b storage is already unmounted, skipping... 
DEBU[0000] ExitCode msg: "error configuring network namespace for container b2492a8b728319e70eae54524e4bfd7062e89b467ab0e2bb2bc494dcc9fbdd4b: running [/usr/sbin/iptables -t nat -a cni-701a16b5b6a1ae7074a21462 -d 10.88.0.4/16 -j accept -m comment --comment name: \"podman\" id: \"b2492a8b728319e70eae54524e4bfd7062e89b467ab0e2bb2bc494dcc9fbdd4b\" --wait]: exit status 1: iptables: no chain/target/match by that name.\n" 
ERRO[0000] error configuring network namespace for container b2492a8b728319e70eae54524e4bfd7062e89b467ab0e2bb2bc494dcc9fbdd4b: running [/usr/sbin/iptables -t nat -A CNI-701a16b5b6a1ae7074a21462 -d 10.88.0.4/16 -j ACCEPT -m comment --comment name: "podman" id: "b2492a8b728319e70eae54524e4bfd7062e89b467ab0e2bb2bc494dcc9fbdd4b" --wait]: exit status 1: iptables: No chain/target/match by that name. 
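
For reference, the config I set up in /etc/cni/net.d is roughly the stock bridge conflist that Podman ships; the exact values below (file name, subnet, plugin list) are illustrative and may differ on a given build:

cat /etc/cni/net.d/87-podman-bridge.conflist
{
  "cniVersion": "0.4.0",
  "name": "podman",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni-podman0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.88.0.0/16",
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}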

Steps to reproduce the issue:

  1. Build Podman from source

  2. Set up dependencies and pull the well-known ubi-minimal image "registry.access.redhat.com/ubi8/ubi-minimal", which works on Fedora 28.

  3. Save the image: 'podman save -o oci-redhat-minimal.tar ubi-minimal' (full round trip sketched after this list)

  4. Load the image on another Linux system: 'podman load -i oci-redhat-minimal.tar'

  5. This works:

[root@test1 /]# podman images
REPOSITORY              TAG      IMAGE ID       CREATED       SIZE
localhost/ubi-minimal   latest   db87fd3a378e   6 weeks ago   91.7 MB
  6. Try to run it with 'podman run --log-level debug ubi-minimal'; it fails with the message above.
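
For completeness, the save/load round trip from steps 3-4 is roughly the following (image and tarball names are illustrative; -o and -i are podman save/load's output/input flags):

# On the Fedora 28 box that can reach the registry:
podman pull registry.access.redhat.com/ubi8/ubi-minimal
podman save -o oci-redhat-minimal.tar registry.access.redhat.com/ubi8/ubi-minimal

# Copy the tarball to the target box, then:
podman load -i oci-redhat-minimal.tar
podman run --log-level debug ubi-minimal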

Describe the results you received:
The container fails to start with the iptables error shown above.

Describe the results you expected:
For the container to just run without error.

Additional information you deem important (e.g. issue happens only occasionally):
Occurs every time

Output of podman version:

[root@nc1 /]# podman version
Version:            1.6.2-dev
RemoteAPI Version:  1
Go Version:         go1.10.8
OS/Arch:            linux/amd64

Output of podman info --debug:

[root@nc1 /]# podman info --debug
debug:
  compiler: gc
  git commit: ""
  go version: go1.10.8
  podman version: 1.6.2-dev
host:
  BuildahVersion: 1.11.3
  CgroupVersion: v1
  Conmon:
    package: Unknown
    path: /usr/libexec/podman/conmon
    version: 'conmon version 2.0.2-dev, commit: 0e888a95b9f7a632ce2557332967e142675cb661'
  Distribution:
    distribution: unknown
    version: unknown
  MemFree: 29606912000
  MemTotal: 33579040768
  OCIRuntime:
    package: Unknown
    path: /usr/bin/runc
    version: |-
      runc version 1.0.0-rc9+dev
      commit: 4e3701702e966b4258fbab5b92efa6418c5ae6c6
      spec: 1.0.1-dev
  SwapFree: 0
  SwapTotal: 0
  arch: amd64
  cpus: 8
  eventlogger: journald
  hostname: nc1
  kernel: 4.19.24
  os: linux
  rootless: false
  uptime: 39m 23.49s
registries:
  blocked: null
  insecure: null
  search:
  - docker.io
  - quay.io
  - registry.fedoraproject.org
store:
  ConfigFile: /etc/containers/storage.conf
  ContainerStore:
    number: 5
  GraphDriverName: overlay
  GraphOptions:
    .mountopt: nodev
    .skip_mount_home: "false"
  GraphRoot: /home/core/var/lib/containers/storage
  GraphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
  ImageStore:
    number: 1
  RunRoot: /home/core/var/run/containers/storage
  VolumePath: /home/core/var/lib/containers/storage/volumes

Package info (e.g. output of rpm -q podman or apt list podman):

N/A (built from source)

Additional environment details (AWS, VirtualBox, physical, etc.):
Stripped-down Linux OS based on Fedora, physical box

kind/bug

All 6 comments

You mention your Linux install is stripped down - is iptables installed? Is it running? Are any rules created by default?

iptables (xtables-multi) is installed, with all of the plugins installed as well. I start Podman with an empty iptables ruleset. Same result: the iptables error.
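
For reference, a quick way to check the userspace side (the xtables library path varies by distro, so the path below is a guess):

# iptables binary and its userspace extensions
iptables -V
ls /usr/lib64/xtables/ | grep -i comment    # expect libxt_comment.so for the comment match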

Empty ruleset? As in, iptables -nvL returns no active rules?

Correct, it returns no active rules. In fact, iptables-save is also empty, but after running the podman run command it is populated with the CNI chain that perhaps should have already been there?

[root@nc1 ~]# iptables -nvL
Chain INPUT (policy ACCEPT 106 packets, 7485 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 91 packets, 7830 bytes)
 pkts bytes target     prot opt in     out     source               destination

After running podman run, the output of iptables-save (prior to that it was empty):

[root@nc1 ~]# iptables-save
# Generated by iptables-save v1.6.2 on Thu Oct 17 18:08:32 2019
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:CNI-c9de117064a1b26e2e3ed3ab - [0:0]
COMMIT
# Completed on Thu Oct 17 18:08:32 2019
# Generated by iptables-save v1.6.2 on Thu Oct 17 18:08:32 2019
*filter
:INPUT ACCEPT [117:8375]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [116:12393]
COMMIT
# Completed on Thu Oct 17 18:08:32 2019

The error message references the same CNI chain that was added above:

ERRO[0000] Error while adding pod to CNI network "podman": running [/usr/sbin/iptables -t nat -A CNI-c9de117064a1b26e2e3ed3ab -d 10.88.0.9/16 -j ACCEPT -m comment --comment name: "podman" id: "7162f9b03bc66c349b68b43581623b65ff81bfaf1a28e515ea3fbbd06e888805" --wait]: exit status 1: iptables: No chain/target/match by that name. 
Error: error configuring network namespace for container 7162f9b03bc66c349b68b43581623b65ff81bfaf1a28e515ea3fbbd06e888805: running [/usr/sbin/iptables -t nat -A CNI-c9de117064a1b26e2e3ed3ab -d 10.88.0.9/16 -j ACCEPT -m comment --comment name: "podman" id: "7162f9b03bc66c349b68b43581623b65ff81bfaf1a28e515ea3fbbd06e888805" --wait]: exit status 1: iptables: No chain/target/match by that name.
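
Worth noting: iptables-save shows the CNI chain itself does get created, so "No chain/target/match by that name" is evidently complaining about the -m comment match rather than the chain. A rough way to confirm (the rule below is only a test and is deleted again):

# Same append without the comment match - if this succeeds, the chain exists:
iptables -t nat -A CNI-c9de117064a1b26e2e3ed3ab -d 10.88.0.9/16 -j ACCEPT
iptables -t nat -D CNI-c9de117064a1b26e2e3ed3ab -d 10.88.0.9/16 -j ACCEPT

# Adding the comment match back makes it fail again:
iptables -t nat -A CNI-c9de117064a1b26e2e3ed3ab -d 10.88.0.9/16 -j ACCEPT -m comment --comment "test"
#   iptables: No chain/target/match by that name.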

This one was a bit tricky, but it turned out the kernel I'm building did not have CONFIG_NETFILTER_XT_MATCH_COMMENT enabled...
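
For anyone else hitting this, a rough way to check the kernel side (how the config is exposed depends on the kernel build, so both locations below are guesses):

# Look for the comment match in the running kernel's config:
zgrep CONFIG_NETFILTER_XT_MATCH_COMMENT /proc/config.gz 2>/dev/null
grep CONFIG_NETFILTER_XT_MATCH_COMMENT /boot/config-$(uname -r) 2>/dev/null
# Want CONFIG_NETFILTER_XT_MATCH_COMMENT=y, or =m with the xt_comment module loadable:
modprobe xt_comment && lsmod | grep xt_comment

# Quick functional test of the match itself (rule is added and removed immediately):
iptables -A INPUT -m comment --comment "test" -j ACCEPT && iptables -D INPUT -m comment --comment "test" -j ACCEPT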

The error message from iptables was quite cryptic.

I really appreciate the help; it made me look further into iptables!

So @hickersonj, I guess we can close this issue; reopen if I am mistaken.
