Podman: log corrupted

Created on 22 Feb 2020 · 13 comments · Source: containers/podman

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

Steps to reproduce the issue:

podman run -d --net=host --security-opt "label=disable" --name pd --hostname "pd.tikv" docker.io/pingcap/pd \
        --name pd \
        --client-urls http://0.0.0.0:2379 \
        --peer-urls http://0.0.0.0:2380 \
        --advertise-client-urls http://127.0.0.1:2379 \
        --advertise-peer-urls http://127.0.0.1:2380

podman run --net=host --security-opt "label=disable" --name tikv --hostname "tikv.tikv"   docker.io/pingcap/tikv --addr 0.0.0.0:20160 \
    --status-addr 0.0.0.0:20180 \
    --advertise-addr 127.0.0.1:20160 \
    --pd-endpoints 127.0.0.1:2379

Describe the results you received:

[WARN] [server.rs:751] ["check: kernel"] [err="kernel parameters net.core.somaxconn got 4096, expect 32768"]
[WARN] [server.rs:751] ["check: kernel"] [err=
[WARN] [server.rs:751] ["check: kernel"] [err="kernel parameters vm.swappiness got 60, expect 0"]
[INFO] [
[INFO] [<unknown>] 
[INFO] [util.rs:385] ["PD failed to respond"] [err=
[WARN] [client.rs:58] ["validate PD endpoints failed"] [err="
[INFO] [util.rs:425] ["connecting to PD endpoint"] [endpoints
[INFO] [<unknown>] ["New connected subchannel at 0x7fb0aca4b120 for subchannel 0x7fb0ac621200"]
[INFO] [util.rs:425] ["connecting to PD endpoint"] 
[INFO] [<unknown>]
[INFO
[INFO] [<unknown>] ["New connected subchannel at 0x7fb0aca4b060 for subchannel 0x7fb0ac621600
[INFO] [util.rs:484] ["connected to PD leader"] 
[INFO] [util.rs:413] ["all PD endpoints are consistent"] [endpoints
[INFO] [server.rs:244] ["connect to PD cluster"] [
[INFO] [server.rs:759] ["beginning system configuration check"]
[FATAL] [server.rs:770] [

Describe the results you expected:

[WARN] [server.rs:751] ["check: kernel"] [err="kernel parameters net.core.somaxconn got 4096, expect 32768"]
[WARN] [server.rs:751] ["check: kernel"] [err="kernel parameters net.ipv4.tcp_syncookies got 1, expect 0"]
[WARN] [server.rs:751] ["check: kernel"] [err="kernel parameters vm.swappiness got 60, expect 0"]
[INFO] [util.rs:425] ["connecting to PD endpoint"] [endpoints=127.0.0.1:2379]
[INFO] [<unknown>] ["New connected subchannel at 0x7f16852132d0 for subchannel 0x7f1685221000"]
[INFO] [util.rs:425] ["connecting to PD endpoint"] [endpoints=http://127.0.0.1:2379]
[INFO] [<unknown>] ["New connected subchannel at 0x7f1685666150 for subchannel 0x7f1685221200"]
[INFO] [util.rs:425] ["connecting to PD endpoint"] [endpoints=http://127.0.0.1:2379]
[INFO] [<unknown>] ["New connected subchannel at 0x7f1685666120 for subchannel 0x7f1685221400"]
[INFO] [util.rs:484] ["connected to PD leader"] [endpoints=http://127.0.0.1:2379]
[INFO] [util.rs:413] ["all PD endpoints are consistent"] [endpoints="[\"127.0.0.1:2379\"]"]
[INFO] [server.rs:244] ["connect to PD cluster"] [cluster_id=6796362732088887994]
[INFO] [server.rs:759] ["beginning system configuration check"]
[FATAL] [server.rs:770] ["the maximum number of open file descriptors is too small, got 1024, expect greater or equal to 82920"]

(produced by running the same image inside Docker)

Output of podman version:

Version:            1.8.0
RemoteAPI Version:  1
Go Version:         go1.13.6
OS/Arch:            linux/amd64

Output of podman info --debug:

debug:
  compiler: gc
  git commit: ""
  go version: go1.13.6
  podman version: 1.8.0
host:
  BuildahVersion: 1.13.1
  CgroupVersion: v1
  Conmon:
    package: conmon-2.0.10-2.fc31.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.10, commit: 6b526d9888abb86b9e7de7dfdeec0da98ad32ee0'
  Distribution:
    distribution: fedora
    version: "31"
  IDMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  MemFree: 893992960
  MemTotal: 16416837632
  OCIRuntime:
    name: crun
    package: crun-0.12.1-1.fc31.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 0.12.1
      commit: df5f2b2369b3d9f36d175e1183b26e5cee55dd0a
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  SwapFree: 8530223104
  SwapTotal: 9656332288
  arch: amd64
  cpus: 8
  eventlogger: journald
  hostname: knight1
  kernel: 5.4.19-200.fc31.x86_64
  os: linux
  rootless: true
  slirp4netns:
    Executable: /usr/bin/slirp4netns
    Package: slirp4netns-0.4.0-20.1.dev.gitbbd6f25.fc31.x86_64
    Version: |-
      slirp4netns version 0.4.0-beta.3+dev
      commit: bbd6f25c70d5db2a1cd3bfb0416a8db99a75ed7e
  uptime: 55h 56m 21.74s (Approximately 2.29 days)
registries:
  search:
  - docker.io
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - quay.io
store:
  ConfigFile: /home/f.ludwig/.config/containers/storage.conf
  ContainerStore:
    number: 53
  GraphDriverName: overlay
  GraphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-0.7.5-2.fc31.x86_64
      Version: |-
        fusermount3 version: 3.6.2
        fuse-overlayfs: version 0.7.5
        FUSE library version 3.6.2
        using FUSE kernel interface version 7.29
  GraphRoot: /home/f.ludwig/.local/share/containers/storage
  GraphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  ImageStore:
    number: 46
  RunRoot: /run/user/1000
  VolumePath: /home/f.ludwig/.local/share/containers/storage/volumes

Package info (e.g. output of rpm -q podman or apt list podman):

podman-1.8.0-2.fc31.x86_64

Additional environment details (AWS, VirtualBox, physical, etc.):

Labels: Good First Issue, kind/bug

All 13 comments

A friendly reminder that this issue had no activity for 30 days.

Hi bot! Thanks for your friendly reminder, but who exactly are you reminding? :sweat_smile:

It is reminding everyone to take a look at the issue again; otherwise these issues just get lost.

I don't even recall seeing this one, and no one has investigated it, as far as I know.

These logs are not being written by Podman; I would guess they are coming from CNI. @mheon @mccv1r0 WDYT?

Those logs don't look like anything CNI produces.

Concur, this is the app in the container. Looks like output is somehow being cut off.

Can you see the full logs in podman logs?

Yes, the output is from the app itself (output from podman run). Running the exact same commands in Docker produces non-scrambled output.

Ok, so most likely standard out/error is sending some non-printable characters, which is causing us to treat it as the end of a log line. Perhaps a null byte embedded in the output?

A friendly reminder that this issue had no activity for 30 days.

@haircommander @ryanchpowell PTAL

This may or may not be the same issue, but incorrect log output, specifically from HashiCorp Vault, is biting me.

podman run --cap-add=IPC_LOCK -d -p 8200:8200 --name=dev-vault vault
0e82dafadf4b2215918c322fefe5da2e836cea5da43d44494c601e5c2ec03d95
podman logs 0e82dafadf4b2215918c322fefe5da2e836cea5da43d44494c601e5c2ec03d95
==> Vault server configuration:

             Api Address: http:/
                     Cgo: disabled
         Cluster Address: https://0.0.0.0:8201
              Listener 1: tcp (addr: "0.0.0.0:8200", cluster address: "0.0.0.0:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disab
               Log Level: in
                   Mlock: supported: true, enabled: 
           Recovery Mode: fa
                 Storage: inme
                 Version: Vault 

WARNING! dev mode is enabled! In this mode, Vault runs entirely in-
and starts unsealed with a single unseal key. The root token is al
authenticated to the CLI, so you can immediately begin using Vault

You may need to set the following environment variabl

    $ export VAULT_ADDR='http://0.0.0.0:8200'

The unseal key and root token are displayed below in case you w
seal/unseal the Vault or re-authenti

Unseal Key: x82PGc0SOTm/U2Qs8w3gNYeg7ztScXnbDQAK8Zyikws=
Root Token: s.Kp5pT58XmZDXX68pisGYR

The issue is specifically around the root token: as far as I understand it, it is supposed to be 26 characters, but it is not. podman logs has given me tokens of varying lengths, 19, 20, 23 characters, but so far never the full 26, which in turn means I can't log into Vault.

At various times it also mangles the API address, the cluster address, and VAULT_ADDR.

On a subsequent run after podman rm:

podman run --cap-add=IPC_LOCK -d -p 8200:8200 --name=dev-vault vault
55e98c626fd0178adce70ed2f52a03f719d34568fd04a8b3933f23e8f74508d6

podman logs 55e98c626fd0178adce70ed2f52a03f719d34568fd04a8b3933f23e8f74508d6
==> Vault server configuration:

             Api Address: http://0.0.0.0:8200
                     Cgo: disabled
         Cluster Address: https://0.0.0.0:8201
              Listener 1: tcp (addr: "0.0.0.0:8200", cluster address: "0.0.0.0:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
               Log Level: info

           Recovery Mode: false
                 Storage: inmem
                 Version: Vault v1.4

WARNING! dev mode is enabled! In this mode, Vault runs entirely in-memory
and starts unsealed with a single unseal key. The root token is alrea
authenticated to the CLI, so you can immediately begin using Vau

You may need to set the following environment variable:

    $ export VAULT_ADDR='http://0.0.0.0

The unseal key and root token are displayed below in case you want to
seal/unseal the Vault or re-authenticate

Unseal Key: /CmrNUP85+R7yP2JoDslzRpTS/KUxqebX0/vk4IdSjw
Root Token: s.6oJtb8qiAEuE9wKyA

The root token is now 19 characters and VAULT_ADDR does not even include the port; meanwhile, 'Api Address' and 'Listener 1' are complete on this run, unlike the previous one.

And here, just for completeness, are the logs from Docker:

==> Vault server configuration:
             Api Address: http://0.0.0.0:8200
                     Cgo: disabled
         Cluster Address: https://0.0.0.0:8201
              Listener 1: tcp (addr: "0.0.0.0:8200", cluster address: "0.0.0.0:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
               Log Level: info
                   Mlock: supported: true, enabled: false
           Recovery Mode: false
                 Storage: inmem
                 Version: Vault v1.4.2
WARNING! dev mode is enabled! In this mode, Vault runs entirely in-memory
and starts unsealed with a single unseal key. The root token is already
authenticated to the CLI, so you can immediately begin using Vault.
You may need to set the following environment variable:
    $ export VAULT_ADDR='http://0.0.0.0:8200'
The unseal key and root token are displayed below in case you want to
seal/unseal the Vault or re-authenticate.
Unseal Key: 9OFHYXwEt/hW6+TCjjyK83ufCenzyIRITejWmbMl5fY=
Root Token: s.TMfHhiNtWWDp7f6ASJndtHVm

26 characters for the root token, complete lines for the warning, and so on.

This was tested in rootless mode on F32 with podman-1.9.3-1.fc32.x86_64.

I suspect the issue is here: https://github.com/containers/libpod/blob/b2200dbe056ac409461033f646cdd25bcccea521/libpod/container_log.go#L64

It should say:

        if nll.Partial() {
            // buffer partial ("P") chunks until the terminating full line arrives
            partial += nll.Msg
            continue
        } else if len(partial) > 0 {
            // a full ("F") line ends the record: prepend the buffered chunks
            nll.Msg = partial + nll.Msg
            partial = ""
        }
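
For illustration, here is that buffering logic as a small self-contained sketch; the logLine type and reassemble helper are hypothetical stand-ins for libpod's internals, not its actual API. It rebuilds the Mlock line from the raw ctr.log excerpt shown further below:

package main

import "fmt"

// logLine is a hypothetical stand-in for libpod's log line type:
// Partial reports whether conmon flagged the record "P" (partial)
// rather than "F" (full).
type logLine struct {
    Msg     string
    Partial bool
}

// reassemble buffers "P" chunks and prepends them to the next "F"
// record, mirroring the fix proposed above.
func reassemble(lines []logLine) []string {
    var out []string
    partial := ""
    for _, nll := range lines {
        if nll.Partial {
            partial += nll.Msg
            continue
        }
        if len(partial) > 0 {
            nll.Msg = partial + nll.Msg
            partial = ""
        }
        out = append(out, nll.Msg)
    }
    return out
}

func main() {
    // The chunks conmon wrote for the Vault "Mlock" line.
    fmt.Println(reassemble([]logLine{
        {Msg: "    Mlock:", Partial: true},
        {Msg: " supported", Partial: true},
        {Msg: ": true, enabled: fals", Partial: true},
        {Msg: "e", Partial: false},
    })[0])
    // Prints: Mlock: supported: true, enabled: false (with leading spaces)
}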

To add a little more explanation: looking at the raw ctr.log, it appears that the missing text is always the contents of an "F" (full) line that follows a series of "P" (partial) lines.

==> Vault server configuration: 

            Api Address: http://0.0.0.0:8200
                     Cgo: disabled
         Cluster Address: https://0.0.0.0:8201
               Listener 1: tcp (addr: "0.0.0.0:8200", cluster address: "0.0.0.0:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "di
               Log Level: in
                   Mlock: supported: true, enabled: fals
           Recovery Mode:
                 Storage: inme
                 Version: Vault v1.4

vs

2020-06-10T11:35:54.389609973-06:00 stdout F ==> Vault server configuration:
2020-06-10T11:35:54.389609973-06:00 stdout F
2020-06-10T11:35:54.389609973-06:00 stdout P
2020-06-10T11:35:54.389696026-06:00 stdout F             Api Address: http://0.0.0.0:8200
2020-06-10T11:35:54.389696026-06:00 stdout F                      Cgo: disabled
2020-06-10T11:35:54.389696026-06:00 stdout F          Cluster Address: https://0.0.0.0:8201
2020-06-10T11:35:54.389696026-06:00 stdout P               Listener 1: tcp (addr: "0.0.0.0:8200", cluster address: "0.0.0.0
2020-06-10T11:35:54.389702957-06:00 stdout P :8201", max_request_du
2020-06-10T11:35:54.389721487-06:00 stdout P ration: "1m3
2020-06-10T11:35:54.389725031-06:00 stdout P 0s", max_requ
2020-06-10T11:35:54.389728210-06:00 stdout P est_size:
2020-06-10T11:35:54.389731534-06:00 stdout P "33554432"
2020-06-10T11:35:54.389734794-06:00 stdout P , tls: "di
2020-06-10T11:35:54.389738024-06:00 stdout F sabled")
2020-06-10T11:35:54.389741282-06:00 stdout P         
2020-06-10T11:35:54.389744547-06:00 stdout P        Log
2020-06-10T11:35:54.389747790-06:00 stdout P  Level: in
2020-06-10T11:35:54.389750992-06:00 stdout F fo
2020-06-10T11:35:54.389750992-06:00 stdout P     
2020-06-10T11:35:54.389754412-06:00 stdout P       
2020-06-10T11:35:54.389757685-06:00 stdout P     Mlock:
2020-06-10T11:35:54.389760899-06:00 stdout P  supported
2020-06-10T11:35:54.389764249-06:00 stdout P : true, ena
2020-06-10T11:35:54.389767473-06:00 stdout P bled: fals
2020-06-10T11:35:54.389770614-06:00 stdout F e
2020-06-10T11:35:54.389770614-06:00 stdout P
2020-06-10T11:35:54.389774023-06:00 stdout P       Reco
2020-06-10T11:35:54.389777271-06:00 stdout P very Mode:
2020-06-10T11:35:54.389780374-06:00 stdout F  false
2020-06-10T11:35:54.389783661-06:00 stdout P        
2020-06-10T11:35:54.389786910-06:00 stdout P        Sto
2020-06-10T11:35:54.389790225-06:00 stdout P rage: inme
2020-06-10T11:35:54.389793368-06:00 stdout F m
2020-06-10T11:35:54.389793368-06:00 stdout P      
2020-06-10T11:35:54.389796715-06:00 stdout P       
2020-06-10T11:35:54.389799932-06:00 stdout P   Version:
2020-06-10T11:35:54.389803177-06:00 stdout P  Vault v1.4
2020-06-10T11:35:54.389806257-06:00 stdout F .2
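
For anyone who wants to poke at the raw file locally, here is a minimal sketch that reads the format above and reassembles full lines, assuming each record is TIMESTAMP STREAM FLAG MSG with single-space separators (an illustrative parser, not libpod's actual reader):

package main

import (
    "bufio"
    "fmt"
    "os"
    "strings"
)

func main() {
    f, err := os.Open("ctr.log") // hypothetical path to a container's log
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    defer f.Close()

    partial := ""
    sc := bufio.NewScanner(f)
    for sc.Scan() {
        // Each record is: TIMESTAMP STREAM FLAG [MSG]
        fields := strings.SplitN(sc.Text(), " ", 4)
        if len(fields) < 3 {
            continue // not a well-formed record
        }
        msg := ""
        if len(fields) == 4 {
            msg = fields[3]
        }
        if fields[2] == "P" {
            partial += msg // buffer until the terminating "F" record
            continue
        }
        fmt.Println(partial + msg) // "F" record: emit the reassembled line
        partial = ""
    }
}

Fed the excerpt above, it prints the complete Listener 1 and Mlock lines instead of the truncated ones.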

Good catch! Mind making a pull request with that change?
