minikube creating >1000 files in /tmp (1-3 per run)

Created on 20 Jun 2018 · 19 comments · Source: kubernetes/minikube

I'm seeing lots of files being created in /tmp when I run minikube

pcarlton@pcarlton1:~/src/github.hpe.com/paul-carlton2/dev-stuff$ cat /tmp/minikube.pcarlton1.pcarlton.log.WARNING.20180620-093000.14780 
Log file created at: 2018/06/20 09:30:00
Running on machine: pcarlton1
Binary: Built with gc go1.9.1 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
pcarlton@pcarlton1:~/src/github.hpe.com/paul-carlton2/dev-stuff$ ls -l /tmp | grep minikube.pcarlton1.pcarlton.log | wc -l
1452
pcarlton@pcarlton1:~/src/github.hpe.com/paul-carlton2/dev-stuff$ cat /tmp/minikube.pcarlton1.pcarlton.log.INFO.20180620-093015.15416
Log file created at: 2018/06/20 09:30:15
Running on machine: pcarlton1
Binary: Built with gc go1.9.1 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0620 09:30:15.721658   15416 notify.go:109] Checking for updates...
pcarlton@pcarlton1:~/src/github.hpe.com/paul-carlton2/dev-stuff$ ls -l /tmp | grep minikube.pcarlton1.pcarlton.log | wc -l
1480

minikube version
minikube version: v0.28.0
pcarlton@pcarlton1:~/src/github.hpe.com/paul-carlton2/dev-stuff$ echo "";

pcarlton@pcarlton1:~/src/github.hpe.com/paul-carlton2/dev-stuff$ echo "OS:";
OS:
pcarlton@pcarlton1:~/src/github.hpe.com/paul-carlton2/dev-stuff$ cat /etc/os-release
NAME="Ubuntu"
VERSION="16.04.4 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.4 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
pcarlton@pcarlton1:~/src/github.hpe.com/paul-carlton2/dev-stuff$ echo "";

pcarlton@pcarlton1:~/src/github.hpe.com/paul-carlton2/dev-stuff$ echo "VM driver": 
VM driver:
pcarlton@pcarlton1:~/src/github.hpe.com/paul-carlton2/dev-stuff$ grep DriverName ~/.minikube/machines/minikube/config.json
    "DriverName": "kvm2",
pcarlton@pcarlton1:~/src/github.hpe.com/paul-carlton2/dev-stuff$ echo "";

pcarlton@pcarlton1:~/src/github.hpe.com/paul-carlton2/dev-stuff$ echo "ISO version";
ISO version
pcarlton@pcarlton1:~/src/github.hpe.com/paul-carlton2/dev-stuff$ grep -i ISO ~/.minikube/machines/minikube/config.json
        "Boot2DockerURL": "file:///home/pcarlton/.minikube/cache/iso/minikube-v0.28.0.iso",
        "ISO": "/home/pcarlton/.minikube/machines/minikube/boot2docker.iso",

How can I fix this?

help wanted kind/cleanup lifecycle/rotten priority/awaiting-more-evidence 2019q2

All 19 comments

Is it a problem for you when a tool creates files in /tmp?

A few files would be OK, but we are talking about hundreds of files:
ls -l /tmp | grep minikube.pcarlton1.pcarlton.log | wc -l
1452

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

Generally, minikube only creates a single .INFO file per execution, with the format of:

minikube.<hostname>.<username>.log.INFO.<date>-<time>.<pid>

minikube will also symlink the latest .INFO file to /tmp/minikube.INFO. These two behaviors are inherited from https://github.com/golang/glog

If minikube needs to log lines at ERROR or WARNING level, these will be extracted out into .ERROR and .WARNING files appropriately. At maximum, you should be seeing 3 files per minikube execution, plus a set of symlinks to the latest run.

We could presumably change this by switching to a different logging library, but I'm not yet convinced it would be of great benefit. Will leave this bug open though, just in case.

FWIW, on my workstation where I execute minikube dozens of times a day, I had 142 files in /tmp. My OS also automatically cleans up stale /tmp files, however.
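If the accumulation bothers you, one low-tech option (nothing minikube-specific, just a standard shell sketch; adjust the age threshold to taste) is to prune old log files periodically:

# delete minikube glog files in /tmp that are older than 7 days
find /tmp -maxdepth 1 -name 'minikube.*.log.*' -mtime +7 -delete

# the most recent run is still reachable through the glog symlink (if present)
ls -l /tmp/minikube.INFO

A cron job or a systemd-tmpfiles rule could run the same cleanup automatically.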

@paulcarlton,

I noticed that minikube on my system was creating a set of three files (INFO, ERROR, and WARNING) in /tmp pretty much every minute, sometimes several sets per minute. Eventually some kube operations would take seemingly forever to return, and I started having other system issues (in my case it was a VM running minikube). I found loads of these files in /tmp (hundreds of thousands in my case).

I think the problem is that the minikube executable does its business launching the cluster and then exits, but in my systemd configuration I had set the service to Restart=always with RestartSec=10. So I assume minikube was being restarted 10 seconds after it exited, creating a new set of log files for every re-launch.

I don't recall at this point whether I originally created the systemd unit (CentOS 7) at /usr/lib/systemd/system/minikube.service by hand or whether it came from some package installation. Probably I crafted it manually and erroneously set Restart=always, which was certainly causing all sorts of unnecessary chaos for me. You might want to double-check that minikube isn't being continually restarted, such as by systemd as described above.

Presently I have the systemd service set (with vm-driver=none for Docker-only) as follows with minikube installed in /usr/local/bin:

[Service]
Type=simple
ExecStart=/usr/local/bin/minikube start --vm-driver=none
Restart=no
StartLimitInterval=0
RestartSec=30
ExecStop=/usr/local/bin/minikube stop

Additional suggestions for better tuning of the systemd startup file for minikube are welcome. I didn't see a suggested systemd config with the minikube installation notes, but perhaps I overlooked it.
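A sketch of what a fuller unit might look like (untested, assuming minikube lives in /usr/local/bin and the none driver is used) would use Type=oneshot with RemainAfterExit, so systemd does not treat the exiting minikube start as a failure and never needs a Restart= line:

[Unit]
Description=minikube local Kubernetes cluster
After=network-online.target docker.service
Wants=network-online.target

[Service]
# minikube start brings the cluster up and exits, so run it as a one-shot task
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/minikube start --vm-driver=none
ExecStop=/usr/local/bin/minikube stop

[Install]
WantedBy=multi-user.target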

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

I'm having a similar issue with the difference that I'm not running minikube (or at least I thought I wasn't), I only have it installed.

After reading all the comments here, something crossed my mind: I'm running zsh and oh-my-zsh with the minikube plugin. Every time I open a new terminal, the plugin runs minikube completion zsh and new log files are created. As I use a tiling window manager (i3), I am opening and closing terminals all the time, hence generating dozens of log files after a few days without cleaning my /tmp.
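As a workaround (just an idea; the cache path below is made up), one could drop the plugin and source a cached copy of the completion script instead, so opening a terminal does not invoke minikube at all:

# in ~/.zshrc: regenerate the cached completion only when it is missing
mkdir -p ~/.cache
if [ ! -f ~/.cache/minikube-completion.zsh ]; then
  minikube completion zsh > ~/.cache/minikube-completion.zsh
fi
source ~/.cache/minikube-completion.zsh

The log files would then only appear when the cache is regenerated, for example after upgrading minikube.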

/reopen

@tstromberg: Reopened this issue.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

I don't even have minikube installed on my system, but it indeed creates hundreds of files per minute. Has anyone come across this issue?

I have a similar issue, and I think it is related to the VS Code Cloud Code extension's bundled minikube at .cache/cloud-code/installer/google-cloud-sdk/bin/minikube

https://github.com/GoogleCloudPlatform/cloud-code-vscode/issues/286

Kill VS Code and there are no more /tmp/minikube* files (there were about a thousand of them).
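For anyone unsure which tool keeps invoking minikube, a rough diagnostic (an assumption-laden sketch, not an official command; very short-lived runs may still slip through) is to watch the file count and the process table together:

# every 5 seconds: count minikube log files and list any running minikube processes
watch -n 5 'ls /tmp/minikube.* 2>/dev/null | wc -l; pgrep -af minikube'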
