Running a highstate on minions with some NFS mounts results in the mount being remounted every time. This did not occur under 2014.1.
----------
[INFO ] Completed state [/nfsmnt] at time 17:21:20.145018
[INFO ] Running state [/nfsmnt] at time 17:21:20.146726
[INFO ] Executing state mount.mounted for /nfsmnt
[INFO ] Executing command 'mount -l' in directory '/root'
[INFO ] Executing command 'mount -l' in directory '/root'
[INFO ] Executing command 'mount -o rw,tcp,bg,hard,intr,remount -t nfs nfshost:/nfsmnt /nfsmnt ' in directory '/root'
[INFO ] {'umount': 'Forced remount because options changed'}
[INFO ] Completed state [/nfsmnt] at time 17:21:20.267764
...
ID: /nfsmnt
Function: mount.mounted
Result: True
Comment:
Started: 10:04:16.078806
Duration: 68.802 ms
Changes:
----------
umount:
Forced remount because options changed
Running mount -l shows the following:
...
nfshost:/nfsmnt on /nfsmnt type nfs (rw,remount,tcp,bg,hard,intr,addr=x.x.x.x)
I can only assume it's breaking due to the addr option (which appears to be filled in automatically by the OS; it was never manually specified as a mount option) or the ordering.
The mount.mounted state looks as follows:
/nfsmnt:
mount.mounted:
- device: nfshost:/nfsmnt
- fstype: nfs
- opts: rw,tcp,bg,hard,intr
# salt-call --versions-report
Salt: 2014.7.0
Python: 2.6.8 (unknown, Nov 7 2012, 14:47:45)
Jinja2: 2.5.5
M2Crypto: 0.21.1
msgpack-python: 0.1.12
msgpack-pure: Not Installed
pycrypto: 2.3
libnacl: Not Installed
PyYAML: 3.08
ioflo: Not Installed
PyZMQ: 14.3.1
RAET: Not Installed
ZMQ: 4.0.4
Mako: Not Installed
@nvx Thank you for this very helpful bug report! We'll check this out.
This might be related to #18474
@garethgreenaway I don't think so: applying the patch manually didn't fix the issue for me. Besides, I believe the option check should not fail in the first place; as @nvx said, no options changed.
This issue arose for me when upgrading salt-2014.1.10-4.el5.noarch -> salt-2014.7.0-3.el5.noarch if that helps.
As a guess, I would agree with @nvx: it's remounting because the addr option appears in the list of mounted file systems but not in the mount options specified in the state. There are a few options that we already flag as "hidden"; we may have to do the same for the addr option.
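To make that failure mode concrete, here is a minimal, illustrative sketch (not Salt's actual implementation) of why a naive option comparison misfires: the kernel adds options such as addr=... to the active mount that were never in the state, and drops others, so the comparison needs a skip list for "invisible" options. The option names in the skip sets below are only examples.

INVISIBLE_OPTS = {'bg', 'retry', '_netdev'}           # example: not reported once mounted
INVISIBLE_KEYS = {'addr', 'clientaddr', 'mountaddr'}  # example: key=value opts added by the kernel

def options_differ(state_opts, active_opts):
    # Compare only the options that are meaningful on both sides.
    def significant(opts):
        return {o for o in opts
                if o not in INVISIBLE_OPTS
                and o.split('=', 1)[0] not in INVISIBLE_KEYS}
    return significant(state_opts) != significant(active_opts)

state_opts = ['rw', 'tcp', 'bg', 'hard', 'intr']              # from the state
active_opts = ['rw', 'tcp', 'hard', 'intr', 'addr=10.0.0.1']  # parsed from `mount -l`
print(options_differ(state_opts, active_opts))  # False with the skip lists; True without them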
Not sure if "me too"s are useful, but I'm experiencing this as well, and also with an NFS mount. My SLS entry looks like this:
/egtnas:
file.directory:
- makedirs: True
mount.mounted:
- device: 10.0.7.48:/
- fstype: nfs
- mkmnt: True
- opts: rw,proto=tcp,port=2049
and mount -l includes this line:
10.0.7.48:/ on /egtnas type nfs (rw,proto=tcp,port=2049)
but when I run salt-call -l debug state.highstate, I see this in the log output:
[INFO ] Executing state mount.mounted for /egtnas
[INFO ] Executing command 'mount -l' in directory '/root'
[DEBUG ] stdout: /dev/xvda1 on / type ext4 (rw) [cloudimg-rootfs]
<... snip ...>
10.0.7.48:/ on /egtnas type nfs (rw,proto=tcp,port=2049)
[INFO ] Executing command 'mount -l' in directory '/root'
[DEBUG ] stdout: /dev/xvda1 on / type ext4 (rw) [cloudimg-rootfs]
<... snip ...>
10.0.7.48:/ on /egtnas type nfs (rw,proto=tcp,port=2049)
[INFO ] Executing command 'mount -o rw,proto=tcp,port=2049,remount -t nfs 10.0.7.48:/ /egtnas ' in directory '/root'
[INFO ] {'umount': 'Forced remount because options changed'}
[INFO ] Completed state [/egtnas] at time 11:10:16.359984
and then the output includes this:
ID: /egtnas
Function: mount.mounted
Result: True
Comment:
Started: 11:10:16.335117
Duration: 24.867 ms
Changes:
----------
umount:
Forced remount because options changed
I'm running version 2014.7.0+ds-2trusty1 from the PPA:
Salt: 2014.7.0
Python: 2.7.6 (default, Mar 22 2014, 22:59:56)
Jinja2: 2.7.2
M2Crypto: 0.21.1
msgpack-python: 0.3.0
msgpack-pure: Not Installed
pycrypto: 2.6.1
libnacl: Not Installed
PyYAML: 3.10
ioflo: Not Installed
PyZMQ: 14.0.1
RAET: Not Installed
ZMQ: 4.0.4
Mako: 0.9.1
Planning to take a look at this today, hoping that some of the fixes I put in that got merged this morning addressed this. Thanks for the reports.
The fix provided by @garethgreenaway in #18978 changed the situation, but didn't fix it for me. I'm now using salt/states/mount.py from the 2014.7 branch as of ec9061983e822fa95597e155e811c18d4bf278e4 and get this result from my NFS mount states:
----------
ID: library-storage-mount
Function: mount.mounted
Name: /media/remotefs/library
Result: None
Comment: Remount would be forced because options (bg) changed
Started: 23:35:40.916259
Duration: 73.019 ms
Changes:
@eliasp Can you include your state?
@garethgreenaway Sure, sorry…
{{ share }}-storage-mount:
mount.mounted:
- name: {{ pillar['storage']['mountroot'] }}/{{ share }}
- device: {{ host }}:/{{ share }}
- fstype: nfs
- mkmnt: True
- opts:
- defaults
- bg
- soft
- intr
- timeo=5
- retrans=5
- actimeo=10
- retry=5
- require:
- pkg: nfs-common
The current mount already includes the bg option that state.highstate test=True wants to change:
$ mount | ack-grep -i library
134.2.xx.xx:/library on /media/remotefs/library type nfs (rw,bg,soft,intr,timeo=5,retrans=5,actimeo=10,retry=5)
The corresponding entry in /proc/self/mountinfo doesn't contain the bg option:
53 22 0:37 / /media/remotefs/library rw,relatime - nfs 134.2.xx.xx:/library rw,vers=3,rsize=32768,wsize=32768,namlen=255,acregmin=10,acregmax=10,acdirmin=10,acdirmax=10,soft,proto=tcp,timeo=5,retrans=5,sec=sys,mountaddr=134.2.xx.xx,mountvers=3,mountport=33308,mountproto=udp,local_lock=none,addr=134.2.xx.xx
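For anyone chasing this, a quick throwaway helper (not part of Salt) to see what the kernel actually records for a mount point, including the super-block options that later comments refer to as superopts. It assumes a kernel new enough to provide /proc/self/mountinfo; the old RHEL5 2.6.18 kernel mentioned further down in this thread does not expose that data.

def mountinfo(target):
    with open('/proc/self/mountinfo') as fh:
        for line in fh:
            pre, _, post = line.partition(' - ')   # the '-' field separates the two halves
            fields = pre.split()
            if fields[4] != target:                # field 5 is the mount point
                continue
            fstype, source, superopts = post.split()[:3]
            return {'opts': fields[5].split(','),       # per-mount options
                    'superopts': superopts.split(','),  # super-block options
                    'fstype': fstype,
                    'device': source}

print(mountinfo('/media/remotefs/library'))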
Besides that, I stumbled upon a traceback while working on this: if the device/resource is currently busy, the mount state will raise a traceback instead of handling it more gracefully. I'll file a separate issue for this.
Another related case I found here with a CIFS mount:
State:
elite-mount:
mount.mounted:
- name: {{ pillar['linux']['remotefspath'] }}/{{ name }}_share
- device: //{{ data.address }}/{{ data.share }}
- fstype: cifs
- mkmnt: True
- opts:
- username={{ data.user }}
- password={{ data.password }}
- _netdev
- soft
state.highstate test=True:
ID: elite-mount
Function: mount.mounted
Name: /media/remotefs/elite_share
Result: None
Comment: Remount would be forced because options (password=plain-text-password-redacted) changed
Started: 01:49:46.970902
Duration: 86.95 ms
Changes:
Entry in /proc/self/mountinfo:
45 22 0:35 / /media/remotefs/elite_share rw,relatime - cifs //192.168.1.2/data_mirror rw,vers=1.0,cache=strict,username=backupuser,domain=8B9865J,uid=0,noforceuid,gid=0,noforcegid,addr=192.168.1.2,file_mode=0755,dir_mode=0755,nounix,serverino,rsize=61440,wsize=65536,actimeo=1
So there are two issues here: the state output leaks the password parameter as plaintext (redacted above as plain-text-password-redacted)… and another option causes mount.mounted to stumble:
Forced remount because options (soft) changed
This is caused by the CIFS state described in my previous comment.
Is soft a valid flag for cifs?
Yes, soft is a valid flag which is used by default, so my usage of it in the state is actually redundant but not wrong.
Created #19369 to address some additional issues I ran into in my setup. With #19369 applied everything works for me so far.
Thanks for following up everyone! @nvx with the two pull requests applied, is this issue fixed for you as well?
I can confirm that #19369 will help with this problem... I have the same problem with nfs mounts that have a "bg" option set. The problem is that these options are not reflected in the proc filesystem.
In which release could I see the fix?
This will be in 2014.7.1 and 2015.2.0.
If you want to test it now, place salt/states/mount.py from the 2014.7 branch into your states repository/directory in a new subdirectory named _states. Then run salt your-minion saltutil.sync_all to distribute it to the minion(s) where you want to test it. See also Dynamic Module Distribution for more details on how to do this. Don't forget to remove _states/mount.py once you've updated your minions to the next release.
Hmm I updated states/mount.py from the 2014.7 branch and I now get this error:
----------
ID: /nfsmnt
Function: mount.mounted
Result: False
Comment: An exception occurred in this state: Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/salt/state.py", line 1533, in call
**cdata['kwargs'])
File "/usr/lib64/python2.6/contextlib.py", line 34, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/lib/python2.6/site-packages/salt/utils/context.py", line 41, in func_globals_inject
yield
File "/usr/lib/python2.6/site-packages/salt/state.py", line 1533, in call
**cdata['kwargs'])
File "/var/cache/salt/minion/extmods/states/mount.py", line 191, in mounted
if opt not in active[real_name]['opts'] and opt not in active[real_name]['superopts'] and opt not in mount_invisible_options:
KeyError: 'superopts'
Started: 14:43:01.999878
Duration: 40.286 ms
Changes:
I tried updating modules/mount.py as well but it didn't help.
/nfsmnt:
mount.mounted:
- device: nfshost:/nfsmnt
- fstype: nfs
- opts: rw,tcp,bg,hard,intr
This could be an issue in 2014.7 unrelated to the fix for this issue, though. Is there a known-good revision with this issue fixed that I can test to confirm the fix?
@nvx what OS? distribution? Kernel version?
Same as described in the initial post.
RHEL5 x64
Linux boxen 2.6.18-400.1.1.el5 #1 SMP Sun Dec 14 06:01:17 EST 2014 x86_64 x86_64 x86_64 GNU/Linux
Salt: 2014.7.0
Python: 2.6.8 (unknown, Nov 7 2012, 14:47:45)
Jinja2: 2.5.5
M2Crypto: 0.21.1
msgpack-python: 0.1.12
msgpack-pure: Not Installed
pycrypto: 2.3
libnacl: Not Installed
PyYAML: 3.08
ioflo: Not Installed
PyZMQ: 14.3.1
RAET: Not Installed
ZMQ: 4.0.4
Mako: Not Installed
Thanks. 2.6.18 obviously doesn't have the superopts bits in the proc file. Will take a look and submit a fix.
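For reference, a defensive-lookup sketch of the check that raised the KeyError above (variable names taken from the traceback; this is not necessarily the fix that was merged): fall back to an empty list when the kernel doesn't report super options, so the comparison degrades gracefully on old kernels.

def opt_changed(opt, active_entry, mount_invisible_options):
    # active_entry is one entry from the mount module's view of active mounts;
    # old kernels may not provide a 'superopts' key at all.
    superopts = active_entry.get('superopts', [])
    return (opt not in active_entry['opts']
            and opt not in superopts
            and opt not in mount_invisible_options)

print(opt_changed('bg', {'opts': ['rw', 'tcp', 'hard', 'intr']}, ['bg', 'retry']))  # False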
Was able to duplicate this in a docker instance. A couple of questions: are you running it inside a Docker instance? What happens when you run the blkid command on the machine in question?
The machine isn't running under Docker. (Docker doesn't run on such old kernels I believe). It's a VM running under VMware.
blkid output
/dev/mapper/VolGroup00-swap: TYPE="swap"
/dev/mapper/VolGroup00-var: UUID="d5869585-274f-4f08-a4ad-8a61b96b831c" TYPE="ext3"
/dev/mapper/VolGroup00-root: UUID="359fb50e-a827-4a72-8328-db5db5c71747" TYPE="ext3"
/dev/sda1: LABEL="/boot" UUID="8269ff87-977a-4eeb-aa68-48d374ba305f" TYPE="ext3" SEC_TYPE="ext2"
/dev/VolGroup00/root: UUID="359fb50e-a827-4a72-8328-db5db5c71747" TYPE="ext3"
/dev/VolGroup00/swap: TYPE="swap"
/dev/cdrom: LABEL="VMware Tools" TYPE="iso9660"
/dev/hda: LABEL="VMware Tools" TYPE="iso9660"
Note that the filesystem in question isn't listed, as it's an NFS mount.
Does the following work?
salt-call disk.blkid
Yes, but again it doesn't show NFS mounts either; it shows the same info as I pasted above, just parsed, of course.
Quick question, did you update the states/mount.py _and_ the modules/mount.py?
I initially tried only states/mount.py, but then tried also updating modules/mount.py as well. Both resulted in the same error.
Hi, with CIFS there is also another problem...
When using a CIFS mount with gid=groupname,uid=username, the uid/gid is compared against the numeric uid/gid reported by procfs. Hence, we would have to translate the names/groups to their numeric values before trying to detect configuration changes.
Is there anybody who can take care of this? @garethgreenaway maybe?
Otherwise I could also help...
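A minimal sketch of the translation described above, using the standard library pwd/grp lookups (illustrative only, not Salt code; the example names assume accounts that exist on the minion):

import grp
import pwd

def normalize_owner_opt(opt):
    # Turn uid=<name>/gid=<name> into the numeric form that procfs reports,
    # so a symbolic name in the state compares equal to the active mount.
    key, sep, value = opt.partition('=')
    if not sep or value.isdigit():
        return opt
    if key == 'uid':
        return 'uid=%d' % pwd.getpwnam(value).pw_uid
    if key == 'gid':
        return 'gid=%d' % grp.getgrnam(value).gr_gid
    return opt

print(normalize_owner_opt('uid=root'))  # uid=0
print(normalize_owner_opt('gid=root'))  # gid=0 on most Linux systems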
@nvx And you ran the following after putting modules/mount.py in place?
salt your-minion saltutil.sync_all
Indeed.
FYI - I had to add 'ac', 'vers', 'auto', 'user', 'nouser' to mount_invisible_options and 'vers' to mount_invisible_keys in states/mount.py for some NFS mount points. I was always getting the strange error saying that a flag had changed when it actually had not.
salt-2014.7.1-180.10.noarch, opensuse.
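For anyone needing the same stop-gap, the change described above amounts to extending the two skip lists in a local copy of states/mount.py; roughly like this (a sketch only: the pre-existing entries and exact location vary by Salt release):

mount_invisible_options = [
    # ... entries shipped with your release ...
    'ac', 'vers', 'auto', 'user', 'nouser',   # additions described above
]
mount_invisible_keys = [
    # ... entries shipped with your release ...
    'vers',   # key=value option whose value can change once mounted
]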
Thanks for the updates here, everyone. It looks like there might still be some more work to be done here, such as fixing up the error stating that a flag had changed, as @cr1st1p mentioned. @nvx It looks like this isn't fixed for you either? Has anyone tried this on the 2014.7.2 release?
@RobertFach I think this issue with CIFS might be useful to be its own issue? Not sure. I think if you'd like to tackle that though, we'd happily accept your pull request.
My issue was that I received an exception when I tried the fix, so I couldn't test if it actually fixed the issue (RHEL5).
I can easily re-test whether the RHEL5 regression has been fixed, though.
@rallytime I'll create a new issue and prepare a pull request for the cifs issue.
Note that 2014.7.1 has regressed further, causing this error when run:
ID: /nfsmnt
Function: mount.mounted
Result: False
Comment: An exception occurred in this state: Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/salt/state.py", line 1529, in call
**cdata['kwargs'])
File "/usr/lib64/python2.6/contextlib.py", line 34, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/lib/python2.6/site-packages/salt/utils/context.py", line 41, in func_globals_inject
yield
File "/usr/lib/python2.6/site-packages/salt/state.py", line 1529, in call
**cdata['kwargs'])
File "/usr/lib/python2.6/site-packages/salt/states/mount.py", line 201, in mounted
if opt not in active[real_name]['opts'] and opt not in active[real_name]['superopts'] and opt not in mount_invisible_options:
KeyError: 'superopts'
Started: 10:18:15.216899
Duration: 195.849 ms
Changes:
Note this is the same mount and mount options as mentioned in the original post (same system, etc).
@nvx This issue is related to #21215 and has already been fixed with #21269. But maybe you can provide more details about your environment; I have seen that the error might occur under FreeBSD, Solaris and under several circumstances in CentOS. It was first reported under CentOS (container).
RHEL 5 (so a fairly old kernel). Looking at the fix you mentioned, I suspect it would do the trick; I'll give it a shot and report back whether it resolves the issue.
I just hit this with 2014.7.2. NFS remounts every time: first on "noauto", then on "wsize=...".
@joshland Which OS do you use? Can you please paste your mount.mounted state? Are you going to provide a fix for that or should I do that?
Cheers,
Hit this on 2014.7.5
Salt: 2014.7.5
Python: 2.7.3 (default, Mar 13 2014, 11:03:55)
Jinja2: 2.6
M2Crypto: 0.21.1
msgpack-python: 0.1.10
msgpack-pure: Not Installed
pycrypto: 2.6
libnacl: Not Installed
PyYAML: 3.10
ioflo: Not Installed
PyZMQ: 13.1.0
RAET: Not Installed
ZMQ: 3.2.3
Mako: 0.7.0
Debian source package: 2014.7.5+ds-1~bpo70+1
I'm having the same problem. Nothing changed in fstab but it still forces a remount.
## fstab.sls
/media/vault:
mount.mounted:
- device: nfs:/mnt/nfs/website.com/vault
- fstype: nfs
- mkmnt: True
- opts: _netdev,auto,soft,retrans=10,nfsvers=3
$ sudo salt "app-03" state.highstate
ID: /media/vault
Function: mount.mounted
Result: True
Comment: Target was already mounted. Entry already exists in the fstab.
Started: 01:06:39.892423
Duration: 135.991 ms
Changes:
----------
umount:
Forced unmount and mount because options (nfsvers=3) changed
$ salt-master --versions-report
Salt: 2015.5.0
Python: 2.7.3 (default, Dec 18 2014, 19:03:52)
Jinja2: 2.6
M2Crypto: 0.21.1
msgpack-python: 0.1.10
msgpack-pure: Not Installed
pycrypto: 2.4.1
libnacl: Not Installed
PyYAML: 3.10
ioflo: Not Installed
PyZMQ: 14.0.1
RAET: Not Installed
ZMQ: 4.0.4
Mako: Not Installed
Debian source package: 2015.5.0+ds-1precise1
Ran into this again except with CIFS mounts on RHEL7 and Salt 2015.5.0
----------
ID: /test
Function: mount.mounted
Name: /test
Result: True
Comment: Target was already mounted. Entry already exists in the fstab.
Started: 09:13:51.230025
Duration: 179.521 ms
Changes:
----------
umount:
Forced remount because options (credentials=/example.cred) changed
I feel like the mount.mounted state should have a list of options to skip when checking, so that things like nfsvers, credentials, etc. can be specified there (but still have those options passed to the mount command if the filesystem isn't mounted at all).
Hi, can you please also add backup-volfile-servers to the invisible options? This option is required when working with GlusterFS.
@infestdead Please file a separate issue.
Incidentally, I got around this by adding an onlyif to test for NFS at the mountpoint... (this is probably only useful if your options _don't_ change often):
/my/mount/point:
mount.mounted:
- device: xx.xx.xx.xx:/my/nfs
- fstype: nfs
- opts: vers=3
- persist: True
- mkmnt: True
- onlyif:
- stat --file-system --format=%T /my/mount/point | grep -v nfs
cc: @thatisgeek
Having the same problem on 2015.8.1:
mount:
mount.mounted:
- name: /path/to/mount
- device: x.x.x.x:/path
- fstype: nfs
- mkmnt: true
- persist: true
- opts: vers=4
----------
ID: mount
Function: mount.mounted
Name: /path/to/mount
Result: True
Comment: Target was already mounted. Entry already exists in the fstab.
Started: 11:52:01.612492
Duration: 254.1 ms
Changes:
----------
umount:
Forced unmount and mount because options (vers=4) changed
@garethgreenaway Just a ping here to see if you were ever able to swing back around to this one. Looks like several fixes were submitted, but a final resolution hasn't been applied yet?
I'm also experiencing the same problem with cifs
on 2015.8.3
Using state:
/mnt/drive:
mount.mounted:
- device: //path/to/device
- fstype: cifs
- mkmnt: True
- opts:
- domain=MyDomain
- file_mode=0755
- credentials=/path/to/credentials
- exec,noperm
- dump: 0
- pass_num: 0
Running this the second time yields:
----------
ID: /mnt/drive
Function: mount.mounted
Result: True
Comment: Target was already mounted. Entry already exists in the fstab.
Started: 01:25:41.807448
Duration: 69.868 ms
Changes:
----------
umount:
Forced remount because options (exec,noperm) changed
but none of the options have changed in fstab.
Hmm. For @eywalker: I think you should use either a list or a comma-separated string for opts, not both. It seems that the fourth entry is interpreted by Salt as a single option named exec,noperm.
@dr4Ke: Separating exec,noperm into distinct entries doesn't solve the problem; the message now says Forced remount because options (exec) changed.
If I remove exec, I now get the message Forced remount because options (credentials=/path/to/credentials) changed.
Among the ones I checked, the following options cause a forced remount despite no change in values: exec, credentials, netbiosname, gid. On the other hand, a forced remount does _not_ occur with the following options: domain, file_mode, noperm.
@eywalker When the CIFS volume is mounted, can you include a comment that shows the output of mount and the line relevant to the volume in question? The mount module will check both /etc/fstab and the options on the current mount to see if a remount is needed. I'm wondering if some of those options need to be flagged as invisible options if they don't show up in the output from the mount command.
@pkruithof Can you check the output of mount also? I ran a quick test and I'm seeing the same thing you're seeing, and I suspect it's because of the same reasons. In my mount line for an NFS v4 mount, vers=4 is showing up as vers=4.0, which is causing the remount attempts.
In /etc/fstab (paths redacted):
192.168.12.6:/remote/path /mount/path nfs vers=4 0 0
@pkruithof Perfect. Can you also provide the output after running the mount command? I'm curious to see what options are making it into the actual mounted file system vs. what they look like in the salt state. As mentioned, I saw a discrepancy when testing an NFSv4 filesystem which caused Salt to try to remount it during each run.
You mean the output when I mount manually?
@pkruithof Run the mount command in a terminal, copy & paste the output. I'm curious to see what options end up in there.
$ sudo mount -v /mount/path
mount.nfs: timeout set for Wed Feb 10 09:18:38 2016
mount.nfs: trying text-based options 'vers=4,addr=192.168.12.6,clientaddr=192.168.12.5'
$ mount
192.168.12.6:/remote/path on /mount/path type nfs4 (rw,relatime,vers=4.0,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.12.5,local_lock=none,addr=192.168.12.6)
Does that help?
@pkruithof Perfect! Exactly what I was seeing as well. The vers option is showing up in the mounted file system as vers=4.0, but when specified in the salt state it's vers=4; salt sees the difference and forces a remount. Looks like a scenario we need to account for; I'll look at a fix later today.
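A minimal sketch of the kind of loose comparison that would be needed for version options (illustrative only, not the change that ended up in Salt): treat a bare major version in the state as matching the major.minor form the kernel reports.

def vers_matches(state_opt, active_opt):
    skey, _, sval = state_opt.partition('=')
    akey, _, aval = active_opt.partition('=')
    if skey not in ('vers', 'nfsvers') or akey not in ('vers', 'nfsvers'):
        return state_opt == active_opt
    # 'vers=4' should match the reported 'vers=4.0'; an explicit minor must still match exactly.
    return aval == sval or aval.startswith(sval + '.')

print(vers_matches('vers=4', 'vers=4.0'))    # True
print(vers_matches('vers=4.1', 'vers=4.0'))  # False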
@garethgreenaway any progress on this by any chance?
I see this in 2015.8.11
/srv/salt-images:
mount.mounted:
- device: {{ salt['pillar.get']('nfsmount') }}
- fstype: nfs
- opts: nfsvers=3,rsize=32768,wsize=32768,noatime,nodiratime
- dump: 0
- pass_num: 0
- persist: True
- mkmnt: True
ID: /srv/salt-images
Function: mount.mounted
Result: True
Comment: Target was already mounted. Entry already exists in the fstab.
Started: 14:03:06.516272
Duration: 139.938 ms
Changes:
----------
umount:
Forced unmount and mount because options (nfsvers=3) changed
Same in 2016.3.3:
# Mount CIFS share
backups_mount:
mount.mounted:
- name: {{ backup_directory }}
- device: //my.server.com/backups
- fstype: cifs
- opts: vers=3.0,credentials=/etc/backups.cifs,uid=900,gid=34,file_mode=0660,dir_mode=0770
- require:
- file: {{ backup_directory }}
- file: /etc/backups.cifs
----------
ID: /etc/backups.cifs
Function: file.managed
Result: True
Comment: File /etc/backups.cifs is in the correct state
Started: 14:31:52.839528
Duration: 22.704 ms
Changes:
----------
ID: backups_mount
Function: mount.mounted
Name: /srv/backups
Result: True
Comment: Target was already mounted. Entry already exists in the fstab.
Started: 14:31:52.864405
Duration: 93.889 ms
Changes:
----------
umount:
Forced remount because options (credentials=/etc/backups.cifs) changed
I had this issue as well, but specifying the opts as a list helped:
- opts:
- noatime
- nobarrier
I'm having the same issue on 2016.11.2 with the size option on a tmpfs mount:
mount.mounted:
- device: tmpfs
- fstype: tmpfs
- opts:
- rw
- size=256M
- noatime
- mkmnt: true
ID: /var/lib/varnish
Function: mount.mounted
Result: True
Comment: Target was already mounted. Entry already exists in the fstab.
Started: 02:05:46.434034
Duration: 20.417 ms
Changes:
----------
umount:
Forced remount because options (size=256M) changed
This is causing problems for us as well.
My state:
{% for export in pillar.get('nfs_exports',[]) %}
/mnt/{{ export }}:
mount.mounted:
- device: 10.10.10.25:/var/nfs/{{ export }}
- fstype: nfs
- opts: auto,noatime,nolock,bg,nfsvers=4,intr,tcp,actimeo=1800
- persist: True
- mkmnt: True
- require:
- pkg: nfs-common
{% endfor %}
Output:
ID: /mnt/report_export
Function: mount.mounted
Result: True
Comment: Target was already mounted. Updated the entry in the fstab.
Started: 20:50:48.478451
Duration: 105.511 ms
Changes:
----------
persist:
update
umount:
Forced unmount and mount because options (nfsvers=4) changed
I tried removing the nfsvers=4 option from the state to see if that helped, but on the next highstate it complained about a different option:
ID: /mnt/report_export
Function: mount.mounted
Result: True
Comment: Target was already mounted. Updated the entry in the fstab.
Started: 20:51:51.493903
Duration: 65.774 ms
Changes:
----------
persist:
update
umount:
Forced unmount and mount because options (nolock) changed
This issue seems to have been forgotten.
@corywright Compare the options in your state with the options that are listed in /proc/mounts. Mount options are a royal pain, since what you pass when the mount command is issued can differ from what actually ends up being used. I've seen nfsvers=4 translate into nfsvers=4.0 or similar, and I wonder if nolock is one that ends up being hidden in the /proc/mounts file.
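A throwaway snippet (not part of Salt) that prints the discrepancy directly: which options the state asks for are missing from /proc/mounts, and which extra options the kernel reports. The mount point and option string below are taken from the state earlier in this thread.

def diff_opts(mount_point, state_opts):
    with open('/proc/mounts') as fh:
        for line in fh:
            device, target, fstype, opts = line.split()[:4]
            if target == mount_point:
                active = set(opts.split(','))
                wanted = set(state_opts.split(','))
                return {'missing_from_active': sorted(wanted - active),
                        'extra_in_active': sorted(active - wanted)}

print(diff_opts('/mnt/report_export',
                'auto,noatime,nolock,bg,nfsvers=4,intr,tcp,actimeo=1800'))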
@garethgreenaway Thanks Gareth. I can see the differences there.
Is there a solution? Or should the mount.mounted state with - persist: True be avoided for NFS volumes?
It seems like the documentation currently directs users to implement states that can be problematic in production (unexpectedly unmounting busy nfs volumes during highstates).
It stands to reason that a forced unmount and mount because options changed in the case of "vers=4" vs. "vers=4.0" is counter-intuitive. If mount(8) accepts "4" as an alias for "4.0" there, so should the salt mount module. I realize it will be a minor pain to maintain an endless list of these aliases, but such is life.
@shallot the salt module and state module do accept "4" as the value for nfsvers, the issue is on the system side outside of Salt. That "4" can be translated to 4.0, 4.1, etc. so the next time the state runs the values are different and the volume is remounted.
@garethgreenaway sure, but when the system being modified doesn't actually see a difference between two invocations of such a salt state, it doesn't make sense to remount.
I can fathom a situation where someone applies such a salt state with the actual intention of having the mount upgraded to whatever is the latest 4.x version, but it seems too far-fetched to be the default.
When the nifty upgrade feature implicitly risks downtime or user data loss, its default should be conservative, to require the user to make that choice explicit.
Long read! But I can confirm that I have this issue with nfsvers=4.1, which complains and usually fails to remount because my workloads are using the shares!
Long read too, and I have the issue with the "ac" option of NFS on CentOS 6.9.
Same issue with EFS aka nfsvers=4.1, salt 2016.11.7.
I had the same problem. I resolved it by replacing:
nfs-montato:
mount.mounted:
- name: /var/www
- device: "my.hostname:/data"
- opts: "rw,rsize=32768,wsize=32768,hard,tcp,nfsvers=3,timeo=3,retrans=10"
- fstype: nfs
with:
nfs-montato:
mount.mounted:
- name: /var/www
- device: "my.hostname:/data"
- opts: "rw,rsize=32768,wsize=32768,hard,tcp,vers=3,timeo=3,retrans=10"
- fstype: nfs
Now the remount is not forced every time, and it works.
That's because if I mount a share with the option nfsvers=3 and then run the command "mount", I can see that parameter shown as vers=3, not "nfsvers"!
Looking at the nfs manpage:
vers=n This option is an alternative to the nfsvers option. It is included for compatibility with other operating systems
So, using "vers" instead of "nfsvers" is a good workaround.
@davidegiunchidiennea That sounds like the solution!
We could actually say that this is a good solution and document this somewhere.
I've got the same problem here with the noauto option. (Version 2016.11.4)
ID: /tmp/install
Function: mount.mounted
Result: True
Comment: Target was already mounted. Entry already exists in the fstab.
Started: 13:59:50.574278
Duration: 83.513 ms
Changes:
----------
umount:
Forced unmount and mount because options (noauto) changed
State is:
/tmp/install:
mount.mounted:
- device: hostname:/share
- fstype: nfs
- opts: noauto,ro,vers=3
- dump: 3
- pass_num: 0
- persist: True
- mkmnt: True
Same bug here with user_xattr on an ext4 fs.
State:
lvm-lv-srv-mount:
mount.mounted:
- name: /srv
- device: /dev/mapper/sys-srv
- fstype: ext4
- opts: noatime,nodev,user_xattr
- dump: 0
- pass_num: 2
- persist: True
- mkmnt: True
Output:
ID: lvm-lv-srv-mount
Function: mount.mounted
Name: /srv
Result: True
Comment: Target was already mounted. Entry already exists in the fstab.
Started: 21:53:42.155559
Duration: 99.332 ms
Changes:
----------
umount:
Forced remount because options (user_xattr) changed
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
If this issue is closed prematurely, please leave a comment and we will gladly reopen the issue.