Salt: a previously applied patch is not recognized when file.patch is re-run with the same patch file

Created on 26 Mar 2019 · 22 comments · Source: saltstack/salt

Description of Issue/Question

Since https://github.com/saltstack/salt/pull/47010, file.patch generates errors in the minion log.

Setup

example_of_issue:
  file.patch:
    - name: /usr/lib/python3/dist-packages/salt/transport/zeromq.py
    - source: salt://salt/files/master/zeromq.patch

Steps to Reproduce Issue

Running salt 'host' state.highstate generates errors in the minion logs:

Mar 22 20:14:33 elasticsearch-worker salt-minion[13232]: [ERROR   ] Command '['/usr/bin/patch', '-N', '-r', '/tmp/__salt.tmp.fdbyaflo', '-o', '/tmp/__salt.tmp.pt62h1c2', '-i', '/tmp/__salt.tmp.t3i7qtxv', '/usr/lib/python3/dist-packages/salt/transport/zeromq.py']' failed with return code: 1
Mar 22 20:14:33 elasticsearch-worker salt-minion[13232]: [ERROR   ] stdout: patching file /tmp/__salt.tmp.pt62h1c2 (read from /usr/lib/python3/dist-packages/salt/transport/zeromq.py)
Mar 22 20:14:33 elasticsearch-worker salt-minion[13232]: Reversed (or previously applied) patch detected!  Skipping patch.
Mar 22 20:14:33 elasticsearch-worker salt-minion[13232]: 2 out of 2 hunks ignored -- saving rejects to file /tmp/__salt.tmp.fdbyaflo
Mar 22 20:14:33 elasticsearch-worker salt-minion[13232]: [ERROR   ] retcode: 1

Versions Report

Salt Version:
           Salt: 2019.2.0

Dependency Versions:
           cffi: Not Installed
       cherrypy: unknown
       dateutil: 2.6.1
      docker-py: Not Installed
          gitdb: 2.0.3
      gitpython: 2.1.8
          ioflo: Not Installed
         Jinja2: 2.10
        libgit2: 0.26.0
        libnacl: Not Installed
       M2Crypto: 0.32.0
           Mako: Not Installed
   msgpack-pure: Not Installed
 msgpack-python: 0.5.6
   mysql-python: Not Installed
      pycparser: Not Installed
       pycrypto: 2.6.1
   pycryptodome: Not Installed
         pygit2: 0.26.2
         Python: 3.6.7 (default, Oct 22 2018, 11:32:17)
   python-gnupg: 0.4.1
         PyYAML: 3.12
          PyZMQ: 16.0.2
           RAET: Not Installed
          smmap: 2.0.3
        timelib: Not Installed
        Tornado: 4.5.3
            ZMQ: 4.2.5

System Versions:
           dist: Ubuntu 18.04 bionic
         locale: UTF-8
        machine: x86_64
        release: 4.15.0-1031-aws
         system: Linux
        version: Ubuntu 18.04 bionic

Example:
zeromq.patch

--- /usr/lib/python3/dist-packages/salt/transport/zeromq.py 2019-03-05 15:50:35.198995571 -0500
+++ /usr/lib/python3/dist-packages/salt/transport/zeromq.py 2019-03-05 15:44:22.884853241 -0500
@@ -528,9 +528,9 @@
             payload = self.serial.loads(messages[0])
         # 2 includes a header which says who should do it
         elif messages_len == 2:
-            if (self.opts.get('__role') != 'syndic' and messages[0] not in ('broadcast', self.hexid)) or \
-                (self.opts.get('__role') == 'syndic' and messages[0] not in ('broadcast', 'syndic')):
-                log.debug('Publish received for not this minion: %s', messages[0])
+            if (self.opts.get('__role') != 'syndic' and messages[0].decode('utf-8') not in ('broadcast', self.hexid)) or \
+                (self.opts.get('__role') == 'syndic' and messages[0].decode('utf-8') not in ('broadcast', 'syndic')):
+                log.debug('Publish received for not this minion: %s', messages[0].decode('utf-8'))
                 raise tornado.gen.Return(None)
             payload = self.serial.loads(messages[1])
         else:
@@ -912,7 +912,7 @@
                                 log.trace('Sending filtered data over publisher %s', pub_uri)
                                 # zmq filters are substring match, hash the topic
                                 # to avoid collisions
-                                htopic = salt.utils.stringutils.to_bytes(hashlib.sha1(topic).hexdigest())
+                                htopic = salt.utils.stringutils.to_bytes(hashlib.sha1(topic.encode('utf-8')).hexdigest())
                                 pub_sock.send(htopic, flags=zmq.SNDMORE)
                                 pub_sock.send(payload)
                                 log.trace('Filtered data has been sent')

sls

file.patch:
  - name: /usr/lib/python3/dist-packages/salt/transport/zeromq.py
  - source: salt://salt/files/master/zeromq.patch
  - watch_in:
    - service: salt_master_service
Bug Needs Testcase help-wanted severity-high

Most helpful comment

I'm seeing this issue since upgrading to 2019.2.0 as well. I've tried to distill an example of the issue that you can easily run from the comfort of your own home machine:

Setup:

[root@localhost salttest]# pwd
/tmp/salttest

[root@localhost salttest]# cat testdata
Line 1
Line 2
Line 3

[root@localhost salttest]# cat testdata.patch
@@ -1,3 +1,3 @@
 Line 1
-Line 2
+I've Been Replaced!
 Line 3

[root@localhost salttest]# cat patchtest.sls
Patch example:
  file.patch:
    - name: /tmp/salttest/testdata
    - source: /tmp/salttest/testdata.patch

Now try applying the state that we've defined:

[root@localhost salttest]# salt-call --local --file-root $(pwd) state.apply patchtest
local:
----------
          ID: Patch example
    Function: file.patch
        Name: /tmp/salttest/testdata
      Result: True
     Comment: Patch successfully applied
     Started: 09:55:52.459566
    Duration: 56.232 ms
     Changes:
              ----------
              pid:
                  3857
              retcode:
                  0
              stderr:
              stdout:
                  patching file /tmp/salttest/testdata

Summary for local
------------
Succeeded: 1 (changed=1)
Failed:    0
------------
Total states run:     1
Total run time:  56.232 ms

[root@localhost salttest]# cat testdata
Line 1
I've Been Replaced!
Line 3

So far so good! Now try applying the state a second time:

[root@localhost salttest]# salt-call --local --file-root $(pwd) state.apply patchtest
[ERROR   ] Command '[u'/bin/patch', u'-N', u'-r', u'/tmp/__salt.tmp.T7HqK4', u'-o', u'/tmp/__salt.tmp.3FZBGh', u'-i', u'/tmp/__salt.tmp.sMwLXs', u'/tmp/salttest/testdata']' failed with return code: 1
[ERROR   ] stdout: patching file /tmp/__salt.tmp.3FZBGh (read from /tmp/salttest/testdata)
Reversed (or previously applied) patch detected!  Skipping patch.
1 out of 1 hunk ignored -- saving rejects to file /tmp/__salt.tmp.T7HqK4
[ERROR   ] retcode: 1
local:
----------
          ID: Patch example
    Function: file.patch
        Name: /tmp/salttest/testdata
      Result: True
     Comment: Patch was already applied
     Started: 09:56:05.809527
    Duration: 56.766 ms
     Changes:

Summary for local
------------
Succeeded: 1
Failed:    0
------------
Total states run:     1
Total run time:  56.766 ms

In previous versions of Salt (or 2018.3.3 at least), you wouldn't see the above ERROR lines. It's not a problem _per se_, but it does pollute the log files with errors-that-aren't-errors. Maybe it's possible to change the logic in salt/states/file.py so that the ERROR output is only logged _after_ the "Patch was already applied" check has been performed?
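The reordering suggested here could look roughly like the sketch below. This is a hypothetical helper, not Salt's actual code: the idea is to classify the `patch -N` result first, and only log at ERROR level when the output does not contain GNU patch's "previously applied" marker.

```python
import logging

log = logging.getLogger(__name__)

# Marker GNU patch prints (with -N) when the patch is already applied.
ALREADY_APPLIED = "Reversed (or previously applied) patch detected"


def classify_patch_result(retcode, stdout):
    """Classify a `patch -N` run instead of logging every retcode 1.

    Sketch of the suggested logic: retcode 1 plus the "previously
    applied" marker is a no-op, not an error, so it is logged at
    DEBUG rather than ERROR.
    """
    if retcode == 0:
        return "applied"
    if retcode == 1 and ALREADY_APPLIED in stdout:
        log.debug("Patch was already applied: %s", stdout)
        return "already-applied"
    log.error("patch failed with retcode %d: %s", retcode, stdout)
    return "failed"
```

With this ordering, the re-run in the example above would produce a DEBUG line and the comment "Patch was already applied", with no ERROR output at all.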

All 22 comments

@aarnaud Thanks for bringing this to SaltStack's attention, but the file.patch was unsuccessful:

Mar 22 20:14:33 elasticsearch-worker salt-minion[13232]: 2 out of 2 hunks ignored -- saving rejects to file /tmp/__salt.tmp.fdbyaflo
Mar 22 20:14:33 elasticsearch-worker salt-minion[13232]: [ERROR ] retcode: 1

Surely you would want the error logged in the minion's log file. Can you explain why you feel this is an erroneous operation?

@dmurphy18 It happens when the file is already patched.
The first time everything is fine, but on subsequent runs we get this error in the logs.
The old version of file.patch had a hash property to prevent this, by verifying the file's hash before patching.

@aarnaud The error is logged because of the retcode: 1 from the operation.

The problem is that a previously applied patch is not recognized when file.patch is re-run with the same patch file. That would be a more accurate description of the issue.

Could you restate the title of this issue to reflect this, and provide an example of how you are using file.patch, so that we can duplicate your environment and fix/test accordingly.

Thanks

I updated this issue with an example.

Same here:

/etc/init.d/newrelic-daemon:
  file.patch:
    - name: /etc/init.d/newrelic-daemon
    - source: salt://newrelic/files/etc/init.d/newrelic-daemon.patch
    - hash: md5=215a78f25e90c783e1050e3d8c9b4022
    - require:
      - pkg: newrelic-daemon
    - require_in:
      - cmd: newrelic-daemon-register-service
    - watch_in:
      - cmd: newrelic-daemon-register-service

This creates the following error lines on the minion in 2019.2.0; for the last three years it worked fine with all older SaltStack versions:

# salt-call state.apply test=True
[ERROR   ] Command '[u'/usr/bin/patch', u'-N', u'-r', u'/tmp/__salt.tmp.1L909l', u'-o', u'/tmp/__salt.tmp.iZtvUM', u'-i', u'/tmp/__salt.tmp.ma05l8', u'/etc/init.d/newrelic-daemon']' failed with return code: 1
[ERROR   ] stdout: patching file /tmp/__salt.tmp.iZtvUM (read from /etc/init.d/newrelic-daemon)
Reversed (or previously applied) patch detected!  Skipping patch.
1 out of 1 hunk ignored -- saving rejects to file /tmp/__salt.tmp.1L909l
[ERROR   ] retcode: 1


This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

If this issue is closed prematurely, please leave a comment and we will gladly reopen the issue.

Please don't close this stale-bot. :-)

Thank you for updating this issue. It is no longer marked as stale.

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

If this issue is closed prematurely, please leave a comment and we will gladly reopen the issue.

Sorry stale-bot, but this is still an issue!

Thank you for updating this issue. It is no longer marked as stale.

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

If this issue is closed prematurely, please leave a comment and we will gladly reopen the issue.

Sorry stale-bot, but this is still an issue

Thank you for updating this issue. It is no longer marked as stale.

FWIW, this fixed it for me: adding an additional unless, pointing to a file that gets installed only on the first state.apply:

# This fixes the broken reverse sort in the webUI
/usr/lib/python3/dist-packages/cobbler/remote.py:
  file.patch:
    - source: salt://cobbler3/files/patches/remote.patch
    - unless: test -e /var/lib/cobbler/loaders/pxelinux.0

This issue happens here as well with 2019.2.2-1.el7. The documentation seems to hint that the way the state detects an already-applied patch changed at some point; is this a difference in how the patch program works on different OSes?

Edit: Looking closer, what happens is that patch is called with the -N flag to ignore reversed (or already-applied) patches. This works, in the sense that it detects that the patch has already been applied, but the result is still that all hunks get rejected and the return value is 1. This is not necessarily an indication of a serious error (return value 2), but it could mean that a patch was only partially applied.

There seems to be no way to distinguish a good outcome (patch already applied) from a bad one (some hunks rejected) without the intermediate step of actually trying to reverse the patch in dry-run.
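The behavior described above is easy to confirm with plain patch(1), outside of Salt entirely. A minimal sketch, assuming GNU patch is installed: `-N` exits 1 on an already-applied patch (just like a real rejection), while a reverse dry-run exits 0 only in the already-applied case, which is exactly the disambiguation step mentioned.

```shell
#!/bin/sh
set -u
cd "$(mktemp -d)"

printf 'Line 1\nLine 2\nLine 3\n' > testdata
cat > testdata.patch <<'EOF'
--- testdata
+++ testdata
@@ -1,3 +1,3 @@
 Line 1
-Line 2
+I've Been Replaced!
 Line 3
EOF

# First run: the patch applies cleanly and patch exits 0.
patch -N -p0 -i testdata.patch testdata; rc_first=$?

# Second run: -N detects the already-applied patch and skips it,
# but still exits 1 -- the behavior this issue is about.
patch -N -p0 -i testdata.patch testdata; rc_second=$?

# A reverse dry-run exits 0 only if the patch is already applied,
# distinguishing "already applied" from "hunks rejected".
patch -R -p0 --dry-run -i testdata.patch testdata; rc_reverse=$?

echo "first=$rc_first second=$rc_second reverse=$rc_reverse"
```

So the intermediate reverse dry-run gives a reliable signal without touching the target file.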

I've just hit this and found the problem. I should be able to provide the PR shortly. Essentially, this comment is wrong; you have to pass -p0.

https://github.com/saltstack/salt/blob/v3000.2/salt/states/file.py#L6294

@dwoz, assigning this one to you and putting it in the Magnesium release in an attempt to move it out of blocked, and perhaps further. We will at least discuss it in planning for the Magnesium release cycle, set to occur over the next 2-4 weeks.

De-scoping from Magnesium, as the PR needs a test case written.

Friendly poke: what's still missing, and who's working on it?

The PR needs a test case written, and it looks like no tests were ever written for that part of the code base. I have updated the PR against the master branch so that tests will run, and asked Wayne to put this on the list for the Test Clinic. If the PR author hasn't written test cases, then it is up to the community and the Core team to write those tests.
