Core: 'Could not get node for path' messages in log file when deleting a file/folder

Created on 13 Mar 2018 · 17 Comments · Source: owncloud/core

Steps to reproduce

  1. create a folder
  2. delete the folder
  3. check the logfile

Expected behaviour

no new messages in the log file

Actual behaviour

log content:

{"reqId":"yN2bBgP0f4ZlsxbW6o2u","level":2,"time":"2018-03-13T10:14:32+00:00","remoteAddr":"127.0.0.1","user":"admin","app":"dav","method":"DELETE","url":"\/owncloud-core\/remote.php\/dav\/files\/admin\/folder1","message":"Could not get node for path: \"files\/admin\/folder1\" : File with name folder1 could not be located"}

Server configuration

Operating system:
Ubuntu 17.10
Web server:
Apache2
Database:
SQLite
PHP version:
7.1.15
ownCloud version: (see ownCloud admin page)
10.1.0 prealpha (git)
Updated from an older ownCloud or fresh install:
fresh
Where did you install ownCloud from:
git

The content of config/config.php:

{
    "system": {
        "license-key": "***REMOVED SENSITIVE VALUE***",
        "instanceid": "oc35rjhx6uol",
        "passwordsalt": "***REMOVED SENSITIVE VALUE***",
        "secret": "***REMOVED SENSITIVE VALUE***",
        "trusted_domains": [
            "localhost"
        ],
        "datadirectory": "\/home\/artur\/www\/owncloud-core\/data",
        "overwrite.cli.url": "http:\/\/localhost\/owncloud-core",
        "dbtype": "sqlite3",
        "version": "10.1.0.0",
        "dbname": "owncloud",
        "dbhost": "localhost",
        "dbtableprefix": "oc_",
        "mysql.utf8mb4": true,
        "dbuser": "***REMOVED SENSITIVE VALUE***",
        "dbpassword": "***REMOVED SENSITIVE VALUE***",
        "logtimezone": "UTC",
        "installed": true,
        "ldapIgnoreNamingRules": false,
        "loglevel": 0,
        "maintenance": false,
        "default_language": "en",
        "skeletondirectory": "\/home\/artur\/www\/owncloud-core\/tests\/acceptance\/webUISkeleton\/",
        "csrf.disabled": true,
        "enable_previews": false,
        "theme": ""
    }
}

List of activated apps:

Enabled:
  - comments: 0.3.0
  - dav: 0.3.2
  - diagnostics: 0.1.0
  - enterprise_key: 0.1.3
  - federatedfilesharing: 0.3.1
  - federation: 0.1.0
  - files: 1.5.1
  - files_external: 0.7.1
  - files_sharing: 0.10.1
  - files_texteditor: 2.2
  - files_trashbin: 0.9.1
  - files_versions: 1.3.0
  - firewall: 2.4.2
  - gallery: 16.0.2
  - market: 0.2.2
  - provisioning_api: 0.5.0
  - systemtags: 0.3.0
  - testing: 0.1.0
  - updatenotification: 0.2.1
Disabled:
  - customgroups
  - files_antivirus
  - multidirtest
  - notifications
  - password_policy
  - security
  - theme-example
  - user_ldap

Are you using external storage, if yes which one: local/smb/sftp/...
local
Are you using encryption: yes/no
no
Are you using an external user-backend, if yes which one: LDAP/ActiveDirectory/Webdav/...
no

Client configuration

Browser:
FF 58
Operating system:
Ubuntu 17.10

Bug blue-ticket p2-high

Most helpful comment

Looking at that change, the entry would still have been logged before, just with the literal text $message instead of the actual message.

As I understand it, this bug is about why the log entry exists at all. If it is consistently reproducible, it should be possible to do a git bisect to find the commit that made it appear.

All 17 comments

GitMate.io thinks possibly related issues are https://github.com/owncloud/core/issues/22180 (Could not get node for path), https://github.com/owncloud/core/issues/23679 (Can't Delete Some Files and Folders), https://github.com/owncloud/core/issues/8101 (Deleted Files: Hidden folder not visible), https://github.com/owncloud/core/issues/5201 (Cannot delete file), and https://github.com/owncloud/core/issues/13824 (Delete File).

@DeepDiver1975 I see this being logged a lot in our Enterprise environment too.

It is not new. I have noticed it for "a while", perhaps "a long while", whenever I happen to run the PHP dev server in a terminal window and see the output noise.

@phil-davis I only see it in 10.0.7, not 10.0.3. So something happened between those versions.

Seems like this may have introduced it?
https://github.com/owncloud/core/commit/3284e6dd72528f16b1a6e45f13e310ff4ff2de2e

That is probably correct. That change made the message actually come out, so there has also been an underlying reason why getNodeForPath has been "wanting/trying" to emit this message, perhaps for quite some time.

Looking at that change, the entry would still have been logged before, just with the literal text $message instead of the actual message.

As I understand it, this bug is about why the log entry exists at all. If it is consistently reproducible, it should be possible to do a git bisect to find the commit that made it appear.

git bisect lands me on this commit => https://github.com/owncloud/core/commit/9b6dcfe9a1be7f3ef3a525fe53d87a0ac6889ff9

Did a small verification by checking out https://github.com/owncloud/core/commit/e0936236bfc5c96081d0eac29075a399ba911d79, which is the commit just before the one mentioned above, and the log message does not appear.

Somewhere like
https://github.com/owncloud/core/pull/29136/files#diff-d9adeff76c4cf2852e19574ce708c588R55
looks suspicious.
The docblock of the delete($path) method says "This method is called after a node is deleted". Maybe it is some place like that where $this->getNodeForPath($path) is called "too late", after the actual file has already been deleted?
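To illustrate the suspected ordering problem, here is a minimal sketch in Python (the class and names are hypothetical, not ownCloud code): once the filecache entry is gone, a path-to-node lookup performed in an "after delete" hook can only fail.

```python
class NodeNotFound(Exception):
    pass


class FileCache:
    """Toy stand-in for the filecache: maps path -> fileId."""

    def __init__(self):
        self._by_path = {}

    def add(self, path, file_id):
        self._by_path[path] = file_id

    def delete(self, path):
        self._by_path.pop(path, None)

    def get_node_for_path(self, path):
        if path not in self._by_path:
            # This failure is what produces the
            # "Could not get node for path" log entry.
            name = path.rsplit('/', 1)[-1]
            raise NodeNotFound(f'File with name {name} could not be located')
        return self._by_path[path]


cache = FileCache()
cache.add('files/admin/folder1', 42)
cache.delete('files/admin/folder1')  # the node is deleted first...
try:
    cache.get_node_for_path('files/admin/folder1')  # ...the hook runs too late
except NodeNotFound as exc:
    print(exc)  # -> File with name folder1 could not be located
```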

@VicDeo can you fix it ?

@PVince81 It seems to be more than just a log message.
Sabre\DAV\PropertyStorage\Backend\BackendInterface requires implementing classes to define delete($path), and it would be possible to give it an empty body to silence the log message while still satisfying the interface.
But, as was said above, the root cause of this message is resolving the path into a fileId at a stage when the entry no longer exists in the filecache. So it is too late to clean up the dav properties, and the properties table will be left with orphaned entries.
What do you think about a simple cronjob cleaning this table,
with a query like this: http://sqlfiddle.com/#!9/325993/1

@VicDeo can we detect this deletion event and properly clean the properties at the right moment ?

A cleanup cron job is OK for cases where regular deletion is not possible, such as timeouts or other exceptional situations, but I don't think it should be the regular way of deleting properties (unless there is really no other way).

@PVince81 There is no 'before delete' hook within sabre/dav, and the 'after delete' hook is called with a path as its parameter, so I don't know how to convert it to an id at that stage.
A quick-and-dirty :tm: solution would be to use an OC filesystem 'before delete' hook, but I haven't found a proper signal so far.

How did we do it in past versions? Or was the message always there?

Before 10.0 this plugin used the file path instead of the file id.
We had no such log entry, but performance was slower and there was a 255-character limit on the file path, so the path was sometimes trimmed.

@PVince81 OK, there is another solution besides the cronjob: a cache of deleted items in the DAV tree that holds only deleted items as filePath => fileId and persists within a single request only.

In this case FileCustomPropertiesBackend::delete would resolve the path into a fileId not with the generic getNodeForPath but by querying the deleted-items cache.
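A rough sketch of that proposal (hypothetical names, not the eventual ownCloud implementation): record filePath => fileId just before the node is removed, and let the properties backend consume it in its delete hook.

```python
class DeletedItemsCache:
    """Holds only deleted items for the lifetime of a single request."""

    def __init__(self):
        self._deleted = {}  # filePath -> fileId

    def remember(self, path, file_id):
        # Called just before the tree actually deletes the node.
        self._deleted[path] = file_id

    def pop(self, path):
        # Return the id once, then forget it; the cache never outlives
        # the request, so entries do not accumulate.
        return self._deleted.pop(path, None)


# Usage: the tree records the id before deletion...
cache = DeletedItemsCache()
cache.remember('files/admin/folder1', 42)
# ...so the properties backend's delete($path) can still resolve it:
assert cache.pop('files/admin/folder1') == 42
assert cache.pop('files/admin/folder1') is None  # single use
```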

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
