WebDAV server successfully tested with cadaver client but Joplin fails to sync.
I see there was a previous issue to do with Nginx and WebDAV that gave the same "Cannot read property '0' of null" error: https://github.com/laurent22/joplin/issues/523 Perhaps @bradmcl would know.
joplin-desktop/log.txt relevant output:
2018-09-19 11:37:00: "Operations completed: "
2018-09-19 11:37:00: "Total folders: 1"
2018-09-19 11:37:00: "Total notes: 1"
2018-09-19 11:37:00: "Total resources: 0"
2018-09-19 11:37:00: "There was some errors:"
2018-09-19 11:37:00: "TypeError: Cannot read property '0' of null
TypeError: Cannot read property '0' of null
at WebDavApi.resourcePropByName (/tmp/.mount_JoplinjmpEyE/app/resources/app/lib/WebDavApi.js:166:19)
at FileApiDriverWebDav.statFromResource_ (/tmp/.mount_JoplinjmpEyE/app/resources/app/lib/file-api-driver-webdav.js:63:41)
at FileApiDriverWebDav.stat (/tmp/.mount_JoplinjmpEyE/app/resources/app/lib/file-api-driver-webdav.js:33:16)
at <anonymous>
at process._tickCallback (internal/process/next_tick.js:188:7)"
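For what it's worth, the trace has the usual shape of this kind of failure: some lookup over the parsed PROPFIND response comes back null and the code then indexes it with [0]. A minimal illustration of the pattern, with made-up names rather than the actual WebDavApi.js code:

// Hypothetical sketch of the failure mode, not the real Joplin source.
// If the expected property node is missing from the parsed PROPFIND XML,
// the lookup yields null and indexing it throws the TypeError above.
function propByName(resource, name) {
    const props = resource ? resource[name] : null; // missing node => null
    return props[0]; // TypeError: Cannot read property '0' of null
}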
nginx error log, relevant output:
2018/09/19 11:25:40 [notice] 23661#23661: signal process started
2018/09/19 11:25:57 [error] 23669#23669: *89 mkdir() "/var/www/dav/joplin/.sync" failed (17: File exists), client: 127.0.0.1, server: , request: "MKCOL /.sync/ HTTP/1.1", host: "localhost"
2018/09/19 11:25:57 [error] 23669#23669: *90 mkdir() "/var/www/dav/joplin/.resource" failed (17: File exists), client: 127.0.0.1, server: , request: "MKCOL /.resource/ HTTP/1.1", host: "localhost"
2018/09/19 11:25:57 [alert] 23669#23669: *91 dav_ext stat failed on '/var/www/dav/joplin/a872b0152c1e4d479466598a057fe231.md' (2: No such file or directory), client: 127.0.0.1, server: , request: "PROPFIND /a872b0152c1e4d479466598a057fe231.md HTTP/1.1", host: "localhost"
nginx conf:
server {
    listen 80;
    listen [::]:80;

    root /var/www/dav/joplin;
    client_body_temp_path /var/www/dav/tmp;
    client_max_body_size 10M;

    dav_methods PUT DELETE MKCOL COPY MOVE;
    dav_ext_methods PROPFIND OPTIONS;
    create_full_put_path on;
    dav_access user:rw group:rw all:r;
    autoindex on;

    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_redirect off;
}
This looks like roughly the same bug to me. It was fixed in a 1.0 release and then reverted because the fix broke something else. I just version-locked to the "good" versions and moved on; there should be enough documentation for someone else to work on another fix that accommodates both interpretations of the underlying standard.
Thanks. I downloaded v1.0.100 and it works with nginx WebDAV! Updating the docs to warn people of this issue with nginx could be useful.
Is it worth leaving this issue open then?
We can leave it open till someone finds a real fix.
So do we have a regression?
I can reproduce this problem with the Android version of Joplin from 2018-10-03.
In case it is useful: when I tried to use nginx WebDAV, I discovered that it does not send the error responses Joplin expects (for instance when a file is missing). It sends generic-ish 404s rather than the XML-formatted responses that box.com and wsgidav send. Based on my memory of the error logs, this was the most likely problem, so it is likely an nginx bug rather than a Joplin one, though it may be worth having someone confirm my information. With all that said, I have no idea how it was working for you in the first place. I now use wsgidav behind an nginx reverse proxy.
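If anyone wants to check what their own server returns, here is a quick way to do it; just a sketch, assuming Node 18+ (for the global fetch) and a made-up URL:

(async () => {
    // Issue a PROPFIND for a file that does not exist and inspect the reply.
    // nginx/dav_ext tends to answer with a plain 404 error page, while box.com
    // and wsgidav answer 207 Multi-Status with an XML <d:multistatus> body.
    const res = await fetch('http://localhost/joplin/does-not-exist.md', {
        method: 'PROPFIND',
        headers: { Depth: '0' },
    });
    console.log(res.status);
    console.log(await res.text());
})();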
Isn't it possible to add the solution/workaround from 1.0.100 as a separate synchronisation target?
That way it should be possible to use nginx as a WebDAV server without breaking other WebDAV connections.
Based on what I remember seeing, it's likely that it would not even need to be a separate target. The webdav library (if any) would simply have to be hacked to accept a 404 as a "Not Found" response.
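Roughly this kind of change, I mean; a sketch only, with assumed helper names rather than Joplin's actual code:

// Sketch: treat a plain 404 the same as a WebDAV "not found" reply.
// propfind() and parseMultistatus() are assumed helpers, not Joplin's API.
async function statItem(path) {
    try {
        const xml = await propfind(path, 0);    // PROPFIND with Depth: 0
        return parseMultistatus(xml);           // normal multistatus handling
    } catch (error) {
        if (error.httpStatus === 404) return null; // nginx: item simply doesn't exist
        throw error;
    }
}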
What's wrong with using this hack until there is a nicer solution?
I'm using version 1.0.100 on all devices because I need nginx support. In my opinion that is the worse option.
The problem is that the hack broke support for SeaFile, so it would need to be tweaked to work with both SeaFile and Nginx.
And adding a new synchronisation target, "WebDAV on nginx" or something like that, which uses the hack without touching the existing WebDAV synchronisation target, is not an option?
No, unfortunately creating a sync target for this is not really an option. I would prefer the existing driver to support as many services as possible, as that makes long-term maintenance easier and means that the fix we add for Nginx could also benefit other services.
I assume it's not too hard to get it working with Nginx in a backward-compatible way, but I've never had the opportunity to look at it properly with a test server.
Would it help if I gave you access to an nginx WebDAV account and to the log files?
At the moment I wouldn't have time but I'll let you know when I do.
@laurent22 The code you pasted at https://github.com/laurent22/joplin/issues/624#issuecomment-401188578 works for me with nginx.
The WebDAV spec is a bit ambiguous, but a multistatus is allowed to contain zero or an arbitrary number of response nodes (don't ask me how zero makes sense), and I don't see any text disallowing a multistatus error in reply to a PROPFIND request for a file.
There is even an example of a multistatus reply to a PROPFIND on a file (https://tools.ietf.org/html/rfc4918#section-9.1.3), so I guess nginx is not violating the spec.
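So on the client side it should be enough to treat the different reply shapes uniformly; something like this (a sketch with assumed helper names, not the actual code from the linked comment):

// Sketch: all three shapes mean "item not found".
function statFromPropfindReply(status, responses) {
    if (status === 404) return null;                  // plain 404 (nginx)
    if (!responses || !responses.length) return null; // 207 with an empty multistatus
    return responseToStat(responses[0]);              // assumed conversion helper
}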
Oh, I forgot I'd suggested a solution at some point; good to know it's actually working. Yes, I assume what Nginx is doing is spec compliant, if a bit unusual.
Since the hack is narrowly scoped it shouldn't affect other implementations, so I think we can go ahead and add it and then see whether it works for everybody using Nginx. I won't have time in the coming days, but if someone can create a PR I'll merge it.
It works!
Nginx 1.14 on Ubuntu, WebDAV configuration.
Tested using the build instructions on the master branch, which includes commit 3943192.
Works for me now too, with v1.0.118 beta and Nginx running on Debian GNU/Linux 8 (jessie).