Note: the try.gitea.io instance seems to use git-lfs (if enabled) via HTTPS and does not accept my login credentials.
I have problems pushing a very large repository (1.8GB) with git-lfs to my Gitea instance. git pushes the first 80MB; after that, the counter of already uploaded data jumps between odd values (it increases, decreases, increases again, and so on), and after some time I get lots of LFS client errors, which are HTTP 413 status codes reported back from Gitea. The last few lines say that there is an authorization error for info/lfs/objects/batch, and the push process stops.
This is reproducible across several push attempts. I tried setting the log level to Debug, but the log file stays mostly empty. So I'm asking for help with setting up debug logging (I cannot find a list of valid log level options), and I want to report this issue in general.
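For reference, a minimal sketch of what the relevant app.ini section looks like, as far as I can tell (the ROOT_PATH value is just an example; adjust it to your installation):
[log]
MODE = file
; valid levels include Trace, Debug, Info, Warn, Error, Critical
LEVEL = Debug
; example path
ROOT_PATH = /var/lib/gitea/log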
I have configured a Bitbucket repository as a second remote, and it accepts the big push without problems, so I think it is a server-related issue.
Could this be a reverse proxy upload size setting problem?
I am not sure. I have played around with different upload size limits in my reverse proxy, and finally disabled the limit completely (client_max_body_size 0;). The result now is this:
$ git push -u tharan master [09:08:21]
Git LFS: (42 of 467 files, 396 skipped) 148.06 MB / 471.06 MB, 84.70 MB skipped
Git LFS: (42 of 467 files, 396 skipped) 151.75 MB / 471.06 MB, 84.70 MB skipped
Git LFS: (42 of 467 files, 396 skipped) 151.75 MB / 471.06 MB, 84.70 MB skipped
Git LFS: (42 of 467 files, 396 skipped) 151.75 MB / 471.06 MB, 84.70 MB skipped
The first of the 4 Git LFS lines was the output of the push command, which stalled after some time. The other 3 lines are the result of repeatedly pressing the Enter key. I entered the command at 09:08; it is now 09:54, and I am seeing no progress.
Please note that I have another project whose LFS data sums up to about 10MB. That one works perfectly fine with Gitea.
I am experiencing the same issue. I have a repository of over 10GB, and pushing it over SSH fails in exactly the same way. I managed to push about 1GB by leaving the process running overnight and restarting the git push command whenever it failed. Oddly enough, I changed the remote to use HTTPS and it has now pushed over 800MB without a single error. Could the issue be in SSH?
P.S. @simonszu you should use your email as the login credential for HTTPS, as for some reason it does not accept usernames (on my local instance at least). Edit: it looks like it accepts both username and email, but asks three times. Weird.
I copied my git repository into a Docker container and tried to push it to Gitea without any proxies in the middle. This is what happens:
Git LFS: (0 of 1779 files, 104427 skipped) 0 B / 2.92 GB, 10.00 GB skipped
LFS: Put https://privatehostname.com/user/repo.git/info/lfs/objects/9742a4f0128be197d24483be6d7cf6567ad9f512c59b718ce43833041e81d899: dial tcp 172.21.0.2:443: getsockopt: connection refused
LFS: Put https://privatehostname.com/user/repo.git/info/lfs/objects/2b31a74d70bf26b78b19937a58629b2c18457c5ad13b4d842e2bfba406e36b63: dial tcp 172.21.0.2:443: getsockopt: connection refused
LFS: Put https://privatehostname.com/user/repo.git/info/lfs/objects/1b741218078d119793c28a20bd633c1f8cd1bd9ce15c8e55aef6f96677c9e554: dial tcp 172.21.0.2:443: getsockopt: connection refused
LFS: Put https://privatehostname.com/user/repo.git/info/lfs/objects/5cb116a997aa62cdfe317135f8812d0c506ee96e57a5136d51e2b2fb7918b0f4: dial tcp 172.21.0.2:443: getsockopt: connection refused
LFS: Put https://privatehostname.com/user/repo.git/info/lfs/objects/9164d2aaa1dab2236a35887d62d75ccca70e8b371a02974c18827816e93ba0f8: dial tcp 172.21.0.2:443: getsockopt: connection refused
LFS: Put https://privatehostname.com/user/repo.git/info/lfs/objects/b8f50aa3eb233ec647ca31e96c0fbe9fa3fbd4ad5303cffc329d33545eea6244: dial tcp 172.21.0.2:443: getsockopt: connection refused
LFS: Put https://privatehostname.com/user/repo.git/info/lfs/objects/13d10c8b3b0ba6bf05151ad205f97e5bb7d489f31f409878b388ef96705620d9: dial tcp 172.21.0.2:443: getsockopt: connection refused
LFS: Put https://privatehostname.com/user/repo.git/info/lfs/objects/071aa9e3774e320c64d65e48d4dbe2ca00c614beaa4a42ee7ee9a719fd6f8d2e: dial tcp 172.21.0.2:443: getsockopt: connection refused
LFS: Put https://privatehostname.com/user/repo.git/info/lfs/objects/dac70040a45900e5f9df2e66cb4d15f6a61f2c1bc4597e412c45f1aecc1ec64b: dial tcp 172.21.0.2:443: getsockopt: connection refused
The curious thing is that the remote is set to http://localhost:3000/user/repo.git.
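One thing worth checking when the remote and the LFS endpoint disagree like this is which URL git-lfs has actually derived, and pinning it explicitly if needed. A sketch, assuming Gitea's default LFS path of <remote>.git/info/lfs (the URL below is illustrative):
# show the LFS endpoint derived from the remote
git lfs env
# pin the endpoint explicitly for this repository
git config lfs.url "http://localhost:3000/user/repo.git/info/lfs"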
I added this to the nginx configuration file:
http {
    # ..........
    # at the END of this segment!
    client_max_body_size 3000m;
}
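Note for others trying this: nginx only picks the change up after a configuration reload; the usual commands are:
# check the configuration for syntax errors, then reload
nginx -t && nginx -s reload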
But I still got:
Git LFS: (0 of 2 files, 3 skipped) 0 B / 3.96 GB, 287.45 MB skipped
LFS: Client error: https://git...d.git/info/lfs/objects/6fb4175dcacea6d2ba94e3e82de5b9c322e4aa662812dc3c9448e6212847524f from HTTP 413
LFS: Client error: https://git....d.git/info/lfs/objects/f3fe319831272c3feeb55198c3d1e1b3a165081f1290a20bd11a27f33128e326 from HTTP 413
error: failed to push some refs to 'ssh://[email protected]'
I am using the Gitea Docker image, version ca30698.
I can replicate a similar bug with SQLite. I get a server-side error: RemoveLFSMetaObjectByOid: database table is locked. I will have a look at it, but it seems related not to size but to concurrency.
Hi, is there any resolution on this? I'm trying to push a test repository containing ~400MB of LFS files, and it fails with an authorization error.
c:\temp\Test>git push -u origin master
Git LFS: (0 of 2 files) 0 B / 472.87 MB
batch response: Repository or object not found: https://dato0011:[email protected]/Dato0011/test2.git/info/lfs/objects/batch
Check that it exists and that you have proper access to it
error: failed to push some refs to 'https://[email protected]/Dato0011/test2.git'
Was it fixed by #4035?
I need to wait for the next release. I like bleeding edge, but compiling from HEAD isn't always a great idea. ;)
I tried running the master branch, and it fixed a similar problem for me. My issue looked like this:
Gitea boot log:
2018/06/08 14:09:46 [I] Git Version: 2.11.0
2018/06/08 14:09:46 [I] SQLite3 Supported
2018/06/08 14:09:46 [I] Run Mode: Production
2018/06/08 14:09:46 [I] Listen: unix:///tmp/gitea.socket
2018/06/08 14:09:46 [I] LFS server enabled
Nginx reverse proxy log:
[08/Jun/2018:14:17:01 +0200] "PUT /singinwhale/MoeWars.git/info/lfs/objects/fdbabf7c652dc5122b2e552189342d5b67393b89eb27086b5bce7471d0bd9815 HTTP/1.1" 401 12 "-" "git-lfs/2.4.2 (GitHub; windows amd64; go 1.8.3; git 6f4b2e98)"
[08/Jun/2018:14:17:01 +0200] "POST /singinwhale/MoeWars.git/info/lfs/objects/batch HTTP/1.1" 401 26 "-" "git-lfs/2.4.2 (GitHub; windows amd64; go 1.8.3; git 6f4b2e98)"
Git output during upload:
# git push singinwhale.com
Uploading LFS objects: 0% (0/1), 34 MB | 537 KB/s
Git output after upload:
Uploading LFS objects: 0% (0/1), 0 B | 544 KB/s, done
batch response: Authentication required: Authorization error: https://git.singinwhale.com/singinwhale/MoeWars.git/info/lfs/objects/batch
Check that you have proper access to the repository
error: failed to push some refs to '[email protected]:singinwhale/MoeWars.git'
I am pushing a commit with a tag (1.0.0) to an existing repo on Gitea via SSH and LFS. I have set the new LFS_HTTP_AUTH_EXPIRY_MINUTES value to 3220 (~2 days), as introduced by #4035, and now the 178MB file uploads just fine, as opposed to the 401 error I got previously. OP had a 413 error though, so I don't know if it is exactly the same issue.
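For anyone else hitting those 401s, a sketch of the app.ini snippet as I understand it (assuming the setting sits in the [server] section next to the other LFS options; the value is in minutes):
[server]
LFS_START_SERVER = true
; lifetime of the LFS authorization token used for pushes over SSH
LFS_HTTP_AUTH_EXPIRY_MINUTES = 3220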
I think #4035 fixes this issue; reopen if it still persists.
Actually, I have the same problem (HTTP 413). I set the size limit on the server side to 0, with no improvement. 4.7 GB here. I updated to the newest GitLab CE version today; no change.
In my case, increasing the allowed post data size on the reverse proxy helped.
Could you please guide me a bit as to where to do what? I'm not really an expert in these details (I'm glad that GitLab is working at all...). I have the standard GitLab Omnibus installation with the default web server.
@innoreq This is Gitea, not Gitlab, by the way.
Oh, sorry - I was led here by some googling for git and LFS... however, the problems seem pretty similar ;-)
This is very setup-dependent. I run an nginx reverse proxy that forwards traffic to Gitea running inside a Docker container. The nginx maximum post size can be adjusted in nginx.conf: https://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size
Edit: I forgot to mention that the post size limit should be at least the maximum size of a single file that will be pushed to the repository.
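To make that concrete, a minimal sketch of such a setup (hostname, port, and size limit are illustrative):
server {
    listen 443 ssl;
    # illustrative hostname
    server_name git.example.com;
    # must be at least the size of the largest single file you will push
    client_max_body_size 3000m;
    location / {
        # gitea's HTTP port inside the container
        proxy_pass http://127.0.0.1:3000;
    }
}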
Thanks a lot - I figured out where the (GitLab) nginx was sitting, and I found the size limit setting. Now it behaves differently: still uploading, but no errors so far.
I added this to the nginx configuration file:
http {
    # ..........
    # at the END of this segment!
    client_max_body_size 3000m;
}
This fixed it. Thanks, @liu-kan!
I'm having the same issue: after ~60 seconds of uploading to LFS, it fails.
LFS: Client error: http://host.com/user/repo/info/lfs/objects/9d49bf70e8f00e0a815b07ffc8fbce4aa521e6e303f19c2d173fa5d8e1e518c8 from HTTP 413
I am using Kubernetes with the nginx ingress (installed from Helm), and playing with nginx.ingress.kubernetes.io/proxy-body-size didn't help.
60s seems to be the default timeout of nginx. You can try adjusting the timeouts (e.g. proxy_read_timeout): http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_next_upstream_timeout
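For example (illustrative values; these go in the location block that proxies to Gitea):
proxy_connect_timeout 300s;
proxy_send_timeout    3600s;
proxy_read_timeout    3600s;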
@sapk thanks for suggestions! I tried everything :)
metadata:
  name: gitea-http
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: 5000m
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-next-upstream-timeout: "3600"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
Actually, I don't think the timeout is the problem, as underneath, the git CLI issues a lot of individual uploads, each taking ~15s at most.
I even tried a direct connection to the k8s service using a NodePort (without the ingress); no luck there either, but now it fails with 413 after ~1min 30s instead of 1 min :) Unfortunately, the git CLI seems to start over all the time, which doesn't let me upload all the objects even with these strange errors.
The Gitea pod logs say nothing about the bunch of 413s I see after executing git push; they just show a number of successful uploads:
[Macaron] 2020-04-29 18:56:36: Started POST /user/name.git/info/lfs/objects/batch for 10.1.27.1
[Macaron] 2020-04-29 18:56:37: Completed POST /user/name.git/info/lfs/objects/batch 200 OK in 967.411097ms
To add even more detail, I am uploading ~1.5 GB of data consisting of large JPEGs of 5-100 MB each.
UPDATE: by looking at the git CLI debug trace more closely, I figured out that it certainly fails to upload some files (not to mention that it always tries http first and then gets a 308 redirect to https).
git lfs push zitroserver --object-id 55ac260c6a81ab0ddd71be1784f9c13b5aff2ccfe99510b0e24a461e3043627a
Uploading LFS objects: 0% (0/1), 688 KB | 248 KB/s, done.
LFS: Put https://git.svc.nikita.tk/ZitRos/nikita-tk.git/info/lfs/objects/55ac260c6a81ab0ddd71be1784f9c13b5aff2ccfe99510b0e24a461e3043627a: read C:\ZitRo\Projects\Personal\nikita-tk\.git\lfs\objects\55\ac\55ac260c6a81ab0ddd71be1784f9c13b5aff2ccfe99510b0e24a461e3043627a: file already closed
55ac260c6a81ab0ddd71be1784f9c13b5aff2ccfe99510b0e24a461e3043627a is a 178 KB file!
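For anyone wanting to reproduce this kind of trace: both git and git-lfs honor the standard trace environment variables (the remote name and object id below are placeholders):
# verbose client-side trace of a whole push
GIT_TRACE=1 GIT_CURL_VERBOSE=1 git push origin master
# or trace the upload of a single LFS object
GIT_TRACE=1 git lfs push origin --object-id <oid>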
UPDATE 2: after wasting a bit more time, I figured out that this issue is related to the http->https redirect. Even the direct connection (to the Gitea server) returns an http URL (the external host as set in the settings), which then gets a 308 redirect to https from the ingress. Uploading via https fails for some reason, and only for some files...
@ZitRos Maybe you should try adding all the typical proxy_pass parameters to the nginx.conf. I think your configuration is convoluted compared to that of a normal user who does not use a Kubernetes ingress. For example, the http/https issue you mention is usually solvable by adding the X-Real-IP header (or whatever it is called) to the nginx.conf.
That said, an ingress is technically just a fancy word for a reverse proxy, so you can configure it like a typical reverse proxy.
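For example, the usual header block for a reverse proxy in front of Gitea looks roughly like this (a sketch, not a complete config; the backend address is illustrative):
location / {
    proxy_pass http://127.0.0.1:3000;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # tells the backend whether the original request used https
    proxy_set_header X-Forwarded-Proto $scheme;
}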
UPDATE 3: pushing via HTTP is not successful either; however, it now seems to fail because of the size of the objects (roughly anything over 48 MB).
@theAkito thanks! I keep trying :D
Damn, it turned out to be just a strange ingress issue. I drilled down into the nginx ingress controller, and for some reason it was ignoring any new annotations I added to the ingress spec. Deleting the ingress and bringing it back solved the issue! Thanks @theAkito and @sapk for your quick feedback!
UPDATE: Still, it doesn't work over https; however, the problem is now identified as an ingress problem.
UPDATE 2: YAY! It turns out to be a git-lfs problem: it doesn't handle 308 redirects properly, which results in the "file already closed" errors. The latest git-lfs binary (which is not yet released!) works flawlessly!
@ZitRos That is good to know, especially for future readers of this issue.
I am experiencing this same bug. Has this issue been fixed? I am using git-lfs/2.11.0 (GitHub; linux amd64; go 1.13.4)
Update: I was able to fix this issue with git config http.version HTTP/1.1.
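For anyone finding this later, a sketch of that workaround; it forces git's HTTP transport down from HTTP/2 to HTTP/1.1:
# per repository
git config http.version HTTP/1.1
# or for all repositories
git config --global http.version HTTP/1.1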
Gitea v1.11.5: even with nginx client_max_body_size set to 0, this still occurs, and my repo size is only about 30 MB. Please reopen it, @simonszu.
http {
    client_max_body_size 0;
}
$ git push --set-upstream origin master
Enumerating objects: 834, done.
Counting objects: 100% (834/834), done.
Delta compression using up to 8 threads
Compressing objects: 100% (810/810), done.
Writing objects: 100% (834/834), 20.17 MiB | 15.75 MiB/s, done.
Total 834 (delta 98), reused 0 (delta 0)
error: RPC failed; HTTP 413 curl 22 The requested URL returned error: 413
fatal: the remote end hung up unexpectedly
fatal: the remote end hung up unexpectedly
Everything up-to-date
@bthulu Perhaps you should open a new issue. It is working for most people, as far as I have seen (including myself). Maybe you have a different root cause.