When trying to download texlive, nix fails because the mirror used to host the files is down. The mirror in question is `lipa.ms.mff.cuni.cz`.
It might be a good idea to find a different, more reliable server to host the files.
To reproduce, just try to install texlive.
The domain seems to have come back up, so my immediate problem is solved. But it'd be good if this kind of incident could be avoided in the future.
`lipa` was intended as a very short-term workaround. It shouldn't really be used at all, and it's not even a server. Somehow it has remained in the `fetchurl` call, but the binary cache is supposed to be used automatically in preference. For example, my log:
$ nix-store --realize /nix/store/j83d11699an5r07lfxvk7634rd0vnsam-texlive-20160523b-source.tar.xz.drv
these paths will be fetched (43.82 MiB download, 43.81 MiB unpacked):
/nix/store/w0iflrjsl6ssqi4d4ml9jvnyyacwb62c-texlive-20160523b-source.tar.xz
fetching path ‘/nix/store/w0iflrjsl6ssqi4d4ml9jvnyyacwb62c-texlive-20160523b-source.tar.xz’...
*** Downloading ‘http://cache.nixos.org/nar/0hrcqnq6vccycj82h31ljr5hhpsgwmjaj2y7w01ww4yadgzsvwra.nar.xz’ (signed by ‘cache.nixos.org-1’) to ‘/nix/store/w0iflrjsl6ssqi4d4ml9jvnyyacwb62c-texlive-20160523b-source.tar.xz’...
I might try to think about this more deeply (again), but I'm permanently overextended...
Is there a workaround available until this issue is resolved?
Would someone mind providing an example on how to override the URL?
I don't think you can easily override this currently. The simple way would be to edit nixpkgs... I think we could use fetchurl with mirrors to provide a backup using my mirror. If somebody could implement a PR, that'd be nice.
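A minimal sketch of what such a nixpkgs edit could look like, using `fetchurl`'s list of fallback URLs. This is not the actual texlive expression; the second URL and the hash are placeholders, and the `lipa` path is guessed from the mirror path mentioned later in this thread:

```nix
# Sketch only: URLs and hash are illustrative placeholders.
# fetchurl tries each entry in `urls` in order until one succeeds.
fetchurl {
  urls = [
    "http://lipa.ms.mff.cuni.cz/~cunav5am/nix/texlive-2016/texlive-20160523b-source.tar.xz"
    "http://mirror.example.org/nix/texlive-2016/texlive-20160523b-source.tar.xz"
  ];
  sha256 = "0000000000000000000000000000000000000000000000000000";
}
```

Since the derivation is fixed-output, adding extra mirrors doesn't change the resulting store path, only the download behavior.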
@volth As far as I understand, hashes depend on the store path, so that feature would not work for many sudo-less installations. I'm not saying that it isn't worth having, though.
Oh, I hadn't realized – these don't work like regular tarballs, and that might be why `tarballs.nixos.org` doesn't work ATM. (I don't remember if we tried to mirror all of texlive there or not.)
The problem is that the output of the fetch isn't the tarball itself; it's an unpacked tree with additional fixes, so the hash of the tarball and the hash of the output differ. (This avoids having the data in the nix store multiple times, as full texlive takes gigabytes.) It's possible the upload script used the tarball hash instead of the output hash...
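The mechanism described above can be sketched with nixpkgs' `fetchurl` arguments for fixed-output fetches whose hash covers the unpacked tree rather than the tarball. A sketch under that assumption; the URL and hash are placeholders:

```nix
# Sketch: hash the *unpacked* tree, not the tarball, so the hash of
# the tarball itself is never what nix records.
fetchurl {
  url = "http://mirror.example.org/texlive-20160523b-source.tar.xz";
  downloadToTemp = true;   # don't put the raw tarball in the store
  postFetch = ''
    mkdir -p $out
    tar -xJf $downloadedFile -C $out   # the output is the unpacked tree
  '';
  recursiveHash = true;    # hash the whole tree ("recursive" mode)
  sha256 = "0000000000000000000000000000000000000000000000000000";
}
```

This is why uploading the plain tarball to a mirror keyed by its own hash wouldn't help: nix looks the fetch up by the output hash of the unpacked tree.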
I don't think the issue is with texlive, rather, fetchurl should be able to retry on different mirrors if one fails with a connection timeout.
@siddhanathan: there's no working source or mirror in nixpkgs ATM ;-)
EDIT: BTW, I believe nix(pkgs) has always retried as long as the mirror behaves reasonably, e.g. doesn't return junk instead of an error.
@veprbl: now using your mirror instead, until we have some longer-term solution. Thanks!
@vcunat How is it possible to start using this mirror? Should I override the `texlive-full` package somehow in my Nix configuration files?
@kuznero: I don't know an easier way than using master directly or cherry-picking that commit atop some commit you want.
Or you can run `nix-prefetch-url` on whatever files you want, without touching any config.
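For example (the URL below is illustrative, pieced together from the mirror address given later in this thread; substitute whichever mirror currently works):

$ nix-prefetch-url http://146.185.144.154/~cunav5am/nix/texlive-2016/texlive-20160523b-source.tar.xz

This prints the sha256 and leaves the file in the nix store, so the subsequent build finds it there and never attempts the broken download.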
@vcunat thanks for the tip! Will try...
@kuznero Here's a workaround I've ended up using; just add the following entry to `/etc/hosts`:
146.185.144.154 lipa.ms.mff.cuni.cz
Thanks to @veprbl for hosting this.
Sure: http://146.185.144.154/~cunav5am/nix/texlive-2016/
If you decide to go with the `/etc/hosts` option, just make sure not to forget about it when the time comes :)
@yegortimoshenko, thanks! looks like exactly what I need
Is there any retry mechanism that will cycle through different mirrors while installing it?
In general, multiple URLs are tried, but I'm not aware of multiple mirrors available for these texlive tarballs. (if I discount the binary cache)
@vcunat btw the binary cache, why are those packages downloaded from texlive mirrors and not from Nix binary cache in the first place?
They won't mirror any fixed versions, only the newest ones that are constantly updated. EDIT: I had asked; they think distros should do that instead.
What's the affiliation of http://146.185.144.154? Is it on AWS or something?
Likely not AWS:
$ dig -x 146.185.144.154
; <<>> DiG 9.11.2-P1 <<>> -x 146.185.144.154
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 41548
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
; COOKIE: d49552c1dbc98b3939469c9e5a73326199b4173a36c5e790 (good)
;; QUESTION SECTION:
;154.144.185.146.in-addr.arpa. IN PTR
;; ANSWER SECTION:
154.144.185.146.in-addr.arpa. 1800 IN PTR lists.aspid.ru.
;; Query time: 123 msec
;; SERVER: 192.168.178.1#53(192.168.178.1)
;; WHEN: Thu Feb 01 16:29:37 CET 2018
;; MSG SIZE rcvd: 113
The service is donated by @veprbl. I'm not sure what you mean by affiliation. Still, these are all fixed-output derivations, so the servers don't matter from a correctness/security point of view.