Jobsets:
Every time we branch off a release we stabilize the release branch.
Our goal here is to get as few jobs as possible failing on the release-20.03 jobset.
I'd like to highlight that, while it's great to aim for zero, it's essential to
have all deliverables that worked in the previous release work here also.
At the opening of this issue we have the main jobset at 1204 failing jobs, x86_64-darwin
at 1384, and aarch64-linux
at 7482.
The first evaluation of 19.09 had 1654 failing jobs.
So we're actually starting off with slightly fewer failing jobs than the last release.
Select an evaluation of the release-20.03 jobset by #id
Find a failed job ❌️
Work out why it's failing and fix it
Pull Request the fix
Generally the job fails on master also, you can verify that on Hydra - example URL: https://hydra.nixos.org/job/nixpkgs/trunk/bash.x86_64-linux.
That means most PRs should be targeted at master
or staging
for mass rebuilding changes.
Always reference this issue in the body of your PR:
ZHF: #80379
Details on that are in CONTRIBUTING.
Please ping @NixOS/nixos-release-managers on the PR.
The remaining packages will be marked as broken before the release (on the failing platforms).
You can do this like:
meta = {
  # reference the issue or an explanation here
  broken = stdenv.isDarwin; # `true` marks it broken everywhere
};
These are the utility flags used to test the type of platform.
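For illustration, a few common variants of that flag (these `stdenv` predicates exist in nixpkgs, but double-check the exact names for your channel):

```nix
meta = {
  # Link the ZHF issue or explain the breakage here.
  broken = stdenv.isDarwin;     # fails only on Darwin
  # broken = stdenv.isi686;     # fails only on 32-bit x86
  # broken = stdenv.isAarch64;  # fails only on aarch64
  # broken = true;              # fails on every platform
};
```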
This is a great way to help NixOS; fixes like these were some of my earliest contributions.
Let's go ✌️
✨️ worldofpeace
cc @NixOS/nixpkgs-committers @NixOS/nixpkgs-maintainers
IT BEGINS!!!!
This was actually delayed by https://github.com/NixOS/nixpkgs/issues/79907, but we just got this sorted.
Many thanks to everyone who fixed it :sparkles:
This issue has been mentioned on NixOS Discourse. There might be relevant details there:
It looks like all nixosTests on i686 are failing because of a test in python aiohttp: https://hydra.nixos.org/build/112800138/nixlog/390/tail.
The problem is
>>> int(4575744000.0).bit_length()
33
which can't be represented on i686 time_t.
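The overflow is easy to confirm (a quick sketch; the constant is the timestamp from the failing test):

```python
# A signed 32-bit time_t tops out at 2**31 - 1; this timestamp needs 33 bits,
# which is why the test only breaks on i686.
ts = int(4575744000.0)
print(ts.bit_length())  # 33
print(ts > 2**31 - 1)   # True: cannot be represented in a 32-bit time_t
```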
Should we silence the test on 32bit platforms?
@rnhmjoj They already disable some tests on that platform https://github.com/aio-libs/aiohttp/pull/4021/files#diff-484462fced51d1a06b1d93b4a44dd535R34, sounds good.
Somewhat preferred: patch it to disable only the failing tests.
Can I just say that this is an awesome ZHF description, with lots of valuable info in it.
Great job @worldofpeace! 👏
Thank you :blush: I tried to include all the useful information from my predecessors and my own stylized information.
Awesome! Anyone know if there's a way to show failing builds by maintainer? Also, if I want to test my config, should I use https://github.com/NixOS/nixpkgs/tree/20.03-beta?
Thanks for the great work!
@tbenst Here's a simple expression you can put in your nixpkgs checkout (say mypkgs.nix
), which you can then nix-build mypkgs.nix
to build all top-level packages with you as a maintainer (adjust the maintainer there to yourself):
{ pkgs ? import ./. {}
, maintainer ? "infinisil"
}: with pkgs.lib;
filterAttrs (name: value:
  (builtins.tryEval value).success &&
  elem maintainers.${maintainer} (value.meta.maintainers or [])
) pkgs
Or nix-build mypkgs.nix -A <TAB>
to see and build individual ones
Also, if I want to test my config, should I use https://github.com/NixOS/nixpkgs/tree/20.03-beta?
No, that's the tag, you want to use the release-20.03
branch. But I would wait a few days because I'm not sure everything is built.
I have this extremely primitive script:
#!/usr/bin/env bash
# Simple script to check nix attrs for build status on Hydra
while [ -n "$1" ]; do
url="https://hydra.nixos.org/job/nixos/release-20.03/nixpkgs.$1.x86_64-linux"
printf 'Last status of %s = ' "$1"
res=$(curl -s "$url" | rg -o 'title="[A-Za-z]+"' | sed 's|title=||' | sed 's|"||g' | head -1)
echo "$res"
if [ "$res" = "Failed" ]; then
xdg-open "$url" >/dev/null 2>&1
fi
shift
done
which can then be called like this:
$ hydra python37Packages.numpy dnnl s4cmd
Last status of python37Packages.numpy = Succeeded
Last status of dnnl = Succeeded
Last status of s4cmd = Succeeded
It'll open the Hydra page in your web browser for details if there's a failure.
I've been chaining it together with ripgrep
, but I think chaining together with a nix expression similar to @Infinisil's could result in something more robust.
I've also noticed Hydra is responding very slowly to page loads, so I'm wondering if there's a faster/better way to hit it. I'm not calling it in high volume, but please wait for someone who knows more to chime in before you hammer the server with a huge load. It's possible Hydra's REST API has something better here than curl'ing and grepping the status page.
https://nixos.org/hydra/manual/#chap-api
Replying to @Infinisil's mypkgs.nix suggestion above:
Note that this will only cover the top-level packages defined in all-packages; if you're concerned about other language frameworks, you will need to merge those attrs as well.
I'm not sure what's up here.
EDIT:
I can't reproduce it with pkgsi686Linux.python3Packages.pytest-timeout
I ran a script to determine which packages cause the most transitive failures:
[('zjl4pbwar26kn4h1vffldazh3n1ns2fj-python3.7-pytest-timeout-1.3.3', 333),
('g4lj2f02pqj87vfmy2sypc7k7g8k14sv-python2.7-decorator-4.4.1', 154),
('8627r1m36krk4767l4vv5ljhf4j6n2xw-python3.8-fsspec-0.6.2', 22),
('7f18w1x5v2i17n4g2zzzg27br3k9sv11-ghostscript-9.50', 17),
('767sn47ai3q0vvwg1wm10pzmgy11hr64-kodi-18.5', 10),
('y5k2ff3y45cv0av9prylff4rzhm5mxn0-tensorflow-1.15.0', 9),
('h5588ngz0n72cmcwn5qjfixp12xx0drw-tensorflow-1.15.0', 8),
('0rz02m46032dzvqwhc9kmz381v4176cq-hscolour-1.24.4', 6),
('jcbmmjbp6ank5xkx9wdmrzcrjv681mlh-python3.7-pysam-0.15.3', 5),
('xvw3lmkp5lpba8rq0la92cc45pccm60m-torch-5.1', 5)]
Indeed, pytest-timeout
, as @rnhmjoj already noticed, and also decorator
. After that, it falls off quickly.
@Synthetica9: you can use this tool to generate a report on problematic dependencies: eval-report.
https://hydra.nixos.org/build/113063456#tabs-summary - needs cherry pick of e24c04f278e7b13ec4edbcfcadec55e33b42c1de
into release-20.03.
I spent a bit of time on google-cloud-storage but didn't figure out how to fix it correctly. AFAICT the issue comes down to the google-cloud-* packages using pth
files. The module seems to work fine when used in a normal, symlinked virtualenv:
NIX_PATH=nixpkgs=$pwd nix-shell -p 'python3.withPackages (x: [x.google_cloud_storage])' --command "python -c 'import google.cloud.storage'"
But it doesn't work when running tests, which set PYTHONPATH
instead of using a symlink tree. The quick option for 20.03 would be to disable tests; the longer-term solution to me seems to be testing in the same environment as is used later (i.e. python3.withPackages
).
IIRC @teh, the google python packages have a .pth namespace, which doesn't work well with nix. For the azure-* packages, i essentially had to force PEP420 compliance in #71797
the quick option won't work: if someone has google_auth
and google_api_core
in their environment, they will only be able to pick up one of those files, but not both (because they both share the same namespace). This is an instance where the unit tests demonstrate the package is in a defunct state. For ZHF, I would just disable.
If it fails in the checkPhase we could export PYTHONPATH as NIX_PYTHONPATH.
On Tue, 18 Feb 2020, 22:08 Jon, notifications@github.com wrote:
the quick option wont work, if someone has protobuf and google_api_core
in their environment, they will only be able to pick up one of those files,
but not both. This is an instance where the unit tests demonstrate the package
is in a defunct state. For ZHF, I would just disable.
Replying to @bhipple 's script:
url="https://hydra.nixos.org/job/nixos/release-20.03/nixpkgs.$1.x86_64-linux"
printf "Last status of $1 = "
res=$(curl -s "$url" | rg -o 'title="[A-Za-z]+"' | sed 's|title=||' | sed 's|"||g' | head -1)
echo "$res"
Try instead:
curl -s -L 'http://hydra.nixos.org/api/latestbuilds?nr=1&project=nixos&jobset=release-20.03&job=nixpkgs.dnnl.x86_64-linux' | jq .[0].buildstatus
This should be a lighter-weight query, although the buildstatus is numeric. 0 = success, anything else probably warrants looking into.
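If you script against that endpoint, a tiny helper can translate the numeric codes. The mapping below is an assumption based on my reading of Hydra's source; 0 = success is the only code confirmed above, so treat the rest as approximate:

```shell
#!/usr/bin/env bash
# Map Hydra's numeric buildstatus to a readable label.
# Codes other than 0 are a best guess from Hydra's source.
status_name() {
  case "$1" in
    0) echo "Succeeded" ;;
    1) echo "Failed" ;;
    2) echo "Dependency failed" ;;
    3) echo "Aborted" ;;
    4) echo "Cancelled" ;;
    7) echo "Timed out" ;;
    *) echo "Other ($1)" ;;
  esac
}
status_name 0
```

You'd feed it the jq output from the query above.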
@jonringer I started work on pep420 compliance here. It's a non-trivial amount of work and I'm not sure I can complete it for 20.03.
re https://hydra.nixos.org/build/113055208 nixos:release-20.03:nixos.tests.docker-tools.x86_64-linux
I've checked the nixos/tests/docker-tools.nix
and it works ok locally. On hydra, it has the status Aborted: cannot connect to ‘[email protected]’: ssh: connect to host edcd58f5.packethost.net port 22: Connection timed out
, so I guess it's some timeout, but I'm not sure where to look for it...
Seems like it works okay in newer builds like https://hydra.nixos.org/eval/1571285
@teh given we're past feature freeze, I would suggest we don't include it in 20.03.
@Frostman: I see it failing repeatedly on Hydra for some other reason: https://hydra.nixos.org/build/113288778#tabs-buildsteps (which is from the latest 20.03 evaluation; locally it seems to hang)
Do you have any idea why hydra reports a failure here although there is nothing in the log indicating one occurred: https://hydra.nixos.org/build/113082114/nixlog/1/tail?
Hmm, I'd say the log doesn't seem complete. I'm not sure.
If it did seem complete, I'd say this typically happens when a dependency of a build fails at first but someone later restarts it (from a different depending build). Logs are stored per *.drv
and get overwritten, so you may see an old failure with a link to a log from a succeeding build.
Well, I'm clueless. I went as far as building a 32bit VM to build and run the tests and couldn't reproduce the failures. I can even get the pytest-timeout package from the binary cache now but hydra somehow still fails.
Looking at the state now, perhaps it was complete, so I expect my second paragraph holds. In any case, it's succeeded on Hydra.
@teh I'm going to agree with @disassembler and what I said https://github.com/NixOS/nixpkgs/issues/80379#issuecomment-587850627. For ZHF, I would just disable the broken google packages. IIRC, they were mostly broken for the 19.09 release, so it's not much of a regression for them to be broken in the upcoming release.
It's still worthwhile work if you have a use case for the packages. My use case was bringing azure-cli
back to nixpkgs. Not sure if google has a similar cli utility for their cloud.
pytest-timeout's test suite seems flaky, for similar reasons as https://github.com/NixOS/nixpkgs/pull/80512. It may be tricky to fix its test suite, however, because making tests flaky is kind of the entire point of the module ;). In fact, if something is using this module, its tests are also suspect for being flaky (it may be worth patching out pytest-timeout
usages from packages that depend on it).
It's probably possible to reproduce the failure by putting a ridiculous load on the machine so that certain tests fail. It won't be easy, however: you need to prevent the scheduler from running a task for a second.
@xfix or we could patch pytest-timeout to do nothing. I'll look into this.
pytest-timeout is a real Python module; we shouldn't modify it to behave differently. However, I don't think Nixpkgs tests should depend on it (putting timeouts on tests is pretty much asking for flakiness). Our packages could depend on a pytest-timeout mock instead. I don't think there are that many packages that depend on pytest-timeout, though, so patching out its usages would work better.
In fact, I looked at aiohttp, one of the packages that depends on pytest-timeout. It turns out this requirement was deleted upstream because it was causing flaky tests: https://github.com/aio-libs/aiohttp/commit/00f5c40533b3095c31c6502bdea0e0d46a08fec7
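A sketch of what patching it out could look like (the package name and file are hypothetical; `overridePythonAttrs` and `substituteInPlace` are existing nixpkgs helpers):

```nix
# Hypothetical sketch: strip the pytest-timeout requirement from a package
# whose tests don't actually need it. Adjust the file (setup.py, setup.cfg,
# requirements) to whatever the package really uses.
somePackage.overridePythonAttrs (old: {
  postPatch = (old.postPatch or "") + ''
    substituteInPlace setup.py --replace "pytest-timeout" ""
  '';
})
```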
Those google packages are needed by apache-airflow too.
Perhaps there are enough of us to get at least one version in good shape for this release?
On Thu, 20 Feb 2020 at 8:22 AM Konrad Borowski notifications@github.com
wrote:
In fact, I looked at aiohttp, one of packages that depends on
pytest-timeout. Turns out this requirement was deleted: aio-libs/aiohttp@
00f5c40
https://github.com/aio-libs/aiohttp/commit/00f5c40533b3095c31c6502bdea0e0d46a08fec7
xfix was faster, but I have tested :)
Merge whichever in terms of tigervnc; those commits are identical, as they are cherry-picks from master.
@xfix for flaky Python tests, just disable them: pytest -k 'not some_test_which_is_flakey'
Unit tests are still a good sanity check to ensure the package has compatible dependencies and is working as expected.
Because https://hydra.nixos.org/build/112812334 is failing can https://github.com/NixOS/nixpkgs/pull/59414 be cherry-picked into the release?
@hlolli if you want to see change happen, you're fully enabled to create a PR targeting the release-20.03 branch, and fix the packages you're invested in
@jonringer ok thanks, I got confused from all the cherry-picks from NeQuissimus https://github.com/NixOS/nixpkgs/commits/release-20.03 that it had to go trough them. Sorry for the noise, I guess hundreds of people will get notification :1st_place_medal:
it's all good, one team.
just remember to follow the directions from @worldofpeace: cherry-picks should include the -x
switch, which will reference the commit they were taken from.
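The effect of `-x` can be seen in a throwaway repo (everything below is made up for the demo):

```shell
#!/usr/bin/env bash
# Demo of `git cherry-pick -x`: the flag appends a
# "(cherry picked from commit <sha>)" line to the backported commit's message,
# so reviewers can trace where a backport came from.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "init"
git branch release-20.03           # stand-in for the stable branch
echo fix > pkg.txt
git add pkg.txt
git commit -q -m "somepkg: fix build"
fix_commit=$(git rev-parse HEAD)
git checkout -q release-20.03
git cherry-pick -x "$fix_commit" >/dev/null
git log -1 --format=%B | grep -o "cherry picked from commit"
```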
I've updated the OP to include: please ping the @NixOS/nixos-release-managers team on PRs to port to stable.
@worldofpeace did you only want the release managers doing backports? Or is it acceptable for others to merge ports if they follow the cherry-picking and other guidelines?
No, it would be impossible for either of us to perform that many backports.
I think the release managers should be notified about them, and review them.
If a change is for ZHF it has to be backported, and the author should PR that too (which is why the how-to-help steps don't stop at step 4). Committers are trusted to integrate backports correctly.
There's a whole bunch of aarch64 packages failing because the 20.03 qtwebengine.aarch64-linux build timed out: https://hydra.nixos.org/build/113000788 Is there a way to rerun it / give it more time in future?
I'm not sure what caused the aborts; I restarted them.
IIRC our Firefox is doomed for the regular i686 build, as some build steps will probably never fit into 4G address space anymore. I don't expect there will be enough motivation to somehow solve this in nixpkgs; perhaps we should just mark that variant as broken.
On 02:54 25.02.20, Vladimír Čunát wrote:
IIRC our Firefox is doomed for the regular i686 build, as some build
steps will probably never fit into 4G address space anymore. I don't
expect there will be enough motivation to somehow solve this in
nixpkgs; perhaps we should just mark that variant as broken.
Agreed. They have been very unreliable for the last few years and there
is really no hope left that they can be compiled on i686 (without
significant investment).
Here there's something strange: the build failure is propagating from a successful build.
IIRC our Firefox is doomed for the regular i686 build, as some build steps will probably never fit into 4G address space anymore. I don't expect there will be enough motivation to somehow solve this in nixpkgs; perhaps we should just mark that variant as broken.
Can't it be cross-compiled? Or am I misunderstanding the issue?
the build failure is propagating from a successful build
You can see it in the "build steps" tab that first attempt failed (and thus killed depending jobs) and then someone must have restarted it. I now restarted also the build you linked.
Yes, for Firefox the typical approach (e.g. what upstream uses) is a relatively simple cross-compilation from a 64-bit machine IIRC.
@tomberek and I kicked off a bunch of builds on 20.03 compiled against CUDA and MKL. Generally looks good! Pytorch builds for example.
Tensorflow does not, which is known. Airflow fails as well: https://hydra.nix-data.org/eval/172#tabs-still-fail
@tomberek and I kicked off a bunch of builds on 20.03 compiled against CUDA and MKL. Generally looks good! Pytorch builds for example.
That is one of the best news I've heard for a while, amazing job !
Edit: oh wait I think I misunderstood, do you mean that Pytorch with CUDA on Hydra is not a thing? :/
Edit: oh wait I think I misunderstood, do you mean that Pytorch with CUDA on Hydra is not a thing? :/
cuda is nonfree. Hydra is not able to "redistribute" it without potential license infringements. It's an Nvidia thing, not a hydra thing.
cuda is nonfree. Hydra is not able to "redistribute" it without potential license infringements.
Can we build on Hydra to get a one-bit output (success or failure) and provide build logs in case of failure, without potential license infringements?
I mean, what if we build them as free packages, but the direct .nar link returns "error 451"?
I think this discussion thread should stick to ZHF, cuda discussion can be done in a separate thread.
my final words on the matter:
Can we build on Hydra to get a one-bit output (success or failure) and provide build logs in case of failure, without potential license infringements?
seems like a lot of compute resources just to get a thumbs up or down, and it doesn't really change the fact that the consumer would still have to build it locally.
I mean, what if we build them as free packages, but the direct .nar link returns "error 451"?
I don't work on the nixos infrastructure, but my immediate gut reaction is that this would introduce another dimension of "technical debt" to how the caching mechanism works, with little to no benefit to users.
@GuillaumeDesforges @conferno right, this is not an "official" NixOS hydra server but one I'm privately hosting that currently builds against MKL & CUDA but does not yet distribute cache. May switch to Nix-Community infra in near future. Hydra itself can use CUDA, it's just that hydra.nixos.org won't. Better to discuss here: https://discourse.nixos.org/t/re-improving-nixos-data-science-infrastructure-ci-for-mkl-cuda/
Kicad is failing, but it seems to work on my machine. I think it's because it tries to pull in the 3D models for components, which is 4.88 GiB on my machine.
Also, I have #81030 #81038 and #81042 that are I think ready to be merged -- apologies if it's bad manners to bump PR this way.
Also, I have #81030 #81038 and #81042 that are I think ready to be merged -- apologies if it's bad manners to bump PR this way.
Oh no problem. That is exactly what this thread is for :+1:
On that note, I will cherry pick https://github.com/NixOS/nixpkgs/pull/81035 and https://github.com/NixOS/nixpkgs/pull/81033 once approved which should fix ~20 builds dependent on tensorflow. Edit: done
This issue has been mentioned on NixOS Discourse. There might be relevant details there:
This fixes a regression in 20.03 (ebtables command renamed with no replacement).
Is there a reason why ZHF builds are performed with config.allowAliases = true
?
It results in successful builds of packages which would otherwise fail (https://github.com/NixOS/nixpkgs/pull/81585, https://github.com/NixOS/nixpkgs/pull/81584).
tree-sitter
should be fixed now after backporting the update to 0.16.4 in 57fafc08f95e2519dce88a259761d7af69e5b519.
Not sure if it's a big deal or not, but on 19.09 upower
"just works"; on 20.03 it will segfault unless services.upower.enable = true
. I think this is a result of https://github.com/NixOS/nixpkgs/pull/73968, but haven't bisected it to verify.
Doesn't necessarily seem like a big problem (?), but perhaps worth calling out in the release notes more explicitly.
I tried removing /etc/UPower/UPower.conf
but the service still does not crash; only a warning is logged. Could you obtain a backtrace and open a new issue?
Opened https://github.com/NixOS/nixpkgs/issues/82529. We can take the thread over there, but let me know if it segfaults for you when you haven't explicitly enabled the upower
service in NixOS.
(Tagging ZHF in that last PR of mine was a mistake: the broken version of openblas (0.3.8) is currently only in nixpkgs master)
This issue has been mentioned on NixOS Discourse. There might be relevant details there:
https://discourse.nixos.org/t/i-want-to-help-out-easy-to-fix-issues/6334/2
Backported #80756 as 50de0ac5541b02abbf1a46962da04d727efae3b3. This fixes at least the MySQL support for bitwarden_rs
(https://hydra.nixos.org/build/115532261).
This issue has been mentioned on NixOS Discourse. There might be relevant details there:
https://discourse.nixos.org/t/go-no-go-meeting-nixos-20-03-markhor/6495/5
@bhipple i've written a similar tool called hydra-check
which provides similar features (with a bit more sugar around), check out https://github.com/nix-community/hydra-check
FYI hydra-unstable
broke quite recently due to API changes in nixUnstable
. This is fixed in #84501.
The Haskell infrastructure seems to be in a sad state for aarch64: the compiler doesn't build, so all Haskell packages fail on AArch64: https://hydra.nixos.org/eval/1580715?filter=compiler.ghc&compare=1580586&full=#tabs-still-fail
Needs a backport of https://github.com/NixOS/nixpkgs/pull/80355
Putting my GHC hat on, I would strongly recommend that people use at least GHC 8.8.2 on AArch64. Previous releases had some pretty awful bugs on architectures with weak memory models.
With the compiler not even building we're quite safe from such bugs ;-)
After that point is addressed, on 20.03 I see the default ghc is 8.6.5 (doesn't differ by platform AFAIK). Master has 8.8.3 now. Perhaps we can conservatively backport the default upgrade only on some platforms, but I don't know... (to be clear, I don't know much about Haskell)
I don't think it's a good idea to have a different version depending on a platform, especially when the version difference in question introduces so many changes that ghc couldn't be safely updated for all platforms on a stable release of nixpkgs. This could get confusing.
Yes, but last-minute switch of the main platform to 8.8 didn't sound very nice to me either :shrug:
It seems we had a huge regression (+8300 package failures) three days ago; is that just the GHC stuff?
https://hydra.nixos.org/jobset/nixos/release-20.03
I did just declare 20.03 as a GO https://discourse.nixos.org/t/go-no-go-meeting-nixos-20-03-markhor/6495/23.
So are those regressions just haskell stuff on aarch64?
I also believe hydra was having a hard time for a few days, currently trying to get stuff to not timeout :frowning:
That may be because we got aarch64 working for GHC.
It looks like something went wrong with the x86_64 llvm build: click on the propagated build failure in https://hydra.nixos.org/build/116359963, which actually links to a successful build? Does that mean the build was restarted?
It looks like something went wrong with the x86_64 llvm build: click on the propagated build failure in https://hydra.nixos.org/build/116359963, which actually links to a successful build? Does that mean the build was restarted?
Yes, I restarted it.
We are currently at the stage of marking packages as broken on master and 20.03: https://github.com/NixOS/nixpkgs/issues/83805 https://github.com/NixOS/nixpkgs/pull/85331 https://github.com/NixOS/nixpkgs/pull/85417. Marking the remaining failing packages as broken is what achieves "reach as close to zero". This will likely mean maintainers notice their packages are broken. I consider that the final stage of ZHF for 20.03. Thank you, everyone. Also know that during the 20.03 lifespan you can still fix broken packages, and I'd encourage you to do so if your package has been marked as broken.
Committers, if there's open PRs that conflict with marking as broken please integrate them after things have been marked.
I'm going to do another small round of marking after https://github.com/NixOS/nixpkgs/issues/83805 is evaluated by hydra. With that task finished, this thread will be closed.
@worldofpeace What notification will one get if one's package is marked as broken?
@expipiplus1, if you're part of the nixpkgs-maintainers team (intentionally not @-mentioned), you get notifications from ofborg when PRs are opened for a package you're listed under in meta.maintainers. I'm not sure that happens if the commits don't pass through a PR, though.
It seems worthwhile to investigate why mitmproxy is broken on release-20.03. This does not seem related to mitmproxy per se; it may be related to Python, OpenSSL, or its dependencies.
@orivej It looks like an openssl update broke the test, at least that's what I get bisecting: 0e5ef8c4709e44dfdf22f742c1ebf92d2420e1cf
Yes, the bisect is correct: https://github.com/mitmproxy/mitmproxy/pull/3692#issuecomment-608454530. The real problem is mitmproxy in 20.03 is one major release behind.
The real problem is mitmproxy in 20.03 is one major release behind.
Do you mean cryptography rather than mitmproxy?
Do you mean cryptography rather than mitmproxy?
No, I really meant mitmproxy, which is at 4.0.4; latest is 5.1.1. cryptography is only slightly outdated: 2.8, latest 2.9.
I'm trying to build 5.1.1 right now, if I manage to get it working without changing too many dependencies I could port it to 20.03, otherwise I'll just add yet another patch.
Aha, OK, mitmproxy is not broken on master since master has not yet updated OpenSSL from 1.1.1d to 1.1.1f.
I've fixed mitmproxy in 20.03 with 2e08e8cb260. The update to 5.1.1 requires a bump of cryptography, which is probably better not to backport; see #85458.
@disassembler Why was python3Packages.svgwrite marked as broken in c6be4c19578ecd8ac8211b753640f3693d00c8d9 ? It seems to build fine, and Python can import the resulting package.
If anyone has a package that is wrongly marked as broken or can be fixed, please open a PR.
There's no way we as RMs are going to be able to follow that after the fact; it's just not possible.
20.03 is out https://discourse.nixos.org/t/nixos-20-03-release/6785.
Thanks to everyone who helped with 20.03. We accept backports into the stable release during its lifetime, bugfixes and security updates. Minor updates also. If certain exceptions are needed we can work with it. :wave:
I just want to thank @worldofpeace @disassembler and many others for the work they put forward to make this release happen. It makes me happy to see how much the nix community desires to see NixOS be successful :).
May NixOS be the way of the future :)