Cudf: [BUG] Cannot do a clean install of rapids+blazingsql 0.14.1

Created on 26 Aug 2020  ·  8 comments  ·  Source: rapidsai/cudf

Describe the bug

It's a bit hard to capture the full error because conda takes ~45 min before it fails ;-)

From a largely empty Docker / Miniconda env for 0.14.1 (the last stable release) / Python 3.7.7 (empirically determined one week ago: https://hub.docker.com/r/graphistry/graphistry-blazing/tags):

    && /conda/bin/conda install \
        --yes --freeze-installed \
        -c $RAPIDS_CHANNEL -c nvidia -c conda-forge -c defaults \
        rapids=$RAPIDS_VERSION \
        python=$PYTHON_VERSION \

... w/ blazingsql 0.14...

    && time conda install \
        --yes --freeze-installed \
        $BLAZING_CHANNEL \
        $RAPIDS_CHANNEL \
        -c conda-forge \
        -c defaults \
        blazingsql=$BLAZING_VERSION \
        python=$PYTHON_VERSION \
        cudatoolkit=10.0 \

Labels: ? - Needs Triage, bug

All 8 comments

If you're already in a container, why not start from a RAPIDS container and just conda install blazingsql to see if that works?

We're controlling our own container -- this is just to emphasize how clean an env RAPIDS fails to install into

(Working on pasting the error)

(currently retrying by swapping rapids for cudf, cuml, etc. @ 0.14.1 and not including cuxfilter)

@lmeyerov To set the cuda version during conda install for blazingsql 0.14, you want to do something like this:

conda install -c blazingsql/label/cuda10.0 -c blazingsql -c rapidsai -c nvidia -c conda-forge -c defaults blazingsql python=$PYTHON_VERSION

Specifying cudatoolkit=10.0 won't tell blazingsql 0.14 which package to get. We have plans to fix that.
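A minimal sketch of that suggestion as a Dockerfile-style install step (channel names taken from the comment above; the concrete version pins 0.14 / 3.7 here are illustrative placeholders):

```shell
# Sketch, assuming conda is on PATH and the blazingsql channel labels exist as
# described above: the CUDA variant is selected by the channel label
# blazingsql/label/cuda10.0, not by a cudatoolkit=... pin.
conda install --yes \
    -c blazingsql/label/cuda10.0 -c blazingsql \
    -c rapidsai -c nvidia -c conda-forge -c defaults \
    blazingsql=0.14 \
    python=3.7
```

Channel order matters to conda's solver: the cuda10.0 label channel is listed first so its builds take priority over the unlabeled blazingsql channel.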

@williamBlazing that used to be true; see above ;-) The cuspatial 0.14 dependencies have conflicts around gdal, and it looks like cuxfilter has a bit of a mess around pinning as well, though that's less clear

Tested interim fix for rapids 0.14, credit to @kkraus14: downgrade gdal via "gdal>=3.0.2,<3.1.0a0":

So 0.14.1 / py 3.7.7 (or 3.7)

    && /conda/bin/conda create --name rapids python=$PYTHON_VERSION -y \
    && source activate rapids \
    && /conda/bin/conda install \
        --yes --freeze-installed \
        -c $RAPIDS_CHANNEL -c nvidia -c conda-forge -c defaults \
        python=$PYTHON_VERSION \
        rapids=$RAPIDS_VERSION \
        "gdal>=3.0.2,<3.1.0a0" \

While this tested fine, conda still complains about being forced to relax versions, so it's still a bit of a lottery
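For what it's worth, a rough post-install sanity check along these lines can confirm the workaround actually took (package names are the ones discussed in this thread; exact output will vary by environment):

```shell
# Hypothetical sanity check, assuming the `rapids` env created above is active:
# verify the gdal pin landed and that cudf imports cleanly.
conda list | grep -E '^(gdal|cuspatial|cuxfilter|cudf)\b'
python -c "import cudf; print(cudf.__version__)"
```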

I'm going to close this issue as it seems a resolution has been found.

@kkraus14 @mike-wendt @datametrician As this escaped from dev into a stable release, it felt worth a minimal post-mortem.

-- Faster detection & response: filed an issue for daily testing of the last 2 stable releases (= 3 mo), esp. around heavy Python dependencies: https://github.com/rapidsai/cudf/issues/6108

-- Preventing repeats: AFAICT part of the issue is heavy & unstable third-party dependencies: unreliable version pinning of known-good states, and inconsistent versions forcing conda to relax pins. cuspatial & cuxfilter seem especially heavy in taking on many untrustworthy dependencies (?). I don't know enough to file general/specific issues.

