I use Linux at work, but at home I have Windows and would like to be able to run this project on my main machine with conda. Currently, when I run:
```
conda env create --name pygdf_dev --file conda_environments/testing_py35.yml
```
I am seeing this error:
```
NoPackagesFoundError: Package missing in current win-64 channels:
  - libgdf_cffi >=0.1.0a1.dev
```
I am hoping this could be easily added to the win-64 channels?
We are only doing CI testing and building on Linux. The currently tested platforms are Linux and OSX (my development machine). The Linux CI builds are uploaded from TravisCI. To provide Windows CI builds, we could use AppVeyor. However, we are planning to address Windows support later, once we get more of the basic features done.
Hey - really fantastic project. I was just wondering if there was any update on Windows support?
@sklam Windows support is extremely needed.
Do you guys have Windows support now?
@SteffenRoe I asked that question 2 weeks ago in the RAPIDS-GoAi slack community, and Keith Kraus and Mark Harris said that 1. if/when Windows support is added, that conversation would happen here, and 2. it's a big undertaking and they'd have to get Windows dev boxes. I'm also excited/anxious to try it out, but I think for now the best we can do (or at least what I've done) is upvote this request (to show interest in numbers) and subscribe to it (and remain hopeful). HTH
What's the ETA for Windows support?
Would also like to know...
There are currently no plans for Windows support. If someone would like to try it and contribute fixes to enable Windows support, we would be happy to support them.
Hi, speak of the devil: I am trying to compile RAPIDS cuDF on Windows 10 and I ran into some trouble. First of all, my configuration:
When I run `cmake .. -DCMAKE_CXX11_ABI=ON` inside the newly created build folder of cudf, I get the following error:
```
Determining if the CUDA compiler works failed with the following output:
Change Dir: /cygdrive/c/cudf/cpp/build/CMakeFiles/CMakeTmp

Run Build Command(s): /usr/bin/make.exe cmTC_bd22e/fast
/usr/bin/make -f CMakeFiles/cmTC_bd22e.dir/build.make CMakeFiles/cmTC_bd22e.dir/build
make[1]: Entering directory 'C:/cygdrive/c/cudf/cpp/build/CMakeFiles/CMakeTmp'
Building CUDA object CMakeFiles/cmTC_bd22e.dir/main.cu.o
"/cygdrive/c/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.2/bin/nvcc.exe" -x cu -c /cygdrive/c/cudf/cpp/build/CMakeFiles/CMakeTmp/main.cu -o CMakeFiles/cmTC_bd22e.dir/main.cu.o
c1xx: fatal error C1083: Cannot open source file: 'C:/cygdrive/c/cudf/cpp/build/CMakeFiles/CMakeTmp/main.cu': No such file or directory
main.cu
make[1]: *** [CMakeFiles/cmTC_bd22e.dir/build.make:66: CMakeFiles/cmTC_bd22e.dir/main.cu.o] Error 2
make[1]: Leaving directory 'C:/cygdrive/c/cudf/cpp/build/CMakeFiles/CMakeTmp'
make: *** [Makefile:121: cmTC_bd22e/fast] Error 2
```
... Well, I am French, sorry for the language barrier. Basically, from what I understand, during configuration nvcc has to compile a test file that CMake generates at _cudf/build/CMakeFiles/CMakeTmp/main.cu_, but it does not find it. I checked, and the folder is indeed empty...
Would someone please help me ?
Thanks in advance,
I'm unfortunately not familiar enough with Cygwin to be able to help here, but main.cu is essentially a CMake test file used to ensure that the CUDA compiler is working as expected before actually trying to tackle something in the real project.
@eidalex were you able to resolve the main.cu error?
> cudf doesn't support windows

Am I right that there is no way to use cuDF on Windows?
What's the status on Windows support?
As of now cuDF does not support Windows, and there are currently no plans to support it. If WSL supported GPUs and CUDA, that would be ideal for us, as cuDF could "just work".
Unfortunately we do not have the infrastructure or development expertise to support Windows, but if someone would like to explore compiling and running on Windows, we'd be more than happy to support them.
@grv1207 I am sorry, I did not have time to test again... I'll get back to it someday, but right now I have given up on the idea of using it on Windows...
Any update regarding Windows support?
There are still no plans to support Windows at this time.
Since information is now public, our plan for Windows support is to rely on WSL 2.0 which will support running CUDA and GPU computing.
You can see the announcement blog from Microsoft here: https://devblogs.microsoft.com/commandline/the-windows-subsystem-for-linux-build-2020-summary/#wsl-gpu
RAPIDS was my very first thought upon seeing the WSL 2.0 announcement earlier today :)
@kkraus14 any instructions on how to proceed to get this to work with CUDA and conda? Besides installing the update, what else do I need to do?
I don't believe the update is publicly available quite yet; you can track it here: https://developer.nvidia.com/cuda/wsl. Once it's available, you'll essentially have a full-fledged Linux installation, so you can just use the normal conda installation commands that you would on Linux.
Chiming in here to show interest in Windows support too. I hope the update comes soon.
I believe the public beta is available now; instructions for setting up WSL 2 with CUDA support are available here: https://developer.nvidia.com/cuda/wsl
Once that's working, you have a full-fledged Linux environment within Windows in which you can install and use RAPIDS.
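For example, once WSL 2 with CUDA is set up, installation should look the same as on native Linux. This is only a sketch: the package versions and CUDA version below are illustrative assumptions, so check the RAPIDS release selector for the exact current command:

```shell
# Inside the WSL 2 distro: create a conda environment with cuDF from the
# rapidsai channel, exactly as on a native Linux box.
# Versions shown here are illustrative only.
conda create -n rapids -c rapidsai -c nvidia -c conda-forge \
    cudf python=3.7 cudatoolkit=10.2

# Then activate it and use cuDF as usual.
conda activate rapids
```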
I did just this, but it appears the CUDA JIT compiler is not included at this point. See #5 of the limitations here. I ran into this issue running the average-tip example for cuDF.
The CUDA JIT compiler should not be needed to run cuDF. We explicitly build for supported architectures and do not rely on runtime PTX compilation.
How did this issue manifest for you?
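As background on "explicitly build for supported architectures": it means embedding compiled machine code (SASS) for each real GPU architecture into the binary, rather than shipping only PTX to be JIT-compiled at runtime. A rough sketch of what that looks like with nvcc (the file names and architecture list are illustrative, not cuDF's actual build flags):

```shell
# Embed SASS for specific real architectures so no runtime PTX JIT is needed
# on those GPUs. (By contrast, a flag like
# -gencode arch=compute_70,code=compute_70 would embed PTX instead, which
# *does* require JIT compilation at load time on newer GPUs.)
nvcc -gencode arch=compute_60,code=sm_60 \
     -gencode arch=compute_70,code=sm_70 \
     -c kernel.cu -o kernel.o
```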
@jrhemstad nvRTC isn't supported in WSL2 CUDA yet it seems like which causes Jitify to blow up.
Here's what I did and the trace:
```python
Python 3.7.7 (default, May 7 2020, 21:25:33)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.16.1 -- An enhanced Interactive Python. Type '?' for help.

In [1]: import cudf, io, requests
   ...: from io import StringIO
   ...:
   ...: url = "https://github.com/plotly/datasets/raw/master/tips.csv"
   ...: content = requests.get(url).content.decode('utf-8')
   ...:
   ...: tips_df = cudf.read_csv(StringIO(content))
   ...: tips_df['tip_percentage'] = tips_df['tip'] / tips_df['total_bill'] * 100
   ...:
   ...: # display average tip by dining party size
   ...: print(tips_df.groupby('size').tip_percentage.mean())
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-1-31e0c4338384> in <module>
      6
      7 tips_df = cudf.read_csv(StringIO(content))
----> 8 tips_df['tip_percentage'] = tips_df['tip'] / tips_df['total_bill'] * 100
      9
     10 # display average tip by dining party size

/mnt/c/Users/steve/dev/rapids/conda-env-37-ubuntu/lib/python3.7/site-packages/cudf/core/series.py in __truediv__(self, other)
   1238
   1239     def __truediv__(self, other):
-> 1240         return self._binaryop(other, "truediv")
   1241
   1242     def rtruediv(self, other, fill_value=None, axis=0):

/mnt/c/Users/steve/dev/rapids/conda-env-37-ubuntu/lib/python3.7/contextlib.py in inner(*args, **kwds)
     72     def inner(*args, **kwds):
     73         with self._recreate_cm():
---> 74             return func(*args, **kwds)
     75     return inner
     76

/mnt/c/Users/steve/dev/rapids/conda-env-37-ubuntu/lib/python3.7/site-packages/cudf/core/series.py in _binaryop(self, other, fn, fill_value, reflect)
   1000             rhs = rhs.fillna(fill_value)
   1001
-> 1002         outcol = lhs._column.binary_operator(fn, rhs, reflect=reflect)
   1003         result = lhs._copy_construct(data=outcol, name=result_name)
   1004         return result

/mnt/c/Users/steve/dev/rapids/conda-env-37-ubuntu/lib/python3.7/site-packages/cudf/core/column/numerical.py in binary_operator(self, binop, rhs, reflect)
     93             raise TypeError(msg.format(binop, type(self), type(rhs)))
     94         return _numeric_column_binop(
---> 95             lhs=self, rhs=rhs, op=binop, out_dtype=out_dtype, reflect=reflect
     96         )
     97

/mnt/c/Users/steve/dev/rapids/conda-env-37-ubuntu/lib/python3.7/contextlib.py in inner(*args, **kwds)
     72     def inner(*args, **kwds):
     73         with self._recreate_cm():
---> 74             return func(*args, **kwds)
     75     return inner
     76

/mnt/c/Users/steve/dev/rapids/conda-env-37-ubuntu/lib/python3.7/site-packages/cudf/core/column/numerical.py in _numeric_column_binop(lhs, rhs, op, out_dtype, reflect)
    432         out_dtype = "bool"
    433
--> 434     out = libcudf.binaryop.binaryop(lhs, rhs, op, out_dtype)
    435
    436     if is_op_comparison:

cudf/_lib/binaryop.pyx in cudf._lib.binaryop.binaryop()

cudf/_lib/binaryop.pyx in cudf._lib.binaryop.binaryop_v_v()

RuntimeError: CUDA_ERROR_JIT_COMPILER_NOT_FOUND
```
Please let me know if there's any further information you'd like.
> @jrhemstad nvRTC isn't supported in WSL2 CUDA yet it seems like which causes Jitify to blow up.

Ah, okay. That's a different statement than what is in the docs:

> PTX JIT is not supported (so PTX code will not be loaded from CUDA binaries for runtime compilation).
@stevemarin I misunderstood what the docs were saying about the restriction. You are indeed hitting this limitation.
Just to update folks following this issue, the latest CUDA WSL beta now supports PTX JIT compilation so everything in cuDF (single GPU) should work in it. You can find the updated install instructions here: https://docs.nvidia.com/cuda/wsl-user-guide/index.html
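For anyone who hit the earlier error, a quick way to check whether the JIT limitation is gone in your WSL 2 setup is to re-run a binary op like the one that previously failed. This is only a sketch and assumes cuDF is already installed in the active conda environment:

```shell
# Re-run the kind of operation that previously raised
# CUDA_ERROR_JIT_COMPILER_NOT_FOUND. If PTX JIT now works in your
# WSL 2 CUDA setup, this prints the result series instead of raising.
python -c "import cudf; print(cudf.Series([1.0, 2.0]) / cudf.Series([2.0, 4.0]))"
```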
I'm going to leave this issue open for the community to continue discussing native Windows support.
Does cuDF WSL support require a special developer preview version of Windows? Or does it work with any WSL2 instance in Windows?
See here for requirements: https://docs.nvidia.com/cuda/wsl-user-guide/index.html#getting-started
The relevant part from the link @kkraus14 posted is:

> Note: Ensure that you install Build version 20145 or higher. You can check your build version number by running `winver` via the Windows Run command.
I'd rather not run a version from Microsoft's "Insider Program", so based on previous Windows 10 releases (https://docs.microsoft.com/en-us/windows/release-information/), I'm HOPING this coming May (2021) we'll see a version of Windows that meets the Build 20145-or-higher requirement without needing to run an Insider Program build.
> Does cuDF WSL support require a special developer preview version of Windows? Or does it work with any WSL2 instance in Windows?

> See here for requirements: https://docs.nvidia.com/cuda/wsl-user-guide/index.html#getting-started
Thanks. I am getting an AWS EC2 provisioned so my organization can use cuDF. I can't find anything that suggests that I can run RAPIDS on the Amazon Linux distribution. Can you confirm whether I can use RAPIDS on a machine running Amazon Linux?
Yes, RAPIDS works on every cloud. https://rapids.ai/cloud
Yes, that will work nicely: https://rapids.ai/cloud.html#AWS-EC2