Rasa: how to know whether a CUDA GPU is used while training?

Created on 16 Mar 2018 · 5 comments · Source: RasaHQ/rasa

Rasa NLU version (e.g. 0.7.3): 0.11.3

Used backend / pipeline: spacy_sklearn

Operating system (windows, osx, ...): Ubuntu 16.04

Issue:
How do I know whether my CUDA GPU is being used while training?
I installed as follows:

pip install rasa_nlu
pip install rasa_nlu[spacy]
python -m spacy download en_core_web_lg

There was no need to run python -m spacy link en_core_web_lg en (I did run it after a while and a linking success message showed up, but the result was the same as when I had not run it at all).

I read https://spacy.io/usage/#gpu
I ran python -c "import thinc.neural.gpu_ops" and it produced no error messages.
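The import check above can be wrapped in a small sketch that reports the result explicitly instead of relying on silence. The module path thinc.neural.gpu_ops matches the thinc 6.x releases mentioned in this issue; a clean import (no ImportError) is the success condition the spaCy GPU docs describe.

```python
def cuda_ops_available():
    """Return True when thinc's compiled CUDA kernels can be imported.

    thinc.neural.gpu_ops is the thinc 6.x module built when installing
    with CUDA support (e.g. CUDA9=1 pip install thinc); on a CPU-only
    install the import raises ImportError.
    """
    try:
        import thinc.neural.gpu_ops  # noqa: F401
        return True
    except ImportError:
        return False


print("CUDA ops available:", cuda_ops_available())
```

This only tells you that the GPU kernels are importable, not that a given pipeline component actually uses them during training.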

Is it using my GPU, which supports CUDA 9.0? (To be sure, I also ran CUDA9=1 pip install thinc==6.10.2 after a while (link).)

How can I figure it out?

I used nvidia-smi, but training finishes too quickly to check.
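When a single nvidia-smi call is too coarse for a short training run, one option is to poll it in a loop from a second terminal while training runs in the first. A minimal sketch, assuming nvidia-smi is on PATH (the function name and sampling parameters here are illustrative, not part of Rasa or spaCy):

```python
import shutil
import subprocess
import time


def sample_gpu_utilization(samples=5, interval=0.5):
    """Poll nvidia-smi a few times and collect utilization/memory readings.

    Returns a list of CSV strings like "37 %, 1024 MiB", or None when
    nvidia-smi is not on PATH (e.g. no NVIDIA driver installed).
    """
    if shutil.which("nvidia-smi") is None:
        return None
    readings = []
    for _ in range(samples):
        out = subprocess.run(
            ["nvidia-smi",
             "--query-gpu=utilization.gpu,memory.used",
             "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        )
        readings.append(out.stdout.strip())
        time.sleep(interval)
    return readings


print(sample_gpu_utilization())
```

If every reading stays at 0 % utilization for the whole run, training is not touching the GPU. The same effect can be had from the shell with nvidia-smi's built-in loop flag, e.g. nvidia-smi -l 1.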

Help me please.


All 5 comments

None of the Rasa NLU pipeline components can utilize your GPU. CPU only.

So there is no chance of GPU support?

@tmbo that's a question for you, though my understanding is it's a no.

Right now, there is no component that would benefit from a GPU. We are working on TensorFlow / Keras models though; training those can be sped up using GPUs, so stay tuned.

Can Rasa use multiple cores, using TensorFlow settings like the following?

intra_op_parallelism_threads: Nodes that can use multiple threads to parallelize their execution will schedule the individual pieces into this pool.
inter_op_parallelism_threads: All ready nodes are scheduled in this pool.
