I am trying to put the Flair framework through its paces on my machine, but my CUDA stack is currently not working. As a result, when I try even the example usage in the README,
from flair.data import Sentence
from flair.models import SequenceTagger
# make a sentence
sentence = Sentence('I love Berlin .')
# load the NER tagger
tagger = SequenceTagger.load('ner')
# run NER over sentence
tagger.predict(sentence)
CUDA has a heart attack and dies.
Is there a flag or configuration value I can set to force the Flair methods to use the CPU? I know that some modules won't run on the CPU (Hugging Face's implementation of BERT among them), but I would think that most functionality would work without GPU support, just slowly.
Patience I have. A functioning GPU... not yet.
Hi @prematurelyoptimized ,
try setting the following environment variable
export CUDA_VISIBLE_DEVICES=""
before calling your Python script. That should work :)
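If exporting the variable in your shell isn't convenient (e.g. in a notebook), here is a minimal sketch of doing the same from Python; it just has to run before anything initializes CUDA, so put it above the torch/flair imports:
import os
# Hide all GPUs from the CUDA runtime. This must happen before torch
# (or flair, which imports torch) initializes CUDA.
os.environ["CUDA_VISIBLE_DEVICES"] = ""
import torch
print(torch.cuda.is_available())  # False: no CUDA devices are visible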
Hello @prematurelyoptimized , another option is to set flair.device at the beginning of your Python script, like this:
import torch, flair
# first, set your device
flair.device = torch.device("cpu")
# then do your normal code
from flair.data import Sentence
from flair.embeddings import FlairEmbeddings
FlairEmbeddings('news-forward').embed(Sentence('Hello World'))
...
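For example, with flair.device set first, the NER snippet from the original post should then run entirely on the CPU; to_tagged_string is the same call the README uses to print the result:
import torch, flair
flair.device = torch.device("cpu")
from flair.data import Sentence
from flair.models import SequenceTagger
# same README example as above, now CPU-only
sentence = Sentence('I love Berlin .')
tagger = SequenceTagger.load('ner')  # the loaded model runs on flair.device (cpu here)
tagger.predict(sentence)
print(sentence.to_tagged_string())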
Since I'm running in a Jupyter notebook, Alan's approach is simpler to implement and works perfectly. Good to know about the environment variable though. Thank you much.