I am not sure which model my code is using.
import spacy
nlp = spacy.load('en_core_web_sm')  # small English model
doc = nlp(u"Access to documents independently of time and space")
spacy.displacy.serve(doc, style='dep')  # serves the dependency visualization locally
and
import spacy
nlp = spacy.load('en_core_web_lg')  # large English model
doc = nlp(u"Access to documents independently of time and space")
spacy.displacy.serve(doc, style='dep')
are giving the same result (the same dependency arcs). I expected them to be different.
The code you're using is correct.
Why do you think the results should be different? Ideally, both en_core_web_sm and en_core_web_lg give the correct, identical dependency parse. In practice you may of course notice some differences, as they have been trained slightly differently and obtain different accuracy results. But that doesn't mean they can't still make the same predictions on the same sentence.
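If you want to check this programmatically rather than by eye, here's a minimal sketch (assuming both models are installed) that compares the head and dependency label each model assigns to every token:

import spacy

# Assumes both models have been downloaded, e.g. via
# python -m spacy download en_core_web_sm
nlp_sm = spacy.load('en_core_web_sm')
nlp_lg = spacy.load('en_core_web_lg')

text = u"Access to documents independently of time and space"
doc_sm = nlp_sm(text)
doc_lg = nlp_lg(text)

# Both pipelines use the same tokenizer rules, so the docs
# align token by token
for tok_sm, tok_lg in zip(doc_sm, doc_lg):
    same = (tok_sm.dep_ == tok_lg.dep_) and (tok_sm.head.i == tok_lg.head.i)
    print(tok_sm.text, tok_sm.dep_, tok_lg.dep_, 'same' if same else 'DIFFERENT')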
In fact, the difference is on explosion.ai.
For example, the explosion.ai demo shows an arc labeled xcomp when I use the large model. The example sentence is:
Access to documents independently of time and space
But I am not able to see this xcomp arc on my local machine. Can you please share the options the explosion.ai demo uses? How can I find the arc labeled xcomp from my code?
Any idea?
Do I understand correctly that your question is why you're not seeing the same results as in the documentation?
Different models may differ a little in their results, and the results in the documentation are probably from an older model which is slightly different from the one you are using in your code.
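For what it's worth, if the difference is purely visual, displacy.serve also accepts an options dict that changes how the arcs are rendered. A minimal sketch; whether the online demo enables these particular settings is an assumption on my part:

import spacy
from spacy import displacy

nlp = spacy.load('en_core_web_lg')
doc = nlp(u"Access to documents independently of time and space")

# 'compact' and 'collapse_phrases' are documented displacy options;
# the exact settings used by the explosion.ai demo are not known here
options = {'compact': True, 'collapse_phrases': True}
displacy.serve(doc, style='dep', options=options)

Note that collapse_phrases merges noun chunks into single tokens, which changes which arcs are drawn, so toggling it can make two otherwise identical parses look different.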
If you're still unsure how to navigate the tree, you can find more information about navigating the parse tree in code here.
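As a concrete sketch of what that looks like (the attributes below are the standard Token API, nothing specific to this issue), you can walk the parse and look for the xcomp label directly, without the visualizer:

import spacy

nlp = spacy.load('en_core_web_lg')
doc = nlp(u"Access to documents independently of time and space")

# Every arc in the displacy chart corresponds to one token:
# token.dep_ is the arc label, token.head is the word it attaches to
for token in doc:
    print(token.text, token.dep_, '->', token.head.text)

# Pick out any xcomp arcs explicitly
xcomp = [(t.text, t.head.text) for t in doc if t.dep_ == 'xcomp']
print('xcomp arcs:', xcomp)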