https://github.com/explosion/spaCy/blob/master/examples/training/train_parser.py
It worked fine in v2.0.16, but after upgrading to v2.3.0 I ran into the following problem: on every iteration the loss is zero, and the resulting model assigns the '_SP' tag to every token.
I ran across this bug yesterday (see #5641) and it should be fixed in the next release, v2.3.1.
Sorry, my first reply was a bit curt. Thanks for the report!
The only workaround in v2.3.0 is to use the internal spaCy JSON training format and train from a GoldCorpus, instead of the simplified training format used in the example scripts. With the spaCy JSON format you can use the train CLI, which is also what we use internally. The tagger then reads the tags from the training data (instead of relying on tagger.add_label) and doesn't hit this bug when initializing the tag map. The main difference is that tagger.begin_training() goes through all the training examples to look for which tags to add:
nlp.begin_training(lambda: gold_corpus.train_tuples)
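As a rough sketch of that workaround, assuming your annotations can first be converted to the spaCy JSON format (the file names train.conllu, train.json, dev.json, and the output directory ./model are placeholders, not from this thread):

```shell
# Convert training data to spaCy's internal JSON training format
# (spacy convert supports several input formats, e.g. CoNLL-U)
python -m spacy convert train.conllu . --converter conllu

# Train via the CLI, which reads the tag map from the training data
# rather than calling tagger.add_label, so it avoids this bug in v2.3.0
python -m spacy train en ./model train.json dev.json --pipeline tagger
```

The exact convert options depend on your source annotation format; see `python -m spacy convert --help` for the available converters.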
We should probably try to have some automatic checks that run on the example scripts before each release to catch bugs like this sooner.