spaCy: Training NER model error "Could not find an optimal move to supervise the parser" - spaCy 2.1.3

Created on 9 Apr 2019 · 10 comments · Source: explosion/spaCy

Was a change made to the training data format? I'm getting the following error when trying to train on data that I used successfully in version 2.0.18 without issue.

(spacy) D:\temp>python ALL.py -m en_core_web_lg -o d:\temp\models
Loaded model 'en_core_web_lg'
Traceback (most recent call last):
  File "ALL.py", line 97, in <module>
    plac.call(main)
  File "D:\Anaconda\envs\spacy\lib\site-packages\plac_core.py", line 328, in call
    cmd, result = parser.consume(arglist)
  File "D:\Anaconda\envs\spacy\lib\site-packages\plac_core.py", line 207, in consume
    return cmd, self.func(*(args + varargs + extraopts), **kwargs)
  File "ALL.py", line 70, in main
    losses=losses)
  File "D:\Anaconda\envs\spacy\lib\site-packages\spacy\language.py", line 452, in update
    proc.update(docs, golds, sgd=get_grads, losses=losses, **kwargs)
  File "nn_parser.pyx", line 413, in spacy.syntax.nn_parser.Parser.update
  File "nn_parser.pyx", line 519, in spacy.syntax.nn_parser.Parser._init_gold_batch
  File "transition_system.pyx", line 86, in spacy.syntax.transition_system.TransitionSystem.get_oracle_sequence
  File "transition_system.pyx", line 148, in spacy.syntax.transition_system.TransitionSystem.set_costs
ValueError: [E024] Could not find an optimal move to supervise the parser. Usually, this means the GoldParse was not correct. For example, are all labels added to the model?

The spaCy version used was 2.1.3.

Example: the first few sentences of the training data

TRAIN_DATA = [(""" Order Number: FC-17-9263 Date Received: 11/16/2017 3:17 PM Date Completed: 11/17/2017 3:49 PM Date Collected: 11/16/2017 12:47 PM """,{'entities':[(58,68,'DATE'),(94,104,'DATE'),(134,144,'DATE')]}),("""Date of procedure: 07/28/2016""",{'entities':[(19,29,'DATE')]}), …

Script used:

#!/usr/bin/env python
# coding: utf8
"""Example of training spaCy's named entity recognizer, starting off with an
existing model or a blank model.

For more details, see the documentation:
* Training: https://spacy.io/usage/training
* NER: https://spacy.io/usage/linguistic-features#named-entities

Compatible with: spaCy v2.0.0+
"""
from __future__ import unicode_literals, print_function

import plac
import random
from pathlib import Path
import spacy
from spacy.util import minibatch, compounding

#result_gpu = spacy.require_gpu()
#print("require_gpu(): ", result_gpu)

# training data
TRAIN_DATA = [(""" Order Number:    FC-17-9263             Date Received:   11/16/2017 3:17 PM Date Completed:  11/17/2017 3:49 PM     Date Collected:  11/16/2017 12:47 PM """,{'entities':[(58,68,'DATE'),(94,104,'DATE'),(134,144,'DATE')]}),("""Date of procedure: 07/28/2016""",{'entities':[(19,29,'DATE')]})]


@plac.annotations(
    model=("Model name. Defaults to blank 'en' model.", "option", "m", str),
    output_dir=("Optional output directory", "option", "o", Path),
    n_iter=("Number of training iterations", "option", "n", int))
def main(model=None, output_dir=None, n_iter=100):
    """Load the model, set up the pipeline and train the entity recognizer."""
    if model is not None:
        nlp = spacy.load(model)  # load existing spaCy model
        print("Loaded model '%s'" % model)
    else:
        nlp = spacy.blank('en')  # create blank Language class
        print("Created blank 'en' model")

    # create the built-in pipeline components and add them to the pipeline
    # nlp.create_pipe works for built-ins that are registered with spaCy
    if 'ner' not in nlp.pipe_names:
        ner = nlp.create_pipe('ner')
        nlp.add_pipe(ner, last=True)
    # otherwise, get it so we can add labels
    else:
        ner = nlp.get_pipe('ner')

    # add labels
    for _, annotations in TRAIN_DATA:
        for ent in annotations.get('entities'):
            ner.add_label(ent[2])

    # get names of other pipes to disable them during training
    other_pipes = [pipe for pipe in nlp.pipe_names if pipe != 'ner']
    with nlp.disable_pipes(*other_pipes):  # only train NER
        optimizer = nlp.begin_training()
        for itn in range(n_iter):
            random.shuffle(TRAIN_DATA)
            losses = {}
            # batch up the examples using spaCy's minibatch
            batches = minibatch(TRAIN_DATA, size=compounding(4., 32., 1.001))
            for batch in batches:
                texts, annotations = zip(*batch)
                nlp.update(
                    texts,  # batch of texts
                    annotations,  # batch of annotations
                    drop=0.5,  # dropout - make it harder to memorise data
                    sgd=optimizer,  # callable to update weights
                    losses=losses)
            print('Losses', losses)

    # test the trained model
    for text, _ in TRAIN_DATA:
        doc = nlp(text)
        print('Entities', [(ent.text, ent.label_) for ent in doc.ents])
        print('Tokens', [(t.text, t.ent_type_, t.ent_iob) for t in doc])

    # save model to output directory
    if output_dir is not None:
        output_dir = Path(output_dir)
        if not output_dir.exists():
            output_dir.mkdir()
        nlp.to_disk(output_dir)
        print("Saved model to", output_dir)

        # test the saved model
        print("Loading from", output_dir)
        nlp2 = spacy.load(output_dir)
        for text, _ in TRAIN_DATA:
            doc = nlp2(text)
            print('Entities', [(ent.text, ent.label_) for ent in doc.ents])
            print('Tokens', [(t.text, t.ent_type_, t.ent_iob) for t in doc])


if __name__ == '__main__':
    plac.call(main)


All 10 comments

There shouldn't be any difference in the data format (which is really just character offsets in this case). But one potential explanation could be the following bug fix:

  • Fix issue #2870: Make it illegal for the entity recognizer to predict whitespace tokens as B, L or U.

It looks like your texts have a lot of leading/trailing whitespace characters and tokens, so maybe you have entity span annotations that start or end with whitespace?
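
A quick way to check is something along these lines (a rough sketch, assuming your data is in the same TRAIN_DATA format as in your script):

for text, annotations in TRAIN_DATA:
    for start, end, label in annotations['entities']:
        span = text[start:end]
        # flag any annotated span with leading/trailing whitespace
        if span != span.strip():
            print('Offending span:', (start, end, label), repr(span))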

Yes, it's true: I found cases where the annotations contain leading or trailing whitespace. For example:

The segmentation module I'm using breaks this text

" Electronically signed Sep 05, 2003 Lname A. Fname, M.D"

into the following two sentences

SentenceText: "     Electronically signed Sep 05, 2003 Lname A"
SentenceText: " Fname, M.D"

But this is not because they were annotated like that; I was careful to annotate everything so there were no leading/trailing whitespace characters. The entity was annotated correctly as "Lname A. Fname".

But since the entity now spans two sentences (due to poor segmentation), I had to preserve any leading/trailing whitespace in order to correctly convert the character positions from the original annotation into positions in the segmented text. I took the end index of the first part to be the end index of the first sentence (which may contain whitespace at the end), and the begin index of the next part to be the beginning of the next sentence (which may also start with whitespace, depending on where it was segmented).
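
For example (offsets and names made up for illustration, not taken from my real data):

sent2 = " Fname, M.D"    # the second sentence keeps its leading space
start, end = 0, 6        # tail of "Lname A. Fname" remapped onto sent2
print(repr(sent2[start:end]))  # ' Fname' -- begins on a whitespace character
print(repr(sent2[1:end]))      # 'Fname' -- the trimmed span would be valid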

So anyway, the issue is the leading whitespace. For now I will correct my code to adjust the indices so that leading/trailing whitespace is not included.

Possibly related to #3527?

@Zerthick Yep, the example given there definitely looks whitespace-related, too. Merged the two threads!

@erotavlas Thanks for the detailed analysis – this makes a lot of sense.

Also, more generally: spaCy should definitely raise a better and less cryptic error here and also output the annotation in question, if possible. And it should mention the whitespace constraint, since I suspect this will be the most common explanation.

The (new and experimental) debug-data command, which takes JSON-formatted input, already includes a check for whitespace in entities:

https://github.com/explosion/spaCy/blob/86e4b68aa9d2e658fc7b1e5d41d0ec1f5f3d3a87/spacy/cli/debug_data.py#L363-L365
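
(For reference, and assuming I remember the invocation correctly since the command is still experimental: it's run as python -m spacy debug-data en train.json dev.json, where the JSON paths are placeholders for your own converted training and development data.)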

Is there any estimate as to when this could be fixed, or a suggestion for a workaround in the meantime? The new version of spaCy is essentially unusable for my projects until I find a way around this. Moving forward, if it is indeed the whitespace issue, it would be nice to have a data command that automatically fixes the indices for you.

In case anyone is stuck with this, here's a quick and dirty script I wrote to trim the spans:

import re


def trim_entity_spans(data: list) -> list:
    """Removes leading and trailing white spaces from entity spans.

    Args:
        data (list): The data to be cleaned in spaCy JSON format.

    Returns:
        list: The cleaned data.
    """
    invalid_span_tokens = re.compile(r'\s')

    cleaned_data = []
    for text, annotations in data:
        entities = annotations['entities']
        valid_entities = []
        for start, end, label in entities:
            valid_start = start
            valid_end = end
            while valid_start < len(text) and invalid_span_tokens.match(
                    text[valid_start]):
                valid_start += 1
            while valid_end > 1 and invalid_span_tokens.match(
                    text[valid_end - 1]):
                valid_end -= 1
            valid_entities.append([valid_start, valid_end, label])
        cleaned_data.append([text, {'entities': valid_entities}])

    return cleaned_data
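
Usage is just (reassigning TRAIN_DATA before training, same format as the script above):

TRAIN_DATA = trim_entity_spans(TRAIN_DATA)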

@Zerthick Thank you!! That totally fixed it for me!

I'm facing the same problem. I also tried @Zerthick's function, but I am getting the same error.

@Zerthick thanks a lot. This worked like a charm.

