In PR https://github.com/RaRe-Technologies/gensim/pull/982, the example notebook ldaseqmodel.ipynb was converted to a Python file, ldaseqmodel.py, so that it could be run as part of Travis continuous integration via sphinx-gallery and checked for errors/inconsistencies. However, as can be seen at line 1613 of the Travis output, ldaseqmodel.py takes over 45 minutes to execute, which is more than the time Travis allocates for each build, so the build fails with the error: The job exceeded the maximum time limit for jobs, and has been terminated.
@bhargavvader. Help requested.
This could be fixed by using a much smaller sample corpus, but would that mean the notebook would have to be changed accordingly as well, along with all of its examples?
That is Lev's call.
@tmylk
A smaller corpus would be better for a notebook. A link to a bigger corpus should be mentioned but not used.
Agreed, but in DTM a bigger corpus better demonstrates the evolution of topics across time-slices. One other possibility is loading a pre-trained ldaseq model trained on the larger dataset, and commenting out the code that actually trains the model.
This way users can still see how to train the model, but the code actually being run will just load a model.
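A minimal sketch of what that could look like, assuming the pre-trained model was saved earlier with `LdaSeqModel.save()`; the file name `ldaseq_news.model` and the `corpus`/`dictionary`/`time_slice` variables are hypothetical placeholders for whatever the notebook actually uses:

```python
from gensim.models import LdaSeqModel

# Training step, left commented out so CI only pays the cost of loading.
# Readers can uncomment this to reproduce the model themselves:
# ldaseq = LdaSeqModel(corpus=corpus, id2word=dictionary,
#                      time_slice=time_slice, num_topics=5)
# ldaseq.save('ldaseq_news.model')

# Load the pre-trained model instead of re-training it.
ldaseq = LdaSeqModel.load('ldaseq_news.model')

# Inspect how topic 0 evolves across the time slices.
print(ldaseq.print_topic_times(topic=0))
```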
Loading a pre-trained model is a great idea.
Okay, will do this when I get some time!
Irrelevant now, because we no longer run ipynb files in CI.