Hi everyone,
First off, many thanks for providing such an awesome module! I am using gensim to do topic modeling with LDA and encountered the following bug/issue. I have already read about it on the mailing list, but apparently no issue has been created on GitHub.
After training an LDA model with the gensim mallet wrapper I converted the model to a native gensim LDA model via the malletmodel2ldamodel function provided with the wrapper. Before and after the conversion the topic word distributions are quite different. The ldamallet version returns comprehensible topics with sensible weights, whereas the topic word distribution after conversion is nearly uniform, leading to topics without a clear focus.
I am assuming that the resulting topics are supposed to be at least somewhat similar before and after conversion. Am I doing something wrong? What could be causing this behaviour?
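One way to make "at least somewhat similar" measurable is to compare the top-word sets per topic before and after conversion, e.g. with Jaccard overlap. A minimal sketch (the toy word lists below are illustrative, not actual model output):

```python
# Illustrative check: quantify topic similarity between two models via the
# Jaccard overlap of their top-word sets (toy lists stand in for real output).

def jaccard(a, b):
    """Jaccard similarity of two collections of words."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)

def best_topic_overlaps(topics_a, topics_b):
    """For each topic in topics_a, the best Jaccard match among topics_b."""
    return [max(jaccard(ta, tb) for tb in topics_b) for ta in topics_a]

# Toy example: one well-matched topic, one with no overlap at all.
mallet_topics = [["god", "jesus", "church"], ["space", "nasa", "orbit"]]
converted_topics = [["god", "jesus", "bible"], ["tribunal", "damper", "unfurl"]]

print(best_topic_overlaps(mallet_topics, converted_topics))  # [0.5, 0.0]
```

If the conversion were working, every topic should find a reasonably high-overlap partner; values near zero across the board indicate the behaviour reported here.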
```python
import gensim
from sklearn.datasets import fetch_20newsgroups

# select five quite distinct categories from the 20 newsgroups
cat = ['soc.religion.christian', 'comp.graphics', 'rec.motorcycles',
       'sci.space', 'talk.politics.guns']

# keep and use only the main text
newsgroups_train = fetch_20newsgroups(subset='all', categories=cat,
                                      remove=('headers', 'footers', 'quotes'))
tokenized = [gensim.utils.simple_preprocess(doc) for doc in newsgroups_train.data]
dictionary = gensim.corpora.Dictionary(tokenized)
corpus = [dictionary.doc2bow(text) for text in tokenized]

lda_mallet = gensim.models.wrappers.ldamallet.LdaMallet(
    'c:/mallet/bin/mallet', corpus=corpus,
    num_topics=5, id2word=dictionary, iterations=1000)
lda_gensim = gensim.models.wrappers.ldamallet.malletmodel2ldamodel(
    lda_mallet, iterations=1000)

for topic in lda_mallet.show_topics(num_topics=5, num_words=10):
    print(topic)
for topic in lda_gensim.show_topics(num_topics=5, num_words=10):
    print(topic)
```
These are the results I get from the mallet wrapper using `lda_mallet.show_topics(num_topics=5, num_words=10)`. They are what one would expect given the chosen categories from 20newsgroups:

```
(0, '0.021*"god" + 0.009*"people" + 0.007*"jesus" + 0.007*"church" + 0.006*"christ" + 0.005*"life" + 0.005*"christian" + 0.005*"bible" + 0.004*"christians" + 0.004*"man"')
(1, '0.014*"don" + 0.011*"ve" + 0.009*"good" + 0.008*"bike" + 0.007*"time" + 0.007*"back" + 0.007*"make" + 0.006*"ll" + 0.006*"problem" + 0.006*"thing"')
(2, '0.017*"space" + 0.006*"nasa" + 0.006*"earth" + 0.005*"system" + 0.005*"launch" + 0.004*"shuttle" + 0.004*"orbit" + 0.003*"years" + 0.003*"mission" + 0.003*"moon"')
(3, '0.012*"people" + 0.011*"gun" + 0.005*"guns" + 0.005*"government" + 0.005*"state" + 0.005*"law" + 0.005*"fire" + 0.005*"control" + 0.004*"don" + 0.004*"fbi"')
(4, '0.013*"image" + 0.009*"graphics" + 0.008*"jpeg" + 0.007*"file" + 0.006*"images" + 0.006*"data" + 0.006*"bit" + 0.006*"software" + 0.006*"ftp" + 0.006*"mail"')
```
These are the results I get from the converted native gensim model using `lda_gensim.show_topics(num_topics=5, num_words=10)`. The word probabilities are all very low and not very distinctive, resulting in mostly incoherent topics:

```
(0, '0.000*"tribunal" + 0.000*"insruance" + 0.000*"damper" + 0.000*"unfurl" + 0.000*"urinalisys" + 0.000*"saturnation" + 0.000*"stupider" + 0.000*"improved" + 0.000*"waltons" + 0.000*"t_ng"')
(1, '0.000*"ott" + 0.000*"raved" + 0.000*"warped" + 0.000*"onesies" + 0.000*"speculating" + 0.000*"irrigate" + 0.000*"bodies" + 0.000*"inherant" + 0.000*"illustrations" + 0.000*"filler"')
(2, '0.000*"datasets" + 0.000*"addiction" + 0.000*"lr" + 0.000*"overturning" + 0.000*"supertrapp" + 0.000*"collision" + 0.000*"nl__" + 0.000*"someone" + 0.000*"switch" + 0.000*"pirate"')
(3, '0.000*"inbetweens" + 0.000*"hostname" + 0.000*"obsevatory" + 0.000*"dscharge" + 0.000*"ecclesiates" + 0.000*"drills" + 0.000*"ranching" + 0.000*"metz" + 0.000*"omnivorous" + 0.000*"normals"')
(4, '0.000*"uad" + 0.000*"undecidable" + 0.000*"eroded" + 0.000*"summarized" + 0.000*"reposition" + 0.000*"sttod" + 0.000*"sanctas" + 0.000*"broadest" + 0.000*"inception" + 0.000*"turntable"')
```
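One way to make "nearly uniform" precise is the entropy of each topic's word distribution: a focused topic has entropy well below log(V), a flat one sits near it. A small illustrative sketch with numpy (toy distributions, not the actual model arrays):

```python
import numpy as np

def normalized_entropy(p):
    """Entropy of distribution p divided by its maximum, log(len(p)).
    1.0 means perfectly uniform; values near 0 mean highly peaked."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    nz = p[p > 0]                      # ignore zero-probability entries
    return float(-(nz * np.log(nz)).sum() / np.log(len(p)))

peaked = [0.7, 0.1, 0.1, 0.05, 0.05]   # looks like a coherent topic
flat = [0.2] * 5                       # looks like the broken converted output

print(normalized_entropy(peaked))      # well below 1.0
print(normalized_entropy(flat))        # exactly 1.0
```

Running this per topic row of the converted model's topic-word matrix would quantify how close to uniform the broken output really is.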
Thanks in advance for any help! Cheers,
Wolfgang
I am also having this (or a related) problem with gensim 3.1.
I am trying now with gensim 3.5 and I will update if the issue still occurs.
I feel this bug should be fixable.
I tested with gensim 3.5 and encountered the same problem. This essentially makes `malletmodel2ldamodel` worthless.
@Wolfi-101 thanks for the report, issue reproduced with gensim==3.5.0 :+1:
Any news on this? I switched to using mallet for a study I'm doing but would still like to use pyLDAvis for consistency with previous work. I'm stuck with either

```
AttributeError: 'LdaMallet' object has no attribute 'inference'
```

or

`gensim.models.wrappers.ldamallet.malletmodel2ldamodel()`

returning random terms in the topics.

Using gensim 3.5, mallet 2.0.8.
@mikeyearworth this looks unrelated to the current issue; can you please provide a full code example for reproducing your error (with all needed data, of course)?
```python
model = gensim.models.wrappers.LdaMallet('/opt/local/bin/mallet', corpus=mikeycorpus, num_topics=num_topics, id2word=mikeydictionary, workers=3)
data = pyLDAvis.gensim.prepare(model, mikeycorpus, mikeydictionary, mds='pcoa')
pyLDAvis.display(data)
```
```
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-1-83610b8776d0> in <module>()
    130
    131 model = gensim.models.wrappers.LdaMallet('/opt/local/bin/mallet', corpus=mikeycorpus, num_topics=num_topics, id2word=mikeydictionary, workers=3)
--> 132 data = pyLDAvis.gensim.prepare(model, mikeycorpus, mikeydictionary, mds='pcoa')
    133 pyLDAvis.display(data)
    134

/anaconda2/lib/python2.7/site-packages/pyLDAvis/gensim.pyc in prepare(topic_model, corpus, dictionary, doc_topic_dist, **kwargs)
    109     See `pyLDAvis.prepare` for **kwargs.
    110     """
--> 111     opts = fp.merge(_extract_data(topic_model, corpus, dictionary, doc_topic_dist), kwargs)
    112     return vis_prepare(**opts)

/anaconda2/lib/python2.7/site-packages/pyLDAvis/gensim.pyc in _extract_data(topic_model, corpus, dictionary, doc_topic_dists)
     40         gamma = topic_model.inference(corpus)
     41     else:
---> 42         gamma, _ = topic_model.inference(corpus)
     43     doc_topic_dists = gamma / gamma.sum(axis=1)[:, None]
     44

AttributeError: 'LdaMallet' object has no attribute 'inference'
```
Whereas

```python
ldamodel = gensim.models.wrappers.ldamallet.malletmodel2ldamodel(model)
data = pyLDAvis.gensim.prepare(ldamodel, mikeycorpus, mikeydictionary, mds='pcoa')
pyLDAvis.display(data)
```

generates

[screenshot: pyLDAvis output with near-random topic terms]

etc. for all topics, compared to the actual model.
@mikeyearworth As far as I know, pyLDAvis supports only `LdaModel` and `LdaMulticore`, not `LdaMallet`.
To visualize an `LdaMallet` model you need to convert it to `LdaModel` using `malletmodel2ldamodel` first (and the current thread is about that function; it doesn't work correctly).
Jeri Wieringa wrote a tutorial on using pyLDAvis with MALLET. You load the model data directly from the state file and do some transformations. I was able to make this work by adapting her code.
http://jeriwieringa.com/2018/07/17/pyLDAviz-and-Mallet/#comment-4018495276
So here is a workaround until malletmodel2ldamodel is fixed.
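For reference, the state file that approach reads is a gzipped text file: a header row, `#alpha`/`#beta` lines, then one whitespace-separated row per token ending in its topic assignment. A minimal sketch of tallying word-topic counts from it (the column layout below is an assumption based on MALLET 2.0.8 output and may differ in other versions):

```python
import gzip
import io
from collections import Counter

def word_topic_counts(state_bytes):
    """Tally (word, topic) pairs from the bytes of a MALLET state file.
    Data rows are assumed to look like: doc source pos typeindex type topic"""
    counts = Counter()
    with gzip.open(io.BytesIO(state_bytes), "rt") as fh:
        for line in fh:
            if line.startswith("#"):   # header, #alpha and #beta lines
                continue
            doc, source, pos, typeindex, word, topic = line.split()
            counts[(word, int(topic))] += 1
    return counts

# Synthetic three-token state file, purely for illustration.
fake_state = gzip.compress(
    b"#doc source pos typeindex type topic\n"
    b"#alpha : 0.5 0.5\n"
    b"#beta : 0.01\n"
    b"0 NA 0 10 space 1\n"
    b"0 NA 1 11 nasa 1\n"
    b"1 NA 0 10 space 0\n"
)
print(word_topic_counts(fake_state))
```

Normalizing those counts per topic gives the topic-word distributions that the tutorial feeds to pyLDAvis.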
Thanks @groceryheist (and Jeri Wieringa), that works fine.
Do you know if there is a way to force the gensim wrapper for mallet to use a specified state filename, or to return it? mallet writes a new state file each run to an obscure location /var/folders/1x/93zy0_k93gj_xvrk4v_j96_m0000gp/T/XXXXXX_state.mallet.gz, where XXXXXX is random each time.
The `prefix` parameter in the wrapper does this.
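In the gensim 3.x wrapper the output paths are built by appending fixed suffixes to `prefix` (the wrapper's `fstate()` method returns `prefix + 'state.mallet.gz'`), so passing e.g. `prefix='/tmp/my_run_'` pins the state file to a predictable location. A sketch of that path logic (the suffix is taken from the gensim 3.x source and may change between versions):

```python
import os

def mallet_state_path(prefix):
    """Where gensim's LdaMallet writes its state file: fstate() simply
    appends 'state.mallet.gz' to the prefix passed to the wrapper."""
    return prefix + "state.mallet.gz"

prefix = os.path.join("/tmp", "my_run_")   # any writable location works
print(mallet_state_path(prefix))           # /tmp/my_run_state.mallet.gz
```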
Great! Thanks @groceryheist.
@menshikh-iv is there any update on this? what is this bug fix priority? Thanks :)
I've started to work on this issue.
Thanks @horpto it works !
```python
from gensim.models.ldamodel import LdaModel
import numpy

def ldaMalletConvertToldaGen(mallet_model):
    model_gensim = LdaModel(
        id2word=mallet_model.id2word, num_topics=mallet_model.num_topics,
        alpha=mallet_model.alpha, eta=0, iterations=1000,
        gamma_threshold=0.001,
        dtype=numpy.float32
    )
    model_gensim.state.sstats[...] = mallet_model.wordtopics
    model_gensim.sync_state()
    return model_gensim
```
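The core of that workaround is that `mallet_model.wordtopics` holds raw topic-word counts; copying them into `state.sstats` and then calling `sync_state()` lets the `LdaModel` derive its topic-word probabilities from them. Roughly, each topic row of counts ends up normalized to sum to one. A simplified sketch of that normalization step (toy counts, and ignoring the eta prior):

```python
import numpy as np

# Toy word-topic count matrix: rows are topics, columns are vocabulary words.
wordtopics = np.array([[8.0, 1.0, 1.0],
                       [1.0, 1.0, 8.0]])

# Row-normalize the counts into per-topic word distributions, which is
# (roughly) what happens once the counts are loaded into the model state.
topic_word = wordtopics / wordtopics.sum(axis=1, keepdims=True)

print(topic_word)  # each row sums to 1; word 0 dominates topic 0, word 2 topic 1
```

This also explains the original bug report: if the counts are never copied into the state, the model falls back to its near-uniform initialization, which matches the flat 0.000 weights shown above.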
:)
Hi @kvvaldez, do you want to add a remark, or did you find another bug?
Hi @horpto, I can see this issue is closed but I am still facing the exact same issue @Wolfi-101 reported.
I am using the latest gensim==3.7.1.
After conversion I am getting very rare keywords; here are my malletmodel2ldamodel conversion and pyLDAvis implementation:
```python
ldamallet = gensim.models.wrappers.LdaMallet(mallet_path, corpus=corpus, num_topics=13, id2word=dictionary)
model = gensim.models.wrappers.ldamallet.malletmodel2ldamodel(ldamallet)
model.save('ldamallet.gensim')

dictionary = gensim.corpora.Dictionary.load('dictionary.gensim')
corpus = pickle.load(open('corpus.pkl', 'rb'))
lda_mallet = gensim.models.wrappers.LdaMallet.load('ldamallet.gensim')

import pyLDAvis.gensim
lda_display = pyLDAvis.gensim.prepare(lda_mallet, corpus, dictionary, sort_topics=False)
pyLDAvis.display(lda_display)
```
[screenshot: pyLDAvis output from the converted model, showing very rare keywords]

Here is the output from the gensim original implementation:

[screenshot: pyLDAvis output from the native gensim model]
Hi @gladmortal
Are steps with save and load necessary ? Does this error appear in the previous versions ?
Can you share a corpus ? I'll try to reproduce your error a bit later.
gensim==3.7.1, didn't try any other version.

@kvvaldez
A version of your code worked for me:
```python
def mallet_to_lda(mallet_model):
    model_gensim = LdaModel(
        id2word=mallet_model.id2word, num_topics=mallet_model.num_topics,
        alpha=mallet_model.alpha, eta=0, iterations=1000,
        gamma_threshold=0.001,
        dtype=np.float32
    )
    # copy the mallet word-topic counts in first, then sync the state
    model_gensim.state.sstats = mallet_model.wordtopics
    model_gensim.sync_state()
    return model_gensim
```
This works for me, thanks :)