Flair: How are the embeddings stacked? What exactly does stacking entail?

Created on 15 Jul 2019 · 4 comments · Source: flairNLP/flair

I'm trying to understand what stacking two embeddings entails. Does it mean that the embeddings are concatenated, or does it have something to do with combining the outputs of two different embedding layers of a neural network? Can you shed some light on how this works?
I'm reading the Flair paper, and I'm still not clear on how this stacking actually works.
@alanakbik @stefan-it

question


All 4 comments

Concatenation. If you stack 300d Word2Vec with 300d GloVe, you get a 600d concatenated model.
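To make the concatenation concrete, here is a minimal sketch using Flair's `StackedEmbeddings`. The embedding identifiers (`'glove'`, `'en-crawl'`) are examples from Flair's model zoo, not from this thread, and the dimensions depend on what you load (Flair's bundled `'glove'` is 100d, its `'en-crawl'` fastText is 300d):

```python
from flair.data import Sentence
from flair.embeddings import WordEmbeddings, StackedEmbeddings

# two word-level embeddings from Flair's model zoo
glove = WordEmbeddings('glove')        # 100d GloVe
fasttext = WordEmbeddings('en-crawl')  # 300d fastText

# stacking = concatenation along the feature dimension
stacked = StackedEmbeddings(embeddings=[glove, fasttext])

sentence = Sentence('stacking just concatenates vectors')
stacked.embed(sentence)

token = sentence[0]
# each token vector is the two embeddings joined end to end:
# 100 + 300 = 400 dimensions in this configuration
print(token.embedding.shape)
print(glove.embedding_length + fasttext.embedding_length)
```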

Related: What is the right way to use Flair if you want to train a classification model on the word embeddings without having to create a document embedding? Or to get results similar to what one would get using either methodology?


@Hellisotherpeople Are you sure it's concatenation? When Flair creates a document embedding, say using the DocumentPoolEmbeddings class, it averages the stacked embeddings, and when you use DocumentLSTMEmbeddings, it trains an LSTM on them to create the document embedding.

@ParthJawale1996 yes, @Hellisotherpeople is correct: we stack _word embeddings_ by concatenating them. But when we move to _document embeddings_, we either average the (stacked or non-stacked) word embeddings or run an RNN over them.
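A short sketch of that distinction, reusing the same example embedding identifiers as above (word level = concatenation, document level = pooling over the concatenated vectors):

```python
from flair.data import Sentence
from flair.embeddings import (
    WordEmbeddings, StackedEmbeddings, DocumentPoolEmbeddings
)

# word level: stacked embeddings are concatenated per token
stacked = StackedEmbeddings([WordEmbeddings('glove'), WordEmbeddings('en-crawl')])

# document level: mean-pool the stacked word vectors into one vector
doc_embeddings = DocumentPoolEmbeddings([stacked], pooling='mean')

sentence = Sentence('vectors are concatenated per word, then averaged')
doc_embeddings.embed(sentence)

# one vector for the whole sentence, with the same length
# as a single stacked (concatenated) token vector
print(sentence.get_embedding().shape)
```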

