I tried using the pre-trained models (K-means and Gumbel) by running vq-wav2vec_featurize.py, but a runtime error appears:
RuntimeError: Error(s) in loading state_dict for Wav2VecModel:
Unexpected key(s) in state_dict: "vector_quantizer.embedding", "vector_quantizer.projection.0.weight", "vector_quantizer.projection.1.weight", "vector_quantizer.projection.1.bias".
I also have a question: how do you distinguish between loading these two models, since no argument is passed to the model?
Thank you,
It is because your pip version of fairseq is out of date. Just run from source by cloning the repo.
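A minimal sketch of the from-source install, assuming a fresh clone into the current directory (the editable install is the approach fairseq's own docs describe):

```shell
# Remove the outdated pip release first so it doesn't shadow the source checkout
pip uninstall -y fairseq

# Clone and install fairseq from source
git clone https://github.com/pytorch/fairseq
cd fairseq
pip install --editable .
```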
Thanks for your reply,
I tried to clone it but it gives me the same error :(
First remove the pip installation of fairseq. Then import wav2vec only from the source (fairseq folder); it works for me.
Excellent! It works now.
Thank you very much :-)
Please let me know if you are able to extract features for audio using the
vq-wav2vec + RoBERTa checkpoint.
On Wed, Mar 11, 2020, 13:15 Ranya Jumah wrote:
Closed #1811: https://github.com/pytorch/fairseq/issues/1811
Hi, after installing from source with
!pip install git+https://github.com/pytorch/fairseq
and using the following code:
import torch
from fairseq.models.wav2vec import Wav2VecModel

cp = torch.load('vq-wav2vec.pt')
model = Wav2VecModel.build_model(cp['args'], task=None)
model.load_state_dict(cp['model'])
model.eval()
it returns this error:
RuntimeError: Error(s) in loading state_dict for Wav2VecModel:
Unexpected key(s) in state_dict: "vector_quantizer.weight_proj.0.0.weight", "vector_quantizer.weight_proj.0.0.bias", "vector_quantizer.weight_proj.1.weight", "vector_quantizer.weight_proj.1.bias", "vector_quantizer.vars".
Hello, same thing here. Any solution?
@erba994 @ramonsanabria
I think this post on Stack Overflow has the solution: https://stackoverflow.com/a/54058284.
You need to add strict=False when loading the state_dict.
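A minimal self-contained sketch of what strict=False does, using a toy module in place of Wav2VecModel (the module and key names here are illustrative): unexpected keys in the checkpoint are skipped rather than raising, and the return value reports what was ignored.

```python
import torch
import torch.nn as nn

# Toy module standing in for Wav2VecModel (illustrative only)
model = nn.Linear(4, 2)

# Simulate a checkpoint with an extra key, like "vector_quantizer.vars"
state = model.state_dict()
state["vector_quantizer.vars"] = torch.zeros(3)

# strict=True (the default) would raise RuntimeError on the unexpected key;
# strict=False skips it and returns the keys that were ignored or missing.
result = model.load_state_dict(state, strict=False)
print(result.unexpected_keys)  # ['vector_quantizer.vars']
print(result.missing_keys)     # []
```

So yes, weights present in the checkpoint but absent from the model are dropped; they show up in unexpected_keys, which is worth inspecting so you know exactly what was skipped.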
But this will just ignore some of the weights, right?