Hi,
Congratulations on great work!! I appreciate you all for making resources publicly available.
I was following the README on finetuning BART on the CNN-DM task.
While performing step 2) BPE preprocess, I ran into some problems.
Here are the details:
Problem 1: train.bpe.source does not have the same number of lines as train.bpe.target and train.source:

```
ubuntu@server:~/fairseq/cnn_dm$ wc -l *
   11490 test.source
   11490 test.target
  287474 train.bpe.source <= not matching
  287227 train.bpe.target
  287227 train.source
  287227 train.target
   13368 val.bpe.source
   13368 val.bpe.target
   13368 val.source
   13368 val.target
  200000 vocab
 1425607 total
```
Problem 2: In val.bpe.target, the first BPE-encoded sentence shows up like the following:

```
32 582 287 20154 6182 318 6301 6729 2691 284 4297 287 23254 2585 13 1114 720 4531 11 339 481 4074 718 8059 286 6729 287 281 47869 42378 305 6513 321 3091 13
```

When I decode it with bart.decode(), it shows:

```
are pay As spellszi If km wages Women familybut Asolia Con for idea global85 in win free 51il temporarily For wages AsasAlternativelyStage W Fin 0 sites for
```

but the original sentence is:

```
A man in suburban Boston is selling snow online to customers in warmer states. For $89, he will ship 6 pounds of snow in an insulated Styrofoam box.
```

It appears there is some point I missed.
I am checking this on
Would you share any thoughts on the matter? It would help me a lot.
Once again, thank you very much!
WonJin
@ngoyal2707 @yinhanliu
After checking, I'm also facing problem 2.
(However, I don't have problem 1.)
It turns out problem 2 is not a problem; it's normal.
After reading the source code, this is what I understood :
The encode() method of BART's hub interface does two things: it first BPE-encodes the text, then binarizes the resulting BPE tokens with the fairseq dictionary.
That's why the preprocessing has two steps: BPE-encoding of the dataset, then binarization of the dataset.
In BART's encode() method, the two steps are bart.bpe.encode() followed by encoding with the fairseq Dictionary.
Because the .bpe files are the output of only the first step, we should compare them against the first step.
And indeed :
```python
bart.bpe.encode("A man in suburban Boston is selling snow online to customers in warmer states. For $89, he will ship 6 pounds of snow in an insulated Styrofoam box.")
```

gives:

```
32 582 287 20154 6182 318 6301 6729 2691 284 4297 287 23254 2585 13 1114 720 4531 11 339 481 4074 718 8059 286 6729 287 281 47869 42378 305 6513 321 3091 13
```
And similarly :
```python
bart.bpe.decode('32 582 287 20154 6182 318 6301 6729 2691 284 4297 287 23254 2585 13 1114 720 4531 11 339 481 4074 718 8059 286 6729 287 281 47869 42378 305 6513 321 3091 13')
```

gives:

```
A man in suburban Boston is selling snow online to customers in warmer states. For $89, he will ship 6 pounds of snow in an insulated Styrofoam box.
```
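To see why the dictionary indices returned by encode() differ from the raw BPE IDs in the .bpe files, here is a toy sketch of the binarization step. The function and vocabularies below are made up for illustration; they are not fairseq's actual API, but they mimic how fairseq's Dictionary maps each BPE-ID *string* from the dictionary file to its own index, offset by the special symbols.

```python
# Toy sketch of the second step (binarization). Made-up names, not fairseq.
SPECIALS = ["<s>", "<pad>", "</s>", "<unk>"]  # special symbols come first

def binarize(bpe_ids, dict_tokens):
    # dict_tokens: BPE-ID strings in dictionary-file order (after specials).
    # Each BPE ID is looked up as a *string*, yielding a dictionary index.
    table = {tok: i + len(SPECIALS) for i, tok in enumerate(dict_tokens)}
    return [table[str(t)] for t in bpe_ids]

# Pretend step 1 (BPE) produced these token IDs:
bpe_ids = [32, 582, 287]
# Pretend the dictionary file lists the BPE-ID strings in this order:
dict_tokens = ["287", "32", "582"]

print(binarize(bpe_ids, dict_tokens))  # [5, 6, 4] -- not the BPE IDs
```

This is also why feeding raw BPE IDs to bart.decode() (which expects dictionary indices) produces fluent-looking garbage: the same integers mean different tokens in the two vocabularies.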
Confirmation of my understanding by the original authors would be appreciated!
Thanks, @Colanim !
I found the cause of Problem 1 as well. It was not related to the fairseq code.
The dataset occasionally contains the ASCII character 0x0D, i.e. CR (carriage return).
It seems the BPE encoder converts CR into LF (line feed), which appears to be normal behavior for the encoder, but it splits one sample into two lines and inflates the line count.
For other researchers: CR should be replaced with a normal blank during step 1 ("(1) Follow the instructions here to download and process into data-files with non-tokenized cased samples.").
For example, check line No. 35711, starting with: Brian Steel was taught from birth that he was "handicapped."
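A minimal sketch of that cleanup step (the helper name and file paths are mine, not from the repo's scripts; it assumes the files use LF line endings, so every CR is a stray one):

```python
def strip_carriage_returns(path):
    """Replace stray CR (0x0D) characters with spaces, in place.

    newline="" disables Python's newline translation so CR characters
    survive the read and are visible to .replace(). Assumes the file's
    real line endings are LF, so every CR found is a stray one.
    """
    with open(path, encoding="utf-8", newline="") as f:
        text = f.read()
    with open(path, "w", encoding="utf-8", newline="") as f:
        f.write(text.replace("\r", " "))

# e.g. strip_carriage_returns("cnn_dm/train.source")
```

Run it on each split's .source and .target files before the BPE step, and the line counts should match again.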
@Colanim yes, that's correct. It's a two stage encoding process. First BPE encode followed by encoding with the fairseq Dictionary.
@wonjininfo, glad you got it working :)
Hi, I could not fix it until I replaced '\r' with ' '.
link: https://github.com/pytorch/fairseq/issues/1391#issuecomment-562700622