_MaxPool1d_ has a required parameter named _kernel_size_. But when inputs have different lengths in different batches, we have to pad every batch to a fixed length just to set a static _kernel_size_ before feeding the data to the model:
conv = nn.Sequential(
nn.Conv1d(in_channels = self.embedding_dim,
out_channels = self.content_dim,
kernel_size = self.kernel_size),
nn.ReLU(),
nn.MaxPool1d(kernel_size = (self.max_seq_len - self.kernel_size + 1))
)
Instead, there would be an alternative if we had a dynamic MaxPool1d that did not require pre-setting max_seq_len:
conv = nn.Sequential(
nn.Conv1d(in_channels = self.embedding_dim,
out_channels = self.content_dim,
kernel_size = self.kernel_size),
nn.ReLU(),
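# hypothetical: a MaxPool1d that infers its window from the input length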
nn.MaxPool1d()
)
This way we would not need any padding operation on the input, which would be more efficient than before.
You are looking for AdaptiveMaxPool1d: http://pytorch.org/docs/0.3.0/nn.html#torch.nn.AdaptiveMaxPool1d
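For reference, here is a minimal sketch of how AdaptiveMaxPool1d slots into the module above (the dimension values are made-up placeholders, not from the original code; output_size = 1 reproduces the max-over-time pooling the fixed-kernel version computes):

import torch
import torch.nn as nn

embedding_dim, content_dim, kernel_size = 128, 256, 3  # placeholder sizes

conv = nn.Sequential(
    nn.Conv1d(in_channels = embedding_dim,
              out_channels = content_dim,
              kernel_size = kernel_size),
    nn.ReLU(),
    # AdaptiveMaxPool1d takes a target output length instead of a window
    # size, so it handles any input sequence length without padding
    nn.AdaptiveMaxPool1d(output_size = 1)
)

for seq_len in (10, 37):  # two batches with different sequence lengths
    x = torch.randn(4, embedding_dim, seq_len)  # (batch, channels, length)
    print(conv(x).shape)  # torch.Size([4, 256, 1]) in both cases

Squeezing the last dimension then gives the usual (batch, content_dim) feature vector.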