Receiver #69
Conversation
```python
# c0
c = torch.zeros([batch_size, self.hidden_size], device=device)
...
state = (h[None, ...], c[None, ...])
```
You can remove the whole `state` variable: since you initialize it to zeros in every case, you can simply not pass a state and `nn.LSTM` will default `h_0` and `c_0` to zeros. See https://pytorch.org/docs/stable/nn.html?highlight=lstm#torch.nn.LSTM
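A minimal sketch of the suggestion (shapes are illustrative, not taken from the PR): passing a hand-built zero state to `nn.LSTM` gives the same output as passing no state at all, because PyTorch defaults the initial hidden and cell states to zeros.

```python
import torch
import torch.nn as nn

# Illustrative dimensions, not the PR's actual values.
batch_size, seq_len, input_size, hidden_size = 4, 10, 8, 16

rnn = nn.LSTM(input_size, hidden_size, batch_first=True)
x = torch.randn(batch_size, seq_len, input_size)

# Explicit zero state, as the original code builds by hand:
h0 = torch.zeros(1, batch_size, hidden_size)
c0 = torch.zeros(1, batch_size, hidden_size)
out_explicit, _ = rnn(x, (h0, c0))

# Omitting the state entirely: nn.LSTM defaults h_0 and c_0 to zeros.
out_default, _ = rnn(x)

assert torch.allclose(out_explicit, out_default)
```

This removes several lines of setup without changing behavior.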
```python
nn.init.constant_(self.rnn.bias_hh_l0, val=0)
nn.init.constant_(self.rnn.bias_hh_l0[self.hidden_size:2 * self.hidden_size], val=1)
...
def forward(self, m):
```
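For context on the bias slice above: PyTorch packs the four LSTM gate biases as `(input, forget, cell, output)` chunks of length `hidden_size`, so `[hidden_size:2 * hidden_size]` selects the forget-gate bias. A standalone sketch (dimensions are illustrative):

```python
import torch
import torch.nn as nn

hidden_size = 16
rnn = nn.LSTM(8, hidden_size)

# Zero all hidden-to-hidden biases, then set the forget-gate chunk to 1.
# Gate order in bias_hh_l0 is (i, f, g, o), each of length hidden_size.
nn.init.constant_(rnn.bias_hh_l0, val=0)
nn.init.constant_(rnn.bias_hh_l0[hidden_size:2 * hidden_size], val=1)

# Only the forget-gate slice is 1; the rest stays 0.
assert torch.all(rnn.bias_hh_l0[hidden_size:2 * hidden_size] == 1)
assert torch.all(rnn.bias_hh_l0[:hidden_size] == 0)
```

Initializing the forget-gate bias to 1 is a common trick to keep the cell state from being forgotten early in training.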
I don't particularly like one-letter variable names when they represent something uncommon and aren't loop iterators. Could I suggest renaming `m` to `messages`?
No description provided.