
Conversation

@fedetask
Member

This pull request makes HGN work without the transformer. If no transformer is present under networks in the configuration, HGN will not use one; instead, the latent encoding is split into q and p and used directly. Therefore, the encoder and Hamiltonian networks should handle the dimensions appropriately.

I updated train_config_no_transformer.yaml so that the latent encoding has 32 channels instead of 48, giving q and p 16 channels each.
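
For illustration, a minimal sketch of the split described above (not the PR's actual code; the assumption is that q and p are simply the two halves of the latent's channel dimension):

import torch

latent = torch.randn(1, 32, 4, 4)  # 32-channel latent encoding (batch, channels, H, W)
q, p = torch.split(latent, latent.shape[1] // 2, dim=1)  # first / second half of the channels
assert q.shape[1] == 16 and p.shape[1] == 16  # 16 channels each, as in the updated config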

charlio23 and others added 5 commits October 24, 2020 18:00
If the transformer is not present in the configuration, HGN will extract q and p
directly from the latent encoding. The encoder and HNN must handle the
dimensions appropriately.
@fedetask fedetask requested a review from OleguerCanal January 19, 2021 17:19
Comment on lines +171 to +176
if self.transformer is not None:
    latent_shape = (1, self.encoder.out_mean.out_channels, img_shape[0],
                    img_shape[1])
else:  # TODO: Don't hardcode shape
    latent_shape = (1, int(self.encoder.out_mean.out_channels), 4, 4)
latent_representation = torch.randn(latent_shape).to(self.device).requires_grad_()
Not sure I understand this if/else.
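
On the TODO about the hardcoded 4x4: a generic way to avoid the magic numbers would be to probe the encoder (or its mean head) with a dummy input and read off the output spatial size. This is only a sketch; it assumes the probed module can be called directly on an image-shaped tensor and returns a single tensor, which may not match the repository's encoder API.

import torch

def probe_spatial_size(module, in_channels, img_shape, device="cpu"):
    # Run a zero tensor through `module` and return the (H, W) of its output.
    dummy = torch.zeros(1, in_channels, img_shape[0], img_shape[1], device=device)
    with torch.no_grad():
        out = module(dummy)
    return out.shape[-2], out.shape[-1]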

Comment on lines +65 to +72
if "transformer" in params["networks"]:
transformer = TransformerNet(
in_channels=params["networks"]["encoder"]["out_channels"],
**params["networks"]["transformer"],
dtype=dtype).to(device)
decoder_in_channels = params["networks"]["transformer"]["out_channels"]
else:
transformer = None
Isn't that what def instantiate_transformer is doing?
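
For reference, a hedged sketch of how this conditional could be folded into an instantiate_transformer-style helper (the function name comes from the comment above; its real signature in the repository is an assumption):

def instantiate_transformer(params, device, dtype):
    # Sketch: return None when the config has no "transformer" entry under
    # "networks"; otherwise build the network exactly as in the snippet above.
    if "transformer" not in params["networks"]:
        return None
    return TransformerNet(
        in_channels=params["networks"]["encoder"]["out_channels"],
        **params["networks"]["transformer"],
        dtype=dtype).to(device)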
