
Conversation

@fedetask
Member

We call Potential Learning the technique in which the Hamiltonian network takes only q as input, so the learned value should represent the potential energy alone. We then use Leapfrog integration to perform the update steps on q and p. Note that Leapfrog integration does not require dH/dp from the network, which is why Potential Learning looks reasonable to us.
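For reference, the update we have in mind is roughly the following. This is a minimal sketch assuming unit mass, a separable Hamiltonian H = |p|^2 / 2 + V(q), and a hypothetical potential_net mapping q to the learned potential; note the drift step uses p directly, so dH/dp is never queried from the network.

import torch

def leapfrog_step(q, p, potential_net, delta_t):
    # q must already be part of the autograd graph (requires_grad=True).
    # Half kick: p <- p - (dt/2) * dV/dq; only the gradient w.r.t. q is used.
    dv_dq = torch.autograd.grad(potential_net(q).sum(), q, create_graph=True)[0]
    p_half = p - 0.5 * delta_t * dv_dq
    # Drift: with H = |p|^2 / 2 + V(q) and unit mass, dH/dp is just p,
    # so no network evaluation is needed here.
    q_next = q + delta_t * p_half
    # Second half kick at the new position.
    dv_dq_next = torch.autograd.grad(potential_net(q_next).sum(), q_next,
                                     create_graph=True)[0]
    p_next = p_half - 0.5 * delta_t * dv_dq_next
    return q_next, p_next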

@charlio23 charlio23 self-requested a review October 22, 2020 14:53
Collaborator

@charlio23 charlio23 left a comment


Nice work!

@@ -0,0 +1,5 @@
environment:
name: "NObjectGravity"
mass: [0.5, 0.5, 0.5]
Collaborator


I assume that this is fixed in master because of the offline data PR

(q, p),
dim=1) # Concatenate q and p to obtain a N x 2C x H x W tensor
if p is None and not self.potential_learning:
    raise ValueError('Receiving p = None when the network is in potential learning mode.')
Collaborator


Suggested change
raise ValueError('Receiving p = None when the network is in potential learning mode.')
raise ValueError('Receiving p = None when the network is not in potential learning mode.')
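For symmetry, the opposite mismatch could be guarded too. A sketch extending the suggested fix (the second message wording is mine):

if p is None and not self.potential_learning:
    raise ValueError('Receiving p = None when the network is not in potential learning mode.')
if p is not None and self.potential_learning:
    raise ValueError('Receiving a p tensor when the network is in potential learning mode.')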

train.py Outdated
# Read parameters
with open(params_file, 'r') as f:
    params = yaml.load(f, Loader=yaml.FullLoader)
params['potential_learning'] = args.potential_learning
Collaborator


The yaml file already contains the potential learning parameter inside the hamiltonian configuration.

params['networks']['hamiltonian']['potential_learning']

Moreover, are you sure that this parameter is passed when initializing the hgn? I saw changes in the constructor for the hamiltonian network but not in loader.py (the script that instantiates the hamiltonian net).
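For concreteness, this is roughly what I would expect loader.py to do. The constructor signature below is an assumption based on the yaml keys, not the actual code:

hnn_params = params['networks']['hamiltonian']
hamiltonian_net = HamiltonianNet(
    n_filters=hnn_params['n_filters'],
    kernel_sizes=hnn_params['kernel_sizes'],
    strides=hnn_params['strides'],
    potential_learning=hnn_params.get('potential_learning', False),  # the new flag
)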

@OleguerCanal OleguerCanal linked an issue Oct 23, 2020 that may be closed by this pull request
Member

@OleguerCanal OleguerCanal left a comment


Very nice!

Maybe let's try not to merge two pending PR branches into one, since this was basically two PRs at once. I think it's easier to review if we keep things independent.

n_filters: [32, 64, 64, 64, 64, 64] # first + hidden
kernel_sizes: [3, 3, 3, 3, 3, 3] # first + hidden + last
strides: [1, 1, 1, 1, 1, 1] # first + hidden + last
potential_learning: True
Member


Suggested change
potential_learning: True
potential_learning: True # If True, integrator must be Leapfrog

n_filters: [32, 64, 64, 64, 64, 64] # first + hidden
kernel_sizes: [3, 3, 3, 3, 3, 3] # first + hidden + last
strides: [1, 1, 1, 1, 1, 1] # first + hidden + last
potential_learning: False
Member


Suggested change
potential_learning: False
potential_learning: False # If True, integrator must be Leapfrog

train.py Outdated
train_data_loader, test_data_loader = get_offline_dataloaders(params)

else:
    assert "environment" in params, "Nor environment nor train_data are specified in the " \
Member


Suggested change
assert "environment" in params, "Nor environment nor train_data are specified in the " \
assert "environment" in params, "Neither environment nor train_data are specified in the " \

hehe

train.py Outdated
params['experiment_id'] = args.name[0]
if args.epochs is not None:
    params['optimization']['epochs'] = args.epochs[0]
params['networks']['hamiltonian']['potential_learning'] = args.potential_learning
Member


Won't this always overwrite params['networks']['hamiltonian']['potential_learning']?
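One way to avoid that is to only override when the flag is actually given on the command line, e.g. (sketch, assuming the argparse option defaults to None):

if args.potential_learning is not None:
    params['networks']['hamiltonian']['potential_learning'] = args.potential_learning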

train.py Outdated
raise NotImplementedError('Resume training from command line is not implemented yet')

# Train HGN network
hgn = train(params, cpu=args.cpu)
Member


Same here, I think it would be better to also be able to define the device in the params.yaml.

Maybe we can have a param like:

device: 'cpu'

or

device: 'cuda:0'

Instead of the cuda_id. What do you think?
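Rough sketch of what I mean (the 'device' key and its fallback are my proposal, nothing in the current yaml):

import torch

# Resolve the device from params.yaml; fall back to CPU if the key is absent.
device = torch.device(params.get('device', 'cpu'))  # e.g. 'cpu' or 'cuda:0'
# train() could then receive this device instead of the cpu flag / cuda_id.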

METHODS = ["Euler", "RK4", "Leapfrog"]

def __init__(self, delta_t, method="Euler"):
def __init__(self, delta_t, method="Leapfrog"):
Member


Nice haha

create_graph=True,
retain_graph=True,
grad_outputs=torch.ones_like(energy))[0]
energy = hnn(p=p, q=q) if not hnn.potential_learning else hnn(q=q)
Member


Suggested change
energy = hnn(p=p, q=q) if not hnn.potential_learning else hnn(q=q)
energy = hnn(q=q, p=p) if not hnn.potential_learning else hnn(q=q, p=None)

Member


Although I'm not sure it should be this guy's responsibility to know how to call hnn(). If hnn is in potential learning mode, it should ignore p automatically.

Member


On second thought, maybe this is less error-prone, since the integrator checks that Leapfrog and potential_learning match. Otherwise we would need to do that check somewhere else. What do you think?
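To make the match explicit, I am imagining something along these lines inside the integrator (sketch: self.method follows from the constructor above, the error message is mine):

# Potential learning only makes sense with Leapfrog, since the other
# methods need dH/dp from the network.
if hnn.potential_learning and self.method != 'Leapfrog':
    raise ValueError('potential_learning requires the Leapfrog integrator, '
                     'got method {}.'.format(self.method))
energy = hnn(q=q, p=p) if not hnn.potential_learning else hnn(q=q, p=None)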

@fedetask fedetask marked this pull request as draft October 23, 2020 17:50

Successfully merging this pull request may close these issues.

Try hamiltonian net to only take q and adapt leapfrog integrator
