Add potential Learning option #38
base: master
Conversation
charlio23
left a comment
Nice work!
```yaml
@@ -0,0 +1,5 @@
environment:
    name: "NObjectGravity"
    mass: [0.5, 0.5, 0.5]
```
I assume that this is fixed in master because of the offline data PR
```python
    (q, p),
    dim=1)  # Concatenate q and p to obtain a N x 2C x H x W tensor
if p is None and not self.potential_learning:
    raise ValueError('Receiving p = None when the network is in potential learning mode.')
```
Suggested change:
```diff
- raise ValueError('Receiving p = None when the network is in potential learning mode.')
+ raise ValueError('Receiving p = None when the network is not in potential learning mode.')
```
train.py (Outdated)
```python
# Read parameters
with open(params_file, 'r') as f:
    params = yaml.load(f, Loader=yaml.FullLoader)
params['potential_learning'] = args.potential_learning
```
The yaml file already contains the potential learning parameter inside the hamiltonian configuration: `params['networks']['hamiltonian']['potential_learning']`.
Moreover, are you sure that this parameter is passed when initializing the hgn? I saw changes in the constructor of the hamiltonian network, but not in loader.py (the script that instantiates the hamiltonian net).
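A minimal sketch of what forwarding the flag in loader.py could look like. The class below is a stand-in stub and `load_hamiltonian` is an assumed name, not the repo's actual loader API:

```python
# Hypothetical sketch: 'HamiltonianNet' is a stand-in stub and
# 'load_hamiltonian' an assumed name, not the repo's actual loader.py API.

class HamiltonianNet:
    def __init__(self, potential_learning=False):
        self.potential_learning = potential_learning

def load_hamiltonian(params):
    ham_params = params['networks']['hamiltonian']
    # Forward the flag read from the yaml into the constructor.
    return HamiltonianNet(
        potential_learning=ham_params.get('potential_learning', False))
```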
OleguerCanal
left a comment
Very nice!
Maybe let's try not to merge two pending PR branches into one, since this was basically two PRs at once. I think it's easier to review if we keep things independent.
```yaml
n_filters: [32, 64, 64, 64, 64, 64]  # first + hidden
kernel_sizes: [3, 3, 3, 3, 3, 3]  # first + hidden + last
strides: [1, 1, 1, 1, 1, 1]  # first + hidden + last
potential_learning: True
```
Suggested change:
```diff
- potential_learning: True
+ potential_learning: True  # If True, integrator must be Leapfrog
```
```yaml
n_filters: [32, 64, 64, 64, 64, 64]  # first + hidden
kernel_sizes: [3, 3, 3, 3, 3, 3]  # first + hidden + last
strides: [1, 1, 1, 1, 1, 1]  # first + hidden + last
potential_learning: False
```
Suggested change:
```diff
- potential_learning: False
+ potential_learning: False  # If True, integrator must be Leapfrog
```
train.py (Outdated)
```python
    train_data_loader, test_data_loader = get_offline_dataloaders(params)
else:
    assert "environment" in params, "Nor environment nor train_data are specified in the " \
```
| assert "environment" in params, "Nor environment nor train_data are specified in the " \ | |
| assert "environment" in params, "Neither environment nor train_data are specified in the " \ |
hehe
train.py (Outdated)
```python
params['experiment_id'] = args.name[0]
if args.epochs is not None:
    params['optimization']['epochs'] = args.epochs[0]
params['networks']['hamiltonian']['potential_learning'] = args.potential_learning
```
Won't this always overwrite `params['networks']['hamiltonian']['potential_learning']`?
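A minimal sketch of one way to avoid the unconditional overwrite, assuming the argparse flag defaults to `None` when omitted (the flag name and params layout below are illustrative):

```python
import argparse

parser = argparse.ArgumentParser()
# default=None lets us tell "flag omitted" apart from an explicit True.
parser.add_argument('--potential-learning', dest='potential_learning',
                    action='store_true', default=None)
args = parser.parse_args()

params = {'networks': {'hamiltonian': {'potential_learning': False}}}  # stand-in for the yaml
# Only override the yaml value when the flag was explicitly passed on the CLI.
if args.potential_learning is not None:
    params['networks']['hamiltonian']['potential_learning'] = args.potential_learning
```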
train.py (Outdated)
```python
raise NotImplementedError('Resume training from command line is not implemented yet')

# Train HGN network
hgn = train(params, cpu=args.cpu)
```
Same here, I think it would be better to also be able to define the device in the params.yaml. Maybe we could have a param like `device: 'cpu'` or `device: 'cuda:0'` instead of the cuda_id. What do you think?
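A minimal sketch of what reading such a key could look like; the top-level `device` key is the suggested (assumed) addition, not something in the current yaml:

```python
import torch

# Stand-in for the parsed params.yaml; a top-level 'device' key is assumed,
# e.g. device: 'cpu' or device: 'cuda:0'.
params = {'device': 'cuda:0' if torch.cuda.is_available() else 'cpu'}

device = torch.device(params['device'])
tensor = torch.zeros(3, 3, device=device)  # tensors and models can now target it
```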
```diff
  METHODS = ["Euler", "RK4", "Leapfrog"]

- def __init__(self, delta_t, method="Euler"):
+ def __init__(self, delta_t, method="Leapfrog"):
```
Nice haha
```python
create_graph=True,
retain_graph=True,
grad_outputs=torch.ones_like(energy))[0]
energy = hnn(p=p, q=q) if not hnn.potential_learning else hnn(q=q)
```
Suggested change:
```diff
- energy = hnn(p=p, q=q) if not hnn.potential_learning else hnn(q=q)
+ energy = hnn(q=q, p=p) if not hnn.potential_learning else hnn(q=q, p=None)
```
Although I'm not sure it should be this guy's responsibility to know how to call hnn(). If hnn is in potential learning mode, it should ignore p automatically.
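For example, a minimal sketch of a network that ignores p automatically (a toy stand-in with an assumed layout, not the repo's actual HamiltonianNet):

```python
import torch
import torch.nn as nn

class HamiltonianNet(nn.Module):
    """Toy stand-in for the real network, showing p ignored automatically."""

    def __init__(self, channels, potential_learning=False):
        super().__init__()
        self.potential_learning = potential_learning
        in_channels = channels if potential_learning else 2 * channels
        self.conv = nn.Conv2d(in_channels, 1, kernel_size=3, padding=1)

    def forward(self, q, p=None):
        if self.potential_learning:
            x = q  # p is ignored here, whatever the caller passed
        elif p is None:
            raise ValueError(
                'Receiving p = None when the network is not in potential learning mode.')
        else:
            x = torch.cat((q, p), dim=1)  # N x 2C x H x W
        return self.conv(x).sum(dim=(1, 2, 3))  # one scalar energy per sample
```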
On second thought, maybe this is less error prone, since this way the integrator checks that Leapfrog and potential_learning match. Otherwise we would need to do that check somewhere else. What do you think?
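For reference, a minimal sketch of that match check living in the integrator; only the constructor signature comes from the diff above, the `potential_learning` argument and the check itself are assumed additions:

```python
METHODS = ["Euler", "RK4", "Leapfrog"]

class Integrator:
    """Sketch of the consistency check; the 'potential_learning' argument
    is an assumed addition, not the repo's current constructor."""

    def __init__(self, delta_t, method="Leapfrog", potential_learning=False):
        assert method in METHODS, f"Unknown integration method: {method}"
        # Potential learning only yields dV/dq, which only Leapfrog can use.
        if potential_learning and method != "Leapfrog":
            raise ValueError("potential_learning requires the Leapfrog integrator.")
        self.delta_t = delta_t
        self.method = method
        self.potential_learning = potential_learning
```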
We called Potential Learning the technique in which the Hamiltonian network only takes q as a parameter. The learned value should then represent only the potential energy. We then use Leapfrog integration to perform the update steps on q and p. Note that Leapfrog integration does not require dH/dp (the q update uses p directly), and thus Potential Learning looks reasonable to us.
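For illustration, a minimal sketch of one Leapfrog step under these assumptions: unit mass, a generic potential network differentiated with torch.autograd. This is a sketch, not the repo's integrator:

```python
import torch
import torch.nn as nn

def grad_potential(potential_net, q):
    """dV/dq of the learned potential; detached here for a pure rollout sketch
    (training through the integrator would keep the graph instead)."""
    q = q.detach().requires_grad_(True)
    v = potential_net(q).sum()
    return torch.autograd.grad(v, q)[0]

def leapfrog_step(potential_net, q, p, delta_t):
    """One kick-drift-kick Leapfrog step with unit mass: only dV/dq is needed,
    so the network never has to see p."""
    p_half = p - 0.5 * delta_t * grad_potential(potential_net, q)            # half kick
    q_next = q + delta_t * p_half                                            # drift: dq/dt = p
    p_next = p_half - 0.5 * delta_t * grad_potential(potential_net, q_next)  # half kick
    return q_next, p_next

# Usage with a toy potential network:
potential_net = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))
q, p = torch.randn(8, 2), torch.randn(8, 2)
q, p = leapfrog_step(potential_net, q, p, delta_t=0.1)
```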