Hi there, I know sampling is not easy for FEP, and it seems I have a sampling issue with my systems. I'm trying to run a3fe for 100 compounds against 1 target protein. Can I ask two questions?
- For one of my lambdas, I got the following stats:
```
================================================================================
FINAL SUMMARY: Predicted Maximum Efficiency Runtime vs Time
================================================================================
Time/Rep(ns)  Total(ns)  Actual(ns)  Predicted(ns)  Inter-run_SEM  Norm SEM
--------------------------------------------------------------------------------
0.2           1.0        1.000       3.692          1.889          0.152046
0.4           2.0        2.000       5.721          2.069          0.235568
0.8           4.0        4.000       8.187          2.094          0.337114
1.6           8.0        8.000       11.407         2.063          0.469720
```
When I extend the simulation length, the normalised SEM and the predicted runtime also increase. As a result, simulations for this lambda take a long time to complete (they have to run for the maximum time, 30 ns per simulation). Can I ask what the recommended way to handle this case is? Would it help to create more replicas per lambda? I noticed that for the 5 repeats of this lambda, the corresponding statistical inefficiency values at 1.6 ns are 1.51, 3.27, 51.57, 3.97, and 6.54.
- I noticed that the statistical inefficiency is not used to correct `sems_inter_delta_g` in `process_grads.py`, and that by default the normalised SEM is computed by calling `gradient_data.get_time_normalised_sems(origin="inter_delta_g")`. Can I double-check whether this is by design? Also, could I use other SEM metrics to handle the issue in question 1?
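For context on what I mean by correcting the SEM: the sketch below is a minimal pure-Python illustration (my own code, not a3fe's implementation; the function names are made up for this example) of estimating the statistical inefficiency g from the integrated autocorrelation of a time series and using the effective sample size N/g in the SEM. With a strongly correlated series, the naive SEM underestimates the true uncertainty by roughly sqrt(g):

```python
import math
import random

def statistical_inefficiency(x):
    """Estimate g = 1 + 2 * sum_t C(t) from the normalised autocorrelation
    C(t), truncating the sum at the first non-positive C(t)."""
    n = len(x)
    mean = sum(x) / n
    d = [v - mean for v in x]
    var = sum(v * v for v in d) / n
    if var == 0:
        return 1.0
    g = 1.0
    for t in range(1, n):
        c = sum(d[i] * d[i + t] for i in range(n - t)) / ((n - t) * var)
        if c <= 0:
            break
        g += 2.0 * c * (1.0 - t / n)  # triangular window factor
    return max(g, 1.0)

def corrected_sem(x):
    """SEM using the effective sample size N/g instead of N."""
    n = len(x)
    g = statistical_inefficiency(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / (n - 1)
    return math.sqrt(var * g / n)

# A strongly correlated AR(1) series: true g = (1 + phi) / (1 - phi)
random.seed(0)
x, v = [], 0.0
for _ in range(2000):
    v = 0.95 * v + random.gauss(0, 1)
    x.append(v)
g = statistical_inefficiency(x)
print(f"g = {g:.1f}; naive SEM underestimates by ~sqrt(g)")
```

So a repeat with g ≈ 51 (like the third one above) contributes far fewer effective samples than its raw length suggests, which may be why the inter-run SEM stays high as the runs are extended.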
Thank you very much.
Best,
Jay Huang