Question about potential inconsistency in VS benchmark results #6

@gu-yaowen

Description

First of all, thank you for this excellent and inspiring work. I found the proposed method and the analysis of protein–ligand binding affinity prediction very insightful and valuable for the community. While carefully reading the paper, I noticed a potential inconsistency in the reported virtual screening benchmark results and would like to ask for clarification.

Specifically, in Supplementary Table 1, the reported EF1% values for LigUnity are:

  • DUD-E: 44.8
  • LIT-PCBA: 4.7
  • DEKOIS 2.0: 23.9

However, in Figure 2 (box plots), LigUnity appears to achieve higher performance on some benchmarks: the EF1% on DEKOIS 2.0 looks close to ~30, and on LIT-PCBA around ~7, which does not seem to align with the values reported in Supplementary Table 1.

Although I have not yet fully reproduced the virtual screening experiments, I was wondering whether these results correspond to different experimental settings, or whether there might be an inconsistency between the table and the figure. I would appreciate it if you could clarify whether I might be missing some detail here.

In addition, regarding BIND prediction, I tested the released code and was able to fully reproduce the performance reported in the paper: in my experiments I obtained EF1% values of 46.3 (DUD-E), 10.9 (LIT-PCBA), and 24.5 (DEKOIS 2.0). However, these numbers also appear to differ from the values reported in Supplementary Table 1.
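For reference, here is a minimal sketch of how I computed EF1% when checking these numbers. This is my own implementation of the standard enrichment-factor definition, not code from the released repository; the paper or repo may differ in tie-breaking or in how the 1% cutoff is rounded, which could account for small discrepancies.

```python
def ef_at_1pct(scores, labels):
    """Enrichment factor at the top 1% of the ranked list.

    scores: predicted scores, higher = more likely active
    labels: 1 for active, 0 for decoy/inactive
    """
    n = len(scores)
    n_top = max(1, int(round(n * 0.01)))            # size of the top-1% slice
    order = sorted(range(n), key=lambda i: -scores[i])
    actives_top = sum(labels[i] for i in order[:n_top])
    actives_total = sum(labels)
    # hit rate in the top 1% divided by the overall hit rate
    return (actives_top / n_top) / (actives_total / n)
```

For example, with 200 compounds, 10 actives, and the two top-scored compounds both active, this returns (2/2)/(10/200) = 20.0, the maximum possible EF1% for that active rate.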

Thank you very much for your time and for making the code and benchmarks publicly available. I really appreciate your effort and look forward to your clarification.
