
Questions about Hyperparameters and Dependencies / Unable to Reproduce Results #7

@leshanshui-yang

Description

Hello,

I am a PhD student closely following the methods and architectures presented in your recent Euler paper for DTDG link prediction. Thank you for sharing the slides and the associated code on GitHub.

I am attempting to reproduce the results presented in your article using the GitHub repository (https://github.com/iHeartGraph/Euler/tree/main), specifically the euler_test.py file in the benchmark directory. However, I was unable to reach the reported AUC values for the link prediction and new link prediction tasks on either the DBLP or FB dataset.

I would like to ask whether there are any additional dependencies or hyperparameters necessary for reproducing your results that are not explicitly mentioned in the code or the article. If not, could you please provide the seeds and/or a Dockerfile or Colab notebook for your experiments, so that other researchers can reproduce the benchmarks in your article?

Details of my experiments:

I experimented with the following on Google Colab with torch 2.0.1 and torch-geometric 2.3.1 on CPU:

  1. Running your code with the default parameters (i.e., performing three runs).
  2. Running your code with the optimal learning rate mentioned in the paper while keeping all other values at their default settings.
  3. Running your code with the optimal learning rate mentioned in the paper 10 times with different random seeds (i.e., for seed in range(10)) to ensure the reproducibility/determinism of edge masking and model initialization. I used the following snippet to set the random seeds in euler_test:
```python
import os, random
import numpy as np
import torch

os.environ['PYTHONHASHSEED'] = str(seed)
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.use_deterministic_algorithms(True)
```
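For completeness, here is the per-run structure I used, as a self-contained sketch; `set_seed` is my own helper name and the training call is elided, neither is code from the Euler repository:

```python
import os
import random

import numpy as np
import torch


def set_seed(seed: int) -> None:
    """Seed every RNG involved in a CPU-only euler_test.py run."""
    os.environ['PYTHONHASHSEED'] = str(seed)
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.use_deterministic_algorithms(True)


for seed in range(10):
    set_seed(seed)
    # ... run the euler_test.py training/evaluation for this seed ...
```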

Unfortunately, none of the aforementioned approaches yielded the reported AUC values for the 'fb' dataset. The results from the third approach are as follows:

Link Prediction AUC = 88.54 ± 0.81
Link Prediction AP = 86.68 ± 1.11
New Link Prediction AUC = 86.27 ± 0.71
New Link Prediction AP = 83.58 ± 0.82
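For reference, the ± figures above are the mean and standard deviation over the 10 seeded runs; a minimal sketch of how I computed them (the per-seed scores below are placeholders, not my actual measurements):

```python
import statistics

# Placeholder per-seed AUC scores -- illustrative values only,
# not the actual numbers from my 10 runs.
auc_per_seed = [88.1, 89.0, 88.7, 87.9, 88.6, 88.4, 88.9, 88.2, 88.5, 89.1]

mean = statistics.mean(auc_per_seed)
std = statistics.pstdev(auc_per_seed)  # statistics.stdev for the sample std
print(f"Link Prediction AUC = {mean:.2f} ± {std:.2f}")
```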

Modifying the hyperparameters improves the AUC and AP scores somewhat, but they still do not match the values reported in the article.

I would greatly appreciate any guidance or assistance you can provide to help resolve this discrepancy. Thank you very much for your time and consideration. I look forward to hearing back from you.

Best regards,

Leshanshui
