Reproducibility issue #17

@seunghan96

Description

Thank you for sharing the code!

However, I encountered an issue reproducing the performance reported in the paper.

I followed the settings described in the paper:

  • Table 3 for the learning rate
  • Table 4 for rho
  • input length set to 512

However, the results I obtained are significantly worse than those reported.

For instance, in the case of ETTh1, the MSE values for horizons 96, 192, 336, and 720 are reported in the paper as 0.381, 0.409, 0.423, and 0.427, respectively. However, my results are as follows:

  • model=transformer: 0.373, 0.439, 0.466, 0.531
  • model=spectrans: 0.381, 0.424, 0.450, 0.560

Could you provide guidance on what might be causing this discrepancy?

It would be greatly appreciated if you could share the scripts used to prepare the benchmark datasets, so that the reported performance can be reproduced.
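For reference, here is the preprocessing I assumed while trying to reproduce the results: the Informer-style convention for the hourly ETT datasets (12/4/4 "months" of 30 days at hourly frequency, standardized with statistics from the train split only). This is a minimal sketch of my assumption, not code from this repository; `ett_hourly_splits` is a hypothetical helper name. If your preparation differs (e.g. split borders or scaling), that could explain the gap.

```python
import numpy as np

def ett_hourly_splits(values: np.ndarray):
    """Split an ETTh series into train/val/test with the common
    12/4/4-month convention (30-day months, hourly data), then
    standardize every split using train-split statistics only.

    values: array of shape (time, features), e.g. loaded from ETTh1.csv.
    """
    n_train = 12 * 30 * 24  # 8640 hourly steps
    n_val = 4 * 30 * 24     # 2880
    n_test = 4 * 30 * 24    # 2880

    train = values[:n_train]
    mean = train.mean(axis=0)           # per-feature mean from train only
    std = train.std(axis=0)             # per-feature std from train only

    def scale(x):
        return (x - mean) / std

    val = values[n_train:n_train + n_val]
    test = values[n_train + n_val:n_train + n_val + n_test]
    return scale(train), scale(val), scale(test)
```

Is this the convention your scripts use, or do they differ (e.g. scaler fit on the full series)?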

Thank you in advance!
