This repository was archived by the owner on Jun 13, 2024. It is now read-only.

Question: Training performance over multiple training sessions. #31

Description

@SergioRAgostinho

Hi. I'm just looking for some feedback on your experience with variability in trained model quality between training sessions.

I'm progressively trying to retrace some important steps:

  1. validating the paper results with the trained models you provided
  2. training models from the initial weights you provided and trying to achieve the same performance metrics as in the paper
  3. producing the initial weights myself, then training new models from those new initial weights, and finally verifying the paper results (see the seeding sketch after this list)
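
For the training runs in steps 2 and 3, part of the run-to-run variance presumably comes from RNG state, so as a baseline I pin the seeds before each session. A minimal sketch of what I mean, assuming a PyTorch-based training script; `seed_everything` is just an illustrative helper name, not something from this repository:

```python
import random

import numpy as np
import torch


def seed_everything(seed: int = 0) -> None:
    """Fix every RNG the training loop touches so sessions start identically."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # cuDNN autotuning picks convolution kernels non-deterministically;
    # disable it (at some speed cost) for bit-for-bit repeatability.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```

Even with this, GPU nondeterminism (e.g. atomic reductions) can leave some variance between sessions, which is part of what I am trying to quantify.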

An important disclaimer is that I had to apply the modifications in #30 in order to run things.

Step 1 seems to be working just fine. The differences between my results and the paper's are within 5%, give or take.

For Step 2, things are looking a little worse. My experiment trained on the ape object, which has a low ADD score compared to the other objects. With your models I manage to achieve an ADD score of roughly 28%; with my trained model it dropped significantly, to 21%. I'm training the model once more to get a feel for the performance variability between training sessions, but decided to ask whether this is normal and expected.
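
For clarity on what these numbers mean: the ADD score I am quoting is, as I understand it, the standard LINEMOD-style metric, i.e. the fraction of test frames whose average model-point distance under the predicted pose is below 10% of the object diameter. A minimal sketch of how I compute it, assuming NumPy arrays for the model points and poses (the function names are mine, not from this repository):

```python
import numpy as np


def add_error(model_points, R_gt, t_gt, R_pred, t_pred):
    """Average distance between the model points under the ground-truth
    and predicted poses (model_points: (N, 3), R: (3, 3), t: (3,))."""
    pts_gt = model_points @ R_gt.T + t_gt
    pts_pred = model_points @ R_pred.T + t_pred
    return np.linalg.norm(pts_gt - pts_pred, axis=1).mean()


def add_score(errors, diameter, threshold=0.1):
    """Fraction of frames whose ADD error falls below 10% of the diameter."""
    return float(np.mean(np.asarray(errors) < threshold * diameter))
```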

Thanks
