Can ONNX Runtime Training Handle Models with Different Outputs for Training and Inference? #201

@Leo5050xvjf

Description
Hello,

If my model's structure differs slightly between the training and inference phases, can I still use ONNX Runtime Training? For instance, during training my model has two outputs (D_q and D_q_), but during inference it has only one output (D_q).

In this case, can I finish training on the device, export the inference model, and then remove D_q_ and any unused nodes from the inference model?

Thanks!

Here are sample images:
[Image: model with two outputs (D_q and D_q_)]
[Image: model with only one output (D_q)]
