Lowering to bef in Tensorflow XLA failed with error #110

@AmosChenYQ

Description

I am using ResNet50 with XLA enabled in TensorFlow, together with a simple script that benchmarks XLA performance under different batch sizes. The model runs successfully with batch-size lists like [1, 2], [2, 3], and [3, 4], but fails with lists like [1, 2, 3], producing an error message such as: "loc("cudnn-conv-bias-activation.2"): error: runtime error: 'cudnnCreate(&handle)': CUDNN_STATUS_INTERNAL_ERROR"
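For reference, a minimal sketch of the kind of benchmark script described above. This is an assumption about the setup, since the actual script is not attached to the issue: it uses `tf.keras.applications.ResNet50` with random weights and enables XLA via `jit_compile=True`. Each new batch size triggers a fresh XLA compilation, which is where the reported failure occurs.

```python
# Hypothetical reconstruction of the benchmark, not the reporter's
# actual script: ResNet50 run under XLA for several batch sizes.
import numpy as np
import tensorflow as tf

# Random weights (weights=None) to avoid downloading pretrained files.
model = tf.keras.applications.ResNet50(weights=None)

@tf.function(jit_compile=True)  # force XLA compilation of the forward pass
def predict(x):
    return model(x, training=False)

for batch_size in [1, 2, 3]:
    # A new input shape causes XLA to retrace and recompile the function,
    # which is where the cudnnCreate failure is reported on GPU.
    x = np.random.rand(batch_size, 224, 224, 3).astype(np.float32)
    y = predict(tf.constant(x))
    print(batch_size, y.shape)
```

On a CPU-only machine this runs through all three batch sizes; the failure in the issue appears only on the GPU path, where the XLA-compiled convolution tries to create a cuDNN handle.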

I added some logging to narrow down where the error comes from, and it originates here:

https://github.com/AmosChenYQ/tensorflow/blob/amoschenyq-debug/tensorflow/compiler/xla/service/gpu/gpu_executable.cc#L1324

(PS: I have to link my forked TensorFlow here because newer TensorFlow versions seem to have removed this part of the code.)

Does anyone know what this error message means? Thanks!
