
JIT compilation? #70

@GilesStrong

Description

PyTorch has the ability to Just-In-Time (JIT) compile code to make it run faster and be more memory efficient. I tried this a while ago with the @weak_script and @weak_module decorators, but they didn't seem to do much and I had trouble automatically generating the docs. I then found that PyTorch recommended users not use those decorators. Since then, PyTorch has introduced the @torch.jit.script decorator, which is intended for user code and supposedly provides noticeable improvements in speed and memory usage.

Examples could include compiling activation functions:
[Screenshots: examples of JIT-scripted activation functions]
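
A minimal sketch of what a scripted Swish might look like (names here are illustrative, not LUMIN's actual API):

```python
import torch
from torch import nn, Tensor

# A TorchScript-compiled Swish as a free function
@torch.jit.script
def swish_jit(x: Tensor) -> Tensor:
    return x * torch.sigmoid(x)

# The same op wrapped in an nn.Module and scripted, if a module interface is needed
class Swish(nn.Module):
    def forward(self, x: Tensor) -> Tensor:
        return x * torch.sigmoid(x)

swish_module = torch.jit.script(Swish())
```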

Whereas LUMIN's implementation of Swish is simply x*torch.sigmoid(x). Other possibilities could be LUMIN's loss functions (e.g. WeightedMSE). I'm not sure how far one can take this: should everything related to PyTorch be JIT compiled, or perhaps only operations on tensors?
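
For the loss case, a rough sketch of the kind of tensor op that could be scripted (illustrative only; this is not LUMIN's actual WeightedMSE class or signature):

```python
import torch
from torch import Tensor

# Illustrative only -- not LUMIN's WeightedMSE, just the core weighted-MSE
# tensor operation compiled with torch.jit.script
@torch.jit.script
def weighted_mse(pred: Tensor, target: Tensor, weight: Tensor) -> Tensor:
    # Per-element squared error, scaled by per-sample weights, then averaged
    return torch.mean(weight * (pred - target) ** 2)
```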

A starting point would be to test the JIT-compiled Swish against the current version, and then to try to find out more about what should be JITed and what shouldn't.
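
A quick way to start that comparison might look something like this (a rough sketch, not a careful benchmark; the tensor size and iteration count are arbitrary):

```python
import timeit

import torch
from torch import Tensor


def swish_eager(x: Tensor) -> Tensor:
    # Current eager-mode implementation
    return x * torch.sigmoid(x)


@torch.jit.script
def swish_jit(x: Tensor) -> Tensor:
    # Same op, compiled by TorchScript
    return x * torch.sigmoid(x)


x = torch.randn(1_000_000)
swish_jit(x)  # warm-up call so compilation time isn't included in the timing

print('eager :', timeit.timeit(lambda: swish_eager(x), number=1000))
print('script:', timeit.timeit(lambda: swish_jit(x), number=1000))
```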


Labels

- improvement: Something which would improve current status, but not add anything new
- investigation: Something which might require a careful study
- medium priority: Not urgent but should be dealt with sooner rather than later
