🫙 Improving Storage #59
Replies: 5 comments 12 replies
-
Trying to log the stored tensors
I printed the storage log at three timestamps. I noticed that most of the tensors reported repeatedly have the same shape; the following tensor is repeated multiple times.
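A minimal sketch of how such a log could be produced (the `stored` list and the `log_stored_tensors` helper are hypothetical, not Seastar's actual logging code): grouping the tensors held at a timestamp by shape makes repeated shapes easy to spot.

```python
from collections import Counter

import torch


def log_stored_tensors(stored: list, timestamp: int) -> None:
    """Print a shape histogram of the tensors held at one timestamp."""
    shapes = Counter(tuple(t.shape) for t in stored)
    total_mb = sum(t.element_size() * t.nelement() for t in stored) / (1024 ** 2)
    print(f"t={timestamp}: {len(stored)} tensors, {total_mb:.2f} MB total")
    for shape, count in shapes.most_common():
        print(f"  shape={shape} stored {count} time(s)")
```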
-
Storage in GCN
The logs of the GCN storage are shown below. I believe that if we can remove the intermediate outputs generated by Seastar, we should see a memory improvement of close to 1 MB, which will scale to larger values for larger graphs.
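As a rough sanity check on the ~1 MB figure, the footprint of a dense float32 intermediate is just its element count times four bytes; the sketch below (with illustrative node counts, not measurements from the logs) shows how that saving grows with graph size.

```python
import torch


def tensor_mb(shape, dtype=torch.float32) -> float:
    """Footprint in MB of a dense tensor with the given shape."""
    numel = 1
    for dim in shape:
        numel *= dim
    return numel * torch.finfo(dtype).bits / 8 / (1024 ** 2)


# Hidden features of size 16 per node; node counts are illustrative only.
print(f"small graph: {tensor_mb((2708, 16)):.3f} MB")    # ~0.17 MB
print(f"large graph: {tensor_mb((232965, 16)):.3f} MB")  # ~14.2 MB
```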
-
pyNVML acting oddly
It may be noted that pyNVML acts quite oddly and seems to account for some unnecessary memory when taking measurements. We need to look into this more carefully when benchmarking.
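One way to pin down what pyNVML is counting is to log it next to PyTorch's own allocator counters: NVML's device-level `used` figure includes the CUDA context and the caching allocator's reserved pool, so it will normally sit well above `memory_allocated()`. A rough sketch (assumes a CUDA device and the `pynvml` package):

```python
import pynvml
import torch

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)


def report(tag: str) -> None:
    """Print the NVML device figure alongside PyTorch's allocator counters."""
    nvml_used = pynvml.nvmlDeviceGetMemoryInfo(handle).used / (1024 ** 2)
    allocated = torch.cuda.memory_allocated() / (1024 ** 2)
    reserved = torch.cuda.memory_reserved() / (1024 ** 2)
    print(f"{tag}: nvml_used={nvml_used:.1f} MB  "
          f"allocated={allocated:.1f} MB  reserved={reserved:.1f} MB")


report("baseline")
x = torch.zeros(1_000_000, device="cuda")  # ~4 MB of float32
report("after allocation")
```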
-
Did the PyTorch update increase the memory usage of Seastar?
I wonder if updating to PyTorch 2.0 caused an increase in memory usage, because older Seastar logs seem to show lower consumption. This reference will come in handy since it deals with the part where PyTorch interfaces with Seastar.
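To check this, one option is to tag memory logs with the running PyTorch version and diff them at the same points in the training loop; a small hypothetical helper (not part of Seastar's existing logging):

```python
import torch


def log_memory(tag: str) -> None:
    """Log allocator stats tagged with the PyTorch version, so runs under
    1.x and 2.0 can be compared line by line."""
    allocated = torch.cuda.memory_allocated() / (1024 ** 2)
    peak = torch.cuda.max_memory_allocated() / (1024 ** 2)
    print(f"[torch {torch.__version__}] {tag}: "
          f"allocated={allocated:.2f} MB  peak={peak:.2f} MB")
```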
-
Closing this out.
-
Naive
Previously, all of the tensors produced at the current timestamp were pushed onto the state stack. That gave the following benchmark. Modifying the code to only store those tensors that will be used by the backend resulted in an improvement.
The code was further modified to not account for the CUDA context object of NaiveGraph; the modification was made here.
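A minimal sketch of the idea behind that modification (class and method names are hypothetical, not Seastar's actual state stack): at each timestamp, only the tensors the backend will later read are pushed, instead of every output produced.

```python
import torch


class StateStack:
    """Holds only the per-timestamp tensors needed later, not every output."""

    def __init__(self):
        self._stack = []

    def push_timestamp(self, outputs: dict, needed: set) -> None:
        # Keep only the outputs the backend will actually use; the rest
        # can be freed as soon as the timestamp's forward pass finishes.
        self._stack.append({k: v for k, v in outputs.items() if k in needed})

    def pop_timestamp(self) -> dict:
        return self._stack.pop()


# Usage: both tensors are produced, but only "h" survives on the stack.
stack = StateStack()
stack.push_timestamp(
    {"h": torch.zeros(2708, 16), "tmp": torch.zeros(2708, 16)},
    needed={"h"},
)
```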