So AWS Lambda now supports up to 10 GB of memory and scales vCPU capacity in proportion to the memory allocation. I ran inference with a 3 GB allocation and compared it against 10 GB, but did not see any major improvement. Why could this be? Maybe the statically compiled torch build cannot use all the vCPUs?
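One way to narrow this down is to log what the runtime actually sees. Below is a minimal diagnostic sketch (the `handler` name and return payload are placeholders, not from this repo): it prints how many CPUs are visible to the function and how many intra-op threads torch is configured to use, then asks torch to use all visible cores. If the static build was compiled without OpenMP/MKL support, `set_num_threads` will have little effect, which would explain the flat scaling.

```python
import os
import torch

def handler(event, context):
    # On Linux, sched_getaffinity reports the cores this process may actually
    # use; fall back to cpu_count() where it is unavailable.
    try:
        visible_cpus = len(os.sched_getaffinity(0))
    except AttributeError:
        visible_cpus = os.cpu_count() or 1

    # Lambda scales vCPUs with memory (roughly 6 vCPUs at 10 GB), so these
    # numbers should differ between the 3 GB and 10 GB configurations.
    print(f"visible CPUs: {visible_cpus}")
    print(f"torch intra-op threads: {torch.get_num_threads()}")

    # Ask torch to parallelize ops across all visible cores. If the build
    # lacks OpenMP/MKL support, this call will not help.
    torch.set_num_threads(visible_cpus)

    # ... run inference here ...
    return {"visible_cpus": visible_cpus,
            "torch_threads": torch.get_num_threads()}
```

If both numbers grow with the memory setting but latency stays flat, the bottleneck is likely the torch build itself (or a model too small to benefit from intra-op parallelism) rather than Lambda's CPU allocation.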