Is it possible to warm up the model before inference? I want to load it into GPU memory ahead of time so that the first real request runs with near-zero latency.
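For context, this is the kind of pattern I mean: run one throwaway forward pass so later calls skip any one-time setup cost (kernel compilation, memory allocation, etc.). The sketch below uses a hypothetical `DummyModel` stand-in instead of a real framework, just to illustrate the idea:

```python
import time

class DummyModel:
    """Stand-in for a model whose first call pays a one-time setup cost."""
    def __init__(self):
        self._compiled = False

    def __call__(self, x):
        if not self._compiled:
            # Simulate lazy work done on the first call
            # (kernel compilation, memory allocation, graph capture, ...).
            time.sleep(0.05)
            self._compiled = True
        return x * 2

def warm_up(model, dummy_input):
    """Run one throwaway inference so real requests skip the setup cost."""
    model(dummy_input)

model = DummyModel()
warm_up(model, 0)           # pay the one-time cost up front

t0 = time.perf_counter()
result = model(21)          # real request: setup already done
elapsed = time.perf_counter() - t0
```

Is something like this the recommended approach with a real model on GPU, or is there a built-in way to pre-load and warm it up?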