Description
Running ns-train results in a CUDA out-of-memory error, even with a downscale factor of 8. Is there a limit on the maximum number of images that can be used? Is the pipeline not scalable?
ns-train gaussctrl \
    --load-checkpoint unedited_models/unit_3/splatfacto/2025-10-17_115410/nerfstudio_models/step-000029999.ckpt \
    --experiment-name unit_3 \
    --output-dir outputs \
    --pipeline.datamanager.data data/unit_3 \
    --pipeline.edit_prompt "convert image to sunset" \
    --pipeline.reverse_prompt "image taken in daylight" \
    --pipeline.guidance_scale 5 \
    --pipeline.chunk_size 1 \
    --pipeline.ref_view_num 2 \
    --viewer.quit-on-train-completion True
[12:15:29] Saving config to: outputs/unit_3/gaussctrl/2025-10-17_121529/config.yml experiment_config.py:136
FutureWarning: torch.cuda.amp.GradScaler(args...) is deprecated. Please use torch.amp.GradScaler('cuda', args...) instead.
Saving checkpoints to: outputs/unit_3/gaussctrl/2025-10-17_121529/nerfstudio_models trainer.py:136
Auto image downscale factor of 1 gc_dataparser_ns.py:498
UserWarning: Using torch.cross without specifying the dim arg is deprecated.
Please either pass the dim explicitly or simply use torch.linalg.cross.
The default value of dim will change to agree with that of linalg.cross in a future release. (Triggered internally at ../aten/src/ATen/native/Cross.cpp:62.)
[12:15:30] Caching / undistorting train images gc_datamanager.py:115
[12:15:31] Caching / undistorting eval images gc_datamanager.py:141
FutureWarning: You are using torch.load with weights_only=False (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for weights_only will be flipped to True. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via torch.serialization.add_safe_globals. We recommend you start setting weights_only=True for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3609.)
final text_encoder_type: bert-base-uncased
FutureWarning: You are using torch.load with weights_only=False (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for weights_only will be flipped to True. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via torch.serialization.add_safe_globals. We recommend you start setting weights_only=True for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
Model loaded from /home/forrealnew/.cache/huggingface/hub/models--ShilongLiu--GroundingDINO/snapshots/a94c9b567a2a374598f05c584e96798a170c56fb/groundingdino_swinb_cogcoor.pth
=> _IncompatibleKeys(missing_keys=[], unexpected_keys=['label_enc.weight', 'bert.embeddings.position_ids'])
FutureWarning: You are using torch.load with weights_only=False (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for weights_only will be flipped to True. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via torch.serialization.add_safe_globals. We recommend you start setting weights_only=True for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
FutureWarning: resume_download is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use force_download=True.
Loading pipeline components...: 100%|██████████████████████████████████████████████████████████████████████████████| 7/7 [00:00<00:00, 12.19it/s]
FutureWarning: You are using torch.load with weights_only=False (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for weights_only will be flipped to True. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via torch.serialization.add_safe_globals. We recommend you start setting weights_only=True for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
Done loading Nerfstudio checkpoint from
unedited_models/unit_3/splatfacto/2025-10-17_115410/nerfstudio_models/step-000029999.ckpt
Rendering view 0
UserWarning: TORCH_CUDA_ARCH_LIST is not set, all archs for visible cards are included for compilation.
If this is not desired, please set os.environ['TORCH_CUDA_ARCH_LIST'].
0%| | 0/20 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/home/forrealnew/miniconda3/envs/gaussctrl/bin/ns-train", line 8, in
sys.exit(entrypoint())
File "/home/forrealnew/miniconda3/envs/gaussctrl/lib/python3.8/site-packages/nerfstudio/scripts/train.py", line 262, in entrypoint
main(
File "/home/forrealnew/miniconda3/envs/gaussctrl/lib/python3.8/site-packages/nerfstudio/scripts/train.py", line 247, in main
launch(
File "/home/forrealnew/miniconda3/envs/gaussctrl/lib/python3.8/site-packages/nerfstudio/scripts/train.py", line 189, in launch
main_func(local_rank=0, world_size=world_size, config=config)
File "/home/forrealnew/miniconda3/envs/gaussctrl/lib/python3.8/site-packages/nerfstudio/scripts/train.py", line 99, in train_loop
trainer.setup()
File "/home/forrealnew/repos/gpu-server-dev/scan-processor/gaussctrl/gaussctrl/gc_trainer.py", line 76, in setup
self.pipeline.render_reverse()
File "/home/forrealnew/repos/gpu-server-dev/scan-processor/gaussctrl/gaussctrl/gc_pipeline.py", line 142, in render_reverse
latent, _ = self.pipe(prompt=self.positive_reverse_prompt, # placeholder here, since cfg=0
File "/home/forrealnew/miniconda3/envs/gaussctrl/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/home/forrealnew/miniconda3/envs/gaussctrl/lib/python3.8/site-packages/diffusers/pipelines/controlnet/pipeline_controlnet.py", line 1234, in call
down_block_res_samples, mid_block_res_sample = self.controlnet(
File "/home/forrealnew/miniconda3/envs/gaussctrl/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/forrealnew/miniconda3/envs/gaussctrl/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "/home/forrealnew/miniconda3/envs/gaussctrl/lib/python3.8/site-packages/diffusers/models/controlnet.py", line 804, in forward
sample, res_samples = downsample_block(
File "/home/forrealnew/miniconda3/envs/gaussctrl/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/forrealnew/miniconda3/envs/gaussctrl/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "/home/forrealnew/miniconda3/envs/gaussctrl/lib/python3.8/site-packages/diffusers/models/unets/unet_2d_blocks.py", line 1199, in forward
hidden_states = attn(
File "/home/forrealnew/miniconda3/envs/gaussctrl/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/forrealnew/miniconda3/envs/gaussctrl/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "/home/forrealnew/miniconda3/envs/gaussctrl/lib/python3.8/site-packages/diffusers/models/transformers/transformer_2d.py", line 391, in forward
hidden_states = block(
File "/home/forrealnew/miniconda3/envs/gaussctrl/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/forrealnew/miniconda3/envs/gaussctrl/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "/home/forrealnew/miniconda3/envs/gaussctrl/lib/python3.8/site-packages/diffusers/models/attention.py", line 329, in forward
attn_output = self.attn1(
File "/home/forrealnew/miniconda3/envs/gaussctrl/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/forrealnew/miniconda3/envs/gaussctrl/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "/home/forrealnew/miniconda3/envs/gaussctrl/lib/python3.8/site-packages/diffusers/models/attention_processor.py", line 512, in forward
return self.processor(
File "/home/forrealnew/miniconda3/envs/gaussctrl/lib/python3.8/site-packages/diffusers/models/attention_processor.py", line 759, in call
attention_probs = attn.get_attention_scores(query, key, attention_mask)
File "/home/forrealnew/miniconda3/envs/gaussctrl/lib/python3.8/site-packages/diffusers/models/attention_processor.py", line 588, in get_attention_scores
attention_scores = torch.baddbmm(
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 24.78 GiB. GPU 0 has a total capacity of 47.49 GiB of which 13.74 GiB is free. Process 3499880 has 588.90 MiB memory in use. Including non-PyTorch memory, this process has 32.24 GiB memory in use. Of the allocated memory 30.99 GiB is allocated by PyTorch, and 764.49 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
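
From the traceback, the allocation that fails is the self-attention score tensor built by torch.baddbmm in get_attention_scores, which grows with the square of the number of latent tokens per image rather than with the number of images in the dataset, so this looks resolution-driven rather than a hard image-count limit. A workaround I'm considering (untested sketch, not something from the GaussCtrl docs) is enabling diffusers' attention slicing, or xFormers memory-efficient attention, on the ControlNet pipeline that render_reverse drives. The model IDs below and the exact place to hook this into gc_pipeline.py are guesses on my part:

```python
# Untested sketch: reduce attention memory in a diffusers ControlNet pipeline.
# enable_attention_slicing() and enable_xformers_memory_efficient_attention()
# are standard diffusers pipeline methods; the model IDs are placeholders and
# may not match what GaussCtrl actually loads.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11f1p_sd15_depth",  # placeholder ControlNet
    torch_dtype=torch.float16,
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder base model
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Compute self-attention in slices so the full (batch*heads, tokens, tokens)
# score matrix from torch.baddbmm is never materialized at once.
pipe.enable_attention_slicing()

# Alternatively, if xFormers is installed, memory-efficient attention avoids
# the large score tensor entirely:
# pipe.enable_xformers_memory_efficient_attention()
```

In gaussctrl itself, the equivalent would presumably be calling self.pipe.enable_attention_slicing() right after the pipeline is constructed, before render_reverse runs. Setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True, as the error message suggests, only helps with fragmentation and probably won't cover a single 24.78 GiB allocation.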