Continuing model training produces a tensor size error when adding a node #13

Description

@fathyshalaby

remote@pop-os:~/repos/autofurnish/external_experiments/scene_synthesis/deep_synth$ python continue_train.py --data-dir bedroom --save-dir bedroom --train-size 500 --use-count
Building model...
Converting to CUDA...
Building dataset...
Building data loader...
Building optimizer...
=========================== Epoch 0 ===========================
torch.Size([46, 38]) 108 191
torch.Size([44, 46]) 244 320
torch.Size([40, 41]) 110 413
torch.Size([232, 56]) 376 424
torch.Size([36, 150]) 366 303
torch.Size([191, 157]) 153 307
torch.Size([65, 154]) 339 233
torch.Size([120, 55]) 149 85
torch.Size([201, 178]) 114 378
torch.Size([45, 78]) 383 167
torch.Size([142, 187]) 161 98
torch.Size([75, 32]) 196 408
Traceback (most recent call last):
  File "/home/remote/repos/autofurnish/external_experiments/scene_synthesis/deep_synth/continue_train.py", line 206, in <module>
    train()
  File "/home/remote/repos/autofurnish/external_experiments/scene_synthesis/deep_synth/continue_train.py", line 136, in train
    for batch_idx, (data, target, existing) in enumerate(train_loader):
  File "/home/remote/repos/autofurnish/external_experiments/scene_synthesis/deep_synth/.venv_furnish/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
    data = self._next_data()
  File "/home/remote/repos/autofurnish/external_experiments/scene_synthesis/deep_synth/.venv_furnish/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
    return self._process_data(data)
  File "/home/remote/repos/autofurnish/external_experiments/scene_synthesis/deep_synth/.venv_furnish/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
    data.reraise()
  File "/home/remote/repos/autofurnish/external_experiments/scene_synthesis/deep_synth/.venv_furnish/lib/python3.9/site-packages/torch/_utils.py", line 425, in reraise
    raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/remote/repos/autofurnish/external_experiments/scene_synthesis/deep_synth/.venv_furnish/lib/python3.9/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/remote/repos/autofurnish/external_experiments/scene_synthesis/deep_synth/.venv_furnish/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/remote/repos/autofurnish/external_experiments/scene_synthesis/deep_synth/.venv_furnish/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/remote/repos/autofurnish/external_experiments/scene_synthesis/deep_synth/continue_dataset.py", line 63, in __getitem__
    composite.add_node(node)
  File "/home/remote/repos/autofurnish/external_experiments/scene_synthesis/deep_synth/data/rendered.py", line 189, in add_node
    to_add[xmin:xmin+xsize,ymin:ymin+ysize] = h
RuntimeError: The expanded size of the tensor (134) must match the existing size (178) at non-singleton dimension 1. Target sizes: [201, 134]. Tensor sizes: [201, 178]
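For context (an inference, not something stated in the logs): the composite grid to_add appears to be 512 cells wide, since the clipped target size 134 equals 512 - 378, and the failing node matches the torch.Size([201, 178]) 114 378 line above. When ymin + ysize runs past the grid edge, PyTorch clips the target slice to the grid bounds while h keeps its full size, so the slice assignment fails. Below is a minimal sketch of a boundary-clamping workaround, assuming add_node in data/rendered.py does a plain slice assignment as the traceback shows; add_node_clipped and the 512x512 grid size are assumptions for illustration, not the project's actual API.

    import torch

    def add_node_clipped(to_add, h, xmin, ymin):
        # Hypothetical variant of the slice assignment in data/rendered.py:
        # clamp the target region so it never extends past to_add's bounds,
        # and crop h by the same amount so both sides of the assignment agree.
        xsize, ysize = h.shape
        x_start, y_start = max(xmin, 0), max(ymin, 0)
        x_end = min(xmin + xsize, to_add.shape[0])
        y_end = min(ymin + ysize, to_add.shape[1])
        to_add[x_start:x_end, y_start:y_end] = \
            h[x_start - xmin:x_end - xmin, y_start - ymin:y_end - ymin]
        return to_add

    # Reproduces the reported failure mode: a 201x178 patch placed at column 378
    # of an (assumed) 512-wide grid only has 512 - 378 = 134 columns left, hence
    # "expanded size of the tensor (134) must match the existing size (178)".
    composite = torch.zeros(512, 512)
    h = torch.zeros(201, 178)
    add_node_clipped(composite, h, 114, 378)

Whether silently cropping the node heightmap is the right behaviour (versus rejecting or re-sampling out-of-bounds placements) depends on how the dataset is meant to handle nodes near the room boundary.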
