Background-Matting
RuntimeError with train_real_fixed.py
System: Ubuntu 20.04, Nvidia GTX 1070
Everything works fine with Background-Matting (all steps) until I try to train on my own dataset. I used prepare_real.py to create the .csv file after doing the segmentation manually (I reused the segmentations I got while running my tests).
When I run the command:
CUDA_VISIBLE_DEVICES=0,1 python train_real_fixed.py -n Real_fixed -bs 4 -res 512 -init_model Models/syn-comp-adobe-trainset/net_epoch_64.pth
I get this RuntimeError:
RuntimeError: Given groups=1, weight of size [64, 3, 7, 7], expected input[4, 4, 518, 518] to have 3 channels, but got 4 channels instead
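The shape input[4, 4, 518, 518] means the loaded frames have 4 channels (RGBA) while the first conv layer expects 3. A quick way to find the offending files (my own snippet, not from the repo; it assumes the inputs are PNGs and Pillow is installed):

```python
from pathlib import Path
from PIL import Image

def find_rgba_images(root):
    # Scan a dataset directory for PNGs that carry an alpha channel;
    # these are the files that produce 4-channel tensors downstream.
    return [p for p in Path(root).rglob("*.png")
            if Image.open(p).mode == "RGBA"]
```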
For more details, here is the full execution log:
CUDA_VISIBLE_DEVICES=0,1 python train_real_fixed.py -n Real_fixed -bs 4 -res 512 -init_model Models/syn-comp-adobe-trainset/net_epoch_64.pth CUDA Device: 0,1
[Phase 1] : Data Preparation
[Phase 2] : Initialization
/home/pample/Bureau/Stage_Keying/Background-Matting/Background-Matting/networks.py:120: UserWarning: nn.init.xavier_uniform is now deprecated in favor of nn.init.xavier_uniform_.
init.xavier_uniform(m.weight, gain=np.sqrt(2))
/home/pample/Bureau/Stage_Keying/Background-Matting/Background-Matting/networks.py:123: UserWarning: nn.init.constant is now deprecated in favor of nn.init.constant_.
init.constant(m.bias, 0)
/home/pample/Bureau/Stage_Keying/Background-Matting/Background-Matting/networks.py:130: UserWarning: nn.init.normal is now deprecated in favor of nn.init.normal_.
init.normal(m.weight.data, 1.0, 0.2)
/home/pample/Bureau/Stage_Keying/Background-Matting/Background-Matting/networks.py:131: UserWarning: nn.init.constant is now deprecated in favor of nn.init.constant_.
init.constant(m.bias.data, 0.0)
Starting Training
Traceback (most recent call last):
File "train_real_fixed.py", line 126, in
The code in the dataloader uses io.imread(), which may read images as RGBA.
Just use cv2 and set the color order to RGB, not BGR.
@CocoRLin what do you mean by setting the color to RGB (for which images in the code, exactly)? I replaced io.imread() with cv2.imread() and then converted from BGR to RGB everywhere cv2.imread is used, for instance:
img = cv2.imread(self.frames.iloc[idx, 0])
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
This gives me the following error (it comes from changing io.imread to cv2.imread):
Execution log:
[Phase 1] : Data Preparation
[Phase 2] : Initialization
Starting Training
Traceback (most recent call last):
File "train_real_fixed.py", line 114, in <module>
for i,data in enumerate(train_loader):
File "/home/pample/anaconda3/envs/back-matting/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 345, in __next__
data = self._next_data()
File "/home/pample/anaconda3/envs/back-matting/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 856, in _next_data
return self._process_data(data)
File "/home/pample/anaconda3/envs/back-matting/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 881, in _process_data
data.reraise()
File "/home/pample/anaconda3/envs/back-matting/lib/python3.6/site-packages/torch/_utils.py", line 395, in reraise
raise self.exc_type(msg)
ValueError: Caught ValueError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/pample/anaconda3/envs/back-matting/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
data = fetcher.fetch(index)
File "/home/pample/anaconda3/envs/back-matting/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/pample/anaconda3/envs/back-matting/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/pample/Bureau/Stage_Keying/Background-Matting/Background-Matting/data_loader.py", line 57, in __getitem__
bbox=create_bbox(seg,seg.shape[0],seg.shape[1])
File "/home/pample/Bureau/Stage_Keying/Background-Matting/Background-Matting/data_loader.py", line 230, in create_bbox
x1, y1 = np.amin(where, axis=1)
ValueError: too many values to unpack (expected 2)
Make sure your image tensor has 3 channels, not 4. This can happen if the input images have an alpha channel; it should be resolved by using cv2.imread().
That error happens while computing the bounding box of the segmentation. Make sure the segmentation image seg has a single channel.
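A minimal reproduction of why create_bbox fails on a multi-channel mask (my own example, not the repo's code): np.where on a single-channel mask yields two index arrays, but on a 3-channel mask it yields three, so unpacking into (x1, y1) raises "too many values to unpack (expected 2)".

```python
import numpy as np

# Single-channel mask: bounding-box unpacking works.
seg_gray = np.zeros((8, 8), dtype=np.uint8)
seg_gray[2:5, 3:6] = 255

where = np.array(np.where(seg_gray))   # shape (2, N): rows, cols
x1, y1 = np.amin(where, axis=1)        # top-left corner of foreground
x2, y2 = np.amax(where, axis=1)        # bottom-right corner

# Same mask with 3 channels: np.where now returns three index arrays.
seg_rgb = np.stack([seg_gray] * 3, axis=-1)
try:
    a, b = np.amin(np.array(np.where(seg_rgb)), axis=1)
except ValueError as e:
    err = str(e)  # "too many values to unpack (expected 2)"
```

Loading the segmentation with cv2.imread(path, cv2.IMREAD_GRAYSCALE) (or taking seg[..., 0]) keeps it single-channel and avoids this.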