Pointnet_Pointnet2_pytorch
AttributeError: Can't pickle local object 'main.<locals>.<lambda>'
This error occurs when I execute train_semseg.py:
PS F:\pointnet_pointnet2_pytorch-master> python train_semseg.py --model pointnet2_sem_seg --test_area 5 --log_dir pointnet2_sem_seg
PARAMETER ...
Namespace(batch_size=16, decay_rate=0.0001, epoch=32, gpu='0', learning_rate=0.001, log_dir='pointnet2_sem_seg', lr_decay=0.7, model='pointnet2_sem_seg', npoint=4096, optimizer='Adam', step_size=10, test_area=5)
start loading training data ...
100%|██████████| 118/118 [00:09<00:00, 12.18it/s]
[1.1122853 1.1530312 1. 2.2862618 2.3985515 2.3416872 1.6953672 2.051836 1.7089869 3.416529 1.840006 2.7374067 1.3777069]
Totally 28940 samples in train set.
start loading test data ...
100%|██████████| 46/46 [00:04<00:00, 10.26it/s]
[ 1.1516608 1.2053679 1. 11.941072 2.6087077 2.0597224 2.1135178 2.0812197 2.5563374 4.5242124 1.4960177 2.9274836 1.6089553]
Totally 12881 samples in test set.
The number of training data is: 28940
The number of test data is: 12881
Use pretrain model
Learning rate:0.000700
BN momentum updated to: 0.050000
Traceback (most recent call last):
  File "train_semseg.py", line 295, in <module>
    main(args)
  File "train_semseg.py", line 181, in main
    for i, (points, target) in tqdm(enumerate(trainDataLoader), total=len(trainDataLoader), smoothing=0.9):
  File "F:\miniconda3\envs\pytorch_1.8_wsh\lib\site-packages\torch\utils\data\dataloader.py", line 355, in __iter__
    return self._get_iterator()
  File "F:\miniconda3\envs\pytorch_1.8_wsh\lib\site-packages\torch\utils\data\dataloader.py", line 301, in _get_iterator
    return _MultiProcessingDataLoaderIter(self)
  File "F:\miniconda3\envs\pytorch_1.8_wsh\lib\site-packages\torch\utils\data\dataloader.py", line 914, in __init__
    w.start()
  File "F:\miniconda3\envs\pytorch_1.8_wsh\lib\multiprocessing\process.py", line 121, in start
    self._popen = self._Popen(self)
  File "F:\miniconda3\envs\pytorch_1.8_wsh\lib\multiprocessing\context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "F:\miniconda3\envs\pytorch_1.8_wsh\lib\multiprocessing\context.py", line 326, in _Popen
    return Popen(process_obj)
  File "F:\miniconda3\envs\pytorch_1.8_wsh\lib\multiprocessing\popen_spawn_win32.py", line 93, in __init__
    reduction.dump(process_obj, to_child)
  File "F:\miniconda3\envs\pytorch_1.8_wsh\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'main.<locals>.<lambda>'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "F:\miniconda3\envs\pytorch_1.8_wsh\lib\multiprocessing\spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "F:\miniconda3\envs\pytorch_1.8_wsh\lib\multiprocessing\spawn.py", line 126, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
I don't know how to solve this problem. Could anyone give me a hand?
Try modifying num_workers to 0 in train_semseg.py. You can find details at https://github.com/matterport/Mask_RCNN/issues/93. It works fine on my PC.
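For reference, the root cause on Windows is that DataLoader workers are started with the spawn method, so everything handed to a worker must be picklable, and a lambda defined inside main() is not. Below is a minimal sketch of the two usual workarounds, assuming the script passes a seeding lambda as worker_init_fn; names such as TRAIN_DATASET and BATCH_SIZE are placeholders, not necessarily the exact identifiers in train_semseg.py.

import time
import numpy as np
import torch

# Option 1: no worker processes, so nothing needs to be pickled (works everywhere, slower loading)
trainDataLoader = torch.utils.data.DataLoader(
    TRAIN_DATASET, batch_size=BATCH_SIZE, shuffle=True,
    num_workers=0, pin_memory=True, drop_last=True)

# Option 2: keep the workers, but replace the inline lambda with a
# module-level function, which the spawn pickler can serialize
def seed_worker(worker_id):
    np.random.seed(worker_id + int(time.time()))

trainDataLoader = torch.utils.data.DataLoader(
    TRAIN_DATASET, batch_size=BATCH_SIZE, shuffle=True,
    num_workers=4, pin_memory=True, drop_last=True,
    worker_init_fn=seed_worker)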
Hello, I have received your email. I will read it and reply to you as soon as possible.
It is related to memory overflow. You should change the file loading to HDF5-format files; then you can set num_workers > 0. Here is sample code that loads an HDF5-format dataset.
import os

import h5py
import numpy as np
import torch
import torch.utils.data as data

BASE_DIR = os.path.dirname(os.path.abspath(__file__))


def _get_data_files(list_filename):
    with open(list_filename) as f:
        return [line.rstrip() for line in f]


def _load_data_file(name):
    f = h5py.File(name, "r")
    data = f["data"][:]
    label = f["label"][:]
    return data, label


class Indoor3DSemSeg(data.Dataset):
    def __init__(self, num_points, train=True, download=True, data_precent=1.0):
        super().__init__()
        self.data_precent = data_precent
        self.folder = "indoor3d_sem_seg_hdf5_data"
        self.data_dir = os.path.join('../', BASE_DIR, self.folder)
        # self.url = "https://shapenet.cs.stanford.edu/media/indoor3d_sem_seg_hdf5_data.zip"

        self.train, self.num_points = train, num_points

        all_files = _get_data_files(os.path.join(self.data_dir, "all_files.txt"))
        room_filelist = _get_data_files(
            os.path.join(self.data_dir, "room_filelist.txt")
        )

        # Histogram of the 13 semantic labels, used to build per-class weights
        labelweights = np.zeros(13)
        for f in all_files:
            _, labels = _load_data_file(os.path.join(BASE_DIR, f))
            tmp, _ = np.histogram(labels, range(14))
            labelweights += tmp
        labelweights = labelweights.astype(np.float32)
        labelweights = labelweights / np.sum(labelweights)
        self.labelweights = np.power(np.amax(labelweights) / labelweights, 1 / 3.0)

        # Load all HDF5 blocks into memory
        data_batchlist, label_batchlist = [], []
        for f in all_files:
            data, label = _load_data_file(os.path.join(BASE_DIR, f))
            data_batchlist.append(data)
            label_batchlist.append(label)
        data_batches = np.concatenate(data_batchlist, 0)
        labels_batches = np.concatenate(label_batchlist, 0)

        # Split rooms into train/test; Area_5 is held out for testing
        test_area = "Area_5"
        train_idxs, test_idxs = [], []
        for i, room_name in enumerate(room_filelist):
            if test_area in room_name:
                test_idxs.append(i)
            else:
                train_idxs.append(i)

        if self.train:
            self.points = data_batches[train_idxs, ...]
            self.labels = labels_batches[train_idxs, ...]
        else:
            self.points = data_batches[test_idxs, ...]
            self.labels = labels_batches[test_idxs, ...]

    def __getitem__(self, idx):
        # Randomly permute the points of the sampled block
        pt_idxs = np.arange(0, self.num_points)
        np.random.shuffle(pt_idxs)

        current_points = torch.from_numpy(self.points[idx, pt_idxs].copy()).float()
        current_labels = torch.from_numpy(self.labels[idx, pt_idxs].copy()).long()

        return current_points, current_labels

    def __len__(self):
        return int(self.points.shape[0] * self.data_precent)

    def set_num_points(self, pts):
        self.num_points = pts

    def randomize(self):
        pass
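As a quick sanity check (a sketch, not from the thread): because Indoor3DSemSeg above is a plain top-level class with no lambdas or local functions attached, the dataset object pickles cleanly and can be used with num_workers > 0. The batch size and point count below are illustrative assumptions.

# Assumes the indoor3d_sem_seg_hdf5_data folder is in place as described above
train_set = Indoor3DSemSeg(num_points=4096, train=True)
train_loader = data.DataLoader(
    train_set,
    batch_size=16,
    shuffle=True,
    num_workers=4,      # > 0 is fine: the dataset is picklable for spawned workers
    drop_last=True,
)
points, labels = next(iter(train_loader))
print(points.shape, labels.shape)   # e.g. torch.Size([16, 4096, 9]), torch.Size([16, 4096])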
@Ameliecc Did you already solve this problem in this way?
Yes
Hi,
I've met a similar error when training my model, but I got around it by setting num_workers = 1 in the DataLoader.
Hope this helps!