HairFastGAN
Out of CPU Memory caused by Parallel()
First of all, thank you again for open-sourcing such excellent work. I am trying to run blending_train.py, but when I reach the following section:
```python
class Blending_dataset(Dataset):
    def __init__(self, exps, path, net_trainer):
        super().__init__()
        downsample_256 = BicubicDownSample(factor=4)

        data = Parallel(n_jobs=1)(
            delayed(prepare_item)(exp, path)
            for (p1, p2, p3) in tqdm(exps)
            for exp in [(p1, p2, p3), (p1, p3, p2)]
        )
        data = [elem for elem in data if elem is not None]
        print(f'Load: {len(data)}/{2 * len(exps)}', file=sys.stderr)
```
I notice that my CPU memory gradually fills up while this runs. Can I solve this issue by configuring a parameter of the Parallel class?
For reference, this is the signature of joblib's Parallel:

```python
class Parallel(Logger):
    def __init__(self, n_jobs=None, backend=None, verbose=0, timeout=None,
                 pre_dispatch='2 * n_jobs', batch_size='auto',
                 temp_folder=None, max_nbytes='1M', mmap_mode='r',
                 prefer=None, require=None):
```
I tried reducing n_jobs, batch_size, and max_nbytes, but it doesn't seem to help.
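Concretely, my attempts looked roughly like this (a sketch; the parameter values are just examples, and `prepare_item`, `exps`, and `path` are from the snippet above):

```python
from joblib import Parallel, delayed

# Roughly what I tried; none of it kept memory flat.
data = Parallel(
    n_jobs=1,                 # sequential execution, no worker pool
    pre_dispatch='1*n_jobs',  # dispatch fewer tasks ahead of time
    batch_size=1,             # no batching of tasks
    max_nbytes='1M',          # memmap large numpy arguments to disk
)(
    delayed(prepare_item)(exp, path)
    for (p1, p2, p3) in tqdm(exps)
    for exp in [(p1, p2, p3), (p1, p3, p2)]
)
```

My understanding (which may be wrong) is that these options only bound what joblib buffers in flight, while the full list of results is still accumulated in the parent process. Newer joblib versions (>= 1.3) also accept `return_as='generator'` to yield results lazily, but since `__init__` collects everything into one `data` list anyway, that alone would not cap the peak.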
Many thanks in advance for any suggestions.
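Edit: as a temporary workaround I am experimenting with dropping the up-front Parallel pass and calling `prepare_item` lazily in `__getitem__`, so that only one item is materialized at a time. This is my own sketch, not code from the repo (the class name is made up, it recomputes items every epoch, and items for which `prepare_item` returns None would still need handling):

```python
from torch.utils.data import Dataset


class LazyBlendingDataset(Dataset):
    # Hypothetical lazy variant of Blending_dataset: no preprocessing in
    # __init__, so CPU memory stays roughly constant during training.
    def __init__(self, exps, path, net_trainer):
        super().__init__()
        self.path = path
        self.net_trainer = net_trainer
        # Same expansion as the original: each triple in both orderings.
        self.items = [
            exp
            for (p1, p2, p3) in exps
            for exp in [(p1, p2, p3), (p1, p3, p2)]
        ]

    def __len__(self):
        return len(self.items)

    def __getitem__(self, idx):
        # Build the item on demand instead of ahead of time; a DataLoader
        # with num_workers > 0 can still parallelize this per batch.
        return prepare_item(self.items[idx], self.path)
```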
Hello. I also encountered a similar problem. Were you able to solve it? @MosbehBarhoumiRAI