
Global seed is the same for each GPU in multi-GPU training

claforte opened this issue 1 year ago • 9 comments

The same seed seems to be used by every GPU, so using multi-GPU produces the same result as just using 1.

Reproduction:

python launch.py --config configs/dreamfusion-if.yaml --train --gpu 0,1 system.prompt_processor.prompt="a zoomed out DSLR photo of a baby bunny sitting on top of a stack of pancakes" data.batch_size=2 data.n_val_views=4

The log indicates that all GPUs' global seeds are set to the same value:

[rank: 0] Global seed set to 0
Initializing distributed: GLOBAL_RANK: 0, MEMBER: 1/2
...
**[rank: 1] Global seed set to 0**
Initializing distributed: GLOBAL_RANK: 1, MEMBER: 2/2

I also compared the images produced in a run with 2 GPUs to those produced with 1 GPU, and they were identical.

claforte avatar Jun 29 '23 23:06 claforte

Hi! Have you figured out any solution for this? I also find that multi-GPU training does not accelerate the training.

zqh0253 avatar Jul 04 '23 16:07 zqh0253

The issue is that all GPUs use the same seed inside the dataloader.

Debug code:

    def collate(self, batch) -> Dict[str, Any]:
        # sample elevation angles
        elevation_deg: Float[Tensor, "B"]
        elevation: Float[Tensor, "B"]

        # FIXME: set different seed for different gpu
        print(f"device:{get_device()}, {torch.rand(1)}")

Output:

device:cuda:1, tensor([0.4901])
device:cuda:0, tensor([0.4901])
device:cuda:1, tensor([0.0317])
device:cuda:0, tensor([0.0317])
device:cuda:2, tensor([0.0317])

Expected output: different devices give different random outputs

I am investigating this with @zqh0253 to figure out how to set a different seed on each GPU for data loading.

I set workers=True in pl.seed_everything(cfg.seed, workers=True), but it did not help.
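
For reference, here is a minimal standalone sketch (not threestudio code) of what rank-offset seeding could look like, assuming a single-node DDP run where Lightning exposes the process rank via the LOCAL_RANK environment variable:

    # Minimal sketch: offset the seed by the process rank so each GPU gets its
    # own random stream. LOCAL_RANK is assumed to be set by Lightning's DDP launcher.
    import os

    import pytorch_lightning as pl
    import torch

    def seed_per_rank(base_seed: int) -> None:
        rank = int(os.environ.get("LOCAL_RANK", "0"))
        # seed_everything(workers=True) seeds python/numpy/torch (and dataloader
        # workers) with the SAME value on every process, so camera sampling and
        # latent noise repeat across GPUs; adding the rank decorrelates them.
        pl.seed_everything(base_seed + rank, workers=True)
        print(f"rank {rank}: torch.rand -> {torch.rand(1).item():.4f}")

    if __name__ == "__main__":
        seed_per_rank(0)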

guochengqian avatar Jul 04 '23 20:07 guochengqian

Fixed this issue in PR: https://github.com/threestudio-project/threestudio/pull/212

guochengqian avatar Jul 04 '23 22:07 guochengqian

Already fixed in #220, which builds on #212.

thuliu-yt16 avatar Jul 14 '23 06:07 thuliu-yt16

As pointed out by @MrTornado24, the sampled noises are the same across different GPUs, which is not the expected behavior. We should check this.

bennyguo avatar Jul 25 '23 03:07 bennyguo

Could you kindly clarify which noise you are referring to? The noise added to the latent during guidance, or the randomly sampled cameras? I checked the sampled cameras with my PR and they worked as expected, but I did not check the noise added to the latent.

guochengqian avatar Jul 25 '23 03:07 guochengqian

@guochengqian I think it's the noise added to the latent. Could you please check this too?

bennyguo avatar Jul 25 '23 04:07 bennyguo

I can do this, but only late this week; I have to work on some interviews.

guochengqian avatar Jul 25 '23 04:07 guochengqian

For debugging purposes only, I added this line of code

print(f"rank: {get_rank()}, random: {torch.randn(1)}, noise: {noise} \n")

inside the compute_grad_sds function, right after the noise is generated.

I found that my PR https://github.com/threestudio-project/threestudio/pull/212 works: the noise added to the latent differs across GPUs, and so do the other random values.

I have been using multi-GPU training (PR #212) for weeks and it works well.

Note that https://github.com/threestudio-project/threestudio/pull/220 relies on broadcasting to keep model parameters identical across devices, but in its current version broadcasting is only implemented for implicit-sdf. You might have to fix this. Alternatively, just use my PR #212, which simply sets the random seed twice and does nothing else: the first time with the same seed on every device to initialize the models identically, and the second time with a different seed per device before training, so that each GPU samples different cameras and adds different noise to the latent.
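
For clarity, a rough sketch of that two-stage seeding (not a copy of the PR; global_rank and build_model are placeholders for illustration):

    # Stage 1: same seed everywhere -> identical initial weights, so no
    # parameter broadcasting is needed.
    # Stage 2: rank-dependent seed before training -> different cameras and
    # different latent noise on each GPU.
    import pytorch_lightning as pl

    def setup_with_two_seeds(base_seed: int, global_rank: int, build_model):
        pl.seed_everything(base_seed, workers=True)
        model = build_model()

        pl.seed_everything(base_seed + global_rank, workers=True)
        return model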

guochengqian avatar Jul 27 '23 18:07 guochengqian