
Checkpoint Issues

Open · vibhatha opened this issue on Mar 03 '20 · 11 comments

I tried the 'never' option for checkpointing. The idea was to see how the pipeline performs without the checkpointing overhead.

What I observed was that the performance is consistent for pipeline parallelism 2, 4, and 8. Another important observation was that the performance is much lower than with checkpointing.

Is this expected, or are there other tuning parameters to get better performance?

I checked the backward time and the forward-to-backward time ratio.

Assuming the backward time increases with checkpointing, is that a valid assumption for your implementation? In other words, should the pipeline performance improve when I turn off checkpointing?

Could you clarify the implementation details on this?

vibhatha · Mar 03 '20

By the original design, checkpointing trades speed for memory: it slows down the backward pass in exchange for much more memory capacity, by forgetting activations during the forward pass and recomputing them in the backward pass.
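As a rough illustration of the mechanism (plain PyTorch rather than torchgpipe internals, with made-up layer sizes), torch.utils.checkpoint shows the same trade-off: the checkpointed forward does not keep intermediate activations, so the backward pass has to re-run the forward to recover them.

import torch
from torch import nn
from torch.utils.checkpoint import checkpoint

# A toy stage whose activations we would rather not keep around.
stage = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1024))
x = torch.randn(64, 1024, requires_grad=True)

y_plain = stage(x)             # every intermediate activation is stored for backward
y_ckpt = checkpoint(stage, x)  # activations are dropped; forward is replayed in backward

y_ckpt.sum().backward()        # slower backward, but the forward held less memory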

> What I observed was that the performance is consistent for pipeline parallelism 2, 4, and 8. Another important observation was that the performance is much lower than with checkpointing.

If by "performance" you mean "speed", your second observation is unexpected. torchgpipe without checkpointing is identical to typical pipeline parallelism, not GPipe. If you choose the same chunk size in both settings, the concurrency should not decrease. How did you choose the batch size and the number of chunks in the checkpoint='never' and checkpoint='except_last' settings?
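For reference, a minimal sketch of the two settings (placeholder model, balance, and sizes; it assumes two visible GPUs, indexed 0 and 1): only the checkpoint argument should differ between runs, with the batch size and chunks held fixed.

import torch
from torch import nn
from torchgpipe import GPipe

# Placeholder two-stage model; balance [2, 1] puts the first two layers on GPU 0
# and the last layer on GPU 1.
model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 10))

gpipe_ckpt = GPipe(model, balance=[2, 1], devices=[0, 1], chunks=8,
                   checkpoint='except_last')
# For the comparison run, build the same thing with checkpoint='never':
# gpipe_nockpt = GPipe(model, balance=[2, 1], devices=[0, 1], chunks=8,
#                      checkpoint='never')

x = torch.rand(64, 1024)                       # 64 samples split into 8 micro-batches
out = gpipe_ckpt(x.to(gpipe_ckpt.devices[0]))  # output ends up on the last device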

sublee · Mar 04 '20

For this, I chose a batch size of 60 with 480 data points. That was the largest batch size for which I was able to fit the model into memory.

Then I use checkpoint 'never' or 'except_last'.

I also added a few command-line arguments to make this convenient.

#!/bin/bash
id=$1
chk=$2
dataset_size=480
epochs=10
exp_type=pipeline-${id}
version=6_checkpoint_${chk}_chunk_variation
batch_size=240

for chunk_size in 10 20 40 60 120
do
   echo "python3 main-micro.py ${exp_type} --batch_size ${batch_size} --chunks ${chunk_size} --dataset_size ${dataset_size} --save_file stats_micro_${exp_type}_v${version}.csv --epochs ${epochs} --checkpointing ${chk}"
   python3 main-micro.py ${exp_type} --batch_size ${batch_size} --chunks ${chunk_size} --dataset_size ${dataset_size} --save_file stats_micro_${exp_type}_v${version}.csv --epochs ${epochs} --checkpointing ${chk}
done
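For context, the flags above would map onto GPipe roughly as follows (a simplified, hypothetical sketch, not the actual main-micro.py):

import argparse
from torchgpipe import GPipe

parser = argparse.ArgumentParser()
parser.add_argument('exp_type')
parser.add_argument('--batch_size', type=int)
parser.add_argument('--chunks', type=int)
parser.add_argument('--dataset_size', type=int)
parser.add_argument('--save_file')
parser.add_argument('--epochs', type=int)
parser.add_argument('--checkpointing', choices=['always', 'except_last', 'never'],
                    default='except_last')
args = parser.parse_args()

# The model, balance, and training loop live elsewhere in the script; only the
# wrapping is relevant here:
# model = GPipe(model, balance, chunks=args.chunks, checkpoint=args.checkpointing)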

vibhatha · Mar 09 '20

One possibility came to my mind: when a process uses up almost all CUDA memory, the CUDACachingAllocator in PyTorch might synchronize with the GPU to release garbage blocks, and frequent synchronization between CPU and GPU is bad for speed. Why don't you try a smaller batch size and profile both options with NVIDIA Nsight Systems?
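For example (a generic sketch, not from the benchmark code), you can check how close each run gets to the memory ceiling while capturing a timeline with Nsight Systems:

# Capture a timeline from the shell, e.g.:
#   nsys profile -o pipeline_run python3 main-micro.py ...
import torch

def report_memory(tag: str, device: int = 0) -> None:
    # Peak tensor memory vs. memory reserved by the caching allocator; a reserved
    # figure close to the GPU's total capacity hints that the allocator may be
    # freeing blocks (and synchronizing) frequently.
    alloc = torch.cuda.max_memory_allocated(device) / 2**30
    reserved = torch.cuda.max_memory_reserved(device) / 2**30
    total = torch.cuda.get_device_properties(device).total_memory / 2**30
    print(f'{tag}: peak allocated {alloc:.2f} GiB, '
          f'reserved {reserved:.2f} GiB of {total:.2f} GiB total')

# e.g. after a few training steps:
# report_memory('checkpoint=never, chunks=40')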

sublee · Mar 10 '20

Yes, I am heading that way @sublee. I observed some overheads with smaller batch sizes. I am profiling those parts.

vibhatha · Mar 10 '20

https://github.com/kakaobrain/torchgpipe/blob/master/benchmarks/unet-speed/main.py

Here, when doing the speed benchmarks, why isn't a constant mini-batch size used for pipelining? Shouldn't the variable be the number of chunks?

vibhatha · Mar 26 '20

The constant batch_sizes are used in input = torch.rand(batch_size, 3, 192, 192, device=in_device) on line 168.

sublee · Mar 26 '20

    @staticmethod
    def baseline(model: nn.Module, devices: List[int]) -> Stuffs:
        batch_size = 40
        device = devices[0]
        model.to(device)
        return model, batch_size, [torch.device(device)]

    @staticmethod
    def pipeline1(model: nn.Module, devices: List[int]) -> Stuffs:
        batch_size = 80
        chunks = 2
        balance = [241]

        model = cast(nn.Sequential, model)
        model = GPipe(model, balance, devices=devices, chunks=chunks)
        return model, batch_size, list(model.devices)

    @staticmethod
    def pipeline2(model: nn.Module, devices: List[int]) -> Stuffs:
        batch_size = 512
        chunks = 32
        balance = [104, 137]

        model = cast(nn.Sequential, model)
        model = GPipe(model, balance, devices=devices, chunks=chunks)
        return model, batch_size, list(model.devices)

So the batch_size that goes into that line comes from these? Or have I misunderstood this?

vibhatha · Mar 26 '20

Those static methods return batch_size as a result. It is used to initialize input later.

EXPERIMENTS: Dict[str, Experiment] = {
    'baseline': Experiments.baseline,
    'pipeline-1': Experiments.pipeline1,
    'pipeline-2': Experiments.pipeline2,
    'pipeline-4': Experiments.pipeline4,
    'pipeline-8': Experiments.pipeline8,
}
...
    f: Experiment = EXPERIMENTS[experiment]
    try:
        model, batch_size, _devices = f(model, devices)
...
    input = torch.rand(batch_size, 3, 192, 192, device=in_device)

sublee · Mar 26 '20

Yes, those values are different. I mean the batch size for each pipeline config is different. What is the reason for this?

vibhatha · Mar 26 '20

Sorry for misunderstanding what "constant" means. We adjusted the batch sizes to maximize throughput. You can find a similar explanation in section "4.2. Performance" of v1 of the paper.
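In other words, "maximize the throughput" here just means sweeping candidate batch sizes for each config and keeping the fastest. A rough sketch of such a sweep (hypothetical helper, not the benchmark code):

import time
import torch

def samples_per_sec(model, batch_size, in_device, out_device, steps=10):
    # Rough throughput for one candidate batch size, timing forward + backward.
    input = torch.rand(batch_size, 3, 192, 192, device=in_device)
    torch.cuda.synchronize(in_device)
    tick = time.time()
    for _ in range(steps):
        output = model(input)
        output.mean().backward()   # dummy loss, only to exercise the backward pass
    torch.cuda.synchronize(out_device)
    return batch_size * steps / (time.time() - tick)

# for candidate in (256, 384, 512, 640):
#     print(candidate, samples_per_sec(gpipe, candidate, in_device, out_device))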

sublee · Mar 26 '20

That is totally fine. I just wanted to learn why the numbers were chosen like that. Thank you very much :+1:

vibhatha · Mar 26 '20