question about expected speedup when using parallelization via joblib

Open nguyentr17 opened this issue 11 months ago • 3 comments

Hi,

I came across your repository while searching for ways to train multiple NN models simultaneously on a single GPU. My model is pretty small (just a 1-layer MLP), and each model uses only about 260 MB of VRAM. However, when I use joblib to train multiple models at the same time, they do start together (according to the log), but the total training time is the same as training the models sequentially. Do you have any tips, quick insights, or things I should look at? I know this is not directly an issue with your package, but I would really appreciate any help.

My code is like this:

    from joblib import Parallel, delayed, parallel_backend

    # Each call to process_latent_pair trains one NN model on DEVICE;
    # 13 x 13 = 169 jobs are dispatched to loky worker processes.
    with parallel_backend('loky', n_jobs=-1):
        parallel = Parallel(n_jobs=-1)
        parallel(
            delayed(process_latent_pair)(mi_estimator, iid, tid, cfg, exp_name, args, DEVICE)
            for iid in range(13)
            for tid in range(13)
        )
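
A quick sanity check I can run (assuming nvidia-ml-py / pynvml is installed, which `torch.cuda.utilization` relies on) is whether a single sequential job already keeps the GPU busy; if utilization is near 100% for one job, concurrent processes can only time-slice the device and will not finish any faster:

    import torch

    # Percent of recent time a kernel was executing on the GPU. A value near
    # 100 during a single sequential run means there is no idle capacity left
    # for additional concurrent jobs to use.
    print(torch.cuda.utilization())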

My environment:

python 3.9.19
torch==2.4.1
joblib==1.4.2
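
For reference, an alternative I have been considering is to avoid multiprocessing entirely and stack all the small MLPs into batched parameter tensors, so that one set of kernel launches steps every model at once. This is only a rough sketch with placeholder layer sizes and random data; it is not what `process_latent_pair` actually does:

    import torch

    DEVICE = "cuda"
    n_models, in_dim, hidden, out_dim = 169, 32, 64, 1  # placeholder sizes

    # One stacked tensor per layer; the leading dim indexes the model
    # (untuned random init, just to show the shapes).
    W1 = torch.randn(n_models, in_dim, hidden, device=DEVICE, requires_grad=True)
    b1 = torch.zeros(n_models, 1, hidden, device=DEVICE, requires_grad=True)
    W2 = torch.randn(n_models, hidden, out_dim, device=DEVICE, requires_grad=True)
    b2 = torch.zeros(n_models, 1, out_dim, device=DEVICE, requires_grad=True)
    opt = torch.optim.Adam([W1, b1, W2, b2], lr=1e-3)

    # Per-model mini-batches: (n_models, batch, features).
    x = torch.randn(n_models, 128, in_dim, device=DEVICE)
    y = torch.randn(n_models, 128, out_dim, device=DEVICE)

    for step in range(100):
        h = torch.relu(torch.baddbmm(b1, x, W1))  # (n_models, 128, hidden)
        pred = torch.baddbmm(b2, h, W2)           # (n_models, 128, out_dim)
        loss = ((pred - y) ** 2).mean()           # joint loss over all models
        opt.zero_grad()
        loss.backward()
        opt.step()

Since the models are independent, averaging the per-model losses into one scalar still gives each model its own gradients, so all 169 MLPs train in lockstep from a single Python process.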

nguyentr17 · Dec 04 '24, 14:12