Alberto de la Ossa
Hi Igor, Thank you for the fast response! Yes, I run on one node with 4 GPUs with no problems. I thought that I needed to set `PYOPENCL_CTX=':'` to get...
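For reference, `PYOPENCL_CTX` is the environment variable that `pyopencl.create_some_context()` reads to pick a platform/device without prompting interactively. A minimal sketch of how a script can use it (the `"0:0"` value is just an example, not a recommendation):

```python
import os
import pyopencl as cl

# pyopencl reads PYOPENCL_CTX inside create_some_context() when
# interactive=False, so the device choice can be made from the shell.
# Format is "<platform>:<device>", e.g. "0:0" = first platform, first device.
os.environ.setdefault("PYOPENCL_CTX", "0:0")

ctx = cl.create_some_context(interactive=False)
for dev in ctx.devices:
    print("Selected device:", dev.name)
```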
Hello! I have tried @berceanu's branch https://github.com/hightower8083/synchrad/pull/28 and the situation improves. stdout:

```
Running on 8 devices
ALL | GPU device: NVIDIA A100-SXM4-40GB
ALL | GPU device: NVIDIA A100-SXM4-40GB
ALL...
```
Well, well, it's working great now with https://github.com/hightower8083/synchrad/pull/28. I just needed to add `-ppn 4` to `mpirun` so it is clear that there are 4 processes per node. ```...
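For anyone hitting the same multi-GPU setup: the pattern `-ppn 4` enables is one MPI rank per local GPU. A minimal sketch of the rank-to-GPU mapping, assuming 4 GPUs per node and a homogeneous cluster; this is not synchrad's actual device-selection code:

```python
from mpi4py import MPI
import pyopencl as cl

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# With `mpirun -ppn 4`, four ranks land on each node, so rank % 4
# indexes that node's local GPUs.
gpus_per_node = 4
local_id = rank % gpus_per_node

platform = cl.get_platforms()[0]
gpus = platform.get_devices(device_type=cl.device_type.GPU)
ctx = cl.Context(devices=[gpus[local_id]])
print(f"rank {rank} -> {gpus[local_id].name}")
```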
Thanks! The `-btype flattop --hghg` flags are arguments for `undulator_beam.py`. I will delete them from above to avoid confusion. Thanks for the offer, Igor: I'll try to catch you in...
Hello! I would like to follow up on this issue with an update. Last time I reported that `synchrad` ran well across multiple nodes (on the DESY Maxwell cluster) when using...
Hi Igor! As you guessed, I pass the tracks as a list to Synchrad. And yes, it is the CPU RAM that is being exhausted. Thank you!
Hello Igor, This solution (using a file to input the tracks) turned out to be very slow and disk-space hungry for a large number of particles, so in the end I...
Hi Igor, I am glad that you like the idea. I can only say that it has been very useful for running examples with on the order of a million particles and beyond. I...
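For context, one generic way to avoid both problems discussed above (the full track list exhausting CPU RAM, and the file round-trip being slow and disk-hungry) is to have each MPI rank build only its own share of the tracks. A minimal sketch, with hypothetical track shapes and helper names; this is not synchrad's actual interface:

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_total = 1_000_000        # total number of particle tracks
n_local = n_total // size  # this rank's share (assume it divides evenly)

def make_track(i, n_steps=2000):
    """Hypothetical stand-in for generating or loading one particle track."""
    rng = np.random.default_rng(seed=i)
    # columns x, y, z, ux, uy, uz, w per step -- illustrative only
    return rng.standard_normal((n_steps, 7))

# Each rank builds only its own slice, so peak CPU RAM per process
# scales with n_local instead of n_total.
local_tracks = [make_track(rank * n_local + i) for i in range(n_local)]
```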
Hi Maxence, I can take care of this in https://github.com/AngelFP/Wake-T/pull/173 whenever we finalize a first working version of the associated LASY PR, https://github.com/LASY-org/lasy/pull/361
Hi Jonas, You should be able to concatenate Wake-T and Ocelot with no problem. So you could do the LPA (laser-plasma accelerator) + APL (active plasma lens) with Wake-T and the rest with Ocelot. Have you...
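To make the hand-off concrete: a common point of friction when chaining two tracking codes is the phase-space convention. Below is a minimal sketch of the momentum-to-angle conversion typically needed between a plasma code (momenta normalized to m_e c) and a beam-optics code tracking trace-space angles; the array names are hypothetical, not Wake-T's or Ocelot's actual API:

```python
import numpy as np

# Hypothetical arrays describing the bunch leaving the Wake-T stage:
# positions in meters, momenta normalized to m_e*c.
n = 1000
x, y, z = np.zeros(n), np.zeros(n), np.zeros(n)
px, py = np.zeros(n), np.zeros(n)
pz = np.full(n, 2000.0)  # ~1 GeV/c longitudinal momentum, as an example

# Beam-optics codes like Ocelot track trace-space coordinates, so the
# hand-off is mostly this change of variables:
xp = px / pz  # x' = dx/ds (rad)
yp = py / pz  # y' = dy/ds (rad)
p_total = np.sqrt(px**2 + py**2 + pz**2)  # |p| in units of m_e*c
```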