Javier


Hey all, I encountered a similar error while running inference on a multi-GPU setup (2 H100s) on SLURM. The LLM loaded successfully but hit this error when performing inference....