Sebastien Kerbrat

3 comments by Sebastien Kerbrat

This is caused by having an incompatible version of the graphql package. Upgrading from 14 to 16.0.1 fixed this for me.
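For anyone hitting the same thing, a minimal sketch of the dependency bump in package.json (the exact range and the rest of the file depend on your project):

```json
{
  "dependencies": {
    "graphql": "^16.0.1"
  }
}
```

After changing the range, reinstall so the lockfile resolves to the new version.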

I'm running into the same problem. After some debugging I've been able to reproduce with the following script:

```js
console.log(process.pid + ': Starting');
const shutdown = (code) => {
  console.log(process.pid...
```
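The preview above is cut off, so purely as an illustration (not the author's actual script), a minimal self-contained sketch of the kind of shutdown handler the snippet suggests could look like this:

```js
// Hypothetical sketch only; the original script is truncated above.
console.log(process.pid + ': Starting');

const shutdown = (code) => {
  console.log(process.pid + ': Shutting down with code ' + code);
  process.exit(code);
};

// Hook the usual termination signals so the shutdown path runs.
process.on('SIGINT', () => shutdown(0));
process.on('SIGTERM', () => shutdown(0));
```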

I'm getting a similar error running llama2 7B on 4 L4 GPUs in stage 3:

```yaml
deepspeed:
  train_micro_batch_size_per_gpu: 4096
  eval_micro_batch_size_per_gpu: 2048
  prescale_gradients: false
  bf16:
    enabled: true
  gradient_clipping: 10.0
  optimizer:
    type: ...
```
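The config preview is also truncated; since this is ZeRO stage 3, the missing part presumably includes a zero_optimization block. A hedged sketch of what that section commonly looks like (assumed values, not the author's actual settings):

```yaml
# Assumed example only: a typical ZeRO stage 3 section in a DeepSpeed config.
zero_optimization:
  stage: 3
  overlap_comm: true
  contiguous_gradients: true
```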