EarthXP

4 comments by EarthXP

> not GPT-Q int8, just FT weight int8; see the following link.
>
> https://github.com/NVIDIA/FasterTransformer/blob/f8e42aac45815c5be92c0915b12b9a6652386e8c/examples/pytorch/gpt/bloom_lambada.py#L165-L170
>
> thank you for the llm-awq link, I will check it.

@Xingxiangrui thanks for...
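For context, "FT weight int8" refers to weight-only quantization: the weights are stored as int8 with a floating-point scale and dequantized at matmul time, with no activation quantization as in GPT-Q-style schemes. Below is a minimal illustrative sketch of symmetric per-output-channel weight-only int8 quantization; the helper names are hypothetical and this is not FasterTransformer's actual code.

```python
import torch

def quantize_weight_int8(w: torch.Tensor):
    # Symmetric per-output-channel scale: the max |w| in each row maps to 127.
    scale = w.abs().amax(dim=1, keepdim=True) / 127.0
    q = torch.clamp(torch.round(w / scale), -127, 127).to(torch.int8)
    return q, scale

def int8_linear(x: torch.Tensor, q: torch.Tensor, scale: torch.Tensor):
    # Dequantize on the fly, then run a normal fp32 matmul.
    return x @ (q.float() * scale).t()

w = torch.randn(8, 16)
q, s = quantize_weight_int8(w)
x = torch.randn(4, 16)
max_err = (int8_linear(x, q, s) - x @ w.t()).abs().max()  # small quantization error
```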

> > > > I plan to do it.

Did you get started? How is it going now? Any progress?

> Hey, how did you write your tensor_parallelize function if you followed our gpt2 example?

There is a `tensor_parallelize` function in the [gpt example](https://github.com/hpcaitech/ColossalAI/blob/main/examples/language/gpt/gemini/train_gpt_demo.py#L142), which needs people to implement their...
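The core idea such a function has to implement is sharding each large linear layer's weight across ranks. A minimal sketch of a column-parallel split in plain PyTorch is below; this is only an illustration of the concept, not ColossalAI's API, which uses its own sharding specs inside `tensor_parallelize`.

```python
import torch
import torch.nn as nn

def shard_linear_column(linear: nn.Linear, rank: int, world_size: int) -> nn.Linear:
    # Column-parallel split: each rank keeps a contiguous slice of the
    # output features (rows of the weight matrix).
    out_per_rank = linear.out_features // world_size
    lo, hi = rank * out_per_rank, (rank + 1) * out_per_rank
    shard = nn.Linear(linear.in_features, out_per_rank,
                      bias=linear.bias is not None)
    with torch.no_grad():
        shard.weight.copy_(linear.weight[lo:hi])
        if linear.bias is not None:
            shard.bias.copy_(linear.bias[lo:hi])
    return shard

# At runtime, column-parallel layers all-gather their partial outputs,
# while row-parallel layers all-reduce them.
```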

I met the same issue. I also found that merging the LoRA on a 3090 gives a different result than on an A100; by different I mean the two merged models have different MD5 checksums.
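An MD5 mismatch alone may not indicate a real problem: floating-point matmuls are not guaranteed to be bitwise identical across GPU architectures (different kernels and reduction orders), so a bitwise hash can differ even when the merged models are numerically equivalent. Below is a sketch of the standard LoRA merge, W' = W + (alpha/r) · B·A, plus a tolerance-based comparison; the file paths and helper name are hypothetical.

```python
import torch

def merge_lora(w: torch.Tensor, lora_a: torch.Tensor, lora_b: torch.Tensor,
               alpha: float, r: int) -> torch.Tensor:
    # Fold the low-rank update into the base weight:
    # W' = W + (alpha / r) * B @ A, with B: [out, r], A: [r, in].
    return w + (alpha / r) * (lora_b @ lora_a)

# Compare two merged checkpoints numerically instead of by MD5.
merged_3090 = torch.load("merged_3090.bin")  # hypothetical paths
merged_a100 = torch.load("merged_a100.bin")
same = all(torch.allclose(merged_3090[k], merged_a100[k], atol=1e-6)
           for k in merged_3090)
```

If `torch.allclose` still reports large differences, the merge itself (dtype, scaling, or weight layout) is worth checking rather than the hardware.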