forcekkk

4 comments by forcekkk

My torch version is 1.10, Python 3.6, CUDA 11.6, cuDNN 8.6, and the error occurs when running `python setup.py bdist_wheel` to install spconv 1.2.1.

> Sorry, I have not solved this problem yet.

I want to use 8× 4090 (24 GB) GPUs to quantize LLaMA-7B to W4A4, but I get this error. Command: `python main.py --model ./llama-7b --epochs 1 --output_dir ./log/llama-7b-w4a4 --eval_ppl --wbits 4...`

Hi! I am reproducing the W4A4 results with the settings `n_samples_per_class = 2, ddim_steps = 20, ddim_eta = 0, scale = 3, Epoch = 160`, but I cannot obtain the correct images....