xDAN-AI
Yes. I'm trying to quantize it with AutoAWQ. It's a Mixtral-style model built from several 34B Yi finetuned models.
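For reference, this is roughly the AutoAWQ flow I'm running; the paths are placeholders and the `quant_config` values are just the standard 4-bit settings from the AutoAWQ examples:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

# Placeholder paths for the merged Mixtral-of-Yi checkpoint and the output dir.
model_path = "path/to/mixtral-yi-34b-moe"
quant_path = "path/to/mixtral-yi-34b-moe-awq"

# Standard 4-bit AWQ settings from the AutoAWQ examples.
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Run the AWQ calibration/quantization pass and save the quantized weights.
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```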
> Oh wait you might have to update pip, I installed it with pip.

So you mean I should upgrade pip individually again? But my pip version is already the newest...
Furthermore, what does this message mean? `/root/miniconda3/envs/unsloth/lib/python3.10/site-packages/unsloth/__init__.py:23: UserWarning: Unsloth: 'CUDA_VISIBLE_DEVICES' is currently 0,1,2,3,4,5,6,7 but we require 'CUDA_VISIBLE_DEVICES=0' We shall set it ourselves.`
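As far as I can tell it just means unsloth wants a single visible GPU and overrides the variable on import. A minimal sketch of setting it myself beforehand, assuming the run really only needs one GPU, would be:

```python
import os

# Pin the process to a single GPU before importing unsloth, so it does not
# have to override CUDA_VISIBLE_DEVICES itself (which is what the warning reports).
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

from unsloth import FastLanguageModel  # usual unsloth entry point
```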
Nope, it's on Lambda Labs.
LM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if...
Has this been fixed here?
> Hi, can you share a repo that allows me to easily reproduce this? Because we already support MPT, which has a different prompt format, and it is unclear where...
> Rebased! For those using this branch earlier, you will need to delete and repull due to the rebase.
>
> Breaking change: We do not need the `type: sharegpt.load_multirole` anymore!...
`python main.py $MODEL_PATH $DATASET_PATH --nsamples=1024 \ --num_codebooks=1 --nbits_per_codebook=16 --in_group_size=8 \ --relative_mse_tolerance=0.01 --finetune_relative_mse_tolerance=0.001 \ --finetune_batch_size=32 --local_batch_size=1 --offload_activations \ --wandb --save $SAVE_PATH`
Could you share an example script for quantizing a 70B model on 8×A100?