Denis Kuznedelev

26 comments of Denis Kuznedelev

Hi, @JianbangZ. At the moment we have not released the inference code. We intend to release it in the near future.

@tavisshore good to know. Anyway, the quick-and-dirty workaround of editing `ctime` works with newer compiler versions, but yours is a cleaner solution.

@VirtualRoyalty you may try and see how shorter sequences affect the quality. When I was tuning Mixtral, I used 7k instead of 8k to fit into memory and this seems...
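For illustration, a minimal sketch of what I mean, assuming the calibration set is a list of token-ID sequences (the helper name and the ~7k length are illustrative):

```python
def truncate_calibration_sequences(sequences, max_len=7168):
    # Keep only the first `max_len` tokens of every calibration sequence,
    # so the activations of a single block fit into GPU memory.
    return [seq[:max_len] for seq in sequences]

# e.g. shortened = truncate_calibration_sequences(calib_data, max_len=7168)
```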

Hi, @l33tx0. Could you please provide more information about your environment? It seems like your Python version is either too new or too old.

Hi, @tsengalb99. We have re-run the fine-tuning, mostly following the QuIP# fine-tuning protocol from your arXiv paper. Specifically, we split the calibration data into train and validation sets and perform block fine-tuning...
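Roughly, the split looks like this (a sketch, not the actual fine-tuning code; the function name and the validation fraction are assumptions):

```python
import random

def split_calibration_data(sequences, val_fraction=0.1, seed=0):
    # Shuffle the calibration sequences and hold out a fraction as a
    # validation set used to monitor block fine-tuning.
    rng = random.Random(seed)
    indices = list(range(len(sequences)))
    rng.shuffle(indices)
    n_val = max(1, int(len(sequences) * val_fraction))
    val_ids = set(indices[:n_val])
    train = [s for i, s in enumerate(sequences) if i not in val_ids]
    val = [s for i, s in enumerate(sequences) if i in val_ids]
    return train, val
```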

@deciding The current Llama-2-7b checkpoint with wikitext2 ppl=5.91 was obtained as follows. Quantization with blockwise finetuning yields 6.22 ppl. Compared to the version in the `main` branch it has early...

@deciding I do not remember the exact numbers; I think the first part took 1 day on two A100s and the second one 6 hours on a single A100.

@yuhaoliu7456 could you provide more context? I tried it myself and the outputs of the SD v1.4 `VAEDecoder` and `ConsistencyDecoder` differ significantly. Code:

```python
vae = AutoencoderKL.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    subfolder="vae",
    revision="fp16",
    torch_dtype=torch.float16,
    ...
```
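For reference, a fuller sketch of the comparison using the `ConsistencyDecoderVAE` wrapper from diffusers (the model IDs and the random stand-in input are illustrative; the original snippet above is truncated):

```python
import torch
from diffusers import AutoencoderKL, ConsistencyDecoderVAE

device = "cuda"
vae = AutoencoderKL.from_pretrained(
    "CompVis/stable-diffusion-v1-4", subfolder="vae", torch_dtype=torch.float16
).to(device)
consistency_vae = ConsistencyDecoderVAE.from_pretrained(
    "openai/consistency-decoder", torch_dtype=torch.float16
).to(device)

# Encode one image with the SD v1.4 VAE and decode the same latent with both decoders.
image = torch.randn(1, 3, 512, 512, dtype=torch.float16, device=device)  # stand-in input in [-1, 1]
with torch.no_grad():
    latent = vae.encode(image).latent_dist.sample()
    rec_kl = vae.decode(latent).sample
    rec_cd = consistency_vae.decode(latent).sample

# Mean absolute difference between the two reconstructions.
print((rec_kl.float() - rec_cd.float()).abs().mean())
```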

@yuhaoliu7456 Sure, you can load it from this URL: `https://img.championat.com/c/900x900/news/big/p/l/real-madrid_1651732892490413154.jpg`
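For example, one way to fetch and prepare it (the requests/PIL usage is an assumption about your setup):

```python
import requests
from io import BytesIO
from PIL import Image

url = "https://img.championat.com/c/900x900/news/big/p/l/real-madrid_1651732892490413154.jpg"
# Download the test image and convert it to RGB before VAE preprocessing.
image = Image.open(BytesIO(requests.get(url, timeout=30).content)).convert("RGB")
```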

@Kallamamran if you have a directory with the model weights and `config.json`, you can try `AutoencoderKL.from_pretrained(path_to_local_dir, subfolder="vae")`, or the same call without the `subfolder` argument, as in the sketch below.
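A minimal sketch of both variants (the path is a placeholder):

```python
from diffusers import AutoencoderKL

path_to_local_dir = "/path/to/local/model"  # directory containing the weights and config.json

# If the VAE sits in a `vae/` subdirectory of the model folder:
vae = AutoencoderKL.from_pretrained(path_to_local_dir, subfolder="vae")

# If the directory itself holds the VAE's config.json and weights, omit `subfolder`:
vae = AutoencoderKL.from_pretrained(path_to_local_dir)
```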