candle

Minimalist ML framework for Rust

407 candle issues, sorted by recently updated

(`cls.predictions.decoder.weight`, `bert.embeddings.word_embeddings.weight`) and (`cls.predictions.decoder.bias`, `cls.predictions.bias`) are shared (tied) tensor pairs in the `BertForMaskedLM` model, so they may not exist in the `model.safetensors` file.
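Tied weights like the pairs above can be handled at load time by redirecting a missing name to the tensor it shares storage with. A minimal sketch, assuming a name-resolution step over the safetensors key set (the helper and tying map are illustrative, not candle API):

```rust
use std::collections::{HashMap, HashSet};

/// Resolve a tensor name against a checkpoint that may omit tied copies.
/// `tied` maps an alias (possibly absent from the file) to the canonical
/// tensor it shares storage with.
fn resolve_tied<'a>(
    name: &'a str,
    stored: &HashSet<&str>,
    tied: &HashMap<&str, &'a str>,
) -> Option<&'a str> {
    if stored.contains(name) {
        Some(name)
    } else {
        // Fall back to the canonical name, if it exists in the file.
        tied.get(name).copied().filter(|t| stored.contains(t))
    }
}

fn main() {
    // Keys actually present in a hypothetical model.safetensors.
    let stored: HashSet<&str> = [
        "bert.embeddings.word_embeddings.weight",
        "cls.predictions.bias",
    ]
    .into_iter()
    .collect();
    // The tied pairs named in the issue above.
    let tied: HashMap<&str, &str> = [
        ("cls.predictions.decoder.weight", "bert.embeddings.word_embeddings.weight"),
        ("cls.predictions.decoder.bias", "cls.predictions.bias"),
    ]
    .into_iter()
    .collect();

    // The decoder weight is absent from the file but resolves to the
    // embedding weight it is tied to.
    assert_eq!(
        resolve_tied("cls.predictions.decoder.weight", &stored, &tied),
        Some("bert.embeddings.word_embeddings.weight")
    );
}
```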

I'm trying to run some examples with Intel's MKL library. I'm running on Debian 12, and since its packaged version of MKL is ancient (2020.4.304), I added the official Intel repository...

Using `candle/mkl` right now forces static linking. However, this is not optimal in some circumstances; for instance, we need to hotpatch the library to enable...

These three interpolation modes are needed: `['linear', 'bilinear', 'bicubic']`. Ref: https://pytorch.org/docs/stable/generated/torch.nn.functional.interpolate.html
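For reference, a minimal sketch of what `bilinear` mode computes on a single-channel image, using PyTorch's default `align_corners=False` coordinate mapping (plain Rust on a flat slice, not candle's tensor API):

```rust
/// Bilinear resize of a single-channel `h x w` image to `oh x ow`,
/// mapping output pixel centers back into source coordinates the way
/// `interpolate(mode="bilinear", align_corners=False)` does.
fn bilinear_resize(img: &[f32], h: usize, w: usize, oh: usize, ow: usize) -> Vec<f32> {
    let mut out = vec![0.0f32; oh * ow];
    let sy = h as f32 / oh as f32;
    let sx = w as f32 / ow as f32;
    for oy in 0..oh {
        // Source y coordinate of this output pixel's center, clamped to the image.
        let fy = ((oy as f32 + 0.5) * sy - 0.5).clamp(0.0, (h - 1) as f32);
        let y0 = fy.floor() as usize;
        let y1 = (y0 + 1).min(h - 1);
        let wy = fy - y0 as f32;
        for ox in 0..ow {
            let fx = ((ox as f32 + 0.5) * sx - 0.5).clamp(0.0, (w - 1) as f32);
            let x0 = fx.floor() as usize;
            let x1 = (x0 + 1).min(w - 1);
            let wx = fx - x0 as f32;
            // Blend the four neighbouring source pixels.
            let top = img[y0 * w + x0] * (1.0 - wx) + img[y0 * w + x1] * wx;
            let bot = img[y1 * w + x0] * (1.0 - wx) + img[y1 * w + x1] * wx;
            out[oy * ow + ox] = top * (1.0 - wy) + bot * wy;
        }
    }
    out
}

fn main() {
    // Resizing to the same shape is the identity.
    let id = bilinear_resize(&[1.0, 2.0, 3.0, 4.0], 2, 2, 2, 2);
    assert_eq!(id, vec![1.0, 2.0, 3.0, 4.0]);
}
```

`linear` and `bicubic` follow the same coordinate mapping with 1-D and cubic kernels respectively.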

Hi team, I'd like to have an example of multi-node inference/training with candle; where can I find one? Thanks :) -- Klaus

Can anyone show me how to use candle to run LLMs locally on mobile? I'm using Tauri v2 as my framework.

This pull request implements two zero-shot classification examples: a native implementation and a WebAssembly implementation. Additionally, this PR modifies the `ModernBertClassifier`, removing the softmax function call in its forward...

## Description I'm encountering a tensor reshape error when trying to run inference on an audio model using `candle-onnx`. The error occurs during model evaluation despite the input tensor seemingly...

It seems that the attention mask should be inverted first in `distilbert::MultiHeadSelfAttention`; see https://github.com/huggingface/transformers/blob/main/src/transformers/models/distilbert/modeling_distilbert.py#L218
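For context, the inversion in question turns a padding mask (1 = real token, 0 = padding) into an additive bias on the attention scores; without the `mask == 0` flip, real tokens would be the ones masked out. A minimal sketch of the intended computation (plain Rust, names illustrative):

```rust
/// Convert a padding mask (1 = real token, 0 = padding) into the additive
/// bias applied to attention scores before the softmax: kept positions
/// contribute 0.0, padded positions negative infinity (so their softmax
/// weight becomes zero).
fn additive_attention_mask(keep: &[u8]) -> Vec<f32> {
    keep.iter()
        .map(|&k| if k == 1 { 0.0 } else { f32::NEG_INFINITY })
        .collect()
}

fn main() {
    let bias = additive_attention_mask(&[1, 1, 0]);
    assert_eq!(bias[0], 0.0); // real token: score unchanged
    assert!(bias[2].is_infinite() && bias[2] < 0.0); // padding: masked out
}
```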

Using tensor-tools to quantize a flux finetune (little-lake-studios/demoncore-flux on the HF Hub, https://huggingface.co/little-lake-studios/demoncore-flux/):

```
tensor-tools quantize demonCORESFWNSFW_fluxV13.safetensors --out-file Q8_0.gguf --quantization q8_0
```

gets an error:

```
Error: unsupported safetensor dtype...
```
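For background, `q8_0` stores each block of 32 values as one f32 scale plus 32 signed bytes (scale = max absolute value / 127). A toy sketch of the scheme, illustrative only and not the exact GGUF byte layout:

```rust
/// Toy Q8_0-style quantization of one block of f32 values: one shared
/// f32 scale plus one i8 per value.
fn quantize_q8_0_block(block: &[f32]) -> (f32, Vec<i8>) {
    let amax = block.iter().fold(0.0f32, |m, &x| m.max(x.abs()));
    // Avoid dividing by zero on an all-zero block.
    let scale = if amax == 0.0 { 1.0 } else { amax / 127.0 };
    let qs = block.iter().map(|&x| (x / scale).round() as i8).collect();
    (scale, qs)
}

/// Reverse mapping: each byte times the block scale.
fn dequantize_q8_0_block(scale: f32, qs: &[i8]) -> Vec<f32> {
    qs.iter().map(|&q| q as f32 * scale).collect()
}

fn main() {
    let block: Vec<f32> = (0..32).map(|i| i as f32 / 10.0).collect();
    let (scale, qs) = quantize_q8_0_block(&block);
    let back = dequantize_q8_0_block(scale, &qs);
    // Round-trip error is bounded by half a quantization step.
    for (a, b) in block.iter().zip(back.iter()) {
        assert!((a - b).abs() <= scale / 2.0 + 1e-6);
    }
}
```

The error above is raised before any of this runs, while reading the source tensors, which suggests the checkpoint contains a dtype the tool does not handle.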