Alexandru Coca
The DistributedKernelShap repo has some benchmark results one can refer to in the meantime!
Hey @Gimba, @jacekblaz, note that although a background dataset is not provided, the expected value is computed using the node cover information stored at each node. The computation proceeds recursively,...
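To illustrate the idea (a minimal sketch only, not shap's actual implementation): the expected value of a tree is the cover-weighted average of its leaf values, which can be computed recursively by weighting each child by the fraction of training samples (cover) that reached it. The array layout below is a simplification assumed for the example.

```python
def expected_value(node, children_left, children_right, values, cover):
    """Recursively compute the cover-weighted expected leaf value of a tree.

    Leaves are marked with a child index of -1; `cover[i]` is the number of
    training samples that passed through node i (so no background dataset
    is needed).
    """
    left = children_left[node]
    if left == -1:  # leaf node: its value is returned as-is
        return values[node]
    right = children_right[node]
    # Weight each subtree by the fraction of samples routed to it.
    w_left = cover[left] / cover[node]
    w_right = cover[right] / cover[node]
    return (
        w_left * expected_value(left, children_left, children_right, values, cover)
        + w_right * expected_value(right, children_left, children_right, values, cover)
    )

# Toy tree: root (node 0) splits into two leaves (nodes 1 and 2).
# 6 of 10 samples went left (value 1.0), 4 went right (value 2.0).
ev = expected_value(
    0,
    children_left=[1, -1, -1],
    children_right=[2, -1, -1],
    values=[0.0, 1.0, 2.0],
    cover=[10.0, 6.0, 4.0],
)
# ev == 0.6 * 1.0 + 0.4 * 2.0 == 1.4
```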
Have you tried installing the C compiler? If you are on Ubuntu, try uninstalling `shap`, running `sudo apt update` and `sudo apt install build-essential`, and then installing `shap` again.
@patrickvonplaten is there an elegant mechanism for loading models that have been cached before? For example, something happened with the hub https://twitter.com/huggingface/status/1655760648926642178 and the normal `.from_pretrained` machinery does not work....
@Wauplin , this is a great idea and I had tried it but passing the `local_files_only=True` to the `from_pretrained` call above doesn't solve the issue. There is a cache containing...
```python
self._tokenizer = AutoTokenizer.from_pretrained(model_name, local_files_only=True)
self._model = AutoModel.from_pretrained(model_name, local_files_only=True)
```
with `model_name="sentence-transformers/all-distilroberta-v1"`
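Another route worth trying when the hub is down (a hedged sketch; the environment variables below are the documented offline switches, but whether they resolve this particular failure is an assumption) is to force offline mode *before* importing `transformers`, so every `from_pretrained` call is served from the local cache:

```python
import os

# Documented offline switches for transformers / huggingface_hub.
# They must be set before the libraries are imported.
os.environ["TRANSFORMERS_OFFLINE"] = "1"
os.environ["HF_HUB_OFFLINE"] = "1"

# With the flags set, loading falls back to cached files only, e.g.:
# from transformers import AutoModel, AutoTokenizer
# tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-distilroberta-v1")
# model = AutoModel.from_pretrained("sentence-transformers/all-distilroberta-v1")
```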
I can't see how this would work: the code ends up in `configuration_utils.py` at L628... and then we later end up in `hub.py`... It seemed there may have...
@Wauplin, yes, worked for me because I had the cached files handy 👍 :)
@stas00 your solution is great; I tested it a bit. Is there any timeline for this feature, and could one help with the integration? I would be interested to know what are the...
Any update on this issue @scofield7419? :)