Benjamin Bossan
> Previously I tested loading the model with float16, enabling fp16, and setting `modules_to_save` to `None`. If the LoRA layers inherit float16 from the model dtype, I should expect to see similar...
Thanks for providing an example. I tried it (using OPT) and it crashed even with `modules_to_save=None`. Checking the dtypes of the learnable parameters shows they are fp16, so the crash is...
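As a side note, the dtype check mentioned above can be done with a small helper (a sketch of my own, not code from the thread; it assumes a model exposing PyTorch's `named_parameters()`, e.g. a PEFT-wrapped model):

```python
def trainable_param_dtypes(model):
    """Map each trainable (requires_grad) parameter name to its dtype.

    Works with any object exposing PyTorch's ``named_parameters()``
    iterator, such as a PEFT-wrapped transformers model.
    """
    return {
        name: param.dtype
        for name, param in model.named_parameters()
        if param.requires_grad
    }
```

Printing this for a model loaded in float16 with LoRA adapters attached shows whether the adapter weights inherited the base model's dtype.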
Thanks @RonanKMcGovern for sending me here. Let's set up CI using PEFT and unsloth main to prevent this in the future. Do you want to set it up on your...
> * Check if models trained with `scikit-learn` can be served with `scikit-learn-intelex` and vice versa. I started with this point and found it's possible to load sklearn (w/o intelex)...
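The interop check boils down to serializing a model under one backend and deserializing it under the other. A stdlib-only sketch of that roundtrip (the dict argument in the usage note stands in for a fitted sklearn estimator):

```python
import io
import pickle

def pickle_roundtrip(model):
    """Serialize and deserialize ``model``, mimicking saving a model
    trained with one backend (e.g. plain sklearn) and loading it under
    the other (e.g. with sklearnex patched in), or vice versa."""
    buf = io.BytesIO()
    pickle.dump(model, buf)
    buf.seek(0)
    return pickle.load(buf)
```

In the actual check, the dump and load would happen in two separate processes, one with the Intel extension patched in and one without.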
> yeah the speedups are only in a few models and some specific cases, not always. According to [this article](https://medium.com/intel-analytics-software/save-time-and-money-with-intel-extension-for-scikit-learn-33627425ae4), logistic regression should be faster though. I didn't study the...
> patch_sklearn() function should be called prior to sklearn exports to make it work Ah, you mean prior to sklearn _imports_, right? Thanks, good catch. I changed the scripts to...
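The ordering matters because `patch_sklearn()` swaps in the accelerated estimator implementations; anything imported from sklearn beforehand keeps the stock classes. A guarded sketch of the correct order (the availability helper is my addition, not sklearnex API):

```python
import importlib.util

def sklearnex_available() -> bool:
    """Return True if scikit-learn-intelex is installed."""
    return importlib.util.find_spec("sklearnex") is not None

# patch_sklearn() must run BEFORE any ``from sklearn import ...``:
# modules imported earlier keep the stock, unpatched estimators.
if sklearnex_available():
    from sklearnex import patch_sklearn
    patch_sklearn()

# Only import from sklearn after this point so the patched classes
# are picked up, e.g.: from sklearn.linear_model import LogisticRegression
```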
@napetrov thank you for clarifying > First - to see if the extension has been used, you can enable [verbose mode](https://intel.github.io/scikit-learn-intelex/verbose.html) by setting the variable SKLEARNEX_VERBOSE=INFO @adrinjalali Do we want to enable...
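For reference, the variable has to be set before the extension is imported; a minimal sketch of doing that from Python rather than the shell:

```python
import os

# SKLEARNEX_VERBOSE=INFO makes the extension log whether each call was
# dispatched to the accelerated backend or fell back to stock sklearn.
# Must be set before importing sklearnex / the patched sklearn modules.
os.environ["SKLEARNEX_VERBOSE"] = "INFO"
```

In CI, the equivalent would be exporting the variable in the workflow environment before the test step runs.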
Should be addressed by #398
@lazarust Could you please solve the merge conflict? Regarding the uncovered new line: Would that be solved by adding tests for scikeras?
> Yeah, I believe it would be but I was unsure if we wanted to have tests that included another library like that. Should I add one? Yes, it would...